Saturday 10 September 2016

Please can we stop pretending library visitor counts are performance data?

From an operations management perspective there is some use for library visitor count data:
  • It's useful to have an idea of when your peak throughputs occur so that you can deploy resources accordingly.
  • When you're designing new library spaces it's good to have a ballpark figure for throughput so that you can design in capacity and customer flows.
  • It's good for morale to be able to acknowledge and celebrate when you've safely handled a significant number of visitors.
From a service management perspective there is one use for library visitor count data:
  • It's a salutary reminder of how little you know of your customers if the only available data are about loans and PC sessions.
Otherwise, they're not a lot of use. You see, the thing about visitor counts is that most of them are poor data and from a service delivery perspective all of them are meaningless.

Why are so many visitor counts poor data?

We'll assume that everyone's acting in good faith and nobody's playing the numbers.

If you've got an automated visitor count system and you're running your library as a stand-alone service you have the best chance of having good visitor count data. A lot depends on the way the count is done; the positioning of the counter; and the frequency of data sampling and verification.
  • A badly-placed counter can miss visitors — a beam at adult head height could miss all the children, for instance, or the customer flows could by-pass the counter completely.
  • Beam-interrupt counters that try to count the number of legs and divide by two have their issues. Back when we installed them at Rochdale we'd heard the urban myth about what happens when you walk past with a ladder. Having enquiring minds we tried it with a seven-rung stepladder and discovered that there was some truth in the story so long as the ladder was being carried horizontally by two toddlers, so we didn't worry about that too much. We did worry that one toddler with little legs = one interrupt = half a visitor. And that a child running into a library didn't get counted (children run into good libraries because they're exciting). People with walking sticks or wheelchairs seemed to be inconsistently recorded. So the figures were only really useful to us as ballpark estimates (there's a toy sketch of the arithmetic after this list).
  • Things happen, so there need to be ongoing checks on the reliability of the equipment, the data and the procedures for collection and collation.
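For the enquiring minds, here's a toy sketch in Python of how legs-divided-by-two drifts away from reality. All the traffic is invented, and this is a caricature of the logic, not any real counter's firmware:

    # Naive beam-interrupt logic: count the interrupts, divide by two legs.
    def visitors_from_interrupts(interrupts):
        return interrupts / 2

    # (what happened, beam interrupts registered, actual visitors): all invented
    traffic = [
        ("adult walking through", 2, 1),
        ("toddler with little legs", 1, 1),
        ("child running in, too quick for the beam", 0, 1),
        ("wheelchair user", 1, 1),
    ]

    interrupts = sum(hits for _, hits, _ in traffic)
    actual = sum(people for _, _, people in traffic)
    print(f"counter says {visitors_from_interrupts(interrupts)}, actually {actual}")
    # counter says 2.0, actually 4

Four visitors, a count of two: fine as a ballpark, useless as a precise figure.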
If you've got an automated visitor count system in a shared-service building it's a bit more complicated.
  • The visitor count is potentially useful for the operational management of the building but not really for the library service. 
  • Do you include or exclude the people who came in to renew their taxi licence or claim housing benefit or have an appointment with the nurse in the clinic? 
    • If you include them how is this data useful for the management of the library service? 
    • If you exclude them how do you capture the visitors who come in for a taxi licence/HB enquiry/clinical appointment then take advantage of the fact there's a library in the same building? 
  • How do you exclude the people coming in for meetings with other services? 
  • How do you exclude the passage of staff from the other services? 
It gets messy quickly.

And then there are manual counts…

What's wrong with manual counts? Well:
  • There's the timing of them. It's unlikely that you'll have the resources to do the count all day every day, and if you have, you'll find a lot of other things for them to be doing at the same time. (If you *do* have an FTE devoted exclusively to counting visitors you need your bumps feeling.) So the data will be a sample. It's easier to do the sampling during the hours when the library's relatively quiet, but that would invalidate the sample, so there will inevitably be stresses of distraction and confusion. (There's a toy worked example of the sampling problem after this list.)

  • Then there are the seasonal variations. And do you want to base your annual count on that week when you've arranged for the workmen to come in to fix the heating? And so on.

    You could take the sensible view that you're not going to extrapolate the figures, you're just going to do a year-on-year comparison of counts conducted the same way at the same point in the calendar every year. Which is useful if the figures are purely for internal use or published as trends rather than absolute data.

    If that absolute data's used to compare and contrast with another library authority it becomes a lot less useful as you're comparing apples with pears:
    • The methodology may be different. 
    • There may be good local reasons for differences in the seasonal variation — half-term holidays at schools being obvious examples.
    • There may be differences in the library calendar — the reading festival may be that week, or it could be the breathing space after last week's reading festival. Back in the nineties our managers were concerned that visitor counts could be distorted by events in the library so, ironically, the two weeks of the visitor count were the only ones with no events in the library, no class visits, no author visits, etc.
  • Then there's the counting. A manual visitor count is easy when the library's quiet. When it's busy you're too busy dealing with your customers to count the visitors. You can try and do a catch-up later but that's always going to be an approximation based on memory and chance observation. And your finger may slip on the clicker you're using for the count. Or you could be one of those people who sometimes have four bars in their five-bar gates.
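Here's a toy worked example in Python of what extrapolating from a quiet-hours sample does to an annual estimate. Every figure is invented, purely to show the shape of the problem:

    # All figures assumed for illustration.
    quiet_rate = 10       # visitors/hour during the sampled quiet hours
    busy_rate = 60        # visitors/hour during the uncounted busy hours
    quiet_share = 0.6     # fraction of opening hours that are quiet
    open_hours = 2500     # opening hours per year

    true_annual = open_hours * (quiet_share * quiet_rate + (1 - quiet_share) * busy_rate)
    extrapolated = open_hours * quiet_rate   # the quiet sample treated as typical

    print(f"true annual: {true_annual:.0f}")                 # 75000
    print(f"extrapolated from sample: {extrapolated:.0f}")   # 25000

The year-on-year trend survives because both years carry the same bias; the absolute figure doesn't.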
So there are issues with the visitor count data.

Why is this data meaningless?

A person has come into the building. And…? The visitor to the building isn't necessarily a user of the library or a customer of the service, any more than the chap who gets off the train at Waverley Station is necessarily a Scotsman.

There were forty visitors to the library:
  • Three people borrowed some books
  • Three people used the computers
  • One came to read the electricity meter
  • One came to sort out the radiator that hasn’t been working
  • One came to deliver a parcel
  • One came to use the loo
  • A drunk came in to make a row and throw his shoe through the window
  • A policeman came in to take a statement about it
  • A building manager came in to inspect the damage
  • A joiner came in to board up the window
  • A glazier came in to replace the window
  • A cat ran in, pursued by:
  • A dog, pursued by:
  • The dog’s owner, pursued by:
  • Three children who followed to see the fun
  • We don’t know what the rest did. (There's a quick tally below.)
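To make the arithmetic explicit, here's the tally in Python. The classification is mine and deliberately crude:

    # Tallying the forty visitors above.
    engaged = 3 + 3          # borrowed books, used the computers
    incidental = 9 + 1 + 3   # meter reader through glazier (nine people),
                             # the dog's owner, the three children
    unknown = 40 - engaged - incidental

    print(f"engaged with the service: {engaged}")   # 6
    print(f"incidental visitors: {incidental}")     # 13
    print(f"no idea at all: {unknown}")             # 21

Six of the forty demonstrably used the service; for more than half we have no idea. The gate clicked forty times either way (and a beam counter might well have clicked for the cat and the dog, too).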
A visitor comes into the library.
  • What did they do? 
  • What did you do? 
  • Was it any good? 
  • Were the right resources available at the right time for the right people?
  • Did the visitor engage with the service at all? 
The visitor count tells you none of this. So how can it be any sort of reflection of the performance of your service? You can't use data about people you don't even know have engaged with your service to measure its performance: the only information you have is about the people; you have none about the service delivery.

Attendance figures are important to sporting venues because attendance generates income. They don't determine the trophies a team collects in the course of a season. Nor do you see schools with banners proclaiming: "According to OFSTED more people walked through our front door than any other primary school in Loamshire!"

Visitor counts are not performance data. Performance is about the delivery of outcomes, not throughputs.

1 comment:

  1. Excellent article to show those in senior management of local authorities who only look at footfall rather than the value of a visit to the library.
