Tuesday, 18 October 2016

Library data part two: what do we know about the stock?

In principle, stock data is the least problematic data set held by libraries when it comes to trying to map it, share it across local authority boundaries or make the data openly available. There are good reasons for this:
  • Every English public library service has a catalogue of resources
  • There has been decades' worth of data-sharing for the purposes of interlibrary loans including, but not limited to, the UnityUK database
  • There are long-established standards for title-level bibliographic data
  • The outsourcing of most bibliographic metadata limits the number of original sources of data and so imposes some consistency
Added to this can be the data mapping work involved in setting up an interface with the evidence-based stock management system CollectionHQ, and the increased use of library management systems in consortium settings. Both of these get library systems people thinking about the way their data maps against external frameworks.

Technically, data about virtual stock holdings can be treated the same way as physical stock holdings. Culturally, there is some variation in approach between library services.

For the purposes of this post we'll assume that all stock has been catalogued and the records held in the library management system. In reality this will be true of most, if not all, lending library stock and a high proportion of whatever reference library stock there is these days. Many local studies collections and special collections are still playing catch-up.

Title-level bibliographic data

All the bibliographic records come from the same place so this is standard data and would be easy to share and compare, right? Well… up to a point, Lord Copper.
  • Not all library authorities are buying in MARC records.
  • Of those that do, not all of them are retrospectively updating their old records, so they'll have a mix of bought-in MARC records and locally-sourced records, which may or may not be good MARC records in the first place and which certainly have variations in the mapping details.
  • Those that did do a retrospective update may have hit a few glitches. Like the library authority whose LMS had ISBN as a required field and so had to put dummy data in this field, which turned out to be the valid ISBNs of entirely different titles from the ones they actually had. (This wasn't Rochdale, though it did cause us some collateral damage.)
  • There may be local additions to commercial MARC records, for instance local context-specific subject headings and notes.
  • Commercial MARC records may not be available for some very local or special collection materials so these will need to be locally-sourced.
Taking these factors into consideration, this would be much the most reliably uniform component of a national core data set for libraries, if any such were ever developed. The data available would be either:
  • A full MARC record + the unique identifier for this bib record in this LMS (this is required to act as a link between the title-level data and the item-level data); or
  • A non-MARC record including:
    • Title
    • Author
    • Publisher
    • Publication date
    • ISBN/ISSN or other appropriate control number, if available
    • Class number
    • Unique identifier for this bib record
    (I think there's a limit to the amount of non-MARC data that should be admissible.)
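As a rough sketch, that minimal non-MARC record might be modelled like this (the `TitleRecord` class and its field names are my own illustration, not a proposed standard, and the identifiers are dummy values):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TitleRecord:
    """Minimal non-MARC title-level record. Field names are illustrative only."""
    bib_id: str                             # unique identifier for this bib record in this LMS
    title: str
    author: Optional[str] = None
    publisher: Optional[str] = None
    publication_date: Optional[str] = None  # kept as text: "c1998", "[2010?]", etc.
    control_number: Optional[str] = None    # ISBN/ISSN or other control number, if available
    class_number: Optional[str] = None      # e.g. a Dewey number

# An example record; the identifier scheme and ISBN are dummy values.
record = TitleRecord(
    bib_id="BIB0012345",
    title="The Wind in the Willows",
    author="Grahame, Kenneth",
    control_number="9780000000001",
    class_number="823.912",
)
```

The optional fields reflect the "if available" caveats above: a locally-sourced record for special collection material may well have no control number at all.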
For the purposes of this game RDA-compliant records can be assumed to be ordinary MARC21 records (there's a heap of potential MARC mapping issues involved in any national sharing exercise which we won't go into here). I can see the need for the use of FRBR by public libraries but I don't see it happening any time soon so it's not considered here.

Item-level holdings data

The library catalogue includes holdings data as well as bibliographic data so that, too, could be part of a national data set. The detail and format of this data can vary between LMSs and from one library authority to another:
  • Some, but not all, item records may have at least some of their data held in MARC 876–878 tag format
  • The traditional concept of a "collection" may be described in different fields according to the LMS or the local policy. Usually it would be labelled as one or other of item type, item category or collection.
Which data to include? Or rather, which would be most likely to be consistently-recorded? My guess:
  • Unique identifier (usually a barcode)
  • Location
  • Key linking to the appropriate bibliographic record
  • Item type/item category/collection label best approximating to the traditional concept of "collection"
  • Cost/value
  • Use, which would generally mean the number of issues
  • Current status of the item
After that the variations start to kick in big time.
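To illustrate the linking key in action, here's a sketch of item-level rows joined to a title-level record (all field names, identifiers and values are invented for the example):

```python
# Invented example rows; field names and identifier formats are illustrative.
titles = {
    "BIB0012345": {"title": "The Wind in the Willows", "author": "Grahame, Kenneth"},
}

items = [
    {"barcode": "30123000011111", "bib_id": "BIB0012345", "location": "Central",
     "collection": "Children's Fiction", "cost": 6.99, "use_count": 42, "status": "Available"},
    {"barcode": "30123000022222", "bib_id": "BIB0012345", "location": "Northgate",
     "collection": "Children's Fiction", "cost": 6.99, "use_count": 7, "status": "On loan"},
]

# The bib_id key joins each item-level row to its title-level record.
joined = [
    (item["barcode"], titles[item["bib_id"]]["title"], item["location"], item["status"])
    for item in items
]
```

Without that key the two data sets can't be reconciled, which is why it appears in both lists above.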

There are a few devils in the detail, for instance:
  • There is no standard set of "collections," though there is a de facto standard set of higher-level item types:
    • Adult Fiction
    • Adult Non-Fiction
    • Children's Fiction
    • Children's Non-Fiction
    • Reference
    • Audiovisual
    • Everything else
    The item type/item category/collection for each library authority would need to be mapped against a standard schedule of “Item types.” For instance, when I used to pull out stock data for CIPFA returns I didn't have the appropriate categories available in fields in the item records, so in Dynix I had a dictionary item set up to do the necessary in Recall, and with Spydus I set up a formula field in a Crystal Report; in both cases it involved a formula including sixty-odd "If… Then… Else…" statements.
    • Are those already used for CIPFA adequate or would a new suite need to be developed and agreed?
    • Would this translation be done at the library output stage or the data aggregation stage?
      For CIPFA our translation was done at output, for CollectionHQ it was done at data aggregation stage according to previously-defined mapping.
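The sixty-odd "If… Then… Else…" statements amount to a lookup table from local labels to the standard schedule. A sketch, with invented local codes rather than any authority's real ones:

```python
# Map each local collection label to a standard higher-level item type.
# The local codes on the left are invented examples; in practice there
# would be sixty-odd of them, different for every library authority.
COLLECTION_MAP = {
    "AF":  "Adult Fiction",
    "AFL": "Adult Fiction",          # large print still counts as Adult Fiction
    "ANF": "Adult Non-Fiction",
    "JF":  "Children's Fiction",
    "JNF": "Children's Non-Fiction",
    "REF": "Reference",
    "DVD": "Audiovisual",
    "TB":  "Audiovisual",            # talking books
}

def standard_item_type(local_label: str) -> str:
    # Anything unmapped falls into the catch-all bucket.
    return COLLECTION_MAP.get(local_label, "Everything else")
```

Whether this translation runs at the library output stage or at aggregation, the mapping table itself is the thing that has to be agreed and maintained.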
  • Cost could be the actual acquired cost including discount; the supplier's list price at time of purchase, without discount; or the default replacement cost for that type of item applied by the LMS.
  • Use count data may be tricky:
    • It could be for the lifetime of the item or just from the time that data was added to this particular LMS if the legacy data was lost during the migration from one system to another. 
    • Some LMSs record both "current use" (e.g. reset at the beginning of the financial year) and total use. You need to be able to identify one from the other.
    • The use of loanable e-books/e-audiobooks may not be available as this depends on the integration of the LMS with the supplier’s management system.
    • Curated web pages would be treated as reference stock and not have a use count.
    • Some LMSs allow the recording of reference use as in-house use.
  • Item status is always interesting:
    • Does this status mean the item is actually in stock?
    • Is the item available?
    • Has the item gone walkies/been withdrawn?
    • Again, this would have to be a mapping exercise, similar to the one we did for CollectionHQ.
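Such a status mapping might look like this (the local status codes are invented examples; a real exercise would map each LMS's actual codes):

```python
# Invented local status codes mapped to the two questions that matter:
# (is the item actually in stock?, is it available to borrow?)
STATUS_MAP = {
    "Available":  (True,  True),
    "On loan":    (True,  False),
    "In transit": (True,  False),
    "Missing":    (False, False),
    "Withdrawn":  (False, False),
}

def classify_status(status):
    # Unknown statuses are flagged for manual mapping rather than guessed at.
    return STATUS_MAP.get(status, (None, None))
```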

So what have we got?

Overall, then, we could say that every public library could lay their hands on a fair bit of title-level data that's reasonably consistent in both structure and content, and some item-level data that wouldn't be difficult to make structurally consistent but would need a bit of work to map the content to a consistent level.

Monday, 17 October 2016

Library data part one: variations on a theme

Over the Summer I've been doing a bit of work for the Public Libraries Taskforce and that set me thinking about the data that public library services hold. Each one holds a shedload of data about its resources, its customers and its performance, but each one holds a slightly different shedload to its neighbours. Why would that be?

Technical reasons

  • There are surprisingly few standard data structures in play in public libraries
  • Different management systems hold data in different ways
  • Even if the data has the same structure a different suite of descriptive labels may be in use

Human reasons

  • An organisation might not feel the need to record the data at all
  • The quality — or not — of the data may not be a priority so elements may be missing
  • Naming conventions, etc. may change over time without retroactive conversion, leading to internal inconsistency
  • The data may still be on bits of paper
Having said that, there are some key data that are generally common to all, though variable in detail. I'll have a look at those over the next few posts.

Disheartening the visitor

For a long time — nearly twenty years — I had a very clear candidate for Worst Entrance To A Public Library Ever, though thankfully that particular entrance barely survived the millennium. I've now found one that's worse. No names, no pack drill, it wouldn't be fair to the staff who I know are trying their best in very trying circumstances.

The other day I popped into this library. I've been meaning to go and have a nosy for a while. Up to a few years ago this town had a reasonably busy little library, nothing special, in a simple brick two-storey box of a building. The shopping area of the town got redeveloped quite extensively, one of the casualties being the old library. It was the replacement I'd been meaning to visit.

The good news is that the building's well-signed in the shopping area and made easy to find because there are a lot of colourful and useful library posters in the window. The first bit of bad news is that it's on the first floor above a supermarket so you can't idly walk past, see the library in use and be tempted in. But the posters and notices in the window try to draw you in.

Sadly, once you are drawn in you're in a small lobby with just enough room to wheel a buggy round to a lift or else take the escalator directly in front of you. Everything is grey: pale grey walls, mid grey ceiling, dark grey carpet, steel grey lift doors and escalator. It's all a bit soulless. Nothing much invites you to go up the escalator: it rises up into a dark grey shadow with no knowing that anything's up there, least of all a library. All in all pretty nasty.

Once you get upstairs it's slightly better, though that's despite the design of the library, not because of it. The colour scheme is followed again, relentlessly, with grey metal shelving and an extensive network of exposed pipes in the ceiling space also painted mid-grey, the building designer obviously being a big fan of warehouse shopping chic. Or else one of the Borg. The overall effect was softened as far as possible by posters and displays but there wasn't physically a lot of scope for making it a much more human environment. Which was a shame, as there were good things going on in there, including a very enthusiastic rhythm and rhyme session in the enclosure that was the children's library. Lots of colourful books on the shelves may have helped a bit but this is a local authority that was closing libraries and cutting book funds back when the rest of us were refurbishing and replenishing, so the staff didn't have many resources to play with there.

Generally speaking this is just the worst of a trend I've seen over the past few years, new library builds by architects and designers who see the library space as being like an office or else just a room with a few shelves of books in it. For all the consultations that go on it's evident that the designers haven't made any effort to understand how the business of the library is run:
  • The need to invite the visitor in, ease them back out again and leave them wanting to come back soon; 
  • The different lines of flow for different kinds of use and different kinds of customer;
  • The ease of navigation so that somebody standing in the entrance knows immediately where they need to go;
  • The essential requirement of lines of sight for staff so that they can provide unobtrusive supervision and support;
  • The capability for change in response to early experience of use (like some landscape designers who only hard pave paths after a few months so that paths follow the "cow lines" established by the people using the space) and to allow for development of delivery of the services on offer;
  • Most of all, the acknowledgement that the library space is a human space so people have to feel comfortable in it.
None of this costs anything except a bit of effort and a willingness to understand the desired outcomes that are being designed for. Sadly…

Friday, 30 September 2016

Don't blame technology for bad management attitudes

Once upon a time, back when the millennium bug was a thing — or we thought it was — the Director of Recreation & Community Services (as was) told me to put together a roadmap of the IT developments the library service should be undertaking if money wasn't a big issue. We didn't have the money and there wasn't any immediate prospect of having it but it would give him a sense of where we wanted to be and the opportunities we'd like to grab if they came along. The results included the usual suspects: we desperately needed to get all the libraries networked and onto the library management system, staff on enquiry desks needed access to the internet as well as the library catalogue, and internet access for the public would be good. I also said that we needed to invest in self-service circulation.

I'm now going to say something heretical, please bear with me. There is no intrinsic value in stamping a due date in a book. There, I've said it. The value in a staff-mediated issue/checkout transaction is:
  • The borrower gets to borrow an item
  • The loan is recorded
  • There's the human interaction, which may be the only one some people get that day
  • The library staff get the opportunity to provide information about other library resources and services ("We've got that author's new book on order," "Did you know we've started having toddlers' tales sessions on Wednesday afternoons?" etc., etc.)
The value of the return/checkin transaction is similar.

To my mind the high-value parts of any transaction are the ones you need to put your resources into. Anything that takes resources away from the high-value areas needs to be designed out.

In those days our busiest library was issuing between four and five thousand items every Saturday. More to the point, six Library Assistants were issuing between four and five thousand items every Saturday. And returning a similar number, which creates a lot more work as something needs to be done with all that incoming stock. A lot of the time the queues at the counter were awful, with staff and customers both having a stressful experience. Consequently the value of the issue transaction was compromised — the loan was effected and recorded but there was no time for the human stuff: both the staff and the customers felt under pressure to get the transaction over and done with as quickly as possible. Some customers even gave up and didn't bother: they wanted to become borrowers and our process stopped it happening. If someone desperately needed that human interaction they were badly short-changed. At the returns desk it was even worse: some borrowers lost patience and just left items on the corner of the counter and in the confusion these sometimes got back onto the shelves without the return having been recorded. The rest of the week there were other, smaller, stress points and other events and activities in the library added to the mix. By this stage we'd successfully made the case for some more Library Assistant hours but there's a limit to the number of bodies and workstations that you can physically fit behind even the huge counter that was in this library.

So I argued that we needed to include some self-service issue/return functionality to try and ease the burden a bit. Some people just want to be in and out, they want to borrow an item or return it and they're not much fussed about anything else. Some people would have privacy or safeguarding issues that could be addressed by allowing them to self-issue a book. Giving these people the self-service option would address their needs and also reduce the queue, allowing more time for the people who did need the human stuff at the counter. The director, whose background was adult and community education, was enthusiastic about the idea: "You mean that we could get the library staff off that production line at the counter so that instead of stamping books they could be doing something more interesting like helping people to find things and getting them interested in something new?" Yes, we could have.

Some years, and a couple of directors, later we were in a position to credibly rattle the begging bowl to fund self-service circulation. The world had changed somewhat. Library managers nationwide had picked up the idea that this functionality was a way of saving money on staffing and particularly cutting Library Assistant hours. Although I got cross when my library managers saw self-service as an opportunity to cut staffing costs I couldn't really blame them as individuals: they were coming late to a game that already had this established narrative. It was a massive pity but there we were. We weren't alone. And as Austerity took its toll self-service circulation became one of the quick fixes — I know of one library authority that cut staffing hours on the basis that kiosks were going to be installed at a couple of libraries and the next year cut the hours further because the self same kiosks had been installed. So self-service kiosks replaced staff instead of freeing them up to do other, more important, things.

The point to this story is that the technology wasn't to blame. Technology is never neutral — design must have an end in view — but the way that it is used and the consequent outcomes are largely down to human decision. In this case the opportunity to enrich the very many parts of the public library service that aren't the issue and return of books was passed over because neither the staff nor the service were being valued by "Professionals" in managerial positions. It wasn't a decision forced on them by outside forces, it was one they came to themselves collectively at the turn of the millennium and which they then transmitted to the people who hold the purse strings. Which makes it deuced hard for people now arguing the case that public libraries have never been only about issue counts: if front-line staff can be replaced by one-trick-pony kiosks all that other stuff can't have been all that important, can it? Well, yes it was and yes it is and it's scandalous that enabling technology's been abused in this way.

Library authorities are repeating this mistake with technologies such as Open+. Open+ is a good way of extending the use of a building and some of its resources. Many of the running costs of a building are incurred whether or not that building's in use: the fabric of the building deteriorates regardless and the lights may be out but you'll need the heating on sometimes unless you fancy having a lot of burst pipes in Winter. So it makes sense to maximise the return on this investment in running costs by maximising the building's availability for use. Especially if that use is currently particularly limited: if a building is only in use for ten or twenty hours a week then this is a huge waste. In these cases using a technology like Open+ makes sense: it allows access to the building as a community venue or a quiet study space and you could make stock available for self issue/return, thus extending the reach of part of the library service. What it doesn't do is replace the shedload of other stuff that gets delivered — or should be delivered — by the library service in that building. It isn't a replacement for a library service, it just extends some people's access to those parts of the service that can be passively delivered.

Anything else is yet another abuse of library technology.

Monday, 12 September 2016

What do visitor counts tell us?

Why did I get into a bate about visitor figures the other day? It's largely because of the spate of recent reports and commentaries about the decline of the public library service based on these numbers. I think there is an over-reliance on what is, after all, pretty rubbish data.

Visitor counts are not measures of use. They are an approximation of — sometimes a wild stab at — the number of people who entered a building. If you were to tell me that visitor counts have declined by 30% over a given period I'd take your word for it. Personal experience and observation suggests that fewer people are in some (not all) of the libraries I visit than there used to be so you may be right. And the closing of libraries and nibbling away at opening hours over the past quarter of a century won't have helped any. But I'd be extremely sceptical that you had any forensic evidence to back up your percentage.

Does it actually matter that there are fewer visits? If I can sit in my living room and reserve a book then go and visit the library to pick it up I have immediately cut down the number of visits by at least 50%. But the library has delivered the same service, and much more conveniently for me. While I'm in the library I can still avail myself of all its other services and indulge in a bit of serendipitous discovery amongst the shelves but I am not compelled to an earlier visit with the sole purpose of queuing up at the counter to ask for a reservation to be placed.

Ah but issue figures are going down as well… And? Public libraries never only issued books. Literally an infinitely greater number of people use the public PCs in the library than they did in 1964. Do we have fifty years' worth of attendance figures for story times and author visits? Do we have decades-worth of comparative data of use for quiet study? Or any and all of the other stuff? How do we know that libraries aren't having fewer but richer visits?

But we're delivering less of a service… Are you capable of delivering a better quality of service now? Did you overstretch yourselves in the past and sometimes end up shortchanging your customer service? Were you giving one minute of your time to people who needed five? Were people put off asking for help because they saw that you were busy? High throughput isn't always a measure of high quality.

But visits are down… Do we have annual totals for the number of people who walked through the door, saw the length of the queue at the counter and thought: "I'll come back later?" No, we don't.

Which is why I got in a bit of a bate about it.

Saturday, 10 September 2016

Please can we stop pretending library visitor counts are performance data?

From an operations management perspective there is some use for library visitor count data:
  • It's useful to have an idea of when your peak throughputs occur so that you can deploy resources accordingly.
  • When you're designing new library spaces it's good to have a rough ballpark figure of throughputs for designing in capacity and customer flows.
  • It's good for morale to be able to acknowledge and celebrate when you've safely handled a significant number of visitors.
From a service management perspective there is one use for library visitor count data:
  • It's a salutary reminder of how little you know of your customers if the only available data are about loans and PC sessions.
Otherwise, they're not a lot of use. You see, the thing about visitor counts is that most of them are poor data and from a service delivery perspective all of them are meaningless.

Why are so many visitor counts poor data?

We'll assume that everyone's acting in good faith and nobody's playing the numbers.

If you've got an automated visitor count system and you're running your library as a stand-alone service you have the best chance of having good visitor count data. A lot depends on the way the count is done; the positioning of the counter; and the frequency of data sampling and verification.
  • A badly-placed counter can miss visitors — a head count could miss all the children, for instance, or the customer flows could by-pass the counter completely. 
  • Beam-interrupt counters that try to count the number of legs and divide by two have their issues. Back when we installed them at Rochdale we'd heard the urban myth about what happens when you walk past with a ladder. Having enquiring minds we tried it with a seven-rung stepladder and discovered that there was some truth in the story so long as the ladder was being carried horizontally by two toddlers, so we didn't worry about that too much. We did worry that one toddler with little legs = one interrupt = half a visitor. And that a child running into a library didn't get counted (children run into good libraries because they're exciting). People with walking sticks or wheelchairs seemed to be inconsistently recorded. So the figures were only really useful to us as rough ballpark figures.
  • Things happen, so there need to be ongoing checks on the reliability of the equipment, the data and the procedures for collection and collation.
If you've got an automated visitor count system in a shared-service building it's a bit more complicated.
  • The visitor count is potentially useful for the operational management of the building but not really for the library service. 
  • Do you include or exclude the people who came in to renew their taxi licence or claim housing benefit or have an appointment with the nurse in the clinic? 
    • If you include them how is this data useful for the management of the library service? 
    • If you exclude them how do you capture the visitors who come in for a taxi licence/HB enquiry/clinical appointment then take advantage of the fact there's a library in the same building? 
  • How do you exclude the people coming in for meetings with other services? 
  • How do you exclude the passage of staff from the other services? 
It gets messy quickly.

And then there are manual counts…

What's wrong with manual counts? Well:
  • There's the timing of them. It's unlikely that you'll have the resources to do the count all day every day. If you have, you'll find a lot of other things for them to be doing at the same time. (If you *do* have an FTE devoted exclusively to counting visitors you need your bumps feeling.) The data will be a sample. It would be easier to do the sampling during the hours when the library's relatively quiet, but that would invalidate the sample, so the busy hours have to be covered too, with the inevitable stresses of distraction and confusion.

  • Then there's the seasonal variations. And do you want to base your annual count on that week when you've arranged for the workmen to come in to fix the heating? And so on.

    You could take the sensible view that you're not going to extrapolate the figures, you're just going to do a year-on-year comparison of counts conducted the same way at the same point in the calendar every year. Which is useful if the figures are purely for internal use or published as trends rather than absolute data.

    If that absolute data's used to compare and contrast with another library authority it becomes a lot less useful as you're comparing apples with pears:
    • The methodology may be different. 
    • There may be good local reasons for differences in the seasonal variation — half term holidays at schools being obvious examples. 
    • There may be differences in the library calendar — the reading festival may be that week, or it could be the breathing space after last week's reading festival. Back in the nineties our managers were concerned that visitor counts could be distorted by events in the library so, ironically, the two weeks of the visitor count were the only ones with no events in the library, no class visits, no author visits, etc.
  • Then there's the counting. A manual visitor count is easy when the library's quiet. When it's busy you're too busy dealing with your customers to count the visitors. You can try and do a catch-up later but that's always going to be an approximation based on memory and chance observation. And your finger may slip on the clicker you're using for the count. Or you could be one of those people who sometimes have four bars in their five-bar gates.
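The "trends rather than absolutes" approach is easy to sketch: with a consistent method, the year-on-year change is meaningful even when the absolute counts are rough (the figures below are invented):

```python
# Invented sample-week counts for one library, counted the same way at the
# same point in the calendar each year.
sample_week_counts = {2013: 4120, 2014: 3980, 2015: 3890, 2016: 3700}

years = sorted(sample_week_counts)
# Year-on-year fractional change: usable as a trend even though the
# absolute counts are only rough approximations.
changes = {
    curr: (sample_week_counts[curr] - sample_week_counts[prev]) / sample_week_counts[prev]
    for prev, curr in zip(years, years[1:])
}
```

The same arithmetic applied to two different authorities' counts is where it breaks down, because the underlying methods differ.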
So there are issues with the visitor count data.

Why is this data meaningless?

A person has come into the building. And…? The visitor to the building isn't necessarily a user of the library or a customer of the service, any more than the chap who gets off the train at Waverley Station is necessarily a Scotsman.

There were forty visitors to the library:
  • Three people borrowed some books
  • Three people used the computers
  • One came to read the electricity meter
  • One came to sort out the radiator that hasn’t been working
  • One came to deliver a parcel
  • One came to use the loo
  • A drunk came in to make a row and throw his shoe through the window
  • A policeman came in to take a statement about it
  • A building manager came in to inspect the damage
  • A joiner came in to board up the window
  • A glazier came in to replace the window
  • A cat ran in, pursued by:
  • A dog, pursued by:
  • The dog’s owner, pursued by:
  • Three children who followed to see the fun
  • We don’t know what the rest did.
A visitor comes into the library.
  • What did they do? 
  • What did you do? 
  • Was it any good? 
  • Were the right resources available at the right time for the right people? 
  • Did the visitor engage with the service at all? 
The visitor count tells you none of this. So how can it be any sort of reflection of the performance of your service? You can't use data about people you don't know have engaged with your service to measure its performance: the only information you have is about the people; you have none about the service delivery.

Attendance figures are important to sporting venues because attendance generates income. They don't determine the trophies a team collects in the course of a season. Nor do you see schools with banners proclaiming: "According to OFSTED more people walked through our front door than any other primary school in Loamshire!"

Visitor counts are not performance data. Performance is about the delivery of outcomes, not throughputs.

Wednesday, 17 August 2016

How many?

I've started doing some work with the Libraries Taskforce. I'd been to one of their workshops and it was pretty apparent that there was a lot of work needing doing, fewer than a handful of people to do it, and no easy way for them to do it all on their own. I've got some time now that I've retired from Rochdale Council so I asked them if they needed a hand with anything and they said yes please. So I'm lending a hand with the work strand that's hoping to develop a core data set for English public libraries that can be openly available for both public use and operational analysis. It's a voluntary effort on my part; it's something I'm interested in and have been impatient about, and it's a piece of work that should have some very useful outcomes.

Whenever you start talking about English public libraries data the elephant in the room very quickly makes its presence known. Before we can talk credibly about anything very much there is one inescapable question desperately needing an answer:
Just how many English public libraries are there anyway?
There is no definitive answer. There is no definitive list. There are at least half a dozen well-founded, properly researched lists. They each give a different answer and when you start comparing them you find differences in the detail. There are perfectly valid reasons for this:
  • Each had been devised and researched for its own purposes without reference to what had gone before. Each started from scratch and each had a differently-patchy response from library authorities when questionnaires were posted.
  • This data's not easy to keep up to date at a national level — especially these days! So some libraries will have closed, a few will have opened, some will have moved and some will have been renamed. 
  • It wasn't always clear just how old the lists were. Some had been compiled as part of some wider project and there wouldn't necessarily have been the resource available to do any updating anyway.
So the decision was made to tackle this head on, settle it once and for all, and let the world move on. They were a few weeks into this work when I signed on. Very broadly, here's the process:
  • Julia from the Taskforce, who has infinitely more patience than me, trawled every English local authority's web site for the details of their public libraries.
  • Between us we scoured the other lists and added any libraries we found in there that we couldn't find in Julia's list.
  • We then went through this amended list to see if we could identify any points of confusion, for instance where "Trumpton Central Library" has moved from one place to another or where "Greendale Library" has become "The Mrs Goggins Memorial Information and Learning Hub."
  • The Taskforce has sent each library authority a list of what we think are their libraries asking them to check to see whether or not these details are correct.
  • The results will be collated and the data published by the Taskforce.
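The list-merging step above can be sketched in code. This is only an illustrative sketch, not the Taskforce's actual method: the library names are hypothetical (borrowed from the post's own examples), the normalisation is deliberately crude, and a fuzzy-match threshold stands in for the human judgement that was actually applied.

```python
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    """Crude normalisation: lower-case and strip common suffixes."""
    name = name.lower().strip()
    for suffix in (" library", " information and learning hub"):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
    return name

def merge_lists(master: list[str], other: list[str], threshold: float = 0.85) -> list[str]:
    """Add to `master` any entry from `other` that doesn't fuzzily
    match something already in the merged list."""
    merged = list(master)
    for candidate in other:
        best = max(
            (SequenceMatcher(None, normalise(candidate), normalise(m)).ratio()
             for m in merged),
            default=0.0,
        )
        if best < threshold:
            merged.append(candidate)
    return merged

# Hypothetical lists from two different surveys
master = ["Trumpton Central Library", "Greendale Library"]
other = ["Trumpton Central Library", "Camberwick Green Library"]
print(merge_lists(master, other))
```

In practice the confusing cases, renames and relocations, are exactly the ones a similarity score can't settle on its own, which is why the checking went back to the library authorities themselves.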
Ten years ago this would have been pretty straightforward. These days the picture is complicated by the various forms of "community library" that have sprung up over the past few years. These run the gamut from "this library is part of the statutory provision though it is staffed by volunteers some of the time" all the way to "we wish them well on their venture but they're nothing to do with us." So where a public library has become a "community library" of one sort or another that needs to be indicated in the data.

Will this list be 100% correct? Probably not at first: this is a human venture after all. But even if it's only 98% correct in the first instance it should be treated as the definite article. It will then need to be corrected and updated as a matter of course; if that's devolved to the individual library authorities the work becomes manageable and the data becomes authoritative.

Why should anyone bother?

What's in it for anyone to keep their bit of this list up to date and details correct? In my opinion:
  • It's basic information that should as a matter of principle be available to the public.
  • In the past year alone, this question has tied up time and effort that could have been more usefully-occupied. All those enquiries, and FoI requests, and debates about data that could just be openly-available and signposted whenever the question arose.
  • It is essential to the credibility of any English public library statistics. If the number of libraries is suspect then how trustworthy are any of the statistics being bandied around? If the simplest quantitative evidence — the number of libraries — is iffy then how much faith can be placed in quantitative or qualitative evidence that's more exacting to collect?

    For instance, counting the number of libraries within a local authority boundary if you're responsible for supporting or managing them is a piece of piss. Reliably counting the number of visitors to any one of those libraries most definitely isn't — I have 80% confidence in the numbers coming out of any automated system (not necessarily due to technical issues) and to my mind if you're relying on manual counts you may as well be burning chicken feathers. So when I hear that visits to English public libraries have dropped by a significant percentage over a given number of years I may be prepared to accept this in the light of a wider narrative, personal observation and anecdotal evidence but I have no empirical reason to know that this is the case. 
That's why.

Sunday, 24 July 2016

Visible security

A long, long time ago, back when we first put the People's Network into Rochdale's libraries, I got worried about the physical security of all the kit we were putting out there. A few months before we'd had a break-in at Langley Library (back when it was in a stand-alone building next to the bus stop): somebody had drilled through a wall panel into the stock room and pinched a couple of the "homework" PCs we had in the children's library. So I worried about it a bit.

To my mind there's two types of security:
  • Rendering something for all intents and purposes non-existent to anybody who isn't authorised to know about it.
  • Something in your face that says: "We both know there's something here but it would be a pain in the arse for you to try and have away with it."
In a library context where we were wanting everyone to know there were PCs for public use invisibility wasn't an option so we needed something conspicuous to put off the scallies and sneak thieves.

Somebody, I can't remember who, pointed me in the direction of a chap called Peter Radcliffe, trading locally as Nexus Computers, who sold metal computer safes. There was nothing subtle about these: they were made of plate metal and the fittings used to fix them to the furniture were uncompromisingly industrial. They did the job brilliantly. Only one thing bothered me.

Back in those days computers came in three colours: "white" (beige), grey (beige) and beige (grey). Dead boring. About this time Rochdale was in the early stages of a complete refurbishment of nearly all its libraries. The aim was to make them lighter, brighter and more colourful and I wasn't much keen on installing a lot of battleship grey boxes into this bright new landscape. So I asked Peter if they were available in any other colours. He came back with a paint sheet from the metal-bashing shop he was working with.

"I'll have that," I said.
"Are you sure?"
"Yes. In gloss finish."
"You're mental!"
"Can you do it?"
"The customer is always right. We can do it if you really want it. Are you really sure? Really?"
"Yes, please."

The colour? It was

  Signal Violet

[Photo: a bank of PCs at Alkrington Library]

And so it came to be.

I got a bit of stick about it. Not least because I hadn't spent any time whatever consulting anyone about it (ordinarily I'd accept a good shin-kicking for that but I only had four months to do a complete implementation of the People's Network from scratch across the whole borough and we had just had the one and only planning meeting where we had spent three hours watching a debate about the position of a chair in a particular library). But I stuck to my guns. Still do, in fact:
  • This was aggressively-visible security.
  • It fitted in with the bright, colourful feel of the libraries.
  • They were an easy visual cue. We hadn't consciously gone in for any branding at that stage but it provided a consistent, obvious "Here be computers" message to the libraries' customers.
If I had my time again would I make that same decision? Dead right I would. And Peter still thinks I was barmy.

Wednesday, 20 July 2016

Archaeology: Library staff training ideas

I was digging round on an old laptop looking for a couple of photos when I bumped into this. Back in 2007 I was the library service's union rep, amongst my very many other sins. At that time Lifelong Learning UK (LLUK) was looking at the options for public library staff training. One of the stakeholders being consulted was our union, Unison, and the consultation documents were passed along to local reps. After a few conversations with members in Rochdale's libraries this is the response they agreed that I should send back. We live in a different world now but even in these austere times I think there would be scope for adopting some of this. I shan't be holding my breath, mind.

LLUK consultation on public library training

We feel that this is potentially a useful opportunity for addressing the training and development needs of public library staff while they try to provide relevant and appropriate services to their customers in a fast-changing world. There are, however, three issues which we feel could compromise, or even completely de-rail, the outcomes of this exercise if they are not addressed:

(1) Currently the only reward system for excellent public librarians is for them to move into generic management. This is perverse as:

  • An excellent librarian is not necessarily a good generic manager;
  • If the United Kingdom is serious about wanting to be a player in the Knowledge Economy then it cannot afford to waste the community-level knowledge management, information literacy and reading development skill sets of the public librarian. Why go to the trouble of sending somebody to library school if they're going to spend all day doing sickness returns and reporting building repairs?

CILIP's current position on this is unfortunate: it is very keen to promote librarians as professionals but this is the only profession where personal advancement is predicated on the abandonment of professional practice.

(2) The limited opportunities for career development of public librarians effectively impose a glass ceiling on the career development of non-librarians in the public library sector. This makes the recruitment and retention of staff increasingly difficult. There is another undesirable effect: there are excellent managers in all sectors of the economy who are not librarians; if the staff resources of the public library sector include non-librarians with the potential to be excellent managers we should want to develop and keep their skills and abilities rather than stifle or lose them. There needs to be a career development path for non-librarians that rewards the excellent work that many of them do in the public library sector and provides the sector with the opportunity to use their talents.

(3) Public libraries are very front-line-focused, often to the detriment of support and training activities. There is no point in there being an excellent training and development package in place if staff do not have the chance to take up the opportunity. As we saw with the NOF-funded training, the need to cover the needs of the service can turn a major opportunity into a worry for managers and a source of resentment for front-line staff. This is especially true in authorities like Rochdale where scant staff are thinly spread over many service points. If staff can't take up the development opportunities then the exercise becomes at best an irrelevance and at worst a bad joke and an impediment to good staff relations.

It is also worth flagging up a potential abuse of the outcomes of this exercise: there is the danger that any qualification will become another recruitment hurdle rather than a career development opportunity. We already see nationwide that an essential requirement for ECDL is being used as the lazy recruiter's shorthand for "must be able to demonstrate computer literacy and we don't want the bother of working out how to define our needs in any measurable way, nor do we want to provide ECDL training for our staff." ECDL has become the modern equivalent of "must have six O-levels." Anyone with a degree in computing but no ECDL need not apply. One of our members attended a national event on the future of public libraries and was shocked that a workshop on staff training needs became a bunch of chief librarians moaning that not enough people with ECDL apply for jobs in libraries. It is important that qualifications should be recognised as proof of some degree of attainment but it is also important to recognise that there is more than one way to demonstrate most skill sets.

Given LLUK's brief it is unreasonable to expect that it should be able to tackle the root and branch structural issues involved in (1) and (2), especially given CILIP's current support of the status quo. However, we do feel that LLUK can usefully call for the need to retain and reward the public librarian's skill set at a community level. LLUK can also usefully acknowledge the importance of, and seek to exploit, the skill sets, experience and dedication of all public library staff in the informal learning economy. We also feel that any training and development outcomes should reflect the needs of the public library service, not the particular management structures.

Starting from where we are the challenge is to provide career development opportunities for all library staff without making librarians feel that their qualifications are worthless. Luckily, in one key area the obvious training path does precisely that. 

We propose that LLUK put together a qualification for Public Library Management, covering three core elements:

  • The public library service: the philosophy and aims of the service; good practice models for the delivery of Information Literacy, Reader Development and Cultural Identity at strategic and community levels; the statutory basis for the services we provide (not just the Public Library Act!); the government-level matrix management of the public library service, including the challenges and opportunities in the statutory bases of every governmental department; national and international public library NGOs; public library performance measurement; and good practice models for cross-sectorial working with academic organisations and with the voluntary and private sectors.
  • Public administration: local government organisation, philosophy and finance; working with elected members; statutory controls; good practice models for interdepartmental working; local government performance management and development; and effective (and appropriate) lobbying.
  • Management skills: project management; personnel management and development; financial planning and control; performance measurement; resource procurement; writing business plans, funding bids and reports; and good practice models for strategic and tactical planning, operational management and delivery.

This qualification would provide a good grounding for people wishing to move into the management of public library services. The advantage to the librarian would be that they will already have covered parts of this curriculum in their library training and so will start the course with their feet a couple of rungs up the ladder. The advantage to the non-librarian would be that there would be a ladder in the first place. The advantage to the service and, importantly, its customers would be that the people running the service will have had the opportunity for a more robust foundation to their skill set than is often currently the case.

We also suggest that LLUK look at the provision of courses and qualifications addressing front-line skills such as:

  • Customer care, including dealing with enquiries effectively; applying appropriate limits to delivery (not getting out of your depth and how to tell a customer that there are some things the library can't, or won't, do and they can't have everything they want when and how they want it in this life); assertiveness skills; and presentational skills.
  • Activity and event management, including small-scale project planning and management; marketing and promotion; health and safety; addressing the audience; keeping it practical; delivering the event; and reporting back to interested parties.
  • Community development, including how to work with community groups; promoting and providing the library as a venue; and promoting and providing library services outside the library.
  • Computer skills, including training in the use of computers in a public library context (contra the standard ECDL training, where people who are working day in, day out with sophisticated databases in their library management systems are told that they don't know about databases if they don't know how to build a flat table in MS Access); how to support public use of computers; e-learning and e-government taxonomies and metadata standards; Web 2.0 tools; etc.

It should be noted that a lot of existing courses already cover these areas. Unfortunately they are provided by very many different organisations and at irregular intervals and it can be a full-time job just tracking down what's available. It is also unfortunate that so many of the courses that are available are provided only at London venues. The cost of overnight stays or exorbitant rail fares is a barrier to the take-up of those courses. It would be useful if some sort of co-ordinated pattern of training held at regional venues could be established.

Making staff available for training is a major issue. One solution would be for DCMS or MLA to fund additional staff cover for this purpose. Experience would suggest that this would be both inadequate and unsustained. An alternative would be for one of the CPA-related public library performance indicators to be a measure of the training time per capita of staff, with due safeguards for what constituted "training time." This would give local authorities a financial incentive to provide training cover for library staff. By happy accident it would also give library managers an argument against freezing or cutting front-line vacancies for budgetary purposes. Or else we could carry on as we are, in which case LLUK is wasting its time.

Wednesday, 8 June 2016

Learning from experience

When you're doing any sort of serious developmental work, whether it's solving a problem or exploiting a new opportunity, you have to start by defining your preferred outcome then look at all the possible options for getting you there. One of the things that constantly dismayed me when working in public libraries was just how limited "all the possible" usually was, and that these limitations were often imposed as a matter of principle rather than practice. Often summed up with a look of complacent contentment and the mantra: "Ah well, you see, they don't work the same as libraries." I still bump into this thinking quite regularly on the lists and social media, often as a dogmatic statement along the lines of "Libraries have got nothing to learn from…" This is, of course, the veriest nonsense: libraries can learn a great deal from other business operations, if only the reasons why adopting a particular policy or process would be a bad idea in a public library context (and that aren't "Ah well, you see, they don't work the same as libraries.")

The other reason why it is nonsense is that although a business operation may be very different to yours some of the operational functionality requirements may be very similar, or the same. There's no particular philosophical or ethical difference between opening the public doors to a library first thing in the morning and opening the public doors of a shop, for instance.

The exciting thing about not imposing any constraints on where ideas can come from is that you can often find alternatives to the organisation's "default setting" that are cheaper and more effective to apply.

The responsibilities in my last job included a transport management system called Tranman. Very generally speaking it was used to manage the council's fleet of vehicles and the jobs done in the vehicle workshop. Some of the jobs done in the workshop were part of the routine maintenance work included in the service level agreements with the council departments using the vehicles. A lot weren't: repairs necessitated by accidents or driver abuse were chargeable extras and the workshop also did a lot of private repair work, services and MOTs as part of the operation's income generation. Extremely different to the business operation and purpose of the public libraries I was also supporting.

They had a problem. A lot of the jobs were taking ages to complete and invoice on the system because they were missing information about the parts that were issued to them. Even worse, some jobs had been completed and invoiced without including the rechargeable costs of the parts. This was playing hob with their cash flow and making the accountants cross. So we had a look at it and basically the situation was:
  • There was a manual process involving somebody trying to catch up with a pile of paperwork at the end of the month
  • There was a technical solution which could automate the stock issue but wasn't being used
What was needed was for the parts to be issued to the jobs in as close to real time as possible and that wasn't going to be done manually from the paperwork.

We had a long, hard look at the technical solution. None of us were big fans. It was hard work to get set up and running and, frankly, was over-engineered for its purpose. On paper it looked great:
  • A piece of software on the stores manager's PC generated barcoded labels for each part, the barcode including the appropriate part number and bin, together with a human-readable description.
  • The parts bins were labelled up accordingly.
  • When a part was to be issued to a job the stores man entered the job number into a PDA, read the appropriate barcode and entered the number of items being issued.
  • The PDA was then plugged into a console attached to the stores manager's PC and the data uploaded into Tranman.
In reality it wasn't so hot:
  • The barcoded labels soon got scruffy and unusable.
  • We could make the barcodes on the labels as big as we wanted but the descriptive text was always tiny and difficult to read, especially in some of the darker corners of the store room.
  • The keys on the PDA were very small (6mm across), which was difficult enough for me with my never-done-a-stroke-of-hard-labour-in-his-life delicate fingers and not a lot of fun for workers who'd spent years bashing their finger ends on bits of metal on cold winter's days. The incidence of input error was high.
  • Uploading the data was a bit hit and miss at first: it took quite a while for us to iron out the problems and making sure the solutions were properly packaged into the virtual application. (I for one was praying nothing happened to that PC as I wasn't confident we wouldn't have at least a few of these problems again with a new deployment of the software.)
  • It wasn't easy to see where and when something went wrong. The lack of transparency meant that you couldn't do any quality control to make sure that the right parts actually were being issued to the right job.
Something had to be done. 

We looked at various somethings: we spent far too long trying to sort out better barcode labels; then we had a fruitless quest for a more user-friendly PDA that could do the job. Over the next few months, in between a pile of other pieces of work we were doing with this system, we tweaked this process as best we could but in the end it felt more like we were working through levels in an arcade game rather than getting anywhere with improving the process and addressing the cash flow problem. By this stage I was thoroughly fed up and went and had a sulk in my tent. On reflection it was apparent that all we had been doing was following the same ploughed rut over and over again. 

We'd let our problem-solving be dictated by the "solution" rather than either the problem or the desired outcome.

So I looked again. And wondered why the issuing of parts to a job in a vehicle workshop should be so very different from the issuing of library books to a borrower…

The solution I came up with was cheap, not especially elegant but did the job and could be seen to do the job. There were two elements:
  • A standard barcode reader, same as you'd see in a shop or library
  • A folder containing sheets of labels including all the information needing to be entered into the job record:
    • The description of the part in a large, readable font
    • The part number as a barcode (using the "3 of 9" font)
    • The bin number as a barcode
    • The store location as a barcode
The process was:
  • The request comes in for the part for the job
  • The stores man gets the part then opens the job record in Tranman
  • The part number, bin number and location are input by reading the barcodes from the appropriate sheet
  • The only manual input is the number of parts
  • The parts are issued to the workshop
For out-of-hours working the process was:
  • The request for the part is recorded on a paper job sheet
  • The fitter working on the job gets the part
  • Next working day the stores man issues the part in Tranman as above
Getting a working test sheet turned out to be a bit of a problem because initially I was doing it in MS Word and the formatting it imposed was disabling the begin and end codes of the barcode (an asterisk *). Jacking it in and doing the job in Wordpad gave us some test sheets we could use to demonstrate proof of concept. I used Crystal Reports for the final print out of all the parts.
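The asterisk problem is easy to state: with a Code 3 of 9 barcode font, each value is printed as ordinary text wrapped in asterisks, which the font renders as the start/stop bars, so anything that "helpfully" reformats the asterisks breaks the barcode. Here's a minimal sketch of the label content in Python, with hypothetical part data (the real sheets were done in Wordpad and Crystal Reports):

```python
def code39(value: str) -> str:
    """Wrap a value in the asterisk start/stop characters required by
    a Code 3 of 9 barcode font. The font itself renders the bars."""
    allowed = set("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-. $/+%")
    value = value.upper()
    if not set(value) <= allowed:
        raise ValueError(f"not encodable in Code 39: {value!r}")
    return f"*{value}*"

def label_lines(description: str, part_no: str, bin_no: str, location: str) -> list[str]:
    """One label: human-readable description, then the three barcoded values."""
    return [description, code39(part_no), code39(bin_no), code39(location)]

# Hypothetical part record
for line in label_lines("Oil filter, 3.5t tipper", "PF1234", "B17", "MAIN"):
    print(line)
```

The character-set check matters because plain Code 39 can't encode lower case or most punctuation; a value that fails the check needs recoding, not a bigger font.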

Sadly, this is as far as I got with it by the time I was leaving. We'd checked that each step should work as expected but we'd not given the process much hammer and inevitably there'd be some snagging to do. One thing I'd have liked to have done, given the time, would be to modify the Crystal Report so that the first few pages were limited to the high-volume items, to save the stores men having to routinely wade through so many pages.

So that's how I applied a bit of standard public library functionality to a problem in a completely other environment.

So what are the takeaways from this?
  • Don't mistake differences in business operation for differences in operational functionality. In this case, issuing an item in real time is issuing an item in real time.
  • If your problem-solving energies are being devoted to the solution perhaps you've got the wrong solution.
  • To solve a problem you have to know what problem you're solving and the outcome you desire of the solution.