There was some concern that any assessment would just be another stick to beat libraries with, like issue figures and visitor counts have become. Equally, would they become another set of targets to be gamed? Both are extremely valid concerns.
I think it's essential that public libraries have a solid suite of KPIs — not for comparison with so-called "peers" in a way that takes no heed of communities or contexts but for comparison with past performance and identifying strengths and weaknesses in the operation. But the "voluntary assessments" *shouldn't* be about KPIs: there are a couple of other useful functions they could perform.
One way would be to direct the stick upwards, towards the DCMS. It's vanishingly unlikely to happen but it could be that one of the Department's own performance narratives, published in its annual report, could include an assessment of the health of the national public library service derived from local returns.
- I don't like expenditure as a measure of performance (any bloody fool can spend money), but given that the audience for such a thing would be a political one that largely measures achievement by expenditure, one of the assessment measures could be expenditure per head of population, broken down into: investment in buildings; investment in skilled staff; investment in stock; and investment in community activities. (Did you spot the gear-shift there?)
- DCMS would — finally! — have to be able to report a definitive number of public libraries in the country, and any changes and trends.
- It would be interesting to see a national picture of:
    - The number of library open hours covered by paid staff
    - The number of library open hours covered by volunteers
- The number of unsupervised library open hours
    - The number of "daytime" (9am to 5pm) hours libraries are closed and left fallow
- And so on. I won't go into more detail because it's so unlikely to happen it's hardly worth the wishing for.
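To make the "national picture" idea concrete, here is a minimal sketch of how local returns on open hours might be rolled up by the DCMS. The field names and figures are invented purely for illustration; no actual return format exists.

```python
# Hypothetical local returns: open hours per year by category.
# Authority names and all numbers are made up for this example.
local_returns = [
    {"authority": "A", "paid_staff": 2200, "volunteer": 300,
     "unsupervised": 150, "daytime_closed": 400},
    {"authority": "B", "paid_staff": 1800, "volunteer": 900,
     "unsupervised": 600, "daytime_closed": 250},
]

# Aggregate each category across all reporting authorities.
national = {}
for ret in local_returns:
    for field, hours in ret.items():
        if field == "authority":
            continue
        national[field] = national.get(field, 0) + hours

print(national)
```

Even a crude aggregation like this would let the Department report trends year on year, which is the whole point of collecting the returns centrally.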
Another, possibly more plausible, function would use the assessment as a feedback mechanism for continuous service improvement (absolutely not a set of targets!). The assessments would evaluate, rather than merely monitor, the performance of the service and help direct local decision-making.
- The assessment criteria could be deliberately aspirational, even impossible to achieve: the assessment would evaluate the service's direction of travel and its impact on resources, staff and the needs of the communities it serves.
- Comparisons would be with past performance rather than against "peers," which would remove any unwelcome competitive friction between organisations that should be working collaboratively.
- It would also be harder to game the figures, and harder to coast on past glories, as assessments would report the direction of travel towards the impossible goal rather than the successful negotiation of an arbitrary obstacle.
- For example, one of the "impossible goals" could be 100% of the local population being active members of the library (however that would be defined). Last year Library Service A could have attained 56% active membership and this year 58%, while Library Service B managed 69% last year and 60% this year. Crude peer-to-peer analysis would suggest that Library Service B is performing better, but A is actually making a better fist of continuous service improvement — it's the rate and direction of performance, not the absolute figures, that are the measure of how the service is being managed and resourced; they'll all have different baseline starting points.
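The arithmetic in that example is trivial, but worth sketching to show how "direction of travel" reverses the crude peer ranking. The service names and percentages are the hypothetical ones from the example above.

```python
# Hypothetical active-membership figures from the worked example.
services = {
    "Library Service A": {"last_year": 56.0, "this_year": 58.0},
    "Library Service B": {"last_year": 69.0, "this_year": 60.0},
}

GOAL = 100.0  # the deliberately unreachable target: 100% active membership

for name, figures in services.items():
    change = figures["this_year"] - figures["last_year"]
    remaining = GOAL - figures["this_year"]
    trend = "improving" if change > 0 else "declining"
    print(f"{name}: {figures['this_year']:.0f}% active, "
          f"{change:+.0f} points ({trend}), "
          f"{remaining:.0f} points short of the goal")
```

On absolute figures B still ranks above A (60% against 58%), but the year-on-year change is +2 points for A and -9 for B, which is exactly the reversal the example describes.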
There is a problem with this idea: while it works well in many organisations and is pretty standard performance-management fare, I don't think our political environment is adult enough not to try to turn it into a badly-fitting set of targets and league tables. It would be nice to be shown to be wrong, though.