Thursday 2 April 2015

Measuring weight with a ruler: MOOCs

Massive Open Online Courses (MOOCs) may have been ridiculously over-hyped, and may or may not be game-changers in academia. To be honest I'm not much fussed by the commentary one way or the other; I like them and find them useful.

I have a confession, though: I've never yet finished one.

I'm one of the statistics that people pick up on to demonstrate "the failure" of MOOCs: the apparent high drop-out rate and the relatively low number of students getting the appropriate bits of paper at the end of the course.

So why am I a drop-out? Not because the courses aren't useful and interesting. The ones I've signed up for over the years have been instructive, informative, challenging and all that. I've read the materials and watched the videos. I've learned new stuff, found ideas I can apply elsewhere, heard interesting discussions and arguments. I've got what I wanted from each course, and nearly always a bit more besides. I just haven't felt the need to get the bit of paper at the end. If I want to take an examination, or get a certificate, I'll do so. But I didn't want to and, thank Heaven, I didn't have to. Which suited me fine, thank you very much.

The educational industry's turning of institutions into qualification mills, concerned with league tables and rankings built on the confusion between quantitative metrication and qualitative outcomes, is relatively new in the scheme of things. In part I see MOOCs as redressing the balance slightly: allowing the sharing of academic learning for its own sake, rather than as part of a Fordian production line of qualifications.

Using completion figures to demonstrate the apparent failure of MOOCs is a failure of statistics: applying an inappropriate measurement to a situation and deriving an answer to a different question entirely. In this case I'd argue that the value to the student lies in the exchange of knowledge more than in the attainment of a qualification, which is a different species of outcome altogether, one that needs a different type of measurement for meaningful analysis and conclusion.

Similarly, in the library world, we need to be careful with our metrics. I've said it before and I dare say I'll drone on about it on my deathbed: just because a particular set of statistics has been used for years doesn't mean it's necessarily all that important in assessing value. It could just be that it was the easiest (or even the only) thing that could be measured. The really important thing, always, has to be the value to the person at the receiving end. And that is never measurable by the passive aggregation of throughput stats.