Sunday, September 15, 2013

Learn to Live with Performance Metrics

In "Halt the avalanche of performance metrics" (Nature, 15 August 2013), Colin Macilwain makes a strident case that academics should oppose performance metrics, such as citation indices and other measures of the quality and quantity of research output. Similar arguments have been made opposing measures of the quality of university teaching, such as student feedback surveys. Macilwain's particular concern seems to be power moving from researchers to university administrators; he would prefer each discipline to be left to run its own affairs. Where academics fund their own research and do no teaching, that is fine. But if someone else pays (the university overall, a government funding body or a private charity), then some way needs to be devised to measure how effective the researcher is. In the past an old boys' network ensured that a few senior academics recommended their friends for grants. Such a system is not in the public interest, or that of the disciplines.

Similarly with teaching, some way needs to be devised to ensure the quality of the product. Regrettably, there are still university academics who see teaching as an unfortunate chore which keeps them from their research. Students are seen as a raw material to be refined through undergraduate and postgraduate courses, to produce a few postdoctoral researchers. The students who do not get through a PhD are considered a waste product to be discharged from the university, of no further interest. It is not surprising that academics with this attitude get a rude shock when the students give them low scores for teaching effectiveness, nor that the academics then try to limit the administration and publication of such surveys (no one likes to be told they are incompetent).

Macilwain singles out Snowball Metrics for criticism. This is a scheme where groups of academics can devise their own metrics and use software to collate them. There is a "Snowball Metrics Recipe Book" for the UK, and it would be interesting to see how applicable this is elsewhere (and how much it has been designed to favour the "distinguished group of institutions" which devised the metrics). Macilwain warns that academics are playing into the hands of those who wish to use these metrics to rate them. However, given that I am going to be rated anyway, I would prefer to have some say in how it is done.

Leading universities did not build their reputations on, as Macilwain says, "autonomous academics, working patiently with students". Leading universities have in place systems for the assessment and review of staff. These systems have grown up in an ad hoc way and may be all but invisible to the outside observer, but they do exist. Academics do not research or teach in isolation; these are group activities, and the group monitors the performance of its members.

Performance metrics seek to regularise and make more explicit the measures academics have always used. This can threaten those who have done well out of the informal systems of the past. There are risks in too-rigid measures which are blindly applied by funding and promotion bodies. But there are also risks in clinging to a system which does not accord with general standards in the community.

Recently I have been looking for a master's in education to enrol in, so as to learn more about online and distance education. As a consumer, I do look at the various rankings of programs (particularly avoiding those with below-average performance), but I will not simply blindly enrol in whichever has the highest score.
