In "Halt the avalanche of performance metrics" (Nature, 15 August 2013),
Macilwain singles out Snowball Metrics for criticism. This is a scheme where groups of academics devise their own metrics and use software to collate them. There is a "Snowball Metrics Recipe Book" for the UK, and it would be interesting to see how applicable this is elsewhere (and how much it has been designed to favour the "distinguished group of institutions" which devised the metrics). Macilwain warns that academics are playing into the hands of those who wish to use these metrics to rate them. However, given that I am going to be rated anyway, I would prefer to have some say in how this is done.
Leading universities did not build their reputations on, as Macilwain says, "autonomous academics, working patiently with students". Leading universities have systems in place for the assessment and review of staff. These systems have grown up in an ad hoc way and may be all but invisible to the outside observer, but they do exist. Academics do not research or teach in isolation; these are group activities, and the group monitors the performance of its members.
Performance metrics seek to regularise and make more explicit the measures academics have always used. This can threaten those who have done well out of the informal systems of the past. There are risks in overly rigid measures blindly applied by funding and promotion bodies. But there are also risks in clinging to a system which does not accord with general standards in the community.
Recently I have been looking for a master's degree in education to enrol in, so as to learn more about online and distance education. As a consumer, I do look at the various rankings of programs (particularly avoiding those with below-average performance), but I will not simply enrol blindly in whichever has the highest score.