Thursday, August 8, 2019

Potential Improvements and Rorts With New Australian University Performance Measures

Dan Tehan, the Australian Federal Minister for Education, has announced that funding to universities will be "performance-based". Measures will be: graduate employment, first-year student completion rate, student satisfaction with teaching quality, plus participation of Indigenous, low socio-economic status, and regional and remote students. This provides opportunities for better teaching, but also a risk of rorts.

There was a review process, and the final report is available. I was surprised by the relatively mild response from the Vice-Chancellors, who met with the Minister.

What might have worried the VCs was that the Minister said the "core business" of universities was "producing job-ready graduates with the skills to succeed in the modern economy". That is more the job of the vocational sector, including TAFEs. Universities have a far wider, long term role, even if just considering narrow economic outcomes. Universities produce inventions, and experts to implement them, but this process can take a decade, or more. Financial incentives which encourage universities to take a short term view could stop the flow of inventions and highly skilled people, stalling the economy.

An Example of Better Teaching

Linking funding to the new metrics could provide an incentive for universities to use better teaching techniques. As an example, last year I designed a learning module to teach students to reflect on their learning, by having them write a job application. I ran this last semester for ANU Master of Computing international students. The module uses the full range of scaffolded m-learning, blended, flipped, peer-assessed, group techniques I have been learning over the last six years, as a student of education.

This module was well received by the students, and is being run for all computer science students this semester in the ANU TechLauncher program. These students will have a better chance of getting a job as they will have been formally trained in looking for jobs and have had their application peer reviewed. The learning module is available under a Creative Commons license for free reuse.

The first-year student completion rate can be improved by better scaffolding of the education, project-based group work, and a blend of online and classroom education. These techniques are well known, and I explored some in my book "Digital Teaching In Higher Education".

Opportunities for Better Student Performance

Student satisfaction with teaching quality can similarly be improved with better teaching techniques, and in particular with better assessment. But the best way to improve the student experience, I suggest, is to employ trained, qualified educators. Academics will not willingly undertake teacher training and certification. One way around this is to incorporate the training in vocational degrees. Rather than treat teaching as an afterthought which an academic half-heartedly picks up after their formal education, make it part of the training of all professionals in their degrees.

Participation of Indigenous, low socio-economic status, and regional and remote students can be assisted with wider access to online education. This allows those with cultural, family, or work commitments to study without moving to a city. Also the rigorous design process required for online courses produces better courses, which can take into account the needs of non-traditional students. During my MEd studies I explored how to provide e-learning for high quality education.

One difficulty with online education is that it may not do well with the new metrics. Online students take longer to complete and drop out at a higher rate. This is not due to any inherent problem with the teaching format, but because these courses attract students who are excluded from campus programs. The same factors which stop them attending on campus also result in lower and slower completion rates.

Improved Teaching Techniques But Potential Gaming of New Metrics

New teaching techniques can improve completion rates and job outcomes. However, these may also be misused to game the system and manipulate the metrics for financial gain by unscrupulous operators. Coming up with reliable measures to base funding on will take considerable effort. The rorting of the vocational funding system shows how inventive people and organizations can be when it comes to exploiting an education funding system. Some obvious examples of how the new measures could be gamed:
  1. Graduate employment: The best students to have for vocationally oriented programs are those who already have a job, or have relevant work experience. Some educational programs require this, as an essential part of work-integrated learning (and some professional bodies require it as part of degree accreditation). Due to the difficulty of finding suitable jobs for students, some universities have set up their own consulting companies to employ the students. However, these measures could be misused to make the employment statistics look better.
  2. First-year student completion rate: Students who have already successfully completed a sub-degree program are much more likely to complete their degree. This can be done by having students undertake a certificate or diploma at a vocational education institution, in some cases associated with the university. Those students then get degree credit for their VET studies. Similarly, universities can offer credit for completion of low-cost online "MOOC" courses. Another approach is to enroll the students in a nested program, where they get a sub-degree qualification first. These are all good ways to improve student outcomes, but they can be misused to game the statistics. Students enrolled at VET, in online MOOCs, and in sub-degree programs do not count in the university degree statistics, so those who drop out are not counted.
  3. Student satisfaction with teaching quality: Progressive assessment, where the student is given small tests throughout their course, provides better feedback. This allows students who are not succeeding at a subject to withdraw early, and focus on other studies. I use this approach routinely in courses, and the students like seeing how they are doing. However, this might be used to make the student satisfaction scores look better. Students who fail a course tend to give lower satisfaction scores, but with progressive assessment, students withdraw before the end of the course. These students are not recorded as a fail, which is good, but they also do not get to fill in the student feedback survey, as it is administered at the end of the course, after they have withdrawn.
