The Australian Tertiary Education Quality and Standards Agency (TEQSA) has an Experts advice hub which is particularly useful with the move to online education. One useful piece of advice is "The prevention of contract cheating in an online environment" by Associate Professor Phillip Dawson, Deakin University. At first I found this a bit annoying, as it starts with three myths about contract cheating: that it is rare, that it can be designed out, and that it is impossible to detect. After reading these I concluded that TEQSA wanted me to give up on project-based progressive assessment and just set one exam at the end of semester.
But reading on, Professor Dawson does suggest assessment design approaches to help reduce the prevalence of cheating: reflections on practical work, individual work, and in-class tasks. The assessments suggested to attract the most cheating are those worth a lot of marks with short deadlines, such as take-home exams.
One of the key problems I suggest causes contract cheating is the lack of time and effort academics think they should put into assessment design and delivery. Like most, my first exposure to assessment was being asked to set exam questions and mark assignments, with no prior training. Completing a couple of courses on how to design and deliver assessment was a revelation. Much of what I had been doing turned out to be, at best, irrelevant, and in some cases counterproductive. Also sobering was the amount of time needed to do assessment properly.
I had assumed that assessment was an afterthought, tacked onto a course. Much of the frustration of academics perhaps comes from this assumption. Assessment should take up about half the staff time in a typical course. Once you accept this, the time it takes becomes less frustrating, as you expect it.
As the guide suggests, many small assessment tasks, with generous deadlines, place less pressure on students to cheat. Practical work, where each student has a different project and has to reflect on what they did, makes cheating harder. It also helps where the student has to explain what they did to the assessor. However, these approaches all take much more work to design, administer and grade. They also require skills which the average academic doesn't have, because these were not part of their teacher training (assuming they received any teacher training at all).
As an example, I once sat in a course planning meeting where we discussed the assessment of reflective e-portfolios. As the discussion progressed, I realized that of the dozen tutors and lecturers there, I was the only one who had ever completed a reflective e-portfolio as a student, and the only one with any formal training in how to assess them. Without that training and experience, tutors were assessing the e-portfolios as if they were project assignments.
If you set out to design the assessment for a course, allocating marks to small tasks and thinking about the time the student, and the assessor, will need, it is possible to design out most cheating. However, I suggest emphasizing the benefits, for students and for their teachers, of more realistic, better planned assessment.
One thing academics need to decide is why they are doing assessment. If a graduate needs particular skills and knowledge to undertake a professional role, then there is no need for more than a pass/fail test. If the assessment is to identify those who will undertake further advanced studies, then finer-grained assessment is needed. However, these two approaches can be mixed in the one course. If you don't want students asking for extra marks on every little assessment task, then have these small tasks count only for a pass (or whatever is considered the minimum acceptable level). Reserve the fine-grained marking for the important tests.
The obsession with research rankings, I suggest, could be countered by promoting education and impact ranking schemes. Rather than being a victim of rankings developed by media companies, universities could cooperate to create more useful measures which suit their interests. A good model for this is the “Webometrics Ranking of World Universities” which ranks ten thousand universities.
Australian universities failed to prepare for an international crisis preventing students from getting to campus, even when warned of this. There were relatively simple measures, using e-learning, which other countries had planned for and which some of us in Australia were ready with.
Universities are also not addressing the longer-term risk of competition for students, particularly from China’s Belt and Road Education Plan. Previously I worried this might cause a long-term decline in the competitiveness of Australian universities. However, due to COVID-19 and international tensions, Australia's universities may have only a few years to change the way they deliver education, if they wish to remain in business.