Monday, November 30, 2020

Turnitin’s Gradescope Workshop at ASCILITE 2020 Conference

Greetings from the ASCILITE 2020 Conference Online, where I am taking part in a Turnitin Gradescope workshop. Two papers I helped with are being presented tomorrow, but first I am brushing up my grading skills. ANU had Turnitin, and I have used its GradeMark tool, but that is much the same as online assignment marking with Moodle (apart from the integration with Turnitin's plagiarism detector). Hopefully Gradescope does more than this.

I worry that online and semi-automated grading tools may worsen the fixation students and teachers have on grades. These tools look objective and analytical, but grading is neither. I am reasonably confident I can decide that a piece of work is below, at, or above the required standard. However, I am not sure I can really say where the work sits on a seven-point scale, let alone give it a mark out of 100.

What would be useful is a way to evaluate the quality of the grading itself, especially where multiple graders are used.
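One simple measure of the kind of thing I have in mind is inter-rater agreement. As a rough sketch (plain Python to illustrate the idea, not a feature of any particular tool), Cohen's kappa compares how often two graders agree with how often they would be expected to agree by chance:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Agreement between two graders, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label]
                   for label in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two graders marking the same ten scripts pass/fail
grader_1 = ["pass", "pass", "fail", "pass", "fail",
            "pass", "pass", "fail", "pass", "pass"]
grader_2 = ["pass", "fail", "fail", "pass", "fail",
            "pass", "pass", "pass", "pass", "pass"]
print(f"kappa = {cohen_kappa(grader_1, grader_2):.2f}")  # kappa = 0.47
```

A kappa near 1 means the graders largely agree; a kappa near 0 means their agreement is no better than chance, which would suggest the rubric, or the grading, needs attention.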

Despite this being a vendor-provided workshop, there was a minimum of sales pitch. However, it would have been useful to compare Gradescope to commonly used online assessment tools, such as the Quiz, Assignment, and Workshop tools in Moodle.

Gradescope has something called a "dynamic rubric", which allows the grader to create and adjust the rubric during marking. I have some difficulties with this idea. First, conventional wisdom, and natural justice, say that students should be given the rubric before the assessment, so they know how they will be graded; if the rubric is made up during marking, the students cannot have had it beforehand. Second, it removes the discipline which forces the assessment designer to think carefully, in advance, about what they are trying to assess and how it will be marked. There could also be a lack of consistency if the rubric changes partway through the grading. However, where the grader has not been provided with any rubric at all, a dynamic one would be better than nothing. It would also be useful where a problem is found with a question after marking has started, such as the expected answer being wrong.
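If, as I assume, the consistency problem is handled by applying rubric changes retroactively (so that changing an item's value rescores every submission it was applied to), the mechanism would look something like the sketch below. The names are entirely hypothetical; this is my reading of the idea, not Gradescope's code:

```python
class DynamicRubric:
    """A minimal sketch of a dynamic rubric, as I understand the concept."""

    def __init__(self):
        self.items = {}     # rubric item name -> point adjustment
        self.applied = {}   # submission id -> set of item names applied

    def add_item(self, name, points):
        """Add a rubric item mid-marking; it becomes available for all scripts."""
        self.items[name] = points

    def mark(self, submission_id, item_names):
        self.applied[submission_id] = set(item_names)

    def adjust_item(self, name, points):
        """Change an item's value; scores are computed at read time,
        so every submission already marked with it is rescored."""
        self.items[name] = points

    def score(self, submission_id, base=10):
        return base + sum(self.items[i]
                          for i in self.applied.get(submission_id, ()))

rubric = DynamicRubric()
rubric.add_item("missing units", -1)
rubric.mark("student_1", ["missing units"])
rubric.adjust_item("missing units", -2)   # rescores student_1 automatically
print(rubric.score("student_1"))          # 8
```

Retroactive rescoring would address consistency within one marking run, but it does nothing for my first objection: the students still never saw the rubric in advance.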

Gradescope provides statistics overall and on individual questions, much the same as other online marking tools. What I had hoped for were ways of evaluating the quality of the marking between multiple graders, but there does not seem to be anything for this. There was also mention of the use of AI, but no details were given in the workshop. Overall, Gradescope seems to have the features I would expect in any grading tool.

Before creating a complex marking scheme, educators need to stop and consider the purpose of the assessment, and the cost and suffering it causes students. In most cases, school and university assessment is intended to find out if the student's skills and knowledge meet a required standard. We want citizens and workers who can function in a modern society. There is no need to rank all the students, or mark them to within 1%, for that: pass/fail is sufficient.

Universities might like an easy way to rank applicants, but is that really the job of schools, and are school assessments really much use for it? Similarly, university academics may want to use assessment to select students for advanced research courses, but few students will undertake these, so is it worth the cost of administering such fine-grained assessment to all students? Outside the university, no one much cares about university grades: as long as you passed, that is all that matters.
