Generative AI and HE assessment: What do we need to research?

By Jo Walton

Would you like to collaborate on something?

There are a lot of fascinating ongoing conversations about the use of generative AI within HE, and especially the issues it raises for assessment. Kelly Coate, writing for WonkHE, characterizes it as a moral panic, but still a potentially useful one:

If in academia we value things like authenticity, integrity, and originality, we need to be able to articulate why those values remain important in the age of generative AI. Doing this can only help students to make meaning from their higher education learning experience – in fact, it’s really what we should have been doing all along.

Our sense is that there are now many well-defined questions that would benefit from more rigorous study, to move these conversations forward. How good are educators at identifying AI-generated text? How good are educators at evaluating their own ability to identify such text? What differences emerge across different assessment designs? How about different disciplines? If AI is deliberately included in the assessment, how accurately can assessors evaluate whether it has been used appropriately? How does the variation in grades assigned compare across non-AI-assisted work and various types of AI-assisted work? These are all questions we can address from our own experience and miniature experiments. But it would be valuable to conduct some studies at scale, and have some data to debate.

SHL Digital would be keen to hear from colleagues, at Sussex or beyond, who are working on these issues or would be interested in collaborating. Get in touch with j.c.walton@sussex.ac.uk.
