Source: The Conversation (Au and NZ) – By Armin Alimardani, Lecturer, School of Law, University of Wollongong
It’s been nearly two years since generative artificial intelligence was made widely available to the public. Some models showed great promise by passing academic and professional exams.
For instance, GPT-4 scored higher than 90% of the United States bar exam test takers. These successes led to concerns AI systems might also breeze through university-level assessments. However, my recent study paints a different picture, showing it isn’t quite the academic powerhouse some might think it is.
My study
To explore generative AI’s academic abilities, I looked at how it performed on an undergraduate criminal law final exam at the University of Wollongong – a core subject students must pass to complete their degree. A total of 225 students sat the exam.
The exam ran for three hours and had two sections. The first asked students to evaluate a case study about criminal offences – and the likelihood of a successful prosecution. The second included a short essay and a set of short-answer questions.
The test questions evaluated a mix of skills, including legal knowledge, critical thinking and the ability to construct persuasive arguments.
Students were not allowed to use AI for their responses, and they sat the assessment in a supervised environment.
I used different AI models to create ten distinct answers to the exam questions.
Five papers were generated by simply pasting the exam question into the AI tool, with no additional instructions. For the other five, I gave detailed prompts and relevant legal content to see whether that would improve the outcome.
I handwrote the AI-generated answers in official exam booklets under fake student names and numbers. These were then mixed with actual student exam answers and given, anonymised, to five tutors for grading.
Importantly, when marking, the tutors did not know AI had generated ten of the exam answers.
How did the AI papers perform?
When the tutors were interviewed after marking, none of them suspected any answers were AI-generated.
This shows AI’s potential to mimic student responses – and how hard it is for educators to spot such papers.
But on the whole, the AI papers were not impressive.
While the AI did well in the essay-style question, it struggled with complex questions that required in-depth legal analysis.
In other words, even though AI can mimic human writing style, it lacks the nuanced understanding needed for complex legal reasoning.
The students’ exam average was 66%.
The AI papers that had no prompting, on average, only beat 4.3% of students. Two barely passed (the pass mark is 50%) and three failed.
The papers generated with detailed prompts did better, beating 39.9% of students on average. Three were still unimpressive, scoring 50%, 51.7% and 60%, but two did quite well: one scored 73.3% and the other 78%.
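The “beat X% of students” figures above are percentile comparisons: the share of student marks falling strictly below a given AI paper’s mark. A minimal sketch of that arithmetic is below – note the sample marks are illustrative placeholders I invented, not the study’s actual distribution.

```python
def percent_beaten(score: float, student_scores: list[float]) -> float:
    """Return the percentage of students whose mark is strictly below `score`."""
    if not student_scores:
        return 0.0
    beaten = sum(1 for s in student_scores if s < score)
    return 100 * beaten / len(student_scores)

# Hypothetical marks for illustration only -- the real 225 marks are not published.
sample_marks = [45, 55, 60, 66, 70, 72, 80, 85]

print(percent_beaten(78, sample_marks))  # a well-prompted AI paper -> 75.0
print(percent_beaten(50, sample_marks))  # a barely passing unprompted paper -> 12.5
```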
What does this mean?
These findings have important implications for both education and professional standards.
Despite the hype, generative AI isn’t close to replacing humans in intellectually demanding tasks such as this law exam.
My study suggests AI should be viewed more like a tool, and when used properly, it can enhance human capabilities.
So schools and universities should concentrate on developing students’ skills to collaborate with AI and analyse its outputs critically, rather than relying on the tools’ ability to simply spit out answers.
Further, to make collaboration between AI and students possible, we may have to rethink some of the traditional notions we have about education and assessment.
For example, when a student prompts, verifies and edits an AI-generated work, we might treat that as their original contribution and view it as a valuable part of learning.
Armin Alimardani has a short-term, part-time contract with OpenAI as a consultant. The organisation had no input into the study set up or outcomes and did not fund the research. The views expressed in this article are the author’s own.
– ref. I got generative AI to attempt an undergraduate law exam. It struggled with complex questions – https://theconversation.com/i-got-generative-ai-to-attempt-an-undergraduate-law-exam-it-struggled-with-complex-questions-240021