In the next few blog posts, we will be looking specifically at the topic of AI in university teaching. The first question will be to what extent tools for AI recognition should be used in examination papers. In the second article, we will look at the legal and didactic use of AI in the creation of OER. We then discuss prompt tips for high-quality OER and conclude the series with AI skills that will soon be mandatory for university employees.
Examination papers with AI
The debate surrounding the use of artificial intelligence (AI) in academic examinations is becoming increasingly important. It is feared that unsupervised written work is more susceptible to cheating and therefore cannot be retained as a form of examination. In particular, the question of whether and how AI-assisted work can be detected or sanctioned concerns universities and courts alike.
The Administrative Court of Munich, for example, ruled on two applications (application 1, application 2) from applicants who had been rejected from a Master’s degree program and sought provisional admission. They are said to have submitted an essay with their application that was allegedly generated using AI. The court rejected both applications on the basis of prima facie evidence of deception.
According to the principle of performance under higher education law, examinations must be completed independently and without unauthorized aids. Candidates usually submit a corresponding declaration of independence. According to the court, if this declaration explicitly rules out the use of AI, then creating the work wholly or partly with AI constitutes a breach of regulations. So-called AI detectors can be used to determine this. These are tools that analyze content to assess whether it was generated by an AI. However, the results of AI detectors alone are not decisive. An independent assessment by human reviewers is also required, the court continued.
The reliability of AI detectors
The court apparently assumed that currently available detectors are neither accurate nor reliable. In one study, 14 detectors were tested; they tended to misclassify AI-generated texts as human-written. For texts that were generated by an AI and then revised, accuracy is said to drop to only 50 percent. An AI-generated text that has been lightly revised by an examinee may therefore no longer be reliably detected.
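To make the accuracy figures above concrete: a detector's accuracy is simply the share of texts it classifies correctly on a labeled test set. The sketch below is purely illustrative; the detector heuristic, the sample texts, and their labels are all invented here and do not reflect any real tool or the cited study.

```python
# Sketch: measuring an AI detector's accuracy on labeled texts.
# The "detector" is a hypothetical stand-in, not a real product:
# it naively flags texts lacking typical informal markers as AI-generated.

def naive_detector(text: str) -> bool:
    """Return True if the text is classified as AI-generated (toy heuristic)."""
    informal_markers = ("btw", "...", "!!")
    return not any(marker in text.lower() for marker in informal_markers)

# Invented test set: (text, was_actually_ai_generated)
samples = [
    ("The results demonstrate a significant correlation.", True),
    ("btw i think the results kinda show a correlation...", False),
    ("In conclusion, the hypothesis is supported by the data.", True),
    ("The data supports it!! At least in my opinion.", False),
    # A "revised" AI text: informal markers were added afterwards,
    # so the naive detector misclassifies it as human-written.
    ("btw, this AI text was revised to sound informal.", True),
]

correct = sum(naive_detector(text) == label for text, label in samples)
accuracy = correct / len(samples)
print(f"Accuracy: {accuracy:.0%}")  # → Accuracy: 80%
```

The last sample shows the weakness the court pointed to: light human revision of AI output is enough to flip the classification, which is why detector results alone cannot carry the burden of proof.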
Due to these uncertainties, proof of deception should be based on a human analysis of typical AI characteristics in addition to the use of detectors. These include texts with perfect form, without spelling, punctuation or grammatical errors, as well as typical AI errors such as hallucinations, exaggerations or inaccurate references in quotations. Further information and tips on this can be found in the article “Recognizing AI text in examination papers” by Matthis Kepser.
If you would like to learn more about the connection between AI and OER, we would like to invite you to our next workshop, “AI and OER in use”. We will inform you about the legal aspects of creating open educational materials with the help of AI. We will also show you how AI can help to make OER more effective and reduce the workload. You are welcome to register at support.twillo@tib.eu.