“If You Want to Pass, Follow Me” – The Machine Has Already Done the Reading! Papers and Assignments in the Age of AI

Across German universities, humanities departments are wrestling with how to handle AI in academic work. Some voices call for strict limits, hoping to preserve traditional methods and keep AI out of student writing altogether. Others argue for actively teaching AI literacy so students can use new tools to think and write more effectively. A few go further, suggesting that AI has changed scholarly practice so profoundly that the classic term paper may no longer be a meaningful way to assess learning.

When I asked the participants of one of my most recent literature seminars how many of them had used the MLA International Bibliography to research term papers or assignments, only a handful raised their hands. When I asked how many had used generative AI systems for the same purpose, all hands went up. Large language models such as ChatGPT have become powerful and accessible tools for academic work. They can answer questions quickly, summarize complex ideas, and provide explanations that often feel as if they came from a human tutor. Students frequently describe them as time-savers—tools that gather and condense information much faster than traditional web searches. At the same time, these systems are still imperfect: they sometimes invent facts, misapply terminology, or fabricate references. As helpful as they can be, they need to be handled with a critical eye.

German universities are still developing consistent guidelines for AI use, and the situation varies widely. At FAU, the general rule is that AI use in term papers is prohibited unless a lecturer explicitly permits it. Some departments require students to sign affidavits confirming they have not used AI at all; others allow it but reserve the right to request an oral defense if AI use is suspected. Until official standards are established, decisions rest with individual instructors.

One of the central messages for students is that AI tools cannot replace foundational academic skills. I found the materials on the topic created by Dr. Ulrike Hanke, educational scientist and higher-education didactics consultant, eye-opening in illustrating the limits of AI. She describes the following scenario: “Imagine you have an accident with your bicycle. You know you have been hurt, but have no idea how badly because you are still in shock and the adrenaline is running high. The emergency doctor arrives and asks you to hold on for a moment until they have uploaded images of your injuries to ChatGPT for diagnosis. How would you react?”

AI-generated image (Midjourney), after an example from Ulrike Hanke, Prüfungskultur in einer Welt mit generativer KI. Udemy, 2025.

This scenario illustrates quite well the kind of knowledge and skills that cannot simply be replaced by handing the problem over to AI: examining a patient after an accident, identifying and interpreting signs of whether and where the arm is broken, deciding what measures to take next, using the correct terminology (fracture of the ulna) when reporting to the hospital, and judging critically which other injuries could occur in such an accident and whether they are plausible in this case.

Understanding a topic, connecting new information to existing knowledge, structuring an argument, applying the right terminology, and thinking critically—these remain human tasks. Just as a neural network needs meaningful connections to function, students need their own internal network of concepts to evaluate information and make informed decisions. AI can support that process, but it cannot build it.

The crucial distinction is between using AI as a ghostwriter and using it as a research assistant. Asking a bot to produce a complete term paper raises questions of academic integrity, authorship, reliability, and comprehension. Students risk submitting work they cannot defend, based on sources that may not even exist. A more responsible approach is to use AI for outlining, brainstorming research paths, clarifying difficult concepts, or polishing language after the intellectual work has been done. In this role, AI can act as a sounding board or second reader rather than a replacement for original thinking.

In my view, the goal of AI-informed university teaching cannot be limited to teaching students clever prompting techniques, although it does help to raise awareness that the clearer and more informed a prompt is, the better the results. Strong academic work still depends on the student’s ability to evaluate what AI provides, to decide which suggestions are useful, to identify biases and generalizations, and to combine machine-generated input with their own disciplinary knowledge. Ideally, human and machine become a form of “co-intelligence”—each strengthening the other’s abilities.

To support transparent and ethical use, students in my seminar follow a clear framework. AI use is allowed but must be documented: a revised affidavit, a description of which tools were used and how, and a short reflection on one key decision in the writing process. I ask students to complete this reflection regardless of whether they used AI in their papers, because it sheds light on the writing process, which, for university teachers, is often a black box. This documentation reinforces a simple truth: while AI may assist, the responsibility for the work—and for understanding it—always lies with the student.

Below, you can find two checklists for the use of generative AI in papers that I have used in my seminars: