This also raises profound ethical questions: How can data protection and equal opportunities be guaranteed? What responsibility do universities bear in the use and development of AI systems? And how can students and teachers be encouraged to engage critically with this technology? Engaging with these ethical aspects is essential to ensure that AI is used responsibly and in line with societal values in the university context.
Ethical pitfalls in the use of AI at universities
AI can be a great support for teachers and students, especially in the form of easy-to-use tools such as ChatGPT, LeChat or Gemini. However, these systems should be used with care and reflection. Particular attention should be paid to the following challenges:
- The results generated by AI systems are based on limited data sets, some of which may be outdated. These data sets may contain false claims, be of low technical quality, or carry hidden value judgements such as discriminatory biases. For teachers, this means that critical review is necessary when using AI to create exam questions or evaluate texts. Likewise, students should not accept AI-generated content, for example in literature research or text writing, without checking it.
- AI systems can also "hallucinate": they sometimes construct apparent connections and present unproven or unprovable conclusions as proven. This can lead to considerable problems, particularly in academic papers, if sources are cited that do not actually exist, a challenge that students, supervisors and examiners must be aware of (a simple plausibility check for cited sources is sketched below). Such distortions and hallucinations are usually difficult for users to recognize when AI providers do not disclose the training data sets used or the workings of the underlying machine learning algorithms.
Users should always ask themselves whether AI results stand up to their own scrutiny and whether their dissemination respects the equal opportunities and dignity of all people!
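How might invented references be caught in practice? One low-effort option is to check whether a cited DOI is actually registered with a public bibliographic service. The following sketch is a minimal illustration of this idea under our own assumptions, not part of any official guidance: it queries the public Crossref REST API, and the helper name doi_exists is a hypothetical choice.

```python
# Minimal sketch: check whether an AI-suggested DOI actually exists.
# Uses the public Crossref REST API (https://api.crossref.org); the helper
# name `doi_exists` is an illustrative assumption, not an official tool.
import urllib.error
import urllib.parse
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        return False  # e.g. 404: Crossref knows no work with this DOI

# The reference cited at the end of this text:
print(doi_exists("10.1162/99608f92.8cd550d1"))     # expected: True
print(doi_exists("10.9999/this-doi-is-invented"))  # expected: False
```

Such a check only confirms that a DOI exists; whether the work actually supports the claim it is cited for can only be judged by reading the source itself.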
- Users do not merely operate AI tools with their input (so-called "prompts"); this input can also be used as training data for the AI. If users enter their own personal data or that of other people, this may violate data protection regulations. Moreover, if AI systems have absorbed unrecognized discriminatory biases, these could be applied to such personal data and spread further. Universities should not only develop clear guidelines for handling sensitive data when using AI, but also establish procedures by which teachers and students can formally approve the use of their own materials, such as prompts, templates or assignment formats, by others. Templates for granting rights of use can be provided for this purpose, e.g. to allow third parties to reuse content in AI systems.
Users should always ask themselves whether entering personal data could harm themselves or others, whether the data can be anonymized, or whether it can be dispensed with altogether!
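What can anonymization look like in practice? The following sketch is a minimal, hypothetical illustration, not an official recommendation: it masks e-mail addresses and long digit sequences (such as matriculation numbers) in a text before it is entered into an AI tool. The patterns and the function name anonymize are our own assumptions, and simple pattern matching cannot reliably catch every kind of personal data.

```python
# Minimal sketch: mask obvious personal data before entering text into an AI
# tool. The patterns and the function name are illustrative assumptions;
# simple regular expressions cannot catch all personal data (e.g. names).
import re

def anonymize(text: str) -> str:
    # Mask e-mail addresses.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Mask runs of 6 to 10 digits (e.g. matriculation or phone numbers).
    text = re.sub(r"\b\d{6,10}\b", "[NUMBER]", text)
    return text

prompt = ("Please give feedback on the essay by jane.doe@uni.example "
          "(matriculation no. 1234567).")
print(anonymize(prompt))
# -> Please give feedback on the essay by [EMAIL] (matriculation no. [NUMBER])
```

When in doubt, omitting sensitive data altogether remains the safer option, as the question above suggests.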
- When using AI systems, users may delegate decisions to the AI. Especially when such decisions affect, or could affect, other people, there is a risk that the use of AI in decision-making processes, for example in selection procedures, reduces those people to mere objects: decisions are then made over their heads. Decisions should not be taken without involving those (potentially) affected in the decision-making process and its reasoning, as otherwise their human dignity may be violated.
- Furthermore, when AI systems are involved in decisions, it is not always clear who is responsible for those decisions, which raises further questions of liability. Universities face a particular challenge here in defining clear responsibilities, both in administration and in teaching.
Users should ask themselves whether they should leave final decisions to AI systems. Making final decisions yourself means taking responsibility for your own actions, which may affect other people, and respecting the dignity of others!
5 principles for dealing with AI at universities
Universities can meet these and other challenges by developing and defining binding rules for dealing with AI in university teaching. However, teachers and students should also engage in their own ethical reflection on AI. In ethics, justifications for morally good actions are developed in response to the question "What should I do?" To this end, the intentions behind actions, the actions themselves, and their possible consequences, such as benefits and harms for oneself and others, are examined and weighed up.
As an example, five principles for dealing with AI according to Floridi and Cowls (2019) are presented here:
- Beneficence: AI should only be used in a way that promotes human welfare, preserves human dignity and is sustainable.
- Non-maleficence: Especially when risks cannot be completely avoided, or there is uncertainty about whether possible risks will materialize, a principle of harm avoidance and harm limitation should also be pursued.
- Autonomy: The principle of autonomy calls on users to weigh the delegation of decision-making power and agency to AI against the preservation of their own and others' freedom of action and decision. All potential stakeholders must be included in this consideration, and particular priority must be given to safeguarding people's freedom of action and decision as well as their dignity.
- Justice: The use of AI should serve to promote the well-being of every human being, to maintain solidarity, and to avoid injustice. When dealing with AI, possible discrimination (e.g. in the conclusions an AI provides or in the underlying data sets it uses) should be taken into account and counteracted, or at least mitigated.
- Explicability: Those who use AI are obliged, as far as possible, to make their intentions of use understandable and to accept accountability for the resulting consequences. These aspects must also be made transparent so that benefits, risks and harms can be discussed by all those involved and by society as a whole.
And now? What should be done in this specific case?
Ethics cannot (and should not) provide rigid guidelines or instructions simply to be followed. Instead, applied ethics thrives on our own serious engagement with new and existing challenges and on the open and transparent examination of our intentions, the actions that arise from them, and the consequences those actions have for ourselves and others. This handout explains what a deeper ethical examination can look like, which tools of applied ethics are available for this purpose, and where to find further reading: PDF file (276 kB), Docx file (75 kB)
Bibliography
Floridi, Luciano and Cowls, Josh. 2019. A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, Vol. 1, No. 1, pp. 1–14. https://doi.org/10.1162/99608f92.8cd550d1