
Ethics of AI use in the university context

AI-generated. Public domain. Created with: ChatGPT. Based on Pixelchen by Sarah Brockmann.


Whether in teaching, research or administration, AI systems offer universities new opportunities for efficiency, personalization and innovation. But with this progress come pressing questions: How do we ensure that AI is used responsibly? Which ethical boundaries must not be crossed? And how can universities promote a reflective approach to AI, among teachers and students alike? In this article, we take a look at the key challenges and show which ethical principles are crucial for the use of AI at universities.

This raises profound ethical questions: How can data protection and equal opportunities be guaranteed? What responsibility do universities bear in the use and development of AI systems? And how can students and teachers be sensitized to a critical approach to this technology? Engaging with these ethical aspects is essential to ensure that AI is used responsibly and in line with social values in the university context.

Ethical pitfalls in the use of AI at universities

AI can be a great support for teachers and students, especially in the form of easy-to-use tools such as ChatGPT, Le Chat or Gemini. However, it is important to use these systems carefully and reflectively. Particular attention should be paid to the following challenges:

  • The results generated by AI systems are based on limited data sets, some of which may not be up to date. These data sets may contain false claims, be of low technical quality, or embed hidden value judgements such as discriminatory biases. For teachers, this means that a critical review is necessary when using AI to create exam questions or evaluate texts. Students, too, should not accept AI-generated content, for example when researching literature or writing texts, without checking it.
  • AI systems can also “hallucinate”: they sometimes construct apparent connections and present unproven or unprovable conclusions as proven. This can cause considerable problems, particularly in academic papers, if sources are cited that do not actually exist, a challenge that students, supervisors and examiners must be aware of (a simple automated plausibility check is sketched after the note below). Such distortions and hallucinations are usually difficult for users to recognize as long as AI providers do not make the training data sets and the workings of the underlying machine-learning algorithms transparent.

Users should always ask themselves whether AI results stand up to their own scrutiny and whether their dissemination respects the equal opportunities and dignity of all people!
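Automated checks cannot replace critical reading, but they can catch the most blatant cases of invented literature. The following minimal Python sketch (an illustration added here, not a tool mentioned in this article) asks the public Crossref API whether a cited DOI is actually registered. A missing DOI is only a warning sign, not proof of fabrication: the work may be registered with another agency or carry no DOI at all.

```python
# Minimal sketch, assuming Python 3 and network access.
# Looks up a DOI in the public Crossref registry; HTTP 404 suggests the
# citation may be fabricated, although DOIs can also live in other registries.
import urllib.error
import urllib.request


def doi_is_registered(doi: str) -> bool:
    """Return True if Crossref knows the DOI, False if it returns 404."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # not registered with Crossref
        raise  # rate limits or outages need a human look, not a verdict


if __name__ == "__main__":
    # DOI of the article cited in the bibliography below
    print(doi_is_registered("10.1162/99608f92.8cd550d1"))
```

Such a check only verifies that a DOI exists; whether the cited work actually says what the AI claims it says still has to be read and judged by a person.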

  • Users do not only operate AI tools with their input (so-called “prompts”); this input can also be used as training data for the AI. If users enter their own personal data or that of other people, this may violate data protection regulations. Moreover, if AI systems absorb unrecognized discriminatory biases, these could be applied to personal data and disseminated. Universities should therefore not only develop clear guidelines for handling sensitive data when using AI, but also establish procedures through which teachers and students can formally approve the use of their own materials, such as prompts, templates or assignment formats, by others. Templates for transferring rights of use can be provided for this purpose, e.g. to allow third parties to reuse content in AI systems. One simple technical safeguard, stripping obvious identifiers from prompts before they leave the university, is sketched after the note below.


Users should always ask themselves whether entering personal data could harm them or others, whether the data can be anonymized, or whether it can be dispensed with altogether!
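One practical safeguard is to mask obvious identifiers before a prompt is sent to an external AI service. The following minimal Python sketch (an illustrative assumption, not an institutional tool) replaces e-mail addresses and phone-like numbers with placeholders; genuine anonymization of names, matriculation numbers or free-text descriptions of people requires considerably more care and, ideally, a review by the data protection officer.

```python
# Minimal sketch, assuming only the Python standard library.
# Masks obvious personal identifiers in a prompt before it is sent to an
# external AI service; illustrative only, not a complete anonymization tool.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}


def redact(prompt: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt


if __name__ == "__main__":
    raw = "Please grade the essay by jane.doe@example.org, phone +49 170 1234567."
    print(redact(raw))  # Please grade the essay by [EMAIL], phone [PHONE].
```

Whether even masked prompts may be sent to a given service at all remains a legal and policy question that such a script cannot answer.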

  • When using AI systems, users may delegate decisions to the AI. Especially when such decisions affect, or could affect, other people, there is a risk that using AI in decision-making processes, for example in selection procedures, objectifies those people: decisions are then made over their heads. Decisions should not be made without involving those (potentially) affected in the decision-making process and its reasoning, as otherwise their human dignity may be violated.
  • Furthermore, when AI systems are involved in decisions, it is not always clear who is responsible for those decisions, which raises further questions of liability. Universities are particularly challenged here to define clear responsibilities, both in administration and in teaching.


Users should ask themselves whether they should leave final decisions to AI systems. Making final decisions yourself means taking responsibility for your own actions, which may affect other people, and respecting the dignity of others!

5 principles for dealing with AI at universities

Universities can meet these and other challenges by developing and defining binding rules for dealing with AI in university teaching. However, teachers and students should also engage in their own ethical reflection on AI. Ethics develops justifications for morally good actions in response to the question “What should I do?” To this end, the intentions behind actions, the actions themselves and their possible consequences, such as benefits and harms for oneself and others, are examined and weighed up.

As an example, five clear principles for dealing with AI, following Floridi and Cowls (2019), are presented here:

  1. Beneficence: AI should only be used in a way that promotes human welfare, preserves human dignity and is sustainable.
  2. Non-maleficence: Especially when risks cannot be completely avoided, or when it is uncertain whether possible risks will materialize, a principle of harm avoidance and harm limitation should also be pursued.
  3. Autonomy: The principle of autonomy calls on users to weigh the transfer of decision-making tasks and powers to AI against the preservation of their own and others’ freedom of action and decision. All potential stakeholders must be included in this consideration, and particular priority must be given to safeguarding human freedom of action and decision and human dignity.
  4. Justice: The use of AI should serve to promote the well-being of every human being, to maintain solidarity, and to avoid injustice. When dealing with AI, possible discrimination (e.g. in the conclusions an AI provides or in the data sets underlying it) should be taken into account, counterbalanced or at least mitigated.
  5. Explicability: The use of AI obliges users, as far as possible, to make their own intentions of use understandable and to accept accountability for the resulting consequences. These aspects must also be made transparent so that benefits, risks and harms can be discussed by all those involved and by society as a whole.


And now? What should be done in this spe­cific case?

Ethics cannot (and should not) provide rigid guidelines or instructions to be followed. Instead, applied ethics thrives on our own serious engagement with new and existing challenges and on an open and transparent examination of our intentions, the actions that arise from them and the consequences our actions have for ourselves and others. This handout explains what a deeper ethical examination can look like, which tools of applied ethics are available for it, and where to find further reading: PDF file (276 kB), Docx file (75 kB)

Bibliography

Floridi, Luciano and Cowls, Josh. 2019. A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, Vol. 1, No. 1, pp. 1–14. https://doi.org/10.1162/99608f92.8cd550d1
