INFO901
Introduction to AI Ethics
This graduate course introduces the fast-evolving interdisciplinary research area of Artificial Intelligence (AI) Ethics to doctoral students interested either in AI as a computer science discipline or in researching the societal and personal impact of AI technologies introduced in society.
Schedule week 10
Day/Period | I 9:30-10:30 | II 10:45-11:30 | III 11:45-12:30 | IV 13:30-14:15 | V 14:30-15:30 |
Monday | Course Organisation | Module 1: Intro AI Slides | Module 1: Intro AI Slides | Module 1: Intro AI Slides | Module 1: Intro AI Slides |
Tuesday | Reading for Module 2 | | | | |
Wednesday | Module 1: AI and decisions Slides | Module 2: Cancelled | Module 2: Cancelled | Module 2: Cancelled | Module 2: summary |
Thursday | Reading for Module 3 | | | | |
Friday | "What is Privacy?" by Tobias Matzner | Introduction to differential privacy by Fedor Fomin (Slides) | Privacy and the law by Malgorzata Agnieszka Cyndecka (Slides) | Overview of relevant legal requirements for AI development by Kari Laurmann (Slides) | Module 3: summary |
Saturday | Reading for Module 4 | | | | |
Schedule week 11
Description
This is a detailed description of the modules of the course. Most modules and lectures have required reading material before class. Make sure you read the texts. If you have a problem accessing some literature, email the course organisers. The lectures will not be recorded. Slides will be made available after the lecture. The references listed under Other material are intended for your further reading, should you want to engage more with the topic.
Module 1: An Introduction to Artificial Intelligence
This module is a crash course in artificial intelligence. We will start with a very brief history of the field, and cover the basic concepts of reasoning, machine learning, knowledge representation and computational agents.
- Lecturer: Marija Slavkovik
- Contact: marija.slavkovik@uib.no
- Date: March 7, 2022 (Monday)
- Slots: 10:45-11:30, 11:45-12:30, 13:30-14:15, 14:30-15:30
- Nota bene: you can skip this module if you have basic familiarity with AI
- Instead of reading before class, interact with the How normal am I installation.
- Other material (videos, tutorials, etc. ):
- An artist's representation of unethical smart tech.
- You do not need intelligence for unethical computing. A report on the UK Post Office case where a mistake in the IT system caused people to be prosecuted and fined for crimes they did not commit.
- How artificial intelligence is proposed to be used to protect the EU borders
- McQuaid, John (2021). Limits to Growth: Can AI’s Voracious Appetite for Data Be Tamed? In: Undark. The article discusses the need for purposefully creating and curating data for particular machine learning tasks and the limits of relying on found, cheap, data.
- Neural networks, a comprehensive introduction video by 3Blue1Brown
- Neural Networks and Deep Learning, a free online book, by Michael Nielsen.
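To complement the neural-network resources above, here is a toy illustration (not part of the course material) of the basic mechanism those resources explain: a single sigmoid neuron fitted by gradient descent. The data, learning rate, and epoch count are all invented for illustration.

```python
import math

# A single sigmoid neuron learning the AND function by gradient descent.
# All parameters here are chosen purely for illustration.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(5000):
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        g = p - y  # gradient of the cross-entropy loss w.r.t. the pre-activation
        w1 -= lr * g * x1
        w2 -= lr * g * x2
        b -= lr * g

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
# predictions == [0, 0, 0, 1]: the neuron has learnt AND
```

A real neural network stacks many such units in layers; the 3Blue1Brown video and Nielsen's book explain how the same gradient idea scales up via backpropagation.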
Module 2: Power and Politics in AI
- Lecturer: Miria Grisot
- Contact: miriag@uio.no
- Abstract: This is an introduction to power and politics in AI in Information Systems Research from a sociotechnical perspective. It gives an overview of the current discussions in the field, such as how AI changes work and organizing in organizations, and how to re-think management of technology in relation to AI with emphasis on the ethical aspects.
- Date: March 9, 2022 (Wednesday)
- Slot: 9:30-10:15, 10:30-11:15, 11:45-12:30, 13:30-14:15
- Reading before class:
- Other material:
Module 3: Privacy and AI
This module introduces some of the basic concepts of privacy that are relevant for AI.
Lecture 1: What is privacy?
- Lecturer: Tobias Matzner
- Abstract: This is a short introduction into the conceptual and socio-technical development of privacy. It identifies central issues that inform and structure current debates as well as transformations of privacy spurred by digital technology. In particular, it highlights central ambivalences of privacy between protection and de-politicization and the relation of individual and social perspectives.
- Date: March 11, 2022 (Friday)
- Slot: 9:30-10:30
- Nota bene
- Reading before class:
- Other material (videos, tutorials, etc. ):
- Helen Nissenbaum (2011). A Contextual Approach to Privacy Online. Daedalus. 140 (4): 32–48.
- Mary Flanagan, Daniel C. Howe, and Helen Nissenbaum (2008). Embodying Values in Technology: Theory and Practice. In Information Technology and Moral Philosophy, Jeroen van den Hoven and John Weckert (eds.) Cambridge University Press. 322-353
- Lecturer: Fedor Fomin
- Abstract: Differential privacy (DP) is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. It can be seen as a mathematical model of some aspects of privacy. The lecture gives a gentle introduction to this topic and discusses what guarantees differential privacy makes and does not make.
- Date: March 11, 2022 (Friday)
- Slots: 10:45-11:30
- Reading before class:
- A Gentle Introduction to Differential Privacy by Tim Titcombe
- Other material:
- The detailed core reference on differential privacy: Cynthia Dwork and Aaron Roth (2014). The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science Vol. 9, Nos. 3–4 (2014) 211–407.
- A recommended popular science book: Michael Kearns and Aaron Roth (2019). The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Oxford University Press, Inc., USA. We cannot provide access to this. A talk by Michael Kearns on the book can be seen here:
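The readings above centre on one concrete construction, the Laplace mechanism, which can be sketched in a few lines. This is an illustrative toy, not an implementation the lecture endorses; the dataset, query, and epsilon value are made up.

```python
import math
import random

random.seed(0)  # fixed seed only so this illustration is reproducible

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    true_count = sum(1 for r in records if predicate(r))
    # A counting query has sensitivity 1: adding or removing one person
    # changes the true answer by at most 1, so the noise scale is 1/epsilon.
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 61, 38]  # invented toy data
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the Dwork and Roth reference above works out exactly what guarantee this noise buys.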
- Lecturer: Malgorzata Agnieszka Cyndecka
- Abstract: This lecture gives an introduction to the GDPR covering objectives, material and geographical scope, main actors and notions, principles relating to processing of personal data, legal basis for processing of personal data, rights of the data subject, GDPR and risk enforcement.
- Date: March 11, 2022 (Friday)
- Slots: 11:45-12:30
- Nota bene: This lecture is a recorded video.
- Reading before class: Chapters 2 and 3 of the GDPR
- Link to the video.
- Lecturer: Kari Laurmann
- Abstract: This lecture covers the topics of privacy, GDPR requirements and AI. It aims to give a basic overview of relevant legal requirements for AI development, and some real case examples of how to apply the law in practice (examples from the sandbox).
- Date: March 11, 2022 (Friday)
- Slots: 13:30-14:15
- Nota bene: all of the reports from Datatilsynet can be found here.
- Reading before class:
- It's getting personal. Privacy trends 2017
- Out of Control. How consumers are exploited by the online advertising industry 14.01.2020 (Mandatory: the first 43 pages). Summary of the report can be found here.
Module 4: Explainable AI
This module covers topics on explaining the behaviour of an AI system.
Lecture 1: What is an explanation?
- Lecturer: Tim Miller
- Abstract: In this session, we will study how people explain complex things to each other. When we talk about explainable AI, we are discussing how a machine can explain to someone how and why it makes decisions. What should an explanation from a machine look like, given that they calculate quite differently to how we think? This talk argues that we should take inspiration from how people explain things to each other, as this gives us useful pointers for how people help others to understand how and why things happen.
- Date: March 14, 2022 (Monday)
- Slot: 09:30-10:30
- Reading before class:
- Other material:
- If you would like an overview of the technical details of explainability and interpretability: Vaishak Belle and Ioannis Papantonis, 2021. Principles and Practice of Explainable Machine Learning. Frontiers in Big Data
- If you would like to learn more about how people explain things to each other: Tim Miller 2018. Explanation in Artificial Intelligence: Insights from the Social Sciences.
- Lecturer: Alexander Kempton and Polyxeni Vassilakopoulou
- Abstract: AI explanations are important for many reasons and can be addressed to many different audiences. For instance, experts need explanations to evaluate or improve AI-enabled systems, and impacted groups need to make sense of what lies behind AI-enabled systems that affect them. In this session, we will first explore different stakeholders’ needs for AI explanations and we will then focus on explanations for end-users. We will discuss the role of explanations for the use of AI-enabled systems and how different sets of users have different explainability needs.
- Date: March 14, 2022 (Monday)
- Slots: 10:45-11:30
- Reading before class:
- Other material (videos, tutorials, etc..): might appear here
- Lecturer: Inga Strümke
- Abstract: One of the main challenges of XAI is that machine learning methods do predictive modeling, as opposed to explainable modeling. Their task is to detect potentially complex and non-linear correlations and use these to give the most accurate predictions. Explanations, on the other hand, are supposed to be non-complex and, for some recipients of explanations, even linear. Although there are many methods available from the field of XAI, these do not have a unifying principle, and there are no benchmarks available for comparing them. This makes entering the field potentially overwhelming, and so we will take a step back and discuss the three conceptual ways to approach explaining black box models. This discussion will involve a brief introduction to the arguably most popular XAI methods, namely SHAP and LIME.
- Date: March 14, 2022 (Monday)
- Slots: 11:45-12:30
- Reading before class:
- Other material (videos, tutorials, etc.):
- Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim (2018). Sanity Checks for Saliency Maps. Proceedings of the 32nd International Conference on Neural Information Processing Systems. Pages 9525–9536
- Try out the AIX360 Demo to see what being given explanations by a machine looks like.
For the students that (want to) enjoy programming, pointers will be given to hands-on tutorials where they can try out some of the methods discussed in this and the next modules.
- Date: March 14, 2022 (Monday)
- Slots: 13:30-14:15
- Link to tutorials: https://github.com/mslavkovik/AI-Ethics-Tutorials
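For readers who want a feel for what the LIME method mentioned above does before opening the tutorials, here is a from-scratch sketch of its core idea: explain one prediction of a black-box model by fitting a simple local linear model to the black box's behaviour on perturbations around that point. The `black_box` function and all numbers are invented stand-ins; this is not the actual LIME library.

```python
import random

random.seed(42)  # fixed seed only so this illustration is reproducible

def black_box(x1, x2):
    # Stand-in for an opaque model: a nonlinear decision rule.
    return 1.0 if x1 * x1 + 0.2 * x2 > 0.5 else 0.0

def local_explanation(x1, x2, n=5000, sigma=0.1):
    # Perturb the instance, query the black box, and estimate a slope
    # per feature: how much each feature locally drives the prediction.
    d1s, d2s, ys = [], [], []
    for _ in range(n):
        d1, d2 = random.gauss(0, sigma), random.gauss(0, sigma)
        d1s.append(d1)
        d2s.append(d2)
        ys.append(black_box(x1 + d1, x2 + d2))
    ybar = sum(ys) / n
    def slope(ds):
        dbar = sum(ds) / n
        cov = sum((d - dbar) * (y - ybar) for d, y in zip(ds, ys)) / n
        var = sum((d - dbar) ** 2 for d in ds) / n
        return cov / var
    return slope(d1s), slope(d2s)

w1, w2 = local_explanation(0.7, 0.0)
# Near (0.7, 0) the decision depends mostly on x1, so w1 dominates w2.
```

The real LIME and SHAP methods refine this idea with weighted sampling, interpretable feature representations, and (for SHAP) game-theoretic attribution, but the local-surrogate intuition is the same.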
Module 5: Fairness and AI
This module covers the problem of ensuring fairness of decisions made by an AI system, and also explores why this fairness is important and what it means.
Lecture 1: Unbiased data? Fair AI? Forget it!
- Lecturer: Maja Van Der Velden
- Abstract: While we all are affected, or will be, by algorithms, some of us are more vulnerable than others to biased data and unfair AI. Is a focus on unbiased data and fair AI the solution? Is there a universal understanding of fairness? Are there sources of neutral data, or can we make existing data sets unbiased? If we answer ‘yes’ to these questions, does it mean that AI can be neutral? In this lecture we will engage with the understanding that technology is not neutral and explore what this means for working towards unbiased data and fair AI.
- Date: March 16, 2022 (Wednesday)
- Slot: 09:30-10:30
- Reading before class:
- Other material (videos, tutorials, etc.): might appear here
- Lecturer: Kristoffer Chelsom Vogt
- Abstract: A perspective from sociology
- Date: March 16, 2022 (Wednesday)
- Slots: 10:45-11:30
- Reading before class:
- Anna Lauren Hoffmann (2019) Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse, Information, Communication & Society, 22:7, 900-915
- Mike Zajko (2022) Artificial intelligence, algorithms, and social inequality: Sociological contributions to contemporary debates.
- Joyce K, Smith-Doerr L, Alegria S, et al. (2021) Toward a Sociology of Artificial Intelligence: A Call for Research on Inequalities and Structural Change. Socius.
- Other material (videos, tutorials, etc.): might appear here
- Lecturer: Robindra Prabhu
- Abstract: As AI solutions have become more ubiquitous in society, so have concerns about the ethical and social fallout that follows in their wake. Reports of biased, unfair and socially misaligned models have triggered a growing body of academic research on “fair, transparent and accountable” ML. Industry and policy makers have responded with a plethora of principles and guidelines for “responsible AI”, lawmakers are proposing new regulatory frameworks to curtail the risks associated with this new, emerging technology, and auditors are developing algorithmic audits. All of this notwithstanding, it remains unclear how these separate contributions should translate into practical interventions in the development process. How exactly does one go about conducting a fairness assessment of a model in practice? In this lecture we will engage with this question, using NAV’s model for predicting the duration of sick leave as a case in point. We will touch upon the challenges of
- Date: March 16, 2022 (Wednesday)
- Slots: 11:45-12:30
- Reading before class:
- Wachter, Sandra and Mittelstadt, Brent and Russell, Chris, Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law (January 15, 2021). West Virginia Law Review, Vol. 123, No. 3, 2021.
- Barocas, Hardt, and Narayanan (2022). Chapter 5 “Testing discrimination in practice”. Fairness and Machine Learning.
- Other material (videos, tutorials, etc..):
- Lecturer: Marija Slavkovik
- Abstract: This lecture gives an overview of the algorithmic tools available for mitigating bias in some machine learning methods. The lecture is sufficiently general to be followed by people with no programming experience, but will give pointers on how to proceed for those who do want to engage hands-on.
- Date: March 16, 2022 (Wednesday)
- Slots: 13:30-14:15
- Reading before class:
- Chapter 11.3 and Chapter 11.4 from Ian Foster, Rayid Ghani, Ron S. Jarmin, Frauke Kreuter and Julia Lane 2020. Big Data and Social Science: Data Science Methods and Tools for Research and Practice. Routledge. Second Edition.
- Other material:
- Play with some debiasing tools. A web interactive tutorial made available by AIF360.
- Does TikTok have a race problem? This piece of investigative journalism by Forbes shows an approach that journalists took to detect bias without having access to the algorithm.
- If AI is the problem, is debiasing the solution? EDRi is a European network of non-governmental organisations dedicated to defending rights and freedoms online. The linked article summarises a recent report they have commissioned (Beyond Debiasing: Regulating AI and its Inequalities, authored by Agathe Balayn and Dr. Seda Gürses). The report outlines the limits of technical debiasing measures as a solution to structural discrimination and inequality reinforced or propagated through AI systems.
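As a minimal illustration of the kind of group-fairness check the debiasing tools above automate, the snippet below computes positive-decision rates per group (demographic parity) and the disparate-impact ratio. The decisions are invented toy data, and the 0.8 threshold is only a common rule of thumb, not a legal standard.

```python
# Toy (group, decision) pairs; 1 means a positive decision (e.g. loan granted).
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0), ("B", 1),
]

def positive_rate(group):
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = positive_rate("A")  # 3/5 = 0.6
rate_b = positive_rate("B")  # 2/5 = 0.4
disparate_impact = rate_b / rate_a  # 0.4 / 0.6 ≈ 0.67

# A ratio below ~0.8 is often taken as a warning sign of disparate impact.
flagged = disparate_impact < 0.8
```

As the EDRi report above argues, passing such a metric does not make a system fair; the metric only makes one specific kind of disparity visible.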
Module 6: Transparency and accountability
Lecture 1: Overview of algorithmic accountability
- Lecturer: Maranke Wieringa (they/them)
- Abstract: As research on algorithms and their impact proliferates, so do calls for scrutiny/accountability of algorithms. In the lecture I will briefly introduce accountability theory from public administration, introduce a sociotechnical understanding of algorithmic systems, and then discuss algorithmic accountability. The importance of algorithmic accountability is discussed, and I will introduce some strands of research which are particularly useful when starting to realize algorithmic accountability.
- Date: March 18, 2022 (Friday)
- Slot: 09:30-10:30
- Reading before class:
- Other material:
- Jennifer Cobbe, Michelle Seng Ah Lee, Jatinder Singh. 2021. Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems. In Conference on Fairness, Accountability, and Transparency (FAccT ’21), March 3–10, 2021, Virtual Event, Canada. ACM, New York, NY, USA, 12 pages.
- Joshua A. Kroll. 2021. Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems. In FAccT ’21: ACM Conference on Fairness, Accountability, and Transparency, March 2021, Toronto, CA (Virtual). ACM, New York, NY, USA, 14 pages.
- Lecturer: Alexander Kempton and Polyxeni Vassilakopoulou
- Contact: polyxenv@uia.no / alexansk@ifi.uio.no
- Abstract: In this session, we will introduce and discuss accountability in the context of AI and approaches for achieving AI accountability in practice. Accountability relates to the obligations of those designing, deploying and operating AI-enabled technologies (sometimes expressed as their “responsibility for”), the interrogation ability of supervision authorities and those affected by AI-enabled technologies and also, the post-hoc sanctioning potential for blamable agents (when things go wrong). The use of AI-enabled technology can introduce challenges for, and require new ways of, establishing accountability. During the two-hour session we will:
- Date: March 18, 2022 (Friday)
- Slot: 10:45-11:30, 11:45-12:30
- Nota bene
- Reading before class
- Reading after class:
- Hutchinson, B., Smart, A., Hanna, A., Denton, E., Greer, C., Kjartansson, O., Barnes, P. & Mitchell, M. (2021). Towards accountability for machine learning datasets: Practices from software engineering and infrastructure. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
- Selbst, A. D. (2021). An Institutional View Of Algorithmic Impact Assessments. Harvard Journal of Law & Technology (35)
- Other material:
- Lecturer: Nick Diakopoulos
- Abstract: This session will introduce a framework for applying information transparency to algorithmic decision-making systems and discuss challenges in implementing such a framework. Participants will be engaged in active learning to apply the framework.
- Date: March 18, 2022 (Friday)
- Slot: 14:30-15:30
- Nota bene:
- Reading before class:
- N. Diakopoulos. (2020) Transparency. Oxford Handbook of Ethics and AI. Eds. Markus Dubber, Frank Pasquale, Sunit Das. May, 2020
- Margaret Mitchell, et al (2019) “Model Cards for Model Reporting,” Proceedings of the Conference on Fairness, Accountability, and Transparency (2019), 220-229
- Turilli, Matteo, and Luciano Floridi (2009). “The Ethics of Information Transparency.” Ethics and Information Technology 11, no. 2 (2009).
- Other material: may appear here
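The Mitchell et al. reading above proposes "model cards" as structured disclosures accompanying trained models. As a rough sketch (field names loosely follow the paper's sections; all values are invented placeholders), a model card can be represented as a simple mapping plus a completeness check:

```python
# A sketch of a model card as structured data. This is an illustration of the
# idea from "Model Cards for Model Reporting", not an official schema.

model_card = {
    "model_details": {"name": "example-classifier", "version": "0.1",
                      "type": "logistic regression"},
    "intended_use": "Illustration only; not a deployed system.",
    "factors": ["age group", "language"],
    "metrics": {"accuracy": None, "false_positive_rate_by_group": None},
    "training_data": "Describe provenance and preprocessing here.",
    "ethical_considerations": "Document known risks and limitations here.",
}

# A transparency process can mechanically check that required disclosures exist.
required = {"model_details", "intended_use", "metrics", "ethical_considerations"}
missing = required - model_card.keys()  # empty set when the card is complete
```

Representing the card as data rather than free text makes it possible to audit model documentation automatically, one of the operationalization themes in this module.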
Exam
The exam for INFO901 consists of executing, and writing a report on, a project with a topic from AI Ethics connected to at least one of the lectures in the course. The project can be done individually or in pairs. The project should contain research work approximately equivalent to one conference paper. The report should follow the structure of a conference or a journal article:
- Introduction Should include the research problem, hypothesis or topic. Motivation of this problem/hypothesis/topic, including grounding in related work. A short description of methodology used (if applicable), scope and success criteria (if applicable). A contribution: how this work advances the state of the art in AI Ethics (and which field you aspire to contribute to with this work). Link to a code and/or data repository (if relevant).
- Preliminaries All the relevant information from other work that needs to be known in order for your project to be understood by the reader
- Related work Either as second or penultimate section. Describe work that addresses a similar research problem, hypothesis or topic to yours. Describe the similarities and differences with your work.
- 1-3 Chapters of reporting on work, results or argumentation
- Conclusions Outline how research problem, hypothesis or topic has been addressed. Outline directions for future work.
The project reports should be between 10 and 15 pages (excluding references) formatted following one of the templates:
- LaTeX2e Proceedings Templates download (zip, 309kb)
- Microsoft Word Proceedings Templates (zip, 559kb)
The students are free to publish the reports as research articles to the venue of their choice.
Deadline for submitting the report: June 10th, 2022
Office hours with the course responsibles: scheduled by need. The method for submitting the proposal will be specified with the topic approval notification.
Selection of topics
The students should develop a project research problem, hypothesis or topic and submit it for approval by April 4, 2022 by email to marija.slavkovik@uib.no and miriag@ifi.uio.no. Use subject "INFO901 topic for approval". The proposal should not exceed one page and include:
- Working Title
- Either a research problem, hypothesis or topic
- At least one related research article
- Which class from the course is the project related to
- A short declaration of aspired contribution (and to which field)
- Planned methodology
Course organisers
Miria Grisot is an associate professor at the Digitalisation section at the University of Oslo. Her research covers Information Infrastructures, Digital Infrastructures, Digital Ecosystems, Digitalisation of work, Design of Information Infrastructures, Healthcare, remote care, AI in context, data work, CSCW, and collaborative work. Her publications and project involvement can be found here.
Marija Slavkovik is a Professor at the Department of Information Science and Media Studies at the University of Bergen. Her area of expertise is collective reasoning and decision making. She has been doing research in machine ethics since 2012, in particular on formalising ethical decision-making. At present her interest is in machine meta-ethics and the problem of reconciling different stakeholders' requirements on what ethical behaviour an artificial agent should follow in a given context. The details of her activities and full list of publications can be found here.
Slides
Module 1:
Module 3:
- Lecture did not use slides
- Slides Part 2
- Slides Part 3
- Slides Part 4
Module 4:
- Lecture did not use slides
- Slides Part 2
- Slides Part 3
Module 5:
Module 6: