INFO381-2020: Graduate course on AI Ethics
@ University of Bergen

INFO381-2020 was organised in eight six-hour sessions, the standard format for courses in the Master's in Information Science programme. Each session was structured as a sequence of lectures, discussions and group work. The learning outcomes and the syllabus were adjusted accordingly, to reflect the students' existing knowledge of AI:
Knowledge.

  • Identify the basic problems studied in XAI, FAccT, Responsible AI and machine ethics.
  • Understand the premises of the core moral theories.
  • Interpret, explain and elaborate on the need for, and the challenges of, AI Ethics.
  • Experience the entire process of research in machine ethics: from the inception of an idea, through the analysis of related work and the refinement of a research question, to planning and executing group work and reporting on it in the form of a scientific report.
Skills.
  • Appraise the ethical aspects of AI problems.
  • Match a specific AI Ethics challenge to its most relevant discipline.
General competence.
  • Reading and explaining scientific articles.
  • Research project management.
  • Scientific reporting.

Organisation

  • 1. What do we talk about when we talk about AI and Ethics
    Material: Chapters 1 and 2 from Russell and Norvig (2015).
    Discussion: Ministry of Local Government and Modernisation (2020).
    Exercise: The Moral Machine project.
  • 2. Basics of moral philosophy 1/2
    Material: Vaughn (2014).
    Discussion: Turing (1950), Weizenbaum (1966), Searle (1980).
    Workshop: How to motivate a research question.
    Exercise: Find out who Eugene Goostman is.
  • 3. Basics of moral philosophy 2/2
    Material: Vaughn (2014).
    Discussion: Rawls (1958), McIntyre (2019), Anderson (2008), Foot (1967).
    Workshop: The role of related work in science articles.
    Exercise:
    • Fix the three laws of robotics (or come up with your own).
    • Represent “do not do harm” in a formalism of your choice (a formal sketch follows the session list).
  • 4. Deontic logic, Top-down machine ethics
    Material: Lecture notes.
    Discussion: Moor (2006), Amodei et al. (2016), Awad et al. (2018), Powers (2006).
    Workshop: Defining a research question and success criteria.
    Exercise:
    • How would you build an implicit artificial agent that follows prima facie duties? (A toy sketch follows the session list.)
    • How would you decide which moral theory should be used for governing the behaviour of artificial moral agents?
  • 5. Inductive logic programming, Bottom-up machine ethics
    Material: Chapter 19.5 of Russell and Norvig (2015), Logic Programming (a toy rule-induction sketch follows the session list).
    Discussion: Tolmeijer et al. (2020), Arkin (2008), Bentzen (2016), Malle et al. (2015).
    Workshop: How to develop research project ideas.
  • 6. Fairness, Accountability and Trust
    Material: Wieringa (2020), Bolukbasi et al. (2016), Mehrabi et al. (2019).
    Videos: Arvind Narayanan's tutorial at FAT* 2018; Trusted AI and AI Fairness 360 tutorial by Prasanna Sattigeri, September 18, 2019.
    Exercise: AIF360 (a minimal usage sketch follows the session list).
  • 7. Explainability
    Material: Miller (2019), Arya et al. (2019), Gunning and Aha (2019).
    Video: AI Explainability 360 tutorial by Amit Dhurandhar, September 18, 2019.
    Exercise: AIX360 (see the explainability sketch after the session list).
  • 8. Responsible use of AI
    Material: Part III of Himma and Tavani (2008), Jobin et al. (2019), Hagendorff (2019), Rahwan (2018), the AI ISO standard in progress, and the EU Ethics Guidelines for Trustworthy AI.
    Discussion: The Norwegian AI ethics principles.
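
The “do not do harm” exercise of session 3 can be approached with the standard deontic logic introduced in session 4. A minimal sketch in LaTeX notation, where O is the obligation operator and F the prohibition operator defined from it:

    \[
      \mathbf{F}\,\varphi \;\equiv\; \mathbf{O}\,\neg\varphi
      \qquad\text{hence}\qquad
      \mathbf{F}\,\mathit{harm}(a) \;\equiv\; \mathbf{O}\,\neg\mathit{harm}(a)
    \]

Read: harming is forbidden precisely when not harming is obligatory. The predicate harm(a), meaning that action a causes harm, is an assumption of this sketch; pinning down what counts as harm is the substance of the exercise.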
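
For the prima facie duty exercise of session 4, one possible starting point, in the spirit of Anderson (2008), is to score candidate actions against weighted duties and act on the best total. The sketch below is deliberately naive; the duty names, weights and actions are invented for illustration, not taken from any published system.

    # A toy prima facie duty follower: each action is scored against
    # weighted duties, and the agent picks the action with the highest
    # weighted duty satisfaction. Negative scores mean a duty is violated.
    DUTIES = {"non_maleficence": 3.0, "beneficence": 1.0, "autonomy": 2.0}

    ACTIONS = {
        "remind_patient": {"non_maleficence": 0.0, "beneficence": 0.5, "autonomy": 0.5},
        "notify_doctor":  {"non_maleficence": 0.5, "beneficence": 0.5, "autonomy": -0.5},
        "do_nothing":     {"non_maleficence": -1.0, "beneficence": -0.5, "autonomy": 1.0},
    }

    def choose_action(actions, duties):
        """Return the action maximising the weighted sum of duty scores."""
        def score(profile):
            return sum(weight * profile[duty] for duty, weight in duties.items())
        return max(actions, key=lambda name: score(actions[name]))

    print(choose_action(ACTIONS, DUTIES))  # -> remind_patient

Conflicts between duties are resolved only by the weights, which is exactly the point students should criticise: who sets the weights, and on what grounds?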
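
Session 5's bottom-up theme can be previewed without a full ILP system such as Aleph or Progol: the sketch below induces a permissibility rule from a handful of hand-labelled cases, using a scikit-learn decision tree as a rough stand-in for logical rule induction. The features, cases and labels are invented for illustration.

    # Bottom-up machine ethics in miniature: induce a rule for the
    # permissibility of an action from labelled examples.
    from sklearn.tree import DecisionTreeClassifier, export_text

    FEATURES = ["causes_harm", "consented", "saves_lives"]

    # Toy cases; 1 = permissible, 0 = impermissible.
    X = [
        [1, 0, 0],  # harms, no consent, saves nobody
        [1, 1, 0],  # harms, but with consent
        [0, 0, 0],  # harmless
        [1, 0, 1],  # harms without consent, even though it saves lives
        [0, 1, 1],  # harmless, consented, saves lives
    ]
    y = [0, 1, 1, 0, 1]

    clf = DecisionTreeClassifier(random_state=0).fit(X, y)
    # Print the induced rule, here roughly "permissible unless it harms
    # without consent".
    print(export_text(clf, feature_names=FEATURES))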
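
For the AIF360 exercise of session 6, a minimal sketch of the intended workflow: measure a group fairness metric on the UCI Adult data, mitigate with the Reweighing pre-processor, and measure again. It assumes aif360 is installed and that the raw Adult CSV files have been downloaded to the folder AIF360 expects (the library reports the path if they are missing).

    # Measure and mitigate group unfairness with AIF360.
    from aif360.datasets import AdultDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    privileged = [{"sex": 1}]
    unprivileged = [{"sex": 0}]

    data = AdultDataset()
    metric = BinaryLabelDatasetMetric(
        data, privileged_groups=privileged, unprivileged_groups=unprivileged)
    print("Disparate impact before:", metric.disparate_impact())

    # Reweighing adjusts instance weights so that the protected attribute
    # and the label become statistically independent.
    rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
    data_rw = rw.fit_transform(data)
    metric_rw = BinaryLabelDatasetMetric(
        data_rw, privileged_groups=privileged, unprivileged_groups=unprivileged)
    print("Disparate impact after:", metric_rw.disparate_impact())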
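
The AIX360 exercise of session 7 targets post-hoc explanation. As a model-agnostic warm-up that needs only scikit-learn, the sketch below computes permutation feature importance, a global post-hoc explanation in the sense of Arya et al.'s (2019) taxonomy; the dataset and model are arbitrary illustrative choices, not part of AIX360.

    # Global post-hoc explanation via permutation feature importance:
    # how much does shuffling each feature degrade held-out accuracy?
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

    # The five features the model leans on most.
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")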

    References