Machines that know Right, and cannot do Wrong
The theory and practice of machine ethics
Machine ethics is concerned with the challenge of constructing ethical and ethically behaving artificial agents and systems. This quarter-day tutorial introduces recent advances in the machine ethics field and outlines the main AI challenges in this discipline.
Slides
Date: Sunday, July 15, 2018, 16:00-18:00
Place: Room T6
Slides: Part 1, Part 2, Part 3
For updates on events in machine ethics, subscribe to https://mailman.uib.no/listinfo/machine_ethics
Description
The reality of self-driving cars has heralded the need to consider the legal and ethical impact of artificial agents. Self-driving cars and powerful robots are not the only ethically "challenging" artificial entities today. Whenever we outsource decisions, we also outsource the responsibility for the legal and ethical impact of those decisions. Thus, any system capable of autonomous decision-making that replaces or aids human decision-making is potentially a subject of concern for machine ethics. It is therefore unsurprising that there is enormous interest, particularly in the AI community, in the challenge of making ethical artificial systems and ensuring that artificial entities "behave ethically". The aim of this tutorial is to support the AI community in advancing this interest.
Machine ethics involves research in traditional AI topics such as robotics, decision-making, reasoning and planning, but also draws on moral philosophy, economics and law. Much of the early work in machine ethics revolves around the question "can an artificial agent be a moral agent?" Today we see a rise in research pursuing the question "how do we make artificial moral agents?" This tutorial aims to introduce the existing theory of explicitly and implicitly ethical agents, give an overview of existing implementations of such agents, and outline the open lines of research and challenges in machine ethics.
The tutorial is organised around the following topics:
- What is machine ethics? - we introduce the field, in particular the issues that touch upon moral philosophy, law and social science, before focusing on the machine ethics challenges most interesting for AI.
- Which systems should/can be ethical? - we present the Moor [2006] distinction between implicit moral agents and explicit moral agents, as well as the Dyrkolbotn et al. [2018] refinement. We also discuss the theory of bottom-up and top-down approaches from Chapter 2 of Wallach and Allen [2008]. We then proceed to discuss specific challenges in implementing and validating ethical behaviour with respect to different decision-making approaches.
- Issues of implementation, validation and certification - we discuss material from Charisi et al. [2018].
- Overview of Implementation Approaches - more specifically, we present the core ideas of Anderson et al. [2016]; Arkin et al. [2012]; Dennis et al. [2016]; Vanderelst and Winfield [2018]; Govindarajulu and Bringsjord [2017]; Lindner and Bentzen [2017]. A minimal illustrative sketch of one such architecture appears after this list.
- Verifiable AI - we focus on the particular challenges and advantages, with respect to ethics, of systems that use rule-based reasoning. In particular, we discuss the role that verification and other formal methods can play in machine ethics, as in Dennis et al. [2016]; Dennis et al. [2015].
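To make the top-down idea concrete, here is a minimal Python sketch in the spirit of the ethical-governor and ethical-layer architectures of Arkin et al. [2012] and Vanderelst and Winfield [2018]: candidate actions proposed by a planner are filtered through explicit, human-authored rules before the agent may act. All names (`Action`, `EthicalGovernor`, the example rules) are illustrative assumptions, not code from the cited papers.

```python
# Illustrative sketch of a "top-down" explicitly ethical agent: candidate
# actions are filtered through an ethical governor that applies explicit
# rules before the agent may act. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass(frozen=True)
class Action:
    name: str
    harms_human: bool = False
    breaks_law: bool = False


# An ethical rule maps an action to True if the rule forbids it.
Rule = Callable[[Action], bool]


class EthicalGovernor:
    """Top-down governor: an action is permitted only if no rule forbids it."""

    def __init__(self, rules: List[Rule]):
        self.rules = rules

    def permitted(self, action: Action) -> bool:
        return not any(rule(action) for rule in self.rules)

    def filter(self, candidates: List[Action]) -> List[Action]:
        return [a for a in candidates if self.permitted(a)]


# Two explicit, human-authored rules (the "top-down" part).
rules: List[Rule] = [
    lambda a: a.harms_human,  # never harm a human
    lambda a: a.breaks_law,   # never break the law
]
governor = EthicalGovernor(rules)

candidates = [
    Action("swerve_into_pedestrian", harms_human=True),
    Action("run_red_light", breaks_law=True),
    Action("brake_and_stop"),
]
print([a.name for a in governor.filter(candidates)])
# -> ['brake_and_stop']

# A toy check in the spirit of the Verifiable AI topic: exhaustively verify
# that the governor never permits a harmful action over a small, enumerable
# action space.
action_space = [
    Action(n, harms_human=h, breaks_law=b)
    for n in ("a1", "a2")
    for h in (False, True)
    for b in (False, True)
]
assert all(not a.harms_human for a in governor.filter(action_space))
```

Keeping the governor separate from the decision-making component means the same rules can be inspected, and in principle verified, independently of the planner. Real verification efforts such as Dennis et al. [2016] apply program model checkers (e.g., Agent JPF) to BDI agent programs, rather than enumerating a toy action space as above.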
Presenters
Louise Dennis is a Post-Doctoral Researcher and Knowledge Exchange Support Officer in the Autonomy and Verification Laboratory at the University of Liverpool. Her background is in artificial intelligence, more specifically in agent and autonomous systems and automated reasoning. She has worked on the development of several automated reasoning and theorem proving tools, most notably the Agent JPF model checker for BDI agent languages. She is currently interested in rational agent programming languages and architectures for autonomous systems, with a particular emphasis on ethical machine reasoning and creating verifiable systems. Her publications can be found here.
Marija Slavkovik is an Associate Professor at the Department of Information Science and Media Studies at the University of Bergen. Her area of expertise is collective reasoning and decision-making. She is actively doing research in machine ethics, in particular formalising ethical decision-making. Dr. Slavkovik was a coordinator of the 2016 Dagstuhl Seminar 16222 on Engineering Moral Agents - from Human Morality to Artificial Morality, and will be one of the organisers of the 2019 Dagstuhl Seminar 19171 on Ethics and Trust: Principles, Verification and Validation. She has given numerous courses and invited talks on (engineering) machine ethics. Details of these activities can be found here. An up-to-date list of publications can be found here.