Machine Ethics
State of the art and interdisciplinary challenges

Much is written and discussed about AI and ethics, in both science and the media, which can make it hard to discern where science ends and fad begins. This tutorial serves as a guide through the state of the art and the main challenges of machine ethics, and contrasts it with the fields of explainable AI (XAI) and Fairness, Accountability, and Transparency (FAccT).

Description

There are currently at least three regular venues dedicated to scientific work on AI and ethics. In addition, there is increasing pressure on AI communities, in both research and industry, to incorporate ethics considerations into artificial intelligence research (Gibney 2020). With this rapid explosion of sub-fields and terminology, it is not always clear what we talk about when we talk about AI and ethics. Machine ethics is concerned with the analysis and synthesis of ethical behaviour, reasoning and decision making by computational agents (Anderson and Anderson 2011; Bremner et al. 2019; Liao et al. 2019). In contrast, responsible use of AI focuses on regulating AI research and deployment, while XAI and FAccT are concerned with maintaining human control over the impact AI-based systems have on personal lives and society. All of these fields have overlapping and diverging concerns. This tutorial has two goals:
  • to clarify to the artificial intelligence researcher the different sub-areas of AI ethics,
  • to give an overview of the state of the art and major challenges of one of these sub-areas: machine ethics.
The tutorial is organised around the following six topics:
  1. What is AI?
    With the hype around AI abilities raging, it is not clear which artificial intelligence research and applications should be subject to AI ethics concerns in general and machine ethics in particular. We begin by clarifying which class of systems we are concerned with in terms of their abilities, irrespective of whether they rely on machine learning or some other AI paradigm.
  2. Machine ethics is not just moral dilemmas.
    We give an overview of how ethics is studied in moral philosophy. We highlight the differences in challenges when the agent of morality is strictly human (as in moral philosophy) and when the agency of a system is shared between a human and an artificial agent (as in artificial intelligence). We aim to explain the key issues of machine ethics raised by moral philosophers and humanists, and to help the audience understand their point of view and concerns. AI ethics in general is plagued by confusion stemming from the double usage of words such as agent, autonomy, learning, control and choice, which mean one thing when applied to human agents and another when AI researchers apply them to artificial agents. We highlight these differences.
  3. Ethics is not just for philosophers.
    It is all too easy to assume that machine ethics is about solving the trolley problem or other dilemmas in moral philosophy, or that societal concerns about AI should be addressed only by humanities researchers or only by AI researchers, working separately. For example, while finding moral theories with which to operationalise ethical behaviour is traditionally the focus of moral philosophy, verifying that ethical behaviour is indeed achieved is not. In this part of the tutorial we elaborate why and how we as AI researchers can contribute to solving the problems of AI ethics. We discuss the benefits and shortcomings of machine ethics approaches: learning ethics by observation, using hard-coded behaviour rules, user-specified vs. factory-set ethical behaviour, etc.
  4. Implementations
    We present the state of the art in machine ethics following the taxonomy of implicit vs. explicit artificial moral agents, agents that learn moral behaviour, and hybrid approaches (Tolmeijer et al. 2020).
  5. Norms are not ethics.
    Normative reasoning is concerned with reasoning about concepts such as prohibitions, permissions and obligations. It is studied both in the knowledge representation and reasoning community and in the multi-agent systems community (Boella et al. 2006). Research in machine ethics is often trivialised as "what should a robot never do", leading one to believe that everything that needs to be done in machine ethics is already covered by normative reasoning. We discuss how ethical norms can be distinguished from other types of norms and what this implies for their representation and reasoning, relying on literature on the topic such as (O'Neill 2017).
  6. Bias and other unintended consequences
    Although bias in artificial intelligence is mostly a concern of XAI and FAccT, we give an overview of this concern from the perspective of machine ethics as well. We discuss how bias can manifest even in systems that do not use machine learning, and analyse countermeasures.
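The hard-coded-rules approach mentioned in topic 3 can be sketched as a toy "ethical governor" that filters an agent's candidate actions through fixed prohibitions before execution. This is a minimal illustration only; all action names and rules below are invented, not drawn from any system discussed in the tutorial.

```python
# Toy sketch of a hard-coded ethical governor: candidate actions are
# checked against fixed, factory-set prohibitions before the agent may
# act. All rules and action names are hypothetical illustrations.

FORBIDDEN = {"deceive_user", "share_private_data"}  # hard-coded prohibitions

def permitted_actions(candidates):
    """Return only the candidate actions that no prohibition rules out."""
    return [a for a in candidates if a not in FORBIDDEN]

print(permitted_actions(["notify_user", "share_private_data", "log_event"]))
# → ['notify_user', 'log_event']
```

Even this toy shows one shortcoming discussed in the tutorial: the rule set is fixed at design time, so any situation its authors did not anticipate is handled blindly.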
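The deontic concepts from topic 5 (prohibitions, permissions, obligations) can likewise be illustrated with a minimal norm checker that flags forbidden actions performed and obligations left unmet. The norms and actions below are hypothetical examples, not a claim about how normative reasoning systems are actually built.

```python
# Minimal sketch of normative reasoning: each action carries a deontic
# status and a checker reports violations. Norms and actions are
# invented for illustration only.

NORMS = {
    "stop_at_red_light": "obligatory",
    "play_music": "permitted",
    "run_red_light": "forbidden",
}

def violations(performed, norms):
    """Return (forbidden actions performed, obligations left unmet)."""
    done = set(performed)
    broken = [a for a in performed if norms.get(a) == "forbidden"]
    unmet = [a for a, s in norms.items() if s == "obligatory" and a not in done]
    return broken, unmet

print(violations(["play_music", "run_red_light"], NORMS))
# → (['run_red_light'], ['stop_at_red_light'])
```

Note what the sketch cannot express: whether a norm is an ethical norm or merely a legal or social convention looks identical here, which is exactly the distinction topic 5 takes up.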

Presenters

Louise Dennis Louise Dennis is a Lecturer at the Autonomy and Verification Laboratory at the University of Liverpool. She has world-leading expertise in agent programming languages (including explainability for agent programming), the development and verification of autonomous systems, and the implementation of ethical reasoning, and has published over 50 peer-reviewed papers on verification and autonomy, including key contributions on robot morality and ethics. She is a member of the IEEE P7001 Standards Working Group on Transparency for Autonomous Systems. Her publications can be found here.


Marija Slavkovik Marija Slavkovik is an Associate Professor at the Department of Information Science and Media Studies, Faculty of Social Sciences, University of Bergen. Her area of expertise is collective reasoning and decision making. She has been doing research in machine ethics since 2012, in particular on formalising ethical decision making. At present her interest is in machine meta-ethics and the problem of reconciling the requirements of various stakeholders regarding what ethical behaviour an artificial agent should follow in a given context. Details of her activities and a full list of publications can be found here.

Slides

Date TBD
Place TBD
Slides TBD