State-of-the-art and interdisciplinary challenges
- the Artificial Intelligence, Ethics, and Society (AIES) conference, co-located with AAAI,
- the ACM FAccT conference,
- the XAI Workshop, co-located with IJCAI
- to clarify for the artificial intelligence researcher the different sub-areas of AI ethics,
- to give an overview of the state of the art and major challenges of one of these sub-areas, namely machine ethics.
- What is AI?
With hype about AI abilities raging, it is not clear which artificial intelligence research and applications should be subject to AI ethics concerns in general, and machine ethics in particular. We begin by clarifying which class of systems we are concerned with in terms of their abilities, irrespective of whether they rely on machine learning or some other AI paradigm.
- Machine ethics is not just moral dilemmas.
We give an overview of how ethics is studied in moral philosophy. We highlight how the challenges differ when the agent of morality is strictly human (as in moral philosophy) and when the agency of a system is shared between a human and an artificial agent (as in artificial intelligence). We aim to explain the key issues of machine ethics raised by moral philosophers and humanists, and to help the audience understand their point of view and concerns in machine ethics. AI ethics in general is plagued by a confusion stemming from the double usage of words such as agent, autonomy, learning, control, and choice, which mean one thing when applied to human agents and another when AI researchers use them in connection with artificial agents. We highlight these differences.
- Ethics is not just for philosophers.
It is all too easy to assume that machine ethics is about solving the trolley problem or other dilemmas in moral philosophy, or that societal concerns of AI should be addressed only by humanities researchers or only by AI researchers, separately. For example, while finding moral theories to operationalise ethical behaviour is traditionally the focus of moral philosophy, verifying that ethical behaviour is indeed achieved is not. In this part of the tutorial we elaborate on why and how we as AI researchers contribute towards solving the problems of AI ethics. We discuss topics such as the benefits and shortcomings of machine ethics approaches: learning ethics by observation, using hard-coded behaviour rules, user-specified vs. factory-set ethical behaviour, etc.
We present the state of the art in machine ethics following the taxonomy of implicit vs. explicit artificial moral agents, artificial agents that learn moral behaviour, and those that implement hybrid approaches (Tolmeijer et al. 2020).
- Norms are not ethics.
Normative reasoning is concerned with reasoning about concepts such as prohibitions, permissions and obligations. It is studied in both the knowledge representation and reasoning and the multi-agent systems communities (Boella et al. 2006). Research in machine ethics is often trivialized as "what should a robot never do", leading one to believe that everything machine ethics needs is already covered by normative reasoning. We discuss how ethical norms can be distinguished from other types of norms and what this implies for their representation and reasoning, relying on literature on the topic such as (O'Neill 2017).
- Bias and other unintended consequences
Although bias in artificial intelligence is mostly a concern of the XAI and FAccT communities, we give an overview of this concern from the perspective of machine ethics as well. We discuss how bias can manifest even in systems that do not use machine learning, and analyse countermeasures.
Louise Dennis is a Lecturer at the Autonomy and Verification Laboratory at the University of Liverpool. She has world-leading expertise in agent programming languages (including explainability for agent programming), the development and verification of autonomous systems, and the implementation of ethical reasoning, and has published over 50 peer-reviewed papers on verification and autonomy, including key contributions on robot morality/ethics. She is a member of the IEEE P7001 Standards Working Group on Transparency for Autonomous Systems. Her publications can be found here.
Marija Slavkovik is an Associate Professor at the Department of Information Science and Media Studies, Faculty of Social Sciences, University of Bergen. Her area of expertise is collective reasoning and decision making. She has been doing research in machine ethics since 2012, in particular on formalising ethical decision-making. At present her interest is in machine meta-ethics and the problem of reconciling the requirements of various stakeholders regarding what ethical behaviour an artificial agent should follow in a given context. The details of her activities and a full list of publications can be found here.
Date: TBD. Place: TBD. Slides: TBD.