Machines that know Right and cannot do Wrong
The theory and practice of machine ethics

Machine ethics is concerned with the challenge of constructing ethical and ethically behaving artificial agents and systems. This quarter-day tutorial introduces recent advances in machine ethics and outlines the field's main AI challenges.


Date: Sunday, July 15, 2018, 16:00–18:00
Place: Room T6
Slides: Part 1, Part 2, Part 3
For updates on events in machine ethics, subscribe to https://mailman.uib.no/listinfo/machine_ethics

Description

The reality of self-driving cars has heralded the need to consider the legal and ethical impact of artificial agents. Self-driving cars and powerful robots are not, however, the only ethically "challenging" artificial entities today. Whenever we outsource decisions, we also outsource the responsibility for the legal and ethical impact of those decisions. Any system capable of autonomous decision-making that replaces or aids human decision-making is thus potentially a subject of concern for machine ethics. It is unsurprising, then, that there is enormous interest, particularly in the AI community, in the challenge of building ethical artificial systems and ensuring that artificial entities "behave ethically". The aim of this tutorial is to support the AI community in advancing this interest.

Machine ethics involves research in traditional AI topics such as robotics, decision-making, reasoning, and planning, but it also draws on moral philosophy, economics, and law. Much of the early work in machine ethics revolved around the question "can an artificial agent be a moral agent?" Today we see a rise in research pursuing the question "how do we make artificial moral agents?" This tutorial aims to introduce the existing theory of explicitly and implicitly ethical agents, give an overview of existing implementations of such agents, and outline the open lines of research and challenges in machine ethics; the sketch below illustrates the distinction.
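To make the distinction between implicitly and explicitly ethical agents concrete, here is a minimal Python sketch. All class, function, and parameter names are hypothetical, and a simple harm threshold stands in for a genuine ethical theory; this is an illustration of the concept, not an implementation from the tutorial. An implicitly ethical agent is safe by construction, because unethical actions are absent from its repertoire; an explicitly ethical agent represents an ethical principle and applies it when choosing an action.

```python
# A minimal sketch (hypothetical names) contrasting implicitly and
# explicitly ethical agents. A harm-threshold rule stands in for a
# real ethical theory.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Action:
    name: str
    utility: float  # how well the action serves the agent's goal
    harm: float     # estimated harm to others; 0.0 = harmless


class ImplicitlyEthicalAgent:
    """Ethics is baked into the design: the action set is pre-filtered."""

    def __init__(self, safe_actions: List[Action]):
        # The designer only ever supplies vetted, harmless actions.
        self.actions = safe_actions

    def act(self) -> Action:
        return max(self.actions, key=lambda a: a.utility)


class ExplicitlyEthicalAgent:
    """Ethics is an explicit, inspectable principle applied at run time."""

    def __init__(self, actions: List[Action],
                 principle: Callable[[Action], bool]):
        self.actions = actions
        self.principle = principle  # e.g. a deontic rule or harm threshold

    def act(self) -> Action:
        permissible = [a for a in self.actions if self.principle(a)]
        if not permissible:
            raise RuntimeError("No ethically permissible action available")
        return max(permissible, key=lambda a: a.utility)


if __name__ == "__main__":
    options = [Action("shortcut", utility=0.9, harm=0.6),
               Action("detour", utility=0.5, harm=0.0)]
    agent = ExplicitlyEthicalAgent(options, principle=lambda a: a.harm < 0.1)
    print(agent.act().name)  # -> "detour": the harmful shortcut is rejected
```

The design point is that the explicit agent's principle is a first-class, inspectable object that can be reasoned about and verified, whereas the implicit agent's ethics lives only in the designer's choice of action set.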

The tutorial is organised around the following topics:

Presenters

Louise Dennis Louise Dennis is a Post-Doctoral Researcher and Knowledge Exchange Support Officer in the Autonomy and Verification Laboratory at the University of Liverpool. Her background is in artificial intelligence, more specifically in agents, autonomous systems, and automated reasoning. She has worked on the development of several automated reasoning and theorem proving tools, most notably the Agent JPF model checker for BDI agent languages. She is currently interested in rational agent programming languages and architectures for autonomous systems, with a particular emphasis on ethical machine reasoning and creating verifiable systems. Her publications can be found here.


Marija Slavkovik Marija Slavkovik is an Associate Professor at the Department of Information Science and Media Studies at the University of Bergen. Her area of expertise is collective reasoning and decision-making. She is actively doing research in machine ethics, in particular on formalising ethical decision-making. Dr. Slavkovik was a coordinator of the Dagstuhl Seminar 16222 in 2016 on Engineering Moral Agents - from Human Morality to Artificial Morality and will be one of the organisers of the Dagstuhl Seminar 19171 in 2019 on Ethics and Trust: Principles, Verification and Validation. She has given numerous courses and invited talks on (engineering) machine ethics. Details of these activities can be found here. An up-to-date list of her publications can be found here.