Machine Ethics
State of the art and interdisciplinary challenges

Much is written and discussed about AI and ethics in both science and the media, which can make it difficult to discern where science ends and fad begins. This tutorial serves as a guide through the state of the art and main challenges in machine ethics, and contrasts it with the fields of explainable AI (XAI) and Fairness, Accountability, and Transparency (FAccT).


There are currently at least three regular venues dedicated to scientific work on AI and ethics. In addition, there is increasing pressure on AI communities, both in research and industry, to incorporate ethical considerations into artificial intelligence research (Gibney 2020). With this rapid explosion of sub-fields and terminology, it is not always clear what we talk about when we talk about AI and ethics. Machine ethics is concerned with the analysis and synthesis of ethical behaviour, reasoning and decision making by computational agents (Anderson and Anderson 2011, Bremner et al 2019, Liao et al 2019). In contrast, responsible use of AI focuses on regulating AI research and deployment, while XAI and FAccT are concerned with maintaining human control over the impact AI-based systems have on personal lives and society. All of these fields have overlapping and diverging concerns. This tutorial has two goals:
  • to clarify, for the artificial intelligence researcher, the different sub-areas of AI ethics,
  • to give an overview of the state of the art and major challenges of one of these sub-areas: machine ethics.
Since the tutorial originally could not be presented live, we prepared several videos covering the topics we envisioned for it. The situation has since improved, but the videos were already made, so why not make them available. This is where we say "hi!"

The tutorial is organised around the following topics:
  1. What is AI (to do with ethics)?
    We begin by discussing why there is a concern for ethics in AI.

    Link for the recording of the live tutorial (also the talk below):

    References in Video:

  2. The landscape of AI ethics
    We give an overview of what sub-disciplines of AI ethics exist in addition to machine ethics and point to other tutorials and workshops at IJCAI-PRICAI2020 that focus specifically on these sub-topics: explainability, accountability, transparency, responsible AI, fairness and privacy.

    References in Video:

  3. What is machine ethics?
    We discuss various taxonomies for implementations of machine ethics.

    References in Video:

    • Moor, James H. 2006. The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems 21(4), 18–21.
    • Wallach and Allen. 2008. Moral machines: Teaching robots right from wrong. Oxford University Press, Oxford, UK. DOI:10.1093/acprof:oso/9780195374049.001.0001

  4. The quick and dirty introduction to moral philosophy
    We give an overview of how ethics is studied in moral philosophy. We highlight the differences in challenges when the agent of morality is strictly human (as in moral philosophy) and when the agency of a system is shared between a human and an artificial agent (as in artificial intelligence).

    A link to the video of the live tutorial:
  5. An aside on causality and intentions.
    A brief excursion into how Machine Ethics systems reason about causality and the intentions of agents.

    References from Video:

    • Miller and Shanahan. 2002. Some alternative formulations of the event calculus. In Computational logic: logic programming and beyond. Springer, 452–490.
    • Berreby et al. 2018. Event-Based and Scenario-Based Causality for Computational Ethics. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS ’18), 47–155
    • Naveen Sundar Govindarajulu and Selmer Bringsjord. 2017. On automating the doctrine of double effect. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17). AAAI Press, 4722–4730.
    • Lindner and Bentzen. 2018. A Formalization of Kant’s Second Formulation of the Categorical Imperative. In Proceedings of The 14th International Conference on Deontic Logic and Normative Systems (DEON 2018)
    • Halpern, J. Y. 2016. Actual Causality. The MIT press.
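    The doctrine of double effect mentioned above (Govindarajulu and Bringsjord 2017) can be illustrated with a toy sketch. This is not their formalisation; the `Action` representation and the trolley-style examples below are hypothetical, chosen only to show the two core checks: harm must not be the means to the good, and the good must outweigh the harm.

```python
# Toy sketch of the doctrine of double effect (DDE), NOT the cited
# formalisation: an action with a harmful side effect may be permissible
# if the harm is not the means to the good and the good outweighs the harm.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    good_effects: dict          # effect -> positive utility
    bad_effects: dict           # effect -> negative utility
    means: set = field(default_factory=set)  # effects used as means to the goal

def permissible_dde(action: Action) -> bool:
    # Condition 1: no bad effect may be the means to the good effect.
    if action.means & set(action.bad_effects):
        return False
    # Condition 2: the good must outweigh the harm overall.
    total = sum(action.good_effects.values()) + sum(action.bad_effects.values())
    return total > 0

# Classic contrast: diverting a trolley (harm is a side effect) versus
# pushing a bystander (harm IS the means).
divert = Action("divert trolley",
                good_effects={"five saved": 5},
                bad_effects={"one harmed": -1},
                means={"track switched"})
push = Action("push bystander",
              good_effects={"five saved": 5},
              bad_effects={"one harmed": -1},
              means={"one harmed"})

print(permissible_dde(divert))  # True
print(permissible_dde(push))    # False
```

    The only difference between the two actions is whether the harm appears in the `means` set, which is exactly the distinction the doctrine turns on.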

  6. An introduction to HERA
    A brief introduction to the HERA system, which we will use for the practical component of the tutorial.
  7. The HERA system hands-on
    This is your opportunity to get some hands-on experience with machine ethics. These tutorials are courtesy of the HERA project.
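    Before diving into the HERA tutorials, here is a self-contained sketch of the style of reasoning HERA supports: evaluating actions against an ethical principle over a simple causal model. This is not the HERA API; the act-utilitarian principle and the medical example below are illustrative assumptions.

```python
# Illustrative principle-based evaluation (NOT the HERA API): an
# act-utilitarian principle deems an action permissible iff the total
# utility of its consequences is non-negative.

def utilitarian_permissible(action, consequences, utility):
    """Act-utilitarian principle: permissible iff total utility >= 0."""
    return sum(utility[c] for c in consequences[action]) >= 0

# A hypothetical medical scenario: each action causes some consequences,
# and each consequence carries a utility.
consequences = {
    "administer_drug": ["pain_relieved", "drowsiness"],
    "withhold_drug": ["pain_persists"],
}
utility = {"pain_relieved": 2, "drowsiness": -1, "pain_persists": -2}

for act in consequences:
    print(act, utilitarian_permissible(act, consequences, utility))
# administer_drug True
# withhold_drug False
```

    In HERA proper, the same kind of judgement is made over richer causal agency models, and other principles (Kantian, double effect) can be swapped in for the utilitarian one.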

    Louise walks through some of the HERA tutorials to help you work through them.

    It also might be useful to have a look at:

  8. Implementations
    We present the state-of-the-art regarding machine ethics through a survey of implementations.

  9. A link to the video of the live presentation (all three parts together):

    References in video:

    • Part 1:
      • Nallur, Vivek (2020). Landscape of Machine Implemented Ethics. Science and Engineering Ethics 26 (5):2381-2399.
      • S. Tolmeijer, M. Kneer, C. Sarasua, M. Christen, A. Bernstein (2020). Implementations in Machine Ethics: A Survey. arXiv preprint arXiv:2001.07573
      • Arkin et al. 2009. An Ethical Governor for Constraining Lethal Action in an Autonomous System. Technical Report GIT-GVU-09-02, Georgia Institute of Technology.
      • Anderson et al: A Value Driven Agent: Instantiation of a Case-Supported Principle-Based Behavior Paradigm, Workshop on AI, Ethics and Society, 2016
      • Winfield et al. 2014. Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection. In Advances in Autonomous Robotics Systems, LNCS 8717, 85–96.
    • Part 2:
      • Bringsjord et al 2014. Akratic Robots and the Computational Logic Thereof. Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology. DOI: 10.1109/ETHICS.2014.6893436
      • Bringsjord & Govindarajulu (2013), Toward a Modern Geography of Minds, Machines, and Math, in V. C. Mueller, ed., ‘Philosophy and Theory of Artificial Intelligence’, Vol. 5 of Studies in Applied Philosophy, Epistemology and Rational Ethics, Springer, New York, NY, pp. 151–165. 10.1109/MIS.2006.82 (Citation on slides for this is wrong!)
      • Website:
      • Benzmüller et al. Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support, In Artificial Intelligence, Elsevier, volume 287, pp. 103348, 2020.
      • Berreby et al. A Declarative Modular Framework for Representing and Applying Ethical Principles. Proc. 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017) (Can't find DOI)
      • Cointe et al. Ethical Judgment of Agents’ Behaviors in Multi-Agent Systems. Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016) (Can't find DOI)
      • P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artificial Intelligence 77 (1995) 321–357.
      • B. Liao, M. Slavkovik, L. van der Torre, Building Jiminy Cricket: An architecture for moral agreements among stakeholders, in: AAAI/ACM Conference on Artificial Intelligence, Ethics and Society
      • K. Atkinson, T. Bench-Capon, Addressing moral problems through practical reasoning, J. Appl. Log. 6(2) (2008) 135–151.
      • K. Atkinson, T. Bench-Capon, Taking the long view: looking ahead in practical reasoning, in: Proceedings of COMMA 2014, 2014, pp. 109–120. (Can't find DOI)
      • K. Atkinson, T. Bench-Capon, States, goals and values: revisiting practical reasoning, Argument Comput. 7 (2–3) (2016) 135–154. DOI: 10.3233/AAC-160011
    • Part 3:
      • Abel et al (2016). Reinforcement learning as a framework for ethical decision making. In B. Bonet, et al. (Eds.), AAAI Workshop: AI, Ethics, and Society (pp. 54–61) (Can't find DOI)
      • Ecoffet and Lehman. Reinforcement Learning Under Moral Uncertainty. arXiv:2006.04734 [cs], July 2020.
      • Balakrishnan et al, 2019. Incorporating behavioral constraints in online ai systems. In Proc. of the 33rd AAAI Conference on Artificial Intelligence (AAAI) (Can't find DOI)
      • Loreggia et al. 2018. Value alignment via tractable preference distance. In Yampolskiy, R. V., ed., Artificial Intelligence Safety and Security. CRC Press. chapter 18. (Can't find DOI)
      • Rossi and Mattei, 2019. Building Ethically Bounded AI. In 33rd AAAI Conference on Artificial Intelligence (AAAI-19). DOI: 10.1609/aaai.v33i01.33019785

  10. Verification of Machine Ethics Systems
    We look at the verification of machine ethics systems. In the first part we ask why we need to verify them and what sort of properties we can check. In the second part we look at three example verifications.

    References in video:

    • Part 1:
      • Louise A. Dennis, Michael Fisher, Nicholas K. Lincoln, Alexei Lisitsa, Sandor M. Veres. Practical Verification of Decision-Making in Agent-Based Autonomous Systems (available Open Access). Automated Software Engineering 23(3), 305-359, 2016. DOI: 10.1007/s10515-014-0168-9. 
    • Part 2:
      • Paul Bremner, Louise A. Dennis, Michael Fisher and Alan F. Winfield. On Proactive, Transparent and Verifiable Ethical Reasoning for Robots. Proceedings of the IEEE. Special Issue on Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems. 107(3), pp:541-561. DOI: 10.1109/JPROC.2019.2898267
      • Louise A. Dennis, Martin Mose Bentzen, Felix Lindner and Michael Fisher. Verifiable Machine Ethics in Changing Contexts. In: 35th AAAI Conference on Artificial Intelligence (AAAI 2021). To appear.
      • Louise A. Dennis, Michael Fisher, Marija Slavkovik, and Matt Webster. Formal Verification of Ethical Choices in Autonomous Systems. Robotics and Autonomous Systems. DOI: 10.1016/j.robot.2015.11.012.
      • Louise A. Dennis, Michael Fisher, and Alan Winfield. Towards Verifiably Ethical Robot Behaviour. Proceedings of the AAAI Workshop on Artificial Intelligence and Ethics (1st International Workshop on AI and Ethics).
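    The core idea behind verifying a machine ethics system can be conveyed in miniature: enumerate every choice the agent's decision procedure allows in a small state space and assert that an ethical property holds in all resulting states. The model below (a robot near a hole, echoing Winfield-style scenarios) and its property are hypothetical, not taken from the cited papers.

```python
# Minimal exhaustive check in the spirit of model-checking an ethical
# agent: the model, policy, and property here are toy assumptions.

def outcomes(state, policy, transitions):
    """Yield all states reachable via the actions the policy allows."""
    for action in policy(state):
        yield transitions[(state, action)]

def check_all(initial_states, policy, transitions, prop):
    """True iff the property holds in every outcome of every allowed action."""
    return all(prop(s2)
               for s in initial_states
               for s2 in outcomes(s, policy, transitions))

# Toy model: a robot near a hole may warn, block, or ignore a human.
transitions = {
    ("human_approaching", "warn"): "human_safe",
    ("human_approaching", "block"): "human_safe",
    ("human_approaching", "ignore"): "human_in_hole",
}

def ethical_policy(state):
    # An "ethical governor" filters out actions whose outcome harms the human.
    return [a for (s, a), s2 in transitions.items()
            if s == state and s2 != "human_in_hole"]

prop = lambda s: s != "human_in_hole"
print(check_all(["human_approaching"], ethical_policy, transitions, prop))  # True
```

    Real verifications of this kind (e.g. with model checkers over agent programs) face the same structure at much larger scale: the hard part is building a faithful model of the agent and its environment, not running the check itself.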

  11. Social choice and machine ethics
    We discuss the role that social choice can play in machine ethics: the problem of whose values or obligations should be implemented in a system capable of ethical reasoning. Specifically, we focus on what it would take to agree on values and obligations for machines.
    Link for the recording of the live tutorial (same talk different take):
    References in video:
    • Seth D. Baum. 2020. Social choice ethics in artificial intelligence. AI Soc. 35, 1 (2020), 165–176.
    • Ritesh Noothigattu, Snehalkumar (Neil) S. Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel D. Procaccia. 2018. A Voting-Based System for Ethical Decision Making. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, Sheila A. McIlraith and Kilian Q. Weinberger (Eds.). AAAI Press, 1587–1594.
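    The voting-based approach cited above can be sketched in a few lines: several stakeholders submit moral preference rankings over the available actions, and the system aggregates them with a voting rule. The plurality rule, ballots, and dilemma below are simplifying assumptions; the cited paper uses a more sophisticated aggregation over learned preference models.

```python
# Hedged sketch of aggregating stakeholders' moral judgements by vote.
# The plurality rule and the example dilemma are illustrative assumptions.
from collections import Counter

def plurality(ballots):
    """Return the alternative most stakeholders rank first.

    Ties are broken alphabetically to keep the rule deterministic.
    """
    counts = Counter(ranking[0] for ranking in ballots)
    best = max(counts.values())
    return min(a for a, c in counts.items() if c == best)

# Three stakeholders rank actions for an autonomous-car dilemma.
ballots = [
    ["swerve", "brake", "continue"],
    ["brake", "swerve", "continue"],
    ["swerve", "continue", "brake"],
]
print(plurality(ballots))  # swerve
```

    Even this tiny example raises the questions the tutorial discusses: who gets a ballot, whether all ballots weigh equally, and which voting rule is appropriate when the alternatives are moral choices.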

  12. Open questions
    We discuss some general open issues in machine ethics.

    This is a link to the recording of the live Q&A:
    The reference from the live Q&A:
    • Bertram Malle and Matthias Scheutz: Learning How to Behave: Moral Competence for Social Robots. In Handbuch Maschinenethik. Springer Fachmedien Wiesbaden. pp. 1-24. DOI 10.1007/978-3-658-17484-2_17-1


Louise Dennis Louise Dennis is a Lecturer at the Autonomy and Verification Laboratory at the University of Liverpool. She has world-leading expertise in agent programming languages (including explainability for agent programming), the development and verification of autonomous systems, and the implementation of ethical reasoning, and has published over 50 peer-reviewed papers on verification and autonomy, including key contributions on robot morality/ethics. She is a member of the IEEE P7001 Standards Working Group on Transparency for Autonomous Systems. Her publications can be found here.


Marija Slavkovik Marija Slavkovik is an Associate Professor at the Department of Information Science and Media Studies, Faculty of Social Sciences, University of Bergen. Her area of expertise is collective reasoning and decision making. She has been doing research in machine ethics since 2012, in particular on formalising ethical decision-making. At present her interest is in machine meta-ethics and the problem of reconciling the different requirements of various stakeholders regarding what ethical behaviour an artificial agent should follow in a given context. The details of her activities and a full list of publications can be found here.


Date January 7, 2021. 8:00 a.m. - 11.15 a.m. UTC
Place Online or watch the YouTube videos
Slides Who needs slides when there are videos?