State-of-the-art and interdisciplinary challenges
- the Artificial Intelligence, Ethics and Society (AIES) conference, co-located with AAAI,
- the ACM FAccT conference,
- the XAI Workshop, co-located with IJCAI.
The goals of this tutorial are:
- to clarify to the artificial intelligence researcher the different sub-areas of AI ethics,
- to distinguish and give an overview of the state of the art and major challenges of one of these sub-areas: machine ethics.
The tutorial is organised around the following topics:
- What is AI (to do with ethics)?
We begin by discussing why there is a concern for ethics in AI.
Link to the recording of the live tutorial (also the talk below): https://www.youtube.com/watch?v=JMdBcVI6fjw
References in Video:
- David Poole and Alan Mackworth. 2017. Artificial Intelligence: Foundations of Computational Agents (2 ed.). Cambridge University Press, Cambridge, UK.
- Richard E. Bellman. 1978. An Introduction to Artificial Intelligence: Can Computers Think? Boyd & Fraser Publishing Company
- The Moral Machines Project: https://www.moralmachine.net/
- The landscape of AI ethics
We give an overview of the sub-disciplines of AI ethics that exist in addition to machine ethics and point to other tutorials and workshops at IJCAI-PRICAI 2020 that focus specifically on these sub-topics: explainability, accountability, transparency, responsible AI, fairness, and privacy.
References in Video:
- Casey Fiesler, Natalie Garrett, Nathan Beard: What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis. SIGCSE 2020: 289-295. https://doi.org/10.1145/3328778.3366825
- Thilo Hagendorff: The Ethics of AI Ethics - An Evaluation of Guidelines. In: CoRR abs/1903.03425 (2019).
- Anna Jobin, Marcello Ienca and Effy Vayena: The global landscape of AI ethics guidelines. In: Nature Machine Intelligence (2019).
- Maranke Wieringa: What to Account for When Accounting for Algorithms: A Systematic Literature Review on Algorithmic Accountability. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, pp. 1–18.
- Nicholas Diakopoulos. 2020. Transparency. In The Oxford Handbook of Ethics of AI, Markus D. Dubber, Frank Pasquale, and Sunit Das (Eds.). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.11
- David Gunning and David Aha: DARPA's Explainable Artificial Intelligence (XAI) Program. In: AI Magazine 40(2) (2019), pp. 44–58.
- Alexandra Chouldechova and Aaron Roth: A snapshot of the frontiers of fairness in machine learning. Commun. ACM 63, 5 (2020), 82–89. https://doi.org/10.1145/3376898
- Michael Kearns and Aaron Roth. 2019. The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Oxford University Press.
- Virginia Dignum. 2019. Responsible Artificial Intelligence - How to Develop and Use AI in a Responsible Way. Springer.
- What is machine ethics?
We discuss various taxonomies for implementations of machine ethics.
References in Video:
- Moor, James H.: The Nature, Importance, and Difficulty of Machine Ethics. In: IEEE Intelligent Systems 21(4) (2006), July, pp. 18–21.
- Wallach and Allen. 2008. Moral machines: Teaching robots right from wrong. Oxford University Press, Oxford, UK. DOI:10.1093/acprof:oso/9780195374049.001.0001
- The quick and dirty introduction to moral philosophy
We give an overview of how ethics is studied in moral philosophy. We highlight the differences in challenges when the agent of morality is strictly human (as in moral philosophy) and when the agency of a system is shared between a human and an artificial agent (as in artificial intelligence).
A link to the video of the live tutorial: https://www.youtube.com/watch?v=7-gJDOUQjHs
- An aside on causality and intentions.
A brief excursion into how Machine Ethics systems reason about causality and the intentions of agents.
References from Video:
- Miller and Shanahan. 2002. Some alternative formulations of the event calculus. In Computational logic: logic programming and beyond. Springer, 452– 490. https://doi.org/10.1007/3-540-45632-5_17
- Berreby et al. 2018. Event-Based and Scenario-Based Causality for Computational Ethics. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS ’18), 47–155
- Naveen Sundar Govindarajulu and Selmer Bringsjord. 2017. On automating the doctrine of double effect. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17). AAAI Press, 4722–4730.
- Lindner and Bentzen. 2018. A Formalization of Kant’s Second Formulation of the Categorical Imperative. In Proceedings of The 14th International Conference on Deontic Logic and Normative Systems (DEON 2018)
- Halpern, J. Y. 2016. Actual Causality. The MIT press. https://doi.org/10.7551/mitpress/10809.001.0001
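To make the above concrete, here is a toy sketch of a doctrine-of-double-effect style permissibility check, inspired by the kind of formalisation in the Govindarajulu and Bringsjord paper cited above. All names, the dictionary encoding, and the simplified rule are our own illustration and do not reproduce any cited system's actual implementation.

```python
# Toy doctrine-of-double-effect (DDE) check: an action is permissible only if
# it is not intrinsically bad, harm is neither intended nor the means to the
# good (only a foreseen side effect), and the good outweighs the harm.
# (Simplified illustration; not the cited systems' actual code.)

def permissible_dde(action):
    if action["intrinsically_bad"]:
        return False
    for e in action["effects"]:
        if e["harmful"] and e["intended"]:
            return False  # harm must not be intended
        if e["harmful"] and e["means_to_good"]:
            return False  # harm must not be the means to the good
    good = sum(e["utility"] for e in action["effects"] if e["utility"] > 0)
    harm = -sum(e["utility"] for e in action["effects"] if e["utility"] < 0)
    return good > harm

# Diverting a trolley: the death is a foreseen side effect, not a means.
divert = {
    "intrinsically_bad": False,
    "effects": [
        {"harmful": False, "intended": True, "means_to_good": False, "utility": 5},
        {"harmful": True, "intended": False, "means_to_good": False, "utility": -1},
    ],
}
# Pushing a person off a bridge: the death is the means to the good.
push = {
    "intrinsically_bad": False,
    "effects": [
        {"harmful": False, "intended": True, "means_to_good": False, "utility": 5},
        {"harmful": True, "intended": False, "means_to_good": True, "utility": -1},
    ],
}
print(permissible_dde(divert))  # True
print(permissible_dde(push))    # False
```

The two trolley-style cases show why DDE is more than utility comparison: both actions have the same net utility, but only one is permissible.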
- An introduction to HERA
A brief introduction to the HERA system, which we will use for the practical component of the tutorial.
- The HERA system hands-on
This is your opportunity to get some hands-on experience with machine ethics. These tutorials are courtesy of the HERA project.
- HERA Utility Based Causal Agency Models Tutorial: http://www.hera-project.com/utility-based-causal-agency-models/
- HERA Kantian Causal Agency Models Tutorial: http://www.hera-project.com/kantian-causal-agency-models/
- HERA Moral Planning Domain Definitions Tutorial: http://www.hera-project.com/moral-planning-domain-definitions/
- HERA Explainable Ethical Reasoning Tutorial: http://www.hera-project.com/explainable-causal-agency-models/
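Beyond the linked tutorials, the flavour of a utility-based causal agency model can be sketched in a few lines. The class and field names below are invented for illustration and are NOT the HERA library's API; follow the tutorial links above for the real thing.

```python
# A minimal, self-contained sketch of a utility-based causal agency model,
# in the spirit of the HERA tutorials. Invented names; not HERA's API.

class CausalAgencyModel:
    def __init__(self, actions, utilities, mechanisms):
        self.actions = actions            # available action names
        self.utilities = utilities        # consequence -> utility value
        self.mechanisms = mechanisms      # action -> set of consequences it causes

    def utility(self, action):
        # Total utility of an action = sum of utilities of its consequences.
        return sum(self.utilities[c] for c in self.mechanisms[action])

    def permissible_utilitarian(self, action):
        # Utilitarian principle: permissible iff no alternative action
        # yields strictly higher total utility.
        best = max(self.utility(a) for a in self.actions)
        return self.utility(action) == best

model = CausalAgencyModel(
    actions=["refrain", "warn"],
    utilities={"accident": -10, "no_accident": 10},
    mechanisms={"refrain": {"accident"}, "warn": {"no_accident"}},
)
print(model.permissible_utilitarian("warn"))     # True
print(model.permissible_utilitarian("refrain"))  # False
```

The same model structure supports other principles (Kantian, DDE) by swapping the permissibility predicate while keeping the causal mechanisms fixed, which is the design idea the HERA tutorials walk through.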
We present the state-of-the-art regarding machine ethics through a survey of implementations.
A link to the video of the live presentation (all three parts together): https://www.youtube.com/watch?v=H7n1W8J1vWo
References in video:
- Part 1:
- Nallur, Vivek (2020). Landscape of Machine Implemented Ethics. Science and Engineering Ethics 26 (5):2381-2399. https://doi.org/10.1007/s11948-020-00236-y
- S. Tolmeijer, M. Kneer, C. Sarasua, M. Christen, A. Bernstein (2020). Implementations in Machine Ethics: A Survey. arXiv preprint arXiv:2001.07573
- Arkin et al. 2009. An Ethical Governor for Constraining Lethal Action in an Autonomous System. Technical Report GIT-GVU-09-02, Georgia Institute of Technology. https://doi.org/10.21236/ada493563
- Anderson et al: A Value Driven Agent: Instantiation of a Case-Supported Principle-Based Behavior Paradigm, Workshop on AI, Ethics and Society, 2016
- Winfield et al. 2014. Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection. In Advances in Autonomous Robotics Systems, LNCS 8717, 85–96. https://doi.org/10.1007/978-3-319-10401-0_8
- Part 2:
- Bringsjord et al 2014. Akratic Robots and the Computational Logic Thereof. Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology. DOI: 10.1109/ETHICS.2014.6893436
- Bringsjord & Govindarajulu (2013), Toward a Modern Geography of Minds, Machines, and Math, in V. C. Mueller, ed., ‘Philosophy and Theory of Artificial Intelligence’, Vol. 5 of Studies in Applied Philosophy, Epistemology and Rational Ethics, Springer, New York, NY, pp. 151–165. 10.1109/MIS.2006.82 (Citation on slides for this is wrong!)
- Website: https://rair.cogsci.rpi.edu/projects/muri/
- Benzmüller et al. Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support, In Artificial Intelligence, Elsevier, volume 287, pp. 103348, 2020. https://doi.org/10.1016/j.artint.2020.103348
- Berreby et al. A Declarative Modular Framework for Representing and Applying Ethical Principles. Proc. 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017) (Can't find DOI)
- Cointe et al. Ethical Judgment of Agents’ Behaviors in Multi-Agent Systems. Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016) (Can't find DOI)
- P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artificial Intelligence 77 (1995) 321–357. https://doi.org/10.1016/0004-3702(94)00041-X
- B. Liao, M. Slavkovik, L. van der Torre, Building Jiminy Cricket: An architecture for moral agreements among stakeholders, in: AAAI/ACM Conference on Artificial Intelligence, Ethics and Society https://doi.org/10.1145/3306618.3314257
- K. Atkinson, T. Bench-Capon, Addressing moral problems through practical reasoning, J. Appl. Log. 6(2) (2008) 135–151. https://doi.org/10.1016/j.jal.2007.06.005
- K. Atkinson, T. Bench-Capon, Taking the long view: looking ahead in practical reasoning, in: Proceedings of COMMA 2014, 2014, pp. 109–120. (Can't find DOI)
- K. Atkinson, T. Bench-Capon, States, goals and values: revisiting practical reasoning, Argument Comput. 7 (2–3) (2016) 135–154. DOI: 10.3233/AAC-160011
- Part 3:
- Abel et al (2016). Reinforcement learning as a framework for ethical decision making. In B. Bonet, et al. (Eds.), AAAI Workshop: AI, Ethics, and Society (pp. 54–61) (Can't find DOI)
- Ecoffet and Lehman. Reinforcement Learning Under Moral Uncertainty. arXiv:2006.04734 [cs], July 2020.
- Balakrishnan et al, 2019. Incorporating behavioral constraints in online ai systems. In Proc. of the 33rd AAAI Conference on Artificial Intelligence (AAAI) (Can't find DOI)
- Loreggia et al. 2018. Value alignment via tractable preference distance. In Yampolskiy, R. V., ed., Artificial Intelligence Safety and Security. CRC Press. chapter 18. (Can't find DOI)
- Rossi and Mattei, 2019. Building Ethically Bounded AI. In 33rd AAAI Conference on Artificial Intelligence (AAAI-19). DOI: 10.1609/aaai.v33i01.33019785
We look at verification of Machine Ethics Systems. In the first part we ask why we need to verify them and what sort of properties we can check. In the second part of our look at the Verification of Machine Ethics systems, we look at three example verifications.
References in video:
- Part 1:
- Louise A. Dennis, Michael Fisher, Nicholas K. Lincoln, Alexei Lisitsa, Sandor M. Veres. Practical Verification of Decision-Making in Agent-Based Autonomous Systems (available Open Access). Automated Software Engineering 23(3), 305-359, 2016. DOI: 10.1007/s10515-014-0168-9.
- Part 2:
- Paul Bremner, Louise A. Dennis, Michael Fisher and Alan F. Winfield. On Proactive, Transparent and Verifiable Ethical Reasoning for Robots. Proceedings of the IEEE. Special Issue on Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems. 107(3), pp:541-561. DOI: 10.1109/JPROC.2019.2898267
- Louise A. Dennis, Martin Mose Bentzen, Felix Lindner and Michael Fisher. Verifiable Machine Ethics in Changing Contexts. In: 35th AAAI Conference on Artificial Intelligence (AAAI 2021). To appear.
- Louise A. Dennis, Michael Fisher, Marija Slavkovik, and Matt Webster. Formal Verification of Ethical Choices in Autonomous Systems Robotics and Autonomous Systems. DOI:10.1016/j.robot.2015.11.012.
- Louise A. Dennis, Michael Fisher, and Alan Winfield. Towards Verifiably Ethical Robot Behaviour. Proceedings of the AAAI Workshop on Artificial Intelligence and Ethics (1st International Workshop on AI and Ethics).
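The flavour of such a verification can be sketched as an exhaustive check of an ethical property over every scenario an agent can face. The model below is invented for illustration; the work cited above uses program model checkers (such as AJPF) over actual agent programs, not this toy enumeration.

```python
# Toy "verification" of an ethical property: enumerate every reachable
# choice scenario and check that the agent's policy never selects a
# harmful option when a harmless one is available. (Illustration only;
# not the cited verification tools.)

from itertools import product

def choose(options):
    # The agent's deterministic ethical policy: pick the least harmful option.
    return min(options, key=lambda o: o["harm"])

def verify(all_scenarios):
    """Safety property: if any zero-harm option exists in a scenario,
    the chosen option must have zero harm. Returns (holds, counterexample)."""
    for options in all_scenarios:
        chosen = choose(options)
        if any(o["harm"] == 0 for o in options) and chosen["harm"] != 0:
            return False, options  # property violated: counterexample found
    return True, None

# Exhaustively enumerate every scenario of two options with harm levels 0-2.
levels = [{"harm": h} for h in range(3)]
scenarios = [list(pair) for pair in product(levels, repeat=2)]
ok, counterexample = verify(scenarios)
print(ok)  # True
```

The point of the exhaustive enumeration is the same as in model checking: a universal claim ("the agent never knowingly chooses avoidable harm") is established by covering every case, or refuted with a concrete counterexample.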
We discuss the role that social choice can play in machine ethics, in particular the problem of whose values or obligations should be implemented in a system capable of machine ethics. Specifically, we focus on what it would take to agree on values and obligations for machines.
Link for the recording of the live tutorial (same talk different take): https://www.youtube.com/watch?v=bqsEdeIrbKE
References in video:
- Seth D. Baum. 2020. Social choice ethics in artificial intelligence. AI Soc. 35, 1 (2020), 165–176. https://doi.org/10.1007/s00146-017-0760-1
- Ritesh Noothigattu, Snehalkumar (Neil) S. Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel D. Procaccia. 2018. A Voting-Based System for Ethical Decision Making. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). AAAI Press, 1587–1594. https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17052
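A small sketch of aggregating stakeholder moral judgments by pairwise majority, in the spirit of the voting-based approach above (Noothigattu et al. first learn preference models from data; here we simply aggregate explicit rankings, and all the example data is invented):

```python
# Aggregate stakeholders' moral priority rankings by pairwise majority:
# return the alternative that beats every other head-to-head (the
# Condorcet winner), or None if no such alternative exists.

def pairwise_majority_winner(rankings, alternatives):
    def beats(a, b):
        # a beats b if a strict majority of rankings place a before b.
        wins = sum(1 for r in rankings if r.index(a) < r.index(b))
        return wins > len(rankings) / 2

    for a in alternatives:
        if all(beats(a, b) for b in alternatives if b != a):
            return a
    return None  # no Condorcet winner (a majority cycle)

# Three stakeholders rank what a home-care robot should prioritise.
alternatives = ["privacy", "safety", "autonomy"]
rankings = [
    ["safety", "privacy", "autonomy"],
    ["safety", "autonomy", "privacy"],
    ["privacy", "safety", "autonomy"],
]
print(pairwise_majority_winner(rankings, alternatives))  # safety
```

The `None` branch matters: majority cycles (Condorcet's paradox) are exactly why aggregating ethical judgments is harder than taking a vote, which is one of the open issues discussed in this part of the tutorial.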
We discuss some general open issues in machine ethics.
This is a link to the recording of the live Q&A: https://www.youtube.com/watch?v=D37Rca9c9jo
The reference from the live Q&A:
- Bertram Malle and Matthias Scheutz: Learning How to Behave: Moral Competence for Social Robots. In: Handbuch Maschinenethik. Springer Fachmedien Wiesbaden, pp. 1-24. DOI: 10.1007/978-3-658-17484-2_17-1
Louise Dennis is a Lecturer at the Autonomy and Verification Laboratory at the University of Liverpool. She has world-leading expertise in agent programming languages (including explainability for agent programming), the development and verification of autonomous systems, and the implementation of ethical reasoning, and has published over 50 peer-reviewed papers on verification and autonomy, including key contributions on robot morality/ethics. She is a member of the IEEE P7001 Standards Working Group on Transparency for Autonomous Systems. Her publications can be found here.
Marija Slavkovik is an Associate Professor at the Department of Information Science and Media Studies at the University of Bergen. Her area of expertise is collective reasoning and decision making. She has been doing research in machine ethics since 2012, in particular on formalising ethical decision-making. At present her interest is in machine meta-ethics and the problem of reconciling the different requirements of various stakeholders regarding what ethical behaviour an artificial agent should follow in a given context. Details of her activities and a full list of publications can be found here.
Date: January 7, 2021, 8:00 a.m. - 11:15 a.m. UTC
Place: Online, or watch the YouTube videos
Slides: Who needs slides when there are videos?