How to Build Ethics into Artificial Intelligence (2020-2021)

Background

Self-driving cars are beginning to appear on local streets. Autonomous drones are flying over populated areas and on military missions. Robots are performing surgery, and artificial intelligence is used in criminal justice. These autonomous systems hold promise for providing many services that benefit society, but they also raise significant concerns.

Autonomous agents need to be programmed with an artificial intelligence that instructs them how to interact with other agents, but how can we do this? One option is to apply a basic moral rule, but the main challenge to this approach is the pluralism problem. How do we decide which rules, rights and roles should be built into the artificial intelligence? The goal of this project is to combine computational methods, philosophy, game theory and psychology to develop robust moral artificial intelligence to direct autonomous agents.

Project Description

This project team will attempt to build morality into artificial intelligence by incorporating new morally relevant features based on the data gathered by the previous years’ project teams. The project employs crowdsourcing of online moral judgments about which features of an action are and should be seen as relevant to the moral status of the action. In 2020-2021, the team will focus on constructing machine learning algorithms and submitting papers for publication.

The project’s pilot study focuses on kidney exchanges, using experiments and online platforms to ask people what types of factors they believe kidney exchange algorithms should and should not take into account in determining who gets a kidney. Team members then construct scenarios in which these factors vary systematically, use machine learning to build an algorithm that predicts human moral judgments, test how well it extends to a novel set of scenarios, and examine how various moral factors interact.
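As an illustration only, a minimal sketch of this kind of prediction task might look like the following. The features, data and model here are hypothetical stand-ins, not the team’s actual variables or methods.

```python
# Hypothetical sketch: predict a moral judgment (acceptable / not acceptable)
# from scenario features. Feature names and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row is one kidney-exchange scenario; columns are hypothetical
# morally relevant features (e.g., patient age, years on the waiting list,
# whether the condition is self-inflicted).
n = 500
X = np.column_stack([
    rng.integers(18, 80, n),   # age
    rng.integers(0, 10, n),    # years on waiting list
    rng.integers(0, 2, n),     # self-inflicted condition (0/1)
])
# Simulated crowd judgments (1 = "should receive the kidney"), with noise.
y = (X[:, 1] > 4).astype(int) ^ (rng.random(n) < 0.1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Held-out accuracy approximates how well the learned model generalizes
# to scenarios the crowd never judged.
print("held-out accuracy:", model.score(X_test, y_test))
# Coefficients indicate how strongly each feature pushes the prediction.
print("feature weights:", model.coef_)
```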

Another challenge for moral artificial intelligence is society’s unwillingness to adopt it. To tackle this challenge, team members will ask participants whether they think humans or computers should be making decisions in a wide range of scenarios and applications. Researchers have preliminary evidence that people are more willing to accept computers’ decisions if they have previously been exposed to computers making decisions in similar contexts. The team will test this pattern in new applications of artificial intelligence and with new interventions.

The third component combines principles from moral philosophy and economic game theory to design scenarios and games that ask participants to judge actions. In the classic “trust game,” standard game theory underestimates how much money participants will choose to give to each other, perhaps because game theory has not adequately taken morality into account. To test this, team members will ask participants to play the trust game and report how morally wrong or acceptable they thought their own and their partner’s actions were.
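For context, here is a minimal sketch of the standard trust game’s payoff structure and why purely self-interested backward induction predicts zero giving. The endowment and multiplier values are conventional defaults, not necessarily the parameters the team will use.

```python
# Hypothetical sketch of the standard trust game payoffs.
# Parameter values (endowment = 10, multiplier = 3) are conventional
# defaults, not necessarily the team's experimental parameters.

ENDOWMENT = 10
MULTIPLIER = 3

def payoffs(sent: float, returned: float) -> tuple[float, float]:
    """Payoffs (investor, trustee) after the investor sends `sent`
    and the trustee returns `returned` of the multiplied amount."""
    pot = sent * MULTIPLIER
    assert 0 <= sent <= ENDOWMENT and 0 <= returned <= pot
    investor = ENDOWMENT - sent + returned
    trustee = pot - returned
    return investor, trustee

# Backward induction with purely self-interested players: the trustee
# returns 0 for any amount sent, so the investor's best response is to
# send 0 -- the standard game-theoretic prediction.
print(payoffs(sent=0, returned=0))   # (10, 0)

# Behavior typically observed in experiments: substantial sending and
# partial repayment, leaving both players better off than the
# equilibrium prediction.
print(payoffs(sent=5, returned=7))   # (12, 8)
```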

Anticipated Outputs

Presentations at computer science, game theory and ethics conferences; publications in academic journals; theses; team website

Timing

Fall 2020 – Spring 2021

  • Fall 2020: Continue data collection; construct machine learning algorithms
  • Spring 2021: Write papers based on results; give conference presentations

See earlier related team, How to Build Ethics into Robust Artificial Intelligence (2019-2020).


Team Leaders

  • Vincent Conitzer, Arts & Sciences-Computer Science
  • Jana Schaich Borg, Social Science Research Institute
  • Walter Sinnott-Armstrong, Arts & Sciences-Philosophy

Graduate Team Members

  • Lok Chan, Philosophy-PHD
  • Daniela Goya-Tocchetto, Business Administration-PHD

Undergraduate Team Members

  • Thomas Huck, Computer Science (BS), Mathematics (BS2)
  • Zachary Starr, Program II (BS)
  • Ziyi Yan, Evolutionary Anthropology (AB), Computer Science (BS2)

Community Team Members

  • John Dickerson, University of Maryland