Moral Artificial Intelligence (2018-2019)

Background

Autonomous agents are beginning to interact with humans on a regular basis. For example, self-driving cars are appearing on local streets where many people drive, and various types of drones are flying through the skies over populated areas. These autonomous agents promise to provide many services that will benefit society, but they also raise significant concerns. The goal of this project is to combine computational methods, philosophy, game theory and psychology to develop a robust moral artificial intelligence (“moral AI”) to direct autonomous agents.

Project Description

One of the main challenges in designing a moral AI is the pluralism problem: How do you ensure that the AI takes into account the variety of different responses people have to moral situations? A second challenge is the new situation problem: How do you ensure that the AI gives useful guidance in new situations that were not anticipated or used in the development phase of the AI? A third challenge is the adoptability problem: How do you make a moral AI that all members of society are willing to adopt? This Bass Connections project will address these problems through three related sub-projects.

The bottom-up approach

The team will use in-person experiments and online platforms such as Amazon Mechanical Turk (MTurk), along with interactive websites built by team members, to ask people what types of factors they believe kidney exchange algorithms should and should not take into account. Team members will analyze participants’ responses to find common ethical themes that people feel are relevant to how kidney exchanges should be implemented, then revise current kidney exchange algorithms so that their kidney-allocation recommendations take these ethical themes into account. This approach will rely primarily on new data to learn the moral features that should be incorporated into AI algorithms.
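As a purely illustrative sketch (not the team’s actual algorithm), the Python snippet below shows one way survey-derived ethical feature weights could be folded into the scoring of candidate exchanges. The patient-donor pairs, the features and the weights are hypothetical, and real kidney exchange algorithms optimize over longer cycles and chains with integer programming rather than the simple two-way swaps used here.

    # Minimal sketch (not the team's actual algorithm): scoring two-way kidney
    # exchanges with learned "ethical feature" weights. All pairs, features and
    # weights below are hypothetical illustrations.
    from itertools import combinations

    # Each patient-donor pair has compatibility data plus features participants
    # might flag as ethically relevant (e.g., years spent waiting).
    pairs = {
        "A": {"compatible_with": {"B", "C"}, "years_waiting": 4, "is_prior_donor": True},
        "B": {"compatible_with": {"A"},      "years_waiting": 1, "is_prior_donor": False},
        "C": {"compatible_with": {"A"},      "years_waiting": 6, "is_prior_donor": False},
    }

    # Hypothetical weights that analysis of survey responses might suggest.
    ethical_weights = {"years_waiting": 0.5, "is_prior_donor": 2.0}

    def exchange_score(x, y):
        """Score a two-way swap between pairs x and y: a base value per
        transplant plus the ethically weighted features of both recipients."""
        score = 2.0
        for p in (x, y):
            score += ethical_weights["years_waiting"] * pairs[p]["years_waiting"]
            score += ethical_weights["is_prior_donor"] * pairs[p]["is_prior_donor"]
        return score

    def feasible(x, y):
        """A two-way exchange requires compatibility in both directions."""
        return y in pairs[x]["compatible_with"] and x in pairs[y]["compatible_with"]

    # Enumerate feasible two-way exchanges and greedily pick non-overlapping
    # ones with the highest scores.
    candidates = sorted(
        (c for c in combinations(pairs, 2) if feasible(*c)),
        key=lambda c: exchange_score(*c),
        reverse=True,
    )
    used, chosen = set(), []
    for x, y in candidates:
        if x not in used and y not in used:
            chosen.append((x, y))
            used.update((x, y))

    print(chosen)  # [('A', 'C')]: C's long wait outweighs B under these weights

Changing the hypothetical weights changes which exchange is selected, which is exactly the kind of effect participants’ responses are meant to inform.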

The top-down approach

The team will combine principles from moral philosophy and economic game theory to design scenarios and games that ask participants to judge the actions described or displayed in specific moral situations. We will begin by focusing on the “trust game” from the field of game theory. MTurk participants will play the game against one another and then report how morally wrong or acceptable they thought their own actions and their partner’s actions were. Participants’ morality ratings will be used to refine game-theoretic models so that they account for behavior in the trust game more accurately. This approach will rely primarily on game theory, ethical theory and emotion theory to generate algorithms that can make ethical choices.
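For readers unfamiliar with the trust game, the sketch below shows its standard payoff structure: one player decides how much of an endowment to send, the amount is multiplied on the way to the second player, who then decides how much to return. The endowment and multiplier are common illustrative values, not necessarily the parameters the project will use.

    # Minimal sketch of standard trust-game payoffs (a common parameterization,
    # not necessarily the one the team will use).
    def trust_game_payoffs(sent, returned, endowment=10, multiplier=3):
        """Investor sends `sent` from an endowment; it is multiplied on the way
        to the trustee, who sends `returned` back.
        Returns (investor payoff, trustee payoff)."""
        assert 0 <= sent <= endowment
        assert 0 <= returned <= multiplier * sent
        investor = endowment - sent + returned
        trustee = multiplier * sent - returned
        return investor, trustee

    # With purely self-interested players, classical game theory predicts the
    # trustee returns nothing, so the investor sends nothing -- yet real
    # participants routinely send and return positive amounts, a gap that
    # morality ratings may help explain.
    print(trust_game_payoffs(sent=0, returned=0))    # (10, 0): self-interested prediction
    print(trust_game_payoffs(sent=10, returned=15))  # (15, 15): a trusting, reciprocal outcome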

When should computers make decisions?

The team will ask participants whether they think humans or computers should be making decisions in a wide range of scenarios and applications. There is some preliminary evidence that people are more willing to accept computers’ decisions if they have previously been exposed to computers making decisions in similar contexts. Team members will test this pattern with new scenarios and with interventions that manipulate how much exposure participants have to computers performing certain types of tasks.

Anticipated Outcomes

Academic journal publications and conference presentations in computer science, game theory and ethics; a website where thousands of participants at a time can make decisions about kidney exchanges, analogous to MIT’s “Moral Machine” website

Timing

Fall 2018 – Spring 2019  

  • Fall 2018: Begin weekly meetings; refine and add interactive functions to the website and gather data on which features people consider off-limits as bases for moral judgments
  • Spring 2019: Begin exploring ways to combine algorithms from individuals into social decision procedures and review empirical literature on the most effective ways to get humans (including policymakers) to take moral advice from computers

This Team in the News

Can Artificial Intelligence Help Us Be More Moral?

Video

Can Artificial Intelligence Help Us Be More Moral?

See earlier related team, How to Build Ethics into Robust Artificial Intelligence (2017-2018).


Faculty/Staff Team Members

  • Joyanne Becker, Duke Social Science Research Institute
  • Vincent Conitzer, Arts & Sciences-Computer Science*
  • Kenzie Doyle, Arts & Sciences-Computer Science
  • Michele Peruzzi, Arts & Sciences-Statistical Science
  • Arkaprava Roy, Arts & Sciences-Statistical Science
  • Jana Schaich Borg, Social Science Research Institute*
  • Walter Sinnott-Armstrong, Arts & Sciences-Philosophy
  • Joshua Skorburg, Arts & Sciences-Philosophy
  • Siyuan Yin, Arts & Sciences-Philosophy

Graduate Team Members

  • Cassandra Carley, Computer Science-PHD
  • Lok Chan, Philosophy-PHD
  • Yuan Deng, Computer Science-PHD
  • Caspar Oesterheld, Computer Science-PHD
  • Abbas Zaidi, Statistical Science-PHD

Undergraduate Team Members

  • Emre Kiziltug, Economics (BS)
  • Caroline Wang, Computer Science (BS), Mathematics (AB2)

Community Team Members

  • CEPS Ideas Lab, Brussels
  • John Dickerson, University of Maryland
  • Duncan McElfresh, University of Maryland
  • Eitan Sapiro-Gheiler, Princeton University (Undergraduate Student)