Moral Artificial Intelligence (2018-2019)

Background

Autonomous agents are beginning to interact with humans on a regular basis. For example, self-driving cars are appearing on local streets where many people drive, and various types of drones are flying through skies over populated areas. These autonomous agents promise to provide many services that will benefit society, but they also raise significant concerns. The goal of this project is to combine computational methods, philosophy, game theory and psychology to develop a robust moral artificial intelligence (“moral AI”) to direct autonomous agents.

Project Description

One of the main challenges in designing a moral AI is the pluralism problem: How do you ensure that the AI takes into account the variety of responses people have to moral situations? A second challenge is the new situation problem: How do you ensure that the AI gives useful guidance in situations that were not anticipated or used during its development? A third challenge is the adoptability problem: How do you build a moral AI that all members of society are willing to adopt? This Bass Connections project will address these problems through three related sub-projects.

The bottom-up approach

The team will use in-person experiments and online platforms, such as Amazon Mechanical Turk (MTurk) and interactive websites built by team members, to ask people which factors they believe kidney exchange algorithms should and should not take into account. Team members will analyze the responses to identify common ethical themes that people consider relevant to how kidney exchanges should be implemented, and then revise current kidney exchange algorithms so that their allocation recommendations reflect those themes. This approach will rely primarily on new data to learn the moral features that should be incorporated into AI algorithms.
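
As a rough illustration of where such learned weights could enter, the sketch below scores two-way kidney swaps with survey-derived ethical bonuses and brute-forces the best disjoint set of swaps. The pairs, features and weight values are hypothetical, and this is not the team's actual algorithm:

```python
# A minimal sketch of folding survey-derived ethical weights into a kidney
# exchange matching objective. All names and numbers are hypothetical.
from itertools import combinations

# Hypothetical patient-donor pairs: id -> ethical features of the patient
pairs = {
    "A": {"years_waiting": 4, "is_child": 0},
    "B": {"years_waiting": 1, "is_child": 1},
    "C": {"years_waiting": 2, "is_child": 0},
    "D": {"years_waiting": 3, "is_child": 0},
}

# Medically compatible two-way swaps (each pair's donor gives to the other's patient)
compatible_swaps = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]

# Weights one might learn from participants' survey responses (assumed values)
ethical_weights = {"years_waiting": 0.5, "is_child": 2.0}

def swap_value(p, q):
    """Base medical value of 2 transplants plus a learned ethical bonus."""
    bonus = sum(ethical_weights[f] * (pairs[p][f] + pairs[q][f])
                for f in ethical_weights)
    return 2.0 + bonus

def best_matching(swaps):
    """Brute-force search over disjoint subsets of swaps (fine at toy scale)."""
    best, best_val = [], 0.0
    for r in range(1, len(swaps) + 1):
        for subset in combinations(swaps, r):
            used = [p for s in subset for p in s]
            if len(used) == len(set(used)):  # no pair appears in two swaps
                val = sum(swap_value(p, q) for p, q in subset)
                if val > best_val:
                    best, best_val = list(subset), val
    return best, best_val

matching, value = best_matching(compatible_swaps)
print(matching, value)
```

In practice, kidney exchanges are solved with integer programming over longer cycles and chains; the sketch only illustrates where survey-derived weights could enter the matching objective.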

The top-down approach

The team will combine principles from moral philosophy and economic game theory to design scenarios and games that ask participants to judge the actions described or displayed in specific moral situations. The team will begin by focusing on the “trust game” from game theory. MTurk participants will play the game against one another and then report how morally wrong or acceptable they found their own and their partner’s actions. These morality ratings will be used to refine game-theoretic models so that they account for behavior in the trust game more accurately. This approach will rely primarily on game theory, ethical theory and emotion theory to generate algorithms that can make ethical choices.
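
For readers unfamiliar with the game, the sketch below shows the standard trust game payoff structure, plus a hypothetical guilt-aversion term as one way morality ratings might be folded into a utility function. The endowment, multiplier and guilt weight are assumed illustrations, not the team's model:

```python
# Standard trust game payoffs, plus a hypothetical "guilt" penalty in the
# spirit of guilt-aversion models. Parameter values are assumptions.

def trust_game_payoffs(endowment, multiplier, sent, returned):
    """Investor sends `sent`; it is multiplied; trustee returns `returned`."""
    assert 0 <= sent <= endowment
    assert 0 <= returned <= multiplier * sent
    investor = endowment - sent + returned
    trustee = multiplier * sent - returned
    return investor, trustee

def trustee_utility(endowment, multiplier, sent, returned, guilt=0.0):
    """Monetary payoff minus a guilt penalty for returning less than an
    equal split of the multiplied amount."""
    _, money = trust_game_payoffs(endowment, multiplier, sent, returned)
    fair_return = multiplier * sent / 2
    return money - guilt * max(0.0, fair_return - returned)

# Classical prediction: a purely self-interested trustee returns nothing.
print(trust_game_payoffs(10, 3, sent=5, returned=0))    # (5, 15)
# With a guilt weight above 1, returning the fair split is better for the trustee.
print(trustee_utility(10, 3, 5, 0, guilt=2.0))          # 15 - 2*7.5 = 0.0
print(trustee_utility(10, 3, 5, 7.5, guilt=2.0))        # 7.5
```

The contrast in the last two lines captures the kind of deviation from the purely self-interested prediction that participants' morality ratings could help model.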

When should computers make decisions?

The team will ask participants whether they think humans or computers should make decisions in a wide range of scenarios and applications. Preliminary evidence suggests that people are more willing to accept computers’ decisions if they have previously been exposed to computers making decisions in similar contexts. Team members will test this pattern using new scenarios and through interventions that manipulate how much exposure participants have to computers performing certain types of tasks.
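
As a sketch of how such an exposure effect might be tested, the example below compares acceptance rates between an exposure condition and a control condition with a two-proportion z-test. The counts are hypothetical, and the team's actual analysis may differ:

```python
# Two-sided z-test for a difference in acceptance rates between conditions.
# Counts are hypothetical illustrations.
from math import sqrt, erf

def two_proportion_z_test(accept_a, n_a, accept_b, n_b):
    p_a, p_b = accept_a / n_a, accept_b / n_b
    pooled = (accept_a + accept_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Participants exposed to computer decisions vs. unexposed controls
print(two_proportion_z_test(accept_a=70, n_a=100, accept_b=55, n_b=100))
```

A significant positive z here would be consistent with the hypothesis that prior exposure raises acceptance of computers' decisions.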

Anticipated Outcomes

Academic journal publications and conference presentations in computer science, game theory and ethics; a website where thousands of participants at a time can make decisions about kidney exchanges, analogous to the MIT website “Moral Machine”

Student Opportunities

All team members will meet weekly to discuss progress, plan for the following week and exchange feedback; they will also meet in smaller research groups working on the sub-projects.

The goal is for members to contribute to conference and paper submissions and to have opportunities to present at national or international conferences. In addition, members will learn skills related to experimental design, IRB submissions and human subjects research, machine learning, algorithm design, mathematical/statistical analysis, programming, data visualization, presentation and communication, mentorship (especially for graduate students) and teamwork. Graduate students will lead sub-projects and mentor undergraduate team members.

The ideal team composition will include three graduate students and four undergraduates with backgrounds in computer science, economics, philosophy, statistics, neuroscience or psychology. Parts of each project may serve as the basis of undergraduate honors theses.

Team progress will be evaluated mainly on the basis of publications and submissions, invitations to interdisciplinary conferences or workshops and success in securing additional funding. Students receiving course credit will be graded on the basis of regular constructive participation in team and sub-group meetings, tailored assignments managed by the project manager, help on team publications and final projects for independent studies. Students not receiving course credit will be evaluated—and provide evaluations about their experience—using assessments designed by the team leaders and project manager.

Timing

Fall 2018 – Spring 2019  

  • Fall 2018: Begin weekly meetings; refine and add interactive functions to the website; gather data on which features people consider forbidden as bases for moral judgments
  • Spring 2019: Begin exploring ways to combine algorithms from individuals into social decision procedures and review empirical literature on the most effective ways to get humans (including policymakers) to take moral advice from computers

Crediting

Special Topics course credit available for fall and spring semesters

See earlier related team, How to Build Ethics into Robust Artificial Intelligence (2017-2018).

Faculty/Staff Team Members

Vincent Conitzer, Arts & Sciences-Computer Science*
Kenzie Doyle, Arts & Sciences-Computer Science
Jana Schaich Borg, Social Science Research Institute*
Walter Sinnott-Armstrong, Kenan Institute for Ethics|Arts & Sciences-Philosophy

Graduate Team Members

Cassandra Carley, Computer Science-PHD
Lok Chan, Philosophy-PHD
Abbas Zaidi, Statistical Science-PHD

Undergraduate Team Members

Emre Kiziltug, Economics (BS)
Anika Mukherji, Computer Science (BS), Neuroscience (BS2)

Community Team Members

CEPS Ideas Lab, Brussels
John Dickerson, University of Maryland

* denotes team leader

Status

Active, New