How to Build Ethics into Robust Artificial Intelligence (2017-2018)
Autonomous agents are beginning to interact with humans on a regular basis. Self-driving cars are appearing on local streets where many people drive, and various types of drones are flying through skies over populated areas. Autonomous agents promise to provide many services that will benefit society, but they also raise significant concerns.
Autonomous agents have to be programmed with AI that instructs them how to interact with other agents. Traditional approaches to programming AI agents are usually conducted within straightforward utilitarian or consequentialist frameworks that try to optimize a certain type of good. When these approaches are applied to AI agents that interact with people, though, they often give counterintuitive or unethical recommendations for action. For example, an algorithm generated using traditional approaches might recommend that a hospital harvest one healthy patient’s organs to save the lives of five other patients, because doing so saves the greatest number of lives. Most people, however, would consider such an action highly immoral.
Research in ethics and moral psychology elucidates our moral intuitions in such examples by distinguishing between doing and allowing, emphasizing the role of intent, applying general rules about kinds of actions (such as “don’t kill”) and referring to rights (such as the patient’s) and roles (such as the doctor’s). Incorporating these additional morally relevant factors could enable AI to make moral decisions that are safer, more robust, more beneficial and acceptable to a wider range of people.
The goal of this Bass Connections project is to combine computational methods, philosophy, game theory and psychology to develop moral artificial intelligence (“moral AI”) that is both robust and ethical, for directing autonomous agents.
One of the main challenges in designing moral AI is the pluralism problem: How do you ensure that the AI takes into account the variety of different responses people have to moral situations? Another challenge is the new situation problem: How do you ensure that the AI gives useful guidance in new situations that were not anticipated or used in the development phase of the AI? A third challenge is the adoptability problem: How do you make a moral AI that all members of society are willing to adopt? This project team’s research will address these problems through three subprojects.
The “bottom-up” approach: The team will use in-person experiments and online platforms such as Amazon Mechanical Turk (MTurk) to ask people what types of factors they believe kidney exchange algorithms should and should not take into account. Team members will analyze participants’ responses to find common ethical themes that people feel are relevant to how kidney exchanges should be implemented. They will then revise current kidney exchange algorithms so that their recommendations of how kidneys should be allocated will take these ethical themes into account. The “bottom-up” approach relies primarily on new data to learn the moral features that should be incorporated into AI algorithms.
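To make the “bottom-up” idea concrete, the sketch below shows, under illustrative assumptions, the core of a kidney exchange computation: patient-donor pairs form a directed compatibility graph, the algorithm enumerates short donation cycles, and a priority weight per pair (standing in for ethical features elicited from survey participants) can change which cycles are selected. The graph, the priority scores, and the function names are hypothetical examples, not the team’s actual algorithm.

```python
from itertools import combinations, permutations

# Hypothetical compatibility graph: an edge (i, j) means pair i's
# donor can give a kidney to pair j's patient.
compat = {
    "A": {"B"},
    "B": {"C"},
    "C": {"A", "D"},
    "D": {"C"},
}

def find_cycles(graph, max_len=3):
    """Enumerate simple donation cycles up to max_len (2- and
    3-cycles are the lengths typically used in fielded exchanges)."""
    cycles = set()
    nodes = list(graph)
    for length in range(2, max_len + 1):
        for perm in permutations(nodes, length):
            if all(perm[(k + 1) % length] in graph[perm[k]] for k in range(length)):
                # Rotate so the smallest node comes first; this
                # canonical form counts each cycle exactly once.
                i = perm.index(min(perm))
                cycles.add(perm[i:] + perm[:i])
    return cycles

def select_exchange(cycles, priority):
    """Choose vertex-disjoint cycles maximizing total priority score.
    Exhaustive search; fine for illustration, not for real pools."""
    best, best_w = [], 0.0
    cyc = list(cycles)
    for r in range(1, len(cyc) + 1):
        for subset in combinations(cyc, r):
            used = [p for c in subset for p in c]
            if len(used) == len(set(used)):  # cycles share no pairs
                w = sum(priority[p] for p in used)
                if w > best_w:
                    best, best_w = list(subset), w
    return best, best_w
```

With equal priorities, the selector simply maximizes the number of transplants; raising one pair’s priority (say, a hard-to-match patient flagged by an ethical theme) can flip the choice to a smaller cycle that includes that pair, which is exactly the kind of tradeoff the survey data would inform.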
The “top-down” approach: The team will combine principles from moral philosophy and economic game theory to design scenarios and games that ask participants to judge the actions described or displayed in specific moral situations. Team members will begin by focusing on the “trust game” from the field of game theory. In the trust game, Participant A can choose to give a certain amount of money to Participant B. Whatever amount A chooses to give is tripled and delivered to B, who can then choose to give a certain amount of money back to A. Straightforward game-theoretic analyses underestimate the amount of money participants will choose to give to each other in this game, perhaps because morality has not been adequately modeled in game theory. To test this, the team will have MTurk participants play the trust game with one another, and then report how morally wrong or acceptable they thought their own and their partner’s actions were. Team members will use participants’ morality ratings to refine game-theoretic notions so that they account for behavior in the trust game more accurately. The “top-down” approach relies primarily on game theory and ethical theory to generate algorithms that can make ethical choices.
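The trust game’s payoff structure, and why standard analysis underestimates giving, can be sketched in a few lines. Here the endowment amount, the “guilt” parameter, and the function names are illustrative assumptions, not the team’s model: guilt penalizes Participant B for deviating from an equal split of the tripled pot, standing in for the kind of moral factor the team aims to add to game-theoretic analysis.

```python
ENDOWMENT = 10  # assumed endowment for Participant A, in dollars

def payoffs(sent, returned):
    """Monetary payoffs after A sends `sent` (tripled in transit)
    and B returns `returned` to A."""
    assert 0 <= sent <= ENDOWMENT and 0 <= returned <= 3 * sent
    a = ENDOWMENT - sent + returned
    b = 3 * sent - returned
    return a, b

def best_return(sent, guilt=0.0):
    """B's utility-maximizing return under a simple guilt term that
    penalizes deviating from an equal split of the tripled amount.
    guilt=0 recovers the standard prediction: return nothing."""
    pot = 3 * sent
    fair = pot / 2
    utilities = {r: (pot - r) - guilt * abs(r - fair) for r in range(pot + 1)}
    return max(utilities, key=utilities.get)
```

With guilt at zero, backward induction says B returns nothing, so A should send nothing; a sufficiently strong guilt term shifts B’s best response toward an equal split, which is closer to how participants actually behave.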
When should computers make decisions? The team will ask participants whether they think humans or computers should be making decisions in a wide range of scenarios and applications. Pilot experiments have shown that people are often reluctant to allow computers to make decisions, even when they believe computers are better equipped to make those decisions. Some preliminary evidence shows that people will be more willing to accept computers’ decisions if they have been exposed to computers making decisions in similar contexts before. Team members will test this pattern using new scenarios and interventions that manipulate how much exposure participants have to computers performing certain types of tasks.
- Results submitted for presentation at conferences in computer science, game theory and ethics, and for publication in academic journals
- Development of a website where participants can make decisions about kidney exchanges, analogous to MIT’s Moral Machine, where players make decisions about hypothetical autonomous car scenarios
Fall 2017 – Spring 2018
- Fall 2017: Data collection
- Spring 2018: Data analysis, summarize findings and submit papers to academic journals and/or conferences
Team Outcomes to Date
Adapting a Kidney Exchange Algorithm to Align with Human Values (poster by Rachel Freedman, Jana Schaich Borg, Walter Sinnott-Armstrong, John P. Dickerson, Vincent Conitzer), presented at Bass Connections Showcase, April 18, 2018
Cultures of Collaboration: Managing the Moral AI Lab (Kenzie Doyle)
Faculty/Staff Team Members
Emmanuel Chevallier, Arts & Sciences-Statistical Science
Vincent Conitzer, Arts & Sciences-Computer Science*
Kenzie Doyle, Arts & Sciences-Computer Science
Jana Schaich Borg, Social Science Research Institute*
Walter Sinnott-Armstrong, Arts & Sciences-Philosophy*
Joshua Skorburg, Arts & Sciences-Philosophy
Siyuan Yin, Arts & Sciences-Philosophy
Graduate Team Members
Cassandra Carley, Computer Science-PHD
Lok Chan, Philosophy-PHD
Abbas Zaidi, Statistical Science-PHD
Undergraduate Team Members
Anika Mukherji, Computer Science (BS), Neuroscience (BS2)
Weiyao Wang, Electrical & Computer Engineering (BSE)
Community Team Members
John Dickerson, University of Maryland