Moral Artificial Intelligence (2018-2019)
Autonomous agents are beginning to interact with humans on a regular basis. For example, self-driving cars are appearing on local streets where many people drive, and various types of drones are flying through skies over populated areas. These autonomous agents promise to provide many services that will benefit society, but they also raise significant concerns. The goal of this project was to combine computational methods, philosophy, game theory and psychology to develop a robust moral artificial intelligence (“moral AI”) to direct autonomous agents. The team addressed this problem through three main questions.
The Pluralism Problem: How do you ensure that the AI takes into account the variety of different responses people have to moral situations? The team built an online platform prototype to ask people what types of factors they believe kidney exchange algorithms should and should not take into account. They collected preliminary data and plan to continue developing the website in Fall 2019.
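The kidney exchange setting that the survey asks about can be viewed as a matching problem over a compatibility graph. The sketch below is purely illustrative and is not the team's platform code; the graph, pair labels, and the greedy two-way-swap strategy are all assumptions for the sake of the example (real exchanges also consider longer cycles and chains).

```python
# Illustrative sketch (not the project's algorithm): model a kidney
# exchange as a directed compatibility graph, where an edge u -> v means
# pair u's donor is compatible with pair v's patient. A two-way swap is
# a pair of mutual edges; a greedy pass collects disjoint swaps.

def greedy_two_way_swaps(compat):
    """compat: dict mapping pair id -> set of pair ids its donor can donate to."""
    matched = set()
    swaps = []
    for u in compat:
        if u in matched:
            continue
        for v in compat[u]:
            # A swap requires mutual compatibility and both pairs unmatched.
            if v != u and v not in matched and u in compat.get(v, set()):
                swaps.append((u, v))
                matched.update({u, v})
                break
    return swaps

# Hypothetical graph: A and B can swap; C's donor matches B, but B is taken.
graph = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
print(greedy_two_way_swaps(graph))  # [('A', 'B')]
```

Which factors (age, waiting time, health behaviors) should weight such a matching is exactly the kind of question the team's survey platform puts to the public.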
The New Situation Problem: How do you ensure that the AI gives useful guidance in new situations that were not anticipated or used during its development? The team combined principles from moral philosophy and economic game theory to design scenarios and games that ask participants to judge the actions described or displayed in specific moral situations. They focused on the “trust game” from game theory. Participants played the game against one another and then reported how morally wrong or acceptable they thought their own and their partner’s actions were. These morality ratings will be used to refine game-theoretic models so that they account for behavior in the trust game more accurately. The team has been working on innovative statistical strategies and machine vision techniques to analyze the resulting data, and plans to develop an app based on the experiment.
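The trust game mentioned above can be sketched in a few lines. The payoff structure below (an investor sends part of an endowment, the transfer is tripled, and the trustee returns some fraction) is the conventional form of the game in the experimental literature; the specific endowment and multiplier values are assumptions here, not details reported by the project.

```python
def trust_game(sent, returned_fraction, endowment=10, multiplier=3):
    """One round of the standard trust game between an investor and a trustee.

    The investor sends part of an endowment; the amount is multiplied
    (conventionally by 3) before reaching the trustee, who then returns
    some fraction of what was received to the investor.
    """
    assert 0 <= sent <= endowment
    assert 0.0 <= returned_fraction <= 1.0
    received = sent * multiplier
    returned = received * returned_fraction
    investor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return investor_payoff, trustee_payoff

# Full trust, half returned: investor sends all 10, trustee returns half of 30.
print(trust_game(10, 0.5))  # (15.0, 15.0)
```

Classical game theory predicts that a self-interested trustee returns nothing, so a self-interested investor sends nothing; participants' actual choices, and their moral ratings of those choices, are what the team uses to refine the model.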
The Adoptability Problem: How do you make a moral AI that all members of society are willing to adopt? The team plans to address this problem in the future. The team will ask participants whether they think humans or computers should be making decisions in a wide range of scenarios and applications. There is some preliminary evidence that people will be more willing to accept computers’ decisions if they have been exposed to computers making decisions in similar contexts before. Team members will test this pattern using new scenarios, and through interventions that manipulate how much exposure participants have to computers performing certain types of tasks.
Fall 2018 – Spring 2019
Lok Chan, Jana Schaich Borg, Vincent Conitzer, Dominic Wilkinson, Julian Savulescu, Hazem Zohny, Walter Sinnott-Armstrong. 2022. Which features of patients are morally relevant in ventilator triage? A survey of the UK public. BMC Medical Ethics.
Kenzie Doyle and Walter Sinnott-Armstrong, eds. 2019. Ethics of Artificial Intelligence: From Dating to Finance. Lulu Press.
Jana Schaich Borg, Walter Sinnott-Armstrong, Vincent Conitzer. How to Use Artificial Intelligence to Improve Human Moral Judgement ($205,147 grant awarded from Templeton World Charity Foundation, 2018)
See related teams, How to Build Ethics into Robust Artificial Intelligence (2019-2020) and How to Build Ethics into Robust Artificial Intelligence (2017-2018).
Team Leaders
Vincent Conitzer, Arts & Sciences-Computer Science
Jana Schaich Borg, Social Science Research Institute
Graduate Team Members
Cassandra Carley, Computer Science-PHD, Computer Science-MS
Lok Chan, Philosophy-PHD
Yuan Deng, Computer Science-PHD
Caspar Oesterheld, Computer Science-PHD
Gayan Seneviratna, Electrical/Computer Engineering-MS
Abbas Zaidi, Statistical Science-MS
Undergraduate Team Members
Emre Kiziltug, Economics (BS)
Faculty/Staff Team Members
Joyanne Becker, Duke Social Science Research Institute
Kenzie Doyle, Duke Institute for Brain Sciences
Michele Peruzzi, Arts & Sciences-Statistical Science
Arkaprava Roy, Arts & Sciences-Statistical Science
Walter Sinnott-Armstrong, Arts & Sciences-Philosophy
Joshua Skorburg, Fuqua School of Business
Siyuan Yin, Arts & Sciences-Philosophy
Community Team Members
CEPS Ideas Lab, Brussels
John Dickerson, University of Maryland
Duncan McElfresh, University of Maryland
Eitan Sapiro-Gheiler, Princeton University (Undergraduate Student)