When Are Artificial Intelligence vs. Human Agents Faulted for Wrongdoing? Moral Attributions after Individual and Joint Decisions
Artificial intelligence (AI) agents make decisions that affect individuals and society, and these decisions can produce outcomes that would traditionally be considered moral violations if performed by humans. Do people attribute the same moral permissibility and fault to AIs and humans when each produces the same moral violation outcome? Additionally, how do people attribute morality when an AI and a human jointly make the decision that produces that violation? We investigate these questions with an experiment that manipulates written descriptions of four real-world scenarios in which, originally, a violation outcome was produced by an AI. Our decision-making structures include individual decision-making (either AIs or humans) and joint decision-making (either humans monitoring AIs or AIs recommending to humans). We find that the decision-making structure has little effect on morally faulting AIs, but that humans who monitor AIs are faulted less than solo humans and humans receiving recommendations. Furthermore, people attribute more permission and less fault to AIs than to humans for the violation in both joint decision-making structures. The pattern of blame for joint AI-human wrongdoing suggests the potential for strategic scapegoating of AIs for human moral failings and the need for future research on AI-human teams.
Shank, D. B., DeSanti, A., & Maninger, T. (2019). When Are Artificial Intelligence vs. Human Agents Faulted for Wrongdoing? Moral Attributions after Individual and Joint Decisions. Information, Communication & Society, 22(5), 648-663. Routledge.
The definitive version is available at https://doi.org/10.1080/1369118X.2019.1568515
Intelligent Systems Center
Keywords and Phrases
Algorithms; Attributions; Decision-Making; Ethics; Morality
© 2019 Routledge, All rights reserved.
01 Apr 2019