When Are Artificial Intelligence vs. Human Agents Faulted for Wrongdoing? Moral Attributions after Individual and Joint Decisions

Abstract

Artificial intelligence (AI) agents make decisions that affect individuals and society, and these decisions can produce outcomes traditionally considered moral violations when performed by humans. Do people attribute the same moral permissibility and fault to AIs and humans when each produces the same moral violation outcome? And how do people attribute morality when an AI and a human jointly make the decision that produces that violation? We investigate these questions with an experiment that manipulates written descriptions of four real-world scenarios in which, originally, a violation outcome was produced by an AI. The decision-making structures include individual decision-making (either AIs or humans) and joint decision-making (either humans monitoring AIs or AIs recommending to humans). We find that the decision-making structure has little effect on morally faulting AIs, but that humans who monitor AIs are faulted less than solo humans and humans receiving recommendations. Furthermore, in both joint decision-making structures, people attribute more permissibility and less fault to AIs than to humans for the same violation. These patterns of blame for joint AI-human wrongdoing suggest the potential for strategic scapegoating of AIs for human moral failings and the need for future research on AI-human teams.

Department(s)

Psychological Science

Research Center/Lab(s)

Intelligent Systems Center

Keywords and Phrases

Algorithms; Attributions; Decision-Making; Ethics; Morality

International Standard Serial Number (ISSN)

1369-118X; 1468-4462

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2019 Routledge. All rights reserved.

Publication Date

01 Apr 2019
