Exposed by AIs! People Personally Witness Artificial Intelligence Exposing Personal Information and Exposing People to Undesirable Content


Do people personally witness artificial intelligence (AI) committing moral wrongs? If so, what kinds of moral wrongs, and what situations produce them? To address these questions, respondents selected one of six prompt questions, each based on a moral foundation violation, asking about a personally witnessed interaction with an AI that resulted in a moral victim (victim prompts) or in which the AI seemed to engage in immoral actions (action prompts). Respondents then answered their selected question in an open-ended response. In conjunction with the liberty/privacy and purity moral foundations, and across both victim and action prompts, respondents most frequently reported moral violations involving two types of exposure by AIs: exposure of their personal information (31%) and people's exposure to undesirable content (20%). AIs expose people's personal information to their colleagues, close relations, and online audiences through information sharing across devices, through people in proximity to audio devices, and through simple accidents. AIs expose people, often children, to undesirable content such as nudity, pornography, violence, and profanity through their proximity to audio devices and through seemingly purposeful action. We argue that the prominence of these types of exposure in the reports may stem from their frequent occurrence on personal and home devices. This suggests that research on AI ethics should focus not only on prototypically harmful moral dilemmas (e.g., an autonomous vehicle deciding whom to sacrifice) but also on everyday interactions with personal technology.


Psychological Science


This research was supported by the Army Research Office under Grant Number W911NF-19-1-0246.

International Standard Serial Number (ISSN)

1044-7318; 1532-7590

Document Type

Article - Journal

© 2020 Taylor & Francis, All rights reserved.

Publication Date

25 May 2020