Artificial Virtue: The Machine Question and Perceptions of Moral Character in Artificial Moral Agents


Virtue ethics is a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the "machine question" by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? In an experiment, participants read scenarios in which either human or AI agents engage in virtuous or vicious behavior and then judge those agents' levels of virtue or vice. The scenarios represent the virtue ethics domains of truth, justice, fear, wealth, and honor. Quantitative and qualitative analyses show that moral attributions are weakened for AIs compared to humans, and that the reasoning and explanations offered for those attributions are varied and more complex. On "relational" views of membership in the moral community, virtuous machines would indeed be included, even if the attributions they receive are weakened. Hence, while our moral relationships with artificial agents may be of the same types as our relationships with human beings, they may yet remain substantively different from them.


Department

Arts, Languages, and Philosophy

Second Department

Psychological Science

Research Center/Lab(s)

Center for Science, Technology, and Society


This research was partially supported by the Army Research Office under Grant Number W911NF-19-1-0246. This work was also funded by a small grant from Missouri University of Science and Technology’s Center for Science, Technology, and Society to Daniel B. Shank and Patrick Gamez.

International Standard Serial Number (ISSN)

0951-5666; 1435-5655

Document Type

Article - Journal

Document Version


File Type





© 2020 Springer. All rights reserved.

Publication Date

25 April 2020