Special Session: Dehumanization of Robotic Assistants and Subsequent Unethical and Abusive Customer Behavior in Frontline Encounters: An Abstract

Abstract

Rapid technological advancements have revolutionized service encounters (Huang and Rust 2017) and given companies an opportunity to improve customer experiences through automated service interactions (van Doorn et al. 2017). As an increasing number of companies incorporate robotic assistants into their service designs, such as LoweBot, adopted by Lowe's as a shopping assistant (King 2014), and Pepper, adopted by Pizza Hut as a waiter (Santos 2018), some of these companies have observed abusive customer behavior directed toward the robots. For example, Ahti Heinla, co-founder of Starship Technologies, noted that it is not uncommon to find the company's food-delivery robots being kicked by people (Hamilton 2018). Such abusive customer behavior may negatively affect not only organizations, through the financial loss associated with damage to robotic assistants, but also other customers, through the spread of such behavior. Given the importance of this issue, this research examines why and when customers engage in abusive behavior directed toward robotic assistants, a behavior that is both dysfunctional and unethical because it violates commonly accepted norms of morally appropriate conduct. Drawing upon moral disengagement theory (Bandura 1986), we propose that customers may abuse robotic assistants in service encounters because of the psychological freedom that arises when moral self-regulation, which would otherwise restrain unethical and dysfunctional behavior, is deactivated. Specifically, we expect that customers are more likely to morally disengage and abuse a robotic assistant when the assistant's style is non-humanoid rather than humanoid: a non-humanoid style allows customers to view the assistant as a machine without human characteristics and to abuse it without experiencing self-condemnation.
The negative influence of a humanoid style on moral disengagement and abusive behavior is expected to be strengthened when the assistant's features are hedonic rather than utilitarian. Hedonic experiences tend to involve people-focused service delivery, whereas utilitarian experiences tend to involve equipment-focused service delivery (Ng et al. 2005). We therefore suggest that customers are more likely to attend to a humanoid cue of a robotic assistant when its features are hedonic than when they are utilitarian. The study results support the proposed interactive effect of robotic assistants' style (humanoid vs. non-humanoid) and features (hedonic vs. utilitarian) on abusive customer behavior. The paper offers managerial and theoretical implications for managing customers' unethical and abusive behavior toward robotic assistants.

Department(s)

Business and Information Technology

Keywords and Phrases

Dysfunctional customer behavior; Moral disengagement; Robotic assistants; Unethical customer behavior

International Standard Serial Number (ISSN)

2363-6173; 2363-6165

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2024 Springer, All rights reserved.

Publication Date

01 Jan 2020
