Combining Uncertainty Information With AI Recommendations Supports Calibration With Domain Knowledge
Abstract
The use of Artificial Intelligence (AI) decision support is increasing in high-stakes contexts such as healthcare, defense, and finance. Uncertainty information may help users better leverage AI predictions, especially when combined with their domain knowledge. We conducted a human-subject experiment with an online sample to examine the effects of presenting uncertainty information alongside AI recommendations. The experimental stimuli and task, identifying plant and animal images, were drawn from an existing image recognition deep learning model, a popular approach to AI. The uncertainty information consisted of the predicted probability that each label was the true label, presented both numerically and visually. The study tested the effect of AI recommendations in a within-subject comparison and of uncertainty information in a between-subject comparison. The results suggest that AI recommendations increased both participants' accuracy and confidence. Further, providing uncertainty information significantly increased accuracy but not confidence, suggesting that it may be effective for reducing overconfidence. Based on a self-reported measure, participants tended to have higher domain knowledge for animals than for plants. Participants with more domain knowledge were appropriately less confident when uncertainty information was provided. This suggests that people use AI and uncertainty information differently, treating the AI as either an expert or a second opinion, depending on their level of domain knowledge. Overall, these results suggest that, when presented appropriately, uncertainty information can reduce the overconfidence induced by AI recommendations.
Recommended Citation
H. V. Subramanian et al., "Combining Uncertainty Information With AI Recommendations Supports Calibration With Domain Knowledge," Journal of Risk Research, Taylor and Francis Group; Routledge, Jan 2023.
The definitive version is available at https://doi.org/10.1080/13669877.2023.2259406
Department(s)
Engineering Management and Systems Engineering
Second Department
Psychological Science
Keywords and Phrases
artificial intelligence; human-AI teams; overconfidence; risk communication; uncertainty
International Standard Serial Number (ISSN)
1466-4461; 1366-9877
Document Type
Article - Journal
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2023 Taylor and Francis Group; Routledge, All rights reserved.
Publication Date
01 Jan 2023