Publication in Academy of Management Annals

Human Trust in Artificial Intelligence: Review of Empirical Research

Artificial intelligence (AI) refers to a new generation of technologies capable of interacting with the environment and aiming to simulate human intelligence. The successful integration of AI into organizations depends critically on workers’ trust in AI technology. This review explains how AI differs from other technologies and surveys the empirical research on the determinants of human “trust” in AI conducted across multiple disciplines over the last 20 years. Based on the reviewed literature, we identify the form of AI representation (robot, virtual, and embedded) and its level of machine intelligence (i.e., its capabilities) as important antecedents of trust development and propose a framework that addresses the elements shaping users’ cognitive and emotional trust. Our review reveals the important roles of AI’s tangibility, transparency, reliability, and immediacy behaviors in developing cognitive trust, and the role of AI’s anthropomorphism specifically for emotional trust. We also note several limitations of the current evidence base, such as the diversity of trust measures and an overreliance on short-term, small-sample, experimental studies, in which trust development is likely to differ from that in longer-term, higher-stakes field environments. Based on our review, we suggest the most promising paths for future research.

Year: 2020