Artificial Intelligence (AI) has become increasingly influential, making it crucial to understand what drives public acceptance of AI. Moral foundations theory predicts responses to morally contentious acts, and people are more likely to accept an AI’s decisions when they feel their moral foundations are being considered. While previous research has examined how moral foundations predict awareness and perceptions of AI, public acceptance of AI also needs to be understood in everyday contexts. To address this gap, this study explores the role of moral foundations in shaping acceptance of AI across diverse domains of daily life, using survey data from 614 U.S. participants. Results show that when AI is perceived as making certain groups vulnerable or excluding them from opportunities, people are less likely to accept it in the contexts of hiring, criminal sentencing, and job automation. Results also suggest that individuals who place a high value on social norms are more likely to accept AI in contexts such as hiring, criminal sentencing, and marketing. This study highlights the importance of moral psychology in predicting public acceptance of AI and suggests that stakeholders consider moral foundations in the design and use of AI, which may help address ethical conflicts and guide policy.
Accepted Oral Presentation