E4b Through a Moral Lens: Moral Foundations and Public Acceptance of Artificial Intelligence Across Contexts

Wednesday, April 15, 2026 at 9:45 AM–11:15 AM PDT
Room 4
Short Description

Artificial Intelligence (AI) has become increasingly influential, making it crucial to understand what drives public acceptance of AI. Moral Foundations Theory predicts responses to morally contentious acts, and people are more likely to accept an AI's decisions when they feel their moral foundations have been considered. While previous research has examined how moral foundations predict awareness and perceptions of AI, public acceptance of AI also needs to be understood in everyday contexts. To address this gap, this research explores the role of moral foundations in shaping acceptance of AI across diverse domains of daily life, using survey data from 614 U.S. participants. Results show that when AI is perceived as making certain groups vulnerable or excluding them from opportunities, people are less likely to accept it in the contexts of hiring, criminal sentencing, and job automation. The results also suggest that individuals who place a high value on social norms are more likely to accept AI in contexts such as hiring, criminal sentencing, and marketing. This study highlights the importance of moral psychology in predicting public acceptance of AI and suggests that stakeholders consider moral foundations in AI design and use, which may help address ethical conflicts and guide policy.

Type of presentation

Accepted Oral Presentation

Submitter

Nayeon Kim, University of Georgia

Authors

Nayeon Kim, University of Georgia
Yilang Peng, University of Georgia