The burgeoning field of artificial intelligence (AI) has ushered in a new era of ethical challenges, demanding robust governance and ethics frameworks. Amid increasing calls from governments, civil society, and the public for ethical AI governance, experts have proposed diverse strategies for signaling a company's commitment to responsible AI, ranging from adopting normative principles and establishing AI ethics advisory boards to conducting audits followed by declarations of conformity. AI ethics audits have received significant attention in recent policies, such as the European Union's AI Act and the US NIST AI Risk Management Framework, as critical instruments of “smart regulation” (Gunningham and Sinclair 2017).
Despite growing advocacy for these strategies, there is a significant gap in empirical evidence regarding their impact on public perceptions and behaviors. To address this gap, our research empirically investigates the influence of various AI ethics certification signals on consumer trust and support. Drawing theoretically on the related literatures on eco-labeling, food labeling, and product safety in medicine, as well as the technology acceptance model (TAM), we use a conjoint experiment with a multi-country sample to explore which attributes of AI ethics certifications the public finds salient, including the certifying entity, the type of product, the nature of the audit, and the label design.
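To make the conjoint design concrete, the sketch below shows how profiles over the four attributes named above could be randomized and analyzed. It is a minimal illustration, not the study's actual instrument: the attribute levels, sample sizes, and the 1–7 trust outcome are hypothetical placeholders, and the analysis uses the standard approach of regressing the rating on attribute dummies with respondent-clustered errors, which under full randomization recovers average marginal component effects (AMCEs).

```python
# Minimal sketch of a conjoint design for AI ethics certification signals.
# All attribute levels and sample sizes are hypothetical placeholders.
import random

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical levels for the four attributes named in the abstract.
ATTRIBUTES = {
    "certifier": ["government agency", "industry body", "independent NGO"],
    "product": ["chatbot", "hiring tool", "medical diagnosis aid"],
    "audit": ["self-assessment", "third-party audit"],
    "label": ["text-only seal", "graphical seal"],
}

def random_profile(rng: random.Random) -> dict:
    """Draw one profile by sampling each attribute level uniformly."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

rng = random.Random(42)
rows = []
for resp in range(500):      # simulated respondents
    for task in range(5):    # rating tasks per respondent
        profile = random_profile(rng)
        # Placeholder outcome: a random trust rating on a 1-7 scale.
        profile.update(respondent=resp, task=task, trust=rng.randint(1, 7))
        rows.append(profile)

df = pd.DataFrame(rows)

# With fully randomized attributes, OLS of the rating on attribute dummies
# estimates AMCEs; standard errors are clustered by respondent.
model = smf.ols("trust ~ C(certifier) + C(product) + C(audit) + C(label)",
                data=df)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["respondent"]})
print(fit.summary())
```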
The results of this study will have important implications for whether there is a return on investment (ROI) in responsible AI (RAI) strategies, and for whether strategies such as auditing and certification shape trust in public-sector and private-sector uses of AI.