Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-28966
Full metadata record
DC Field: Value
dc.contributor.author: Scharowski, Nicolas
dc.contributor.author: Benk, Michaela
dc.contributor.author: Kühne, Swen J.
dc.contributor.author: Wettstein, Léane
dc.contributor.author: Brühlmann, Florian
dc.date.accessioned: 2023-10-27T09:05:07Z
dc.date.available: 2023-10-27T09:05:07Z
dc.date.issued: 2023-05-15
dc.identifier.isbn: 979-8-4007-0192-4
dc.identifier.uri: https://digitalcollection.zhaw.ch/handle/11475/28966
dc.description.abstract: Auditing plays a pivotal role in the development of trustworthy AI. However, current research primarily focuses on creating auditable AI documentation, which is intended for regulators and experts rather than end-users affected by AI decisions. How to communicate to members of the public that an AI has been audited and considered trustworthy remains an open challenge. This study empirically investigated certification labels as a promising solution. Through interviews (N = 12) and a census-representative survey (N = 302), we investigated end-users' attitudes toward certification labels and their effectiveness in communicating trustworthiness in low- and high-stakes AI scenarios. Based on the survey results, we demonstrate that labels can significantly increase end-users' trust and willingness to use AI in both low- and high-stakes scenarios. However, end-users' preferences for certification labels and their effect on trust and willingness to use AI were more pronounced in high-stakes scenarios. Qualitative content analysis of the interviews revealed opportunities and limitations of certification labels, as well as facilitators and inhibitors for the effective use of labels in the context of AI. For example, while certification labels can mitigate data-related concerns expressed by end-users (e.g., privacy and data protection), other concerns (e.g., model performance) are more challenging to address. Our study provides valuable insights and recommendations for designing and implementing certification labels as a promising constituent within the trustworthy AI ecosystem.
dc.language.iso: en
dc.publisher: Association for Computing Machinery
dc.rights: http://creativecommons.org/licenses/by/4.0/
dc.subject: Computers and society
dc.subject: Computer science
dc.subject: Artificial intelligence
dc.subject: Audit
dc.subject: Documentation
dc.subject: Label
dc.subject: Seal
dc.subject: Certification
dc.subject: Trust
dc.subject: User study
dc.subject.ddc: 006: Special computer methods
dc.subject.ddc: 150: Psychology
dc.title: Certification labels for trustworthy AI: insights from an empirical mixed-method study
dc.type: Conference paper
dcterms.type: Text
zhaw.departement: Angewandte Psychologie
zhaw.organisationalunit: Psychologisches Institut (PI)
dc.identifier.doi: 10.1145/3593013.3593994
dc.identifier.doi: 10.21256/zhaw-28966
zhaw.conference.details: 6th ACM Conference on Fairness, Accountability, and Transparency (FAccT), Chicago, USA, 12-15 June 2023
zhaw.funding.eu: No
zhaw.originated.zhaw: Yes
zhaw.pages.end: 260
zhaw.pages.start: 248
zhaw.publication.status: publishedVersion
zhaw.publication.review: Peer review (publication)
zhaw.title.proceedings: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
zhaw.webfeed: PI - Umwelt- und Nachhaltigkeitspsychologie
zhaw.author.additional: No
zhaw.display.portrait: Yes
zhaw.relation.references: https://osf.io/gzp5k/
Appears in collections: Publikationen Angewandte Psychologie

Files in This Item:
2023_Scharowski-etal_Certification-labels-for-trustworthy-AI.pdf (1.15 MB, Adobe PDF)
Scharowski, N., Benk, M., Kühne, S. J., Wettstein, L., & Brühlmann, F. (2023). Certification labels for trustworthy AI: insights from an empirical mixed-method study [Conference paper]. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 248–260. https://doi.org/10.1145/3593013.3593994

