Publication type: Conference other
Type of review: Peer review (abstract)
Title: A framework for assessing and certifying explainability of health-oriented AI systems
Authors: Denzel, Philipp
Brunner, Stefan
Luley, Paul-Philipp
Frischknecht-Gruber, Carmen
Reif, Monika Ulrike
Schilling, Frank-Peter
Amini, Amin
Repetto, Marco
Iranfar, Arman
Weng, Joanna
Chavarriaga, Ricardo
et al.: No
Conference details: Explainable AI in Medicine Workshop, Lugano, Switzerland, 2-3 November 2023
Issue Date: 2-Nov-2023
Language: English
Subjects: Explainable artificial intelligence; Transparency; Trustworthy AI; EU AI Act
Subject (DDC): 006: Special computer methods
Abstract: Explainability is recognized as one of the key tenets for the development of trustworthy AI systems in health-related applications. As AI regulation takes shape, organizations deploying health-oriented AI systems will have to comply with requirements on transparency and explainability. However, despite the imminent introduction of these regulations, actionable guidelines for assessing compliance are still lacking. We present ongoing work on a framework for assessing and certifying the transparency of AI systems. The framework makes an explicit link between the foreseen certification requirements and validated processes, algorithms, and methods for assessing compliance of AI systems, resulting in a concrete workflow for AI certification. It is based on an analysis of the proposed AI regulation in the EU, recommended practices, and ISO standards, complemented by empirically validated state-of-the-art algorithmic methods for explainable AI in real-world applications. Stakeholders have distinct requirements for the explainability of AI systems, and a wide variety of explainable AI methodologies are available that may suit different stakeholder preferences, data modalities, applications, and purposes. The nuanced selection of relevant methodologies is therefore an indispensable consideration in this framework, and a taxonomy has been developed to guide the choice of an appropriate and applicable set of methods. The framework and the application of the taxonomy are illustrated through several health-related use cases. Take the case of a skin lesion classification system, whose stakeholders include the patient, the dermatologist, the developers, and the authorities. Here, several considerations guide the choice of methods: e.g., which stakeholder should receive the explanation, whether the model is directly interpretable, whether intrinsic or post-hoc explanatory methods are required, and whether explanations should be local or global. For the dermatologist, for instance, where local explanations are required and deep learning methods are used, our framework points to methods such as SHAP or LIME, which illustrate which features in the image led to the model's decision. Such methods might indicate that a lesion was classified as malignant due to its asymmetric shape, ill-defined border, or irregular colour. Likewise, developers and regulators may require other types of explanations, for example explaining global behavior in conjunction with local behavior, or identifying model weaknesses throughout the learning and verification stages by providing feature-level explanations. In essence, the presented framework provides a concrete guide for researchers, developers, and certification bodies in the development, validation, and certification of explainable, transparent AI systems and promotes the adoption of best practices for responsible AI innovation.
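To make the local, post-hoc explanation step mentioned in the abstract concrete, the following is a minimal, illustrative Python sketch (not part of the published work) of applying LIME to a hypothetical skin-lesion classifier. The `predict_proba` function and `lesion_image` are placeholders standing in for a real trained model and a dermoscopic image; SHAP offers analogous explainers for deep image models, and the choice between such methods would follow the stakeholder considerations described above.

```python
# Illustrative sketch only: local, post-hoc explanation of a hypothetical
# skin-lesion classifier with LIME. `predict_proba` and `lesion_image` are
# placeholders, not the authors' model or data.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_proba(images: np.ndarray) -> np.ndarray:
    """Stand-in classifier: returns [p(benign), p(malignant)] per image.
    Replace with the real model's prediction function (e.g. a CNN softmax)."""
    scores = images.mean(axis=(1, 2, 3))           # dummy score from mean intensity
    return np.stack([1.0 - scores, scores], axis=1)

lesion_image = np.random.rand(224, 224, 3)         # placeholder for a dermoscopic image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    lesion_image,
    predict_proba,
    top_labels=1,       # explain only the predicted class
    num_samples=1000,   # perturbed samples used to fit the local surrogate model
)

# Highlight the superpixels that contributed most to the prediction, e.g. regions
# covering an asymmetric shape, ill-defined border, or irregular colour.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img, mask)               # image with explanatory regions outlined
```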
URI: https://digitalcollection.zhaw.ch/handle/11475/29258
Fulltext version: Published version
License (according to publishing contract): Licence according to publishing contract
Department: School of Engineering
Organisational Unit: Centre for Artificial Intelligence (CAI)
Institute of Applied Mathematics and Physics (IAMP)
Published as part of the ZHAW project: certAInty – A Certification Scheme for AI systems
Appears in collections: Publikationen School of Engineering

Files in This Item:
There are no files associated with this item.
Denzel, P., Brunner, S., Luley, P.-P., Frischknecht-Gruber, C., Reif, M. U., Schilling, F.-P., Amini, A., Repetto, M., Iranfar, A., Weng, J., & Chavarriaga, R. (2023, November 2). A framework for assessing and certifying explainability of health-oriented AI systems. Explainable AI in Medicine Workshop, Lugano, Switzerland, 2-3 November 2023.

