|Publication type:||Conference paper|
|Type of review:||Peer review (publication)|
|Title:||Global efforts towards establishing safety directives for intelligent systems : review|
|Author(s):||Reif, Monika Ulrike|
|Proceedings:||Proceedings of the 32nd European Safety and Reliability Conference (ESREL 2022)|
|Conference details:||European Safety and Reliability Conference (ESREL 2022), Dublin, Ireland, 28 August - 1 September 2022|
|Publisher / Ed. Institution:||Research Publishing|
|Place of publication:||Singapore|
|Subjects:||Artificial intelligence; Künstliche Intelligenz; AI standards; Machine learning; Safety; Digitalisierung|
|Subject (DDC):||006: Special computer methods|
|Abstract:||Intelligent systems have found their way into our lives in almost every conceivable field. There is hardly any area in which the possibilities and applications of machine learning are not considered. In safety-critical systems, there is an urgent need to examine how a model generates a prediction and whether that response can be trusted. Furthermore, ethics must be considered in the practical development of AI systems to ensure a safe and secure application. Despite these requirements, due to the nature of deep learning, we are confronted with a black box. This disparity needs to be addressed using interpretability and explainability approaches to minimize potential bias and at the same time increase transparency, fairness, justice and inclusion. To enhance trust in intelligent systems, accountability, responsibility and robustness must be ensured as well. Appropriate policies and standards need to be put in place to enforce this in practice. We are facing a global challenge here; standards must be set not only at the national but also at the international level, and a common understanding of how to deal with AI on ethical and legal levels must be found. We provide an overview of the efforts being made at national and international level by governments and global organizations. We discuss current and upcoming challenges and risks posed by intelligent systems, considering ethical guidelines and legal frameworks. In particular, we examine and compare the classification of risk levels and mitigation strategies. To conclude, we show the latest state of technical feasibility for ensuring safe, transparent and robust AI systems and give an outlook on certification approaches for safe AI systems that meet the proposed governance frameworks.|
|Fulltext version:||Published version|
|License (according to publishing contract):||Licence according to publishing contract|
|Departement:||School of Engineering|
|Organisational Unit:||Institute of Applied Mathematics and Physics (IAMP)|
|Appears in collections:||Publikationen School of Engineering|
Files in This Item:
There are no files associated with this item.
Frischknecht-Gruber, C., Reif, M. U., & Senn, C. (2022). Global efforts towards establishing safety directives for intelligent systems : review [Conference paper]. Proceedings of the 32nd European Safety and Reliability Conference (ESREL 2022), 2420–2427. https://doi.org/10.3850/978-981-18-5183-4_S11-11-472-cd