Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-30985
Full metadata record
DC Field                      Value                                                       Language
dc.contributor.author         Brunner, Stefan                                             -
dc.contributor.author         Reif, Monika Ulrike                                         -
dc.contributor.author         Rejzek, Martin                                              -
dc.date.accessioned           2024-07-04T13:41:06Z                                        -
dc.date.available             2024-07-04T13:41:06Z                                        -
dc.date.issued                2024-06                                                     -
dc.identifier.isbn            978-83-68136-16-6                                           de_CH
dc.identifier.isbn            978-83-68136-03-6                                           de_CH
dc.identifier.uri             https://esrel2024.com/wp-content/uploads/articles/part4/improving-resilience-and-robustness-in-artificial-intelligence-systems-through-adversarial-training-and-verification.pdf   de_CH
dc.identifier.uri             https://digitalcollection.zhaw.ch/handle/11475/30985        -
dc.description.abstract       This contribution presents a comprehensive review of the applicability of adversarial perturbations in the training and verification of neural networks. Adversarial perturbations, inputs deliberately manipulated to mislead a model, have emerged as a powerful tool for improving the robustness and generalization of neural networks. This review systematically examines how adversarial perturbations are used in both the training and verification phases of neural network development. In the training phase, adversarial perturbations are harnessed to strengthen model resilience against adversarial attacks by augmenting the training dataset with perturbed examples; techniques such as adversarial training and robust optimization are examined for their effectiveness in fortifying neural networks against both traditional and advanced attacks (see the illustrative sketch after this metadata record). In verification, adversarial perturbations offer a novel approach to assessing model reliability and safety: adversarial examples generated during verification expose vulnerabilities and help uncover shortcomings of neural network architectures. The review evaluates various state-of-the-art adversarial perturbations across model architectures and datasets and analyzes their applications in both training and verification. By giving a thorough overview of their benefits, limitations, and evolving methodologies, the review not only deepens the understanding of the pivotal role adversarial perturbations play in enhancing the robustness and resilience of neural networks, but also offers a basis for selecting the appropriate perturbation for a specific task such as training or verification.   de_CH
dc.language.iso               en                                                          de_CH
dc.publisher                  Polish Safety and Reliability Association                   de_CH
dc.rights                     Licence according to publishing contract                    de_CH
dc.subject                    Machine learning                                            de_CH
dc.subject                    Adversarial perturbation                                    de_CH
dc.subject                    Deep learning                                               de_CH
dc.subject                    Neural network resilience                                   de_CH
dc.subject.ddc                006: Special computer methods                               de_CH
dc.title                      Improving resilience and robustness in artificial intelligence systems through adversarial training and verification   de_CH
dc.type                       Conference: Paper                                           de_CH
dcterms.type                  Text                                                        de_CH
zhaw.departement              School of Engineering                                       de_CH
zhaw.organisationalunit       Institut für Angewandte Mathematik und Physik (IAMP)        de_CH
zhaw.publisher.place          Gdynia                                                      de_CH
dc.identifier.doi             10.21256/zhaw-30985                                         -
zhaw.conference.details       34th European Safety and Reliability Conference (ESREL), Cracow, Poland, 23-27 June 2024   de_CH
zhaw.funding.eu               No                                                          de_CH
zhaw.originated.zhaw          Yes                                                         de_CH
zhaw.pages.end                48                                                          de_CH
zhaw.pages.start              39                                                          de_CH
zhaw.parentwork.editor        Kołowrocki, Krzysztof                                       -
zhaw.parentwork.editor        Dąbrowska, Ewa                                              de_CH
zhaw.publication.status       publishedVersion                                            de_CH
zhaw.publication.review       Peer review (publication)                                   de_CH
zhaw.title.proceedings        Advances in Reliability, Safety and Security, Part 4        de_CH
zhaw.author.additional        No                                                          de_CH
zhaw.display.portrait         Yes                                                         de_CH
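The abstract describes two uses of adversarial perturbations: augmenting training data with perturbed examples (adversarial training) and probing trained models during verification. The following is a minimal sketch of the first idea using one-step FGSM in PyTorch; it is not code from the paper. The toy two-layer model, the synthetic 2-D dataset, and the budget eps = 0.1 are illustrative assumptions chosen for brevity. The final lines give only an empirical, verification-style check (robust accuracy under a fresh attack), not a formal certification.

```python
# Minimal sketch of FGSM-based adversarial training (illustrative, not the paper's code).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in data: 2-D points with a linearly separable binary label.
X = torch.randn(512, 2)
y = (X[:, 0] + X[:, 1] > 0).long()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
eps = 0.1  # L-infinity perturbation budget (illustrative choice)

def fgsm(model, x, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss(model(x), y))."""
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).detach()

for epoch in range(50):
    # Augment the clean batch with adversarially perturbed copies, then
    # minimize the loss on both: the dataset-augmentation view of
    # adversarial training described in the abstract.
    x_adv = fgsm(model, X, y, eps)
    opt.zero_grad()
    loss = loss_fn(model(X), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()

# Verification-style check, empirical only: measure accuracy on clean inputs
# and on adversarial examples generated against the trained model.
with torch.no_grad():
    clean_acc = (model(X).argmax(1) == y).float().mean().item()
x_test_adv = fgsm(model, X, y, eps)
with torch.no_grad():
    robust_acc = (model(x_test_adv).argmax(1) == y).float().mean().item()
print(f"clean accuracy:  {clean_acc:.2f}")
print(f"robust accuracy: {robust_acc:.2f}")
```

Formal verification approaches surveyed in this line of work go further than the empirical check above: they attempt to certify that no perturbation within the eps-ball can change a prediction, for example via bound-propagation or SMT-based methods.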
Appears in collections: Publikationen School of Engineering

APA:     Brunner, S., Reif, M. U., & Rejzek, M. (2024). Improving resilience and robustness in artificial intelligence systems through adversarial training and verification [Conference paper]. In K. Kołowrocki & E. Dąbrowska (Eds.), Advances in Reliability, Safety and Security, Part 4 (pp. 39–48). Polish Safety and Reliability Association. https://doi.org/10.21256/zhaw-30985
Harvard: Brunner, S., Reif, M.U. and Rejzek, M. (2024) ‘Improving resilience and robustness in artificial intelligence systems through adversarial training and verification’, in K. Kołowrocki and E. Dąbrowska (eds) Advances in Reliability, Safety and Security, Part 4. Gdynia: Polish Safety and Reliability Association, pp. 39–48. Available at: https://doi.org/10.21256/zhaw-30985.
IEEE:    S. Brunner, M. U. Reif, and M. Rejzek, “Improving resilience and robustness in artificial intelligence systems through adversarial training and verification,” in Advances in Reliability, Safety and Security, Part 4, Jun. 2024, pp. 39–48. doi: 10.21256/zhaw-30985.
ISO 690: BRUNNER, Stefan, Monika Ulrike REIF and Martin REJZEK, 2024. Improving resilience and robustness in artificial intelligence systems through adversarial training and verification. In: Krzysztof KOŁOWROCKI and Ewa DĄBROWSKA (eds.), Advances in Reliability, Safety and Security, Part 4 [online]. Conference paper. Gdynia: Polish Safety and Reliability Association. June 2024. pp. 39–48. ISBN 978-83-68136-16-6. Available at: https://esrel2024.com/wp-content/uploads/articles/part4/improving-resilience-and-robustness-in-artificial-intelligence-systems-through-adversarial-training-and-verification.pdf
Chicago: Brunner, Stefan, Monika Ulrike Reif, and Martin Rejzek. 2024. “Improving Resilience and Robustness in Artificial Intelligence Systems through Adversarial Training and Verification.” Conference paper. In Advances in Reliability, Safety and Security, Part 4, edited by Krzysztof Kołowrocki and Ewa Dąbrowska, 39–48. Gdynia: Polish Safety and Reliability Association. https://doi.org/10.21256/zhaw-30985.
MLA:     Brunner, Stefan, et al. “Improving Resilience and Robustness in Artificial Intelligence Systems through Adversarial Training and Verification.” Advances in Reliability, Safety and Security, Part 4, edited by Krzysztof Kołowrocki and Ewa Dąbrowska, Polish Safety and Reliability Association, 2024, pp. 39–48, https://doi.org/10.21256/zhaw-30985.

