Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-22061
Full metadata record
DC Field | Value | Language
dc.contributor.author | Amirian, Mohammadreza | -
dc.contributor.author | Tuggener, Lukas | -
dc.contributor.author | Chavarriaga, Ricardo | -
dc.contributor.author | Satyawan, Yvan Putra | -
dc.contributor.author | Schilling, Frank-Peter | -
dc.contributor.author | Schwenker, Friedhelm | -
dc.contributor.author | Stadelmann, Thilo | -
dc.date.accessioned | 2021-03-15T10:57:26Z | -
dc.date.available | 2021-03-15T10:57:26Z | -
dc.date.issued | 2021-03 | -
dc.identifier.uri | https://digitalcollection.zhaw.ch/handle/11475/22061 | -
dc.description.abstract | With great power comes great responsibility. The success of machine learning, especially deep learning, in research and practice has attracted a great deal of interest, which in turn necessitates increased trust. Sources of mistrust include matters of model genesis ("Is this really the appropriate model?") and interpretability ("Why did the model come to this conclusion?", "Is the model safe from being easily fooled by adversaries?"). In this paper, two partners for the trustworthiness tango are presented: recent advances and ideas, as well as practical applications in industry, in (a) automated machine learning (AutoML), a powerful tool to optimize deep neural network architectures and fine-tune hyperparameters, which promises to build models in a safer and more comprehensive way; and (b) interpretability of neural network outputs, which addresses the vital question regarding the reasoning behind model predictions and provides insights to improve robustness against adversarial attacks. | de_CH
dc.language.iso | en | de_CH
dc.publisher | Springer | de_CH
dc.rights | Licence according to publishing contract | de_CH
dc.subject | Automated deep learning | de_CH
dc.subject | AutoDL | de_CH
dc.subject | Adversarial attacks | de_CH
dc.subject.ddc | 006: Spezielle Computerverfahren | de_CH
dc.title | Two to trust : AutoML for safe modelling and interpretable deep learning for robustness | de_CH
dc.type | Konferenz: Paper | de_CH
dcterms.type | Text | de_CH
zhaw.departement | School of Engineering | de_CH
zhaw.organisationalunit | Institut für Informatik (InIT) | de_CH
dc.identifier.doi | 10.21256/zhaw-22061 | -
zhaw.conference.details | 1st TAILOR Workshop on Trustworthy AI at ECAI 2020, Santiago de Compostela, Spain, 29-30 August 2020 | de_CH
zhaw.funding.eu | No | de_CH
zhaw.originated.zhaw | Yes | de_CH
zhaw.publication.status | acceptedVersion | de_CH
zhaw.publication.review | Peer review (Abstract) | de_CH
zhaw.title.proceedings | Postproceedings of the 1st TAILOR Workshop on Trustworthy AI at ECAI 2020 | de_CH
zhaw.webfeed | Datalab | de_CH
zhaw.webfeed | Information Engineering | de_CH
zhaw.webfeed | ZHAW digital | de_CH
zhaw.webfeed | Natural Language Processing | de_CH
zhaw.webfeed | Machine Perception and Cognition | de_CH
zhaw.webfeed | Intelligent Vision Systems | de_CH
zhaw.funding.zhaw | Ada – Advanced Algorithms for an Artificial Data Analyst | de_CH
zhaw.funding.zhaw | QualitAI - Quality control of industrial products via deep learning on images | de_CH
zhaw.author.additional | No | de_CH
zhaw.display.portrait | Yes | de_CH
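
The abstract above presents AutoML as a tool for optimizing network architectures and fine-tuning hyperparameters. As a minimal sketch of that idea only, the following Python snippet runs a plain random search over a hypothetical hyperparameter space for a small neural network; the digits dataset, the search space, and the ten-trial budget are illustrative assumptions, not the setup used in the paper.

# A plain random search over a hypothetical hyperparameter space; the dataset,
# search space, and trial budget are illustrative assumptions only.
import random

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

search_space = {
    "hidden_layer_sizes": [(32,), (64,), (128,), (64, 64)],
    "learning_rate_init": [1e-4, 1e-3, 1e-2],
    "alpha": [1e-5, 1e-4, 1e-3],  # L2 regularization strength
}

random.seed(0)
best_score, best_config = -1.0, None
for _ in range(10):  # small trial budget, just for the sketch
    config = {name: random.choice(choices) for name, choices in search_space.items()}
    model = MLPClassifier(max_iter=200, random_state=0, **config)
    model.fit(X_train, y_train)
    score = model.score(X_val, y_val)  # validation accuracy
    if score > best_score:
        best_score, best_config = score, config

print(f"best validation accuracy {best_score:.3f} with {best_config}")

Full AutoML systems replace the random sampler with smarter strategies such as Bayesian optimization or evolutionary search, and typically search over the architecture itself as well.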
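The interpretability question quoted in the abstract ("Why did the model come to this conclusion?") is commonly approached with gradient-based saliency: the gradient of the predicted-class score with respect to the input highlights the features the prediction is most sensitive to. A minimal PyTorch sketch follows; the tiny model and the random input are placeholder assumptions, not the models or data studied in the paper.

import torch
import torch.nn as nn

# Tiny stand-in classifier and a random "image"; both are placeholder
# assumptions for the sketch, not the paper's setup.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)
logits = model(x)
predicted = int(logits.argmax(dim=1))

# Backpropagate the predicted-class score to the input; the gradient's
# magnitude marks the input pixels the prediction is most sensitive to.
logits[0, predicted].backward()
saliency = x.grad.abs().squeeze()
print(predicted, saliency.shape)  # e.g. 7 torch.Size([28, 28])

The same input sensitivity underlies standard adversarial attacks (e.g., the fast gradient sign method perturbs the input along the sign of a loss gradient), which is one concrete link between interpretability and robustness noted in the abstract.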
Appears in collections: Publikationen School of Engineering

Files in This Item:
File | Description | Size | Format
2021_Amirian_etal_AutoML-for-safe-modelling_TAILOR_ECAI.pdf | Accepted Version | 2.89 MB | Adobe PDF
Citation:
Amirian, M., Tuggener, L., Chavarriaga, R., Satyawan, Y. P., Schilling, F.-P., Schwenker, F., & Stadelmann, T. (2021, March). Two to trust : AutoML for safe modelling and interpretable deep learning for robustness. Postproceedings of the 1st TAILOR Workshop on Trustworthy AI at ECAI 2020. Springer. https://doi.org/10.21256/zhaw-22061

