Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-20217
Publication type: Conference paper
Type of review: Peer review (abstract)
New version available at: https://digitalcollection.zhaw.ch/handle/11475/22061
Title: Two to trust : AutoML for safe modelling and interpretable deep learning for robustness
Authors: Amirian, Mohammadreza
Tuggener, Lukas
Chavarriaga, Ricardo
Satyawan, Yvan Putra
Schilling, Frank-Peter
Schwenker, Friedhelm
Stadelmann, Thilo
et al.: No
DOI: 10.21256/zhaw-20217
Published in: Proceedings of the 1st TAILOR Workshop on Trustworthy AI at ECAI 2020
Conference details: 1st TAILOR Workshop on Trustworthy AI at ECAI 2020, Santiago de Compostela, Spain, 29-30 August 2020
Publication date: August 2020
Publisher: Springer
Language: English
Keywords: Automated deep learning; AutoDL; Adversarial attacks
Subject area (DDC): 006: Special computer methods
Abstract: With great power comes great responsibility. The success of machine learning, especially deep learning, in research and practice has attracted a great deal of interest, which in turn necessitates increased trust. Sources of mistrust include matters of model genesis ("Is this really the appropriate model?") and interpretability ("Why did the model come to this conclusion?", "Is the model safe from being easily fooled by adversaries?"). In this paper, two partners for the trustworthiness tango are presented: recent advances and ideas, as well as practical applications in industry, in (a) Automated machine learning (AutoML), a powerful tool to optimize deep neural network architectures and fine-tune hyperparameters, which promises to build models in a safer and more comprehensive way; (b) Interpretability of neural network outputs, which addresses the vital question regarding the reasoning behind model predictions and provides insights to improve robustness against adversarial attacks.
URI: https://digitalcollection.zhaw.ch/handle/11475/20217
Fulltext version: Accepted version
License (according to publishing contract): License according to publishing contract
Department: School of Engineering
Organisational unit: 
Published as part of the ZHAW projects: Ada – Advanced Algorithms for an Artificial Data Analyst
QualitAI – Quality control of industrial products via deep learning on images
Appears in collections: Publications School of Engineering

Files in this item:
File | Description | Size | Format
2020_Amirian_etal_AutoML-for-safe-modelling_TAILOR_ECAI.pdf | Accepted Version | 2.88 MB | Adobe PDF


All items in this repository are protected by copyright, unless otherwise indicated.