Please use this identifier to cite or link to this item:
https://doi.org/10.21256/zhaw-22061
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Amirian, Mohammadreza | - |
dc.contributor.author | Tuggener, Lukas | - |
dc.contributor.author | Chavarriaga, Ricardo | - |
dc.contributor.author | Satyawan, Yvan Putra | - |
dc.contributor.author | Schilling, Frank-Peter | - |
dc.contributor.author | Schwenker, Friedhelm | - |
dc.contributor.author | Stadelmann, Thilo | - |
dc.date.accessioned | 2021-03-15T10:57:26Z | - |
dc.date.available | 2021-03-15T10:57:26Z | - |
dc.date.issued | 2021-03 | - |
dc.identifier.uri | https://digitalcollection.zhaw.ch/handle/11475/22061 | - |
dc.description.abstract | With great power comes great responsibility. The success of machine learning, especially deep learning, in research and practice has attracted a great deal of interest, which in turn necessitates increased trust. Sources of mistrust include matters of model genesis ("Is this really the appropriate model?") and interpretability ("Why did the model come to this conclusion?", "Is the model safe from being easily fooled by adversaries?"). In this paper, two partners for the trustworthiness tango are presented: recent advances and ideas, as well as practical applications in industry in (a) Automated machine learning (AutoML), a powerful tool to optimize deep neural network architectures and fine-tune hyperparameters, which promises to build models in a safer and more comprehensive way; (b) Interpretability of neural network outputs, which addresses the vital question regarding the reasoning behind model predictions and provides insights to improve robustness against adversarial attacks. | de_CH |
dc.language.iso | en | de_CH |
dc.publisher | Springer | de_CH |
dc.rights | Licence according to publishing contract | de_CH |
dc.subject | Automated deep learning | de_CH |
dc.subject | AutoDL | de_CH |
dc.subject | Adversarial attacks | de_CH |
dc.subject.ddc | 006: Special computer methods | de_CH |
dc.title | Two to trust : AutoML for safe modelling and interpretable deep learning for robustness | de_CH |
dc.type | Conference: Paper | de_CH |
dcterms.type | Text | de_CH |
zhaw.departement | School of Engineering | de_CH |
zhaw.organisationalunit | Institut für Informatik (InIT) | de_CH |
dc.identifier.doi | 10.21256/zhaw-22061 | - |
zhaw.conference.details | 1st TAILOR Workshop on Trustworthy AI at ECAI 2020, Santiago de Compostela, Spain, 29-30 August 2020 | de_CH |
zhaw.funding.eu | No | de_CH |
zhaw.originated.zhaw | Yes | de_CH |
zhaw.publication.status | acceptedVersion | de_CH |
zhaw.publication.review | Peer review (Abstract) | de_CH |
zhaw.title.proceedings | Postproceedings of the 1st TAILOR Workshop on Trustworthy AI at ECAI 2020 | de_CH |
zhaw.webfeed | Datalab | de_CH |
zhaw.webfeed | Information Engineering | de_CH |
zhaw.webfeed | ZHAW digital | de_CH |
zhaw.webfeed | Natural Language Processing | de_CH |
zhaw.webfeed | Machine Perception and Cognition | de_CH |
zhaw.webfeed | Intelligent Vision Systems | de_CH |
zhaw.funding.zhaw | Ada – Advanced Algorithms for an Artificial Data Analyst | de_CH |
zhaw.funding.zhaw | QualitAI - Quality control of industrial products via deep learning on images | de_CH |
zhaw.author.additional | No | de_CH |
zhaw.display.portrait | Yes | de_CH |
Appears in collections: | Publikationen School of Engineering |
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
2021_Amirian_etal_AutoML-for-safe-modelling_TAILOR_ECAI.pdf | Accepted Version | 2.89 MB | Adobe PDF | Open/View |
Amirian, M., Tuggener, L., Chavarriaga, R., Satyawan, Y. P., Schilling, F.-P., Schwenker, F., & Stadelmann, T. (2021, March). Two to trust : AutoML for safe modelling and interpretable deep learning for robustness. Postproceedings of the 1st TAILOR Workshop on Trustworthy AI at ECAI 2020. https://doi.org/10.21256/zhaw-22061
All items in this repository are protected by copyright, unless otherwise indicated.