Full metadata record
DC Field | Value | Language
dc.contributor.author | Kook, Lucas | -
dc.contributor.author | Herzog, Lisa | -
dc.contributor.author | Hothorn, Torsten | -
dc.contributor.author | Dürr, Oliver | -
dc.contributor.author | Sick, Beate | -
dc.date.accessioned | 2023-02-23T16:10:10Z | -
dc.date.available | 2023-02-23T16:10:10Z | -
dc.date.issued | 2022 | -
dc.identifier.issn | 0031-3203 | de_CH
dc.identifier.issn | 1873-5142 | de_CH
dc.identifier.uri | https://digitalcollection.zhaw.ch/handle/11475/27101 | -
dc.description | A preprint version of this article is available on arXiv at https://doi.org/10.48550/arXiv.2010.08376 | de_CH
dc.description.abstract | Outcomes with a natural order commonly occur in prediction problems and often the available input data are a mixture of complex data like images and tabular predictors. Deep Learning (DL) models are state-of-the-art for image classification tasks but frequently treat ordinal outcomes as unordered and lack interpretability. In contrast, classical ordinal regression models consider the outcome's order and yield interpretable predictor effects but are limited to tabular data. We present ordinal neural network transformation models (ontrams), which unite DL with classical ordinal regression approaches. ontrams are a special case of transformation models and trade off flexibility and interpretability by additively decomposing the transformation function into terms for image and tabular data using jointly trained neural networks. The performance of the most flexible ontram is by definition equivalent to a standard multi-class DL model trained with cross-entropy while being faster in training when facing ordinal outcomes. Lastly, we discuss how to interpret model components for both tabular and image data on two publicly available datasets. | de_CH
dc.language.iso | en | de_CH
dc.publisher | Elsevier | de_CH
dc.relation.ispartof | Pattern Recognition | de_CH
dc.rights | Licence according to publishing contract | de_CH
dc.subject | Deep learning | de_CH
dc.subject | Interpretability | de_CH
dc.subject | Distributional regression | de_CH
dc.subject | Ordinal regression | de_CH
dc.subject | Transformation model | de_CH
dc.subject.ddc | 006: Special computer methods | de_CH
dc.title | Deep and interpretable regression models for ordinal outcomes | de_CH
dc.type | Article in a scientific journal | de_CH
dcterms.type | Text | de_CH
zhaw.departement | School of Engineering | de_CH
zhaw.organisationalunit | Institut für Datenanalyse und Prozessdesign (IDP) | de_CH
dc.identifier.doi | 10.1016/j.patcog.2021.108263 | de_CH
zhaw.funding.eu | No | de_CH
zhaw.issue | 108263 | de_CH
zhaw.originated.zhaw | Yes | de_CH
zhaw.publication.status | publishedVersion | de_CH
zhaw.volume | 122 | de_CH
zhaw.publication.review | Peer review (publication) | de_CH
zhaw.funding.snf | 184603 | de_CH
zhaw.author.additional | No | de_CH
zhaw.display.portrait | Yes | de_CH
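The abstract describes ontrams as decomposing an ordinal transformation function additively into an image term and a tabular term. A minimal NumPy sketch of that idea, under the standard cumulative-link construction with a logistic link (the function names, the scalar shift terms, and the logistic choice are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    # Logistic CDF, used here as the link function F.
    return 1.0 / (1.0 + np.exp(-z))

def ontram_probs(cutpoints, eta_img, eta_tab):
    """Class probabilities for a K-class ordinal outcome.

    cutpoints: increasing array of K-1 transformation values h(y_k),
               in practice produced by a neural network.
    eta_img, eta_tab: additive shifts from the image and tabular model
                      parts (hypothetical names for illustration).
    """
    eta = eta_img + eta_tab  # additive decomposition
    # Cumulative probabilities P(Y <= y_k) = F(h(y_k) - eta),
    # padded with 0 and 1 at the boundaries.
    cdf = np.concatenate(([0.0], sigmoid(np.asarray(cutpoints) - eta), [1.0]))
    # Class probabilities as successive differences of the CDF.
    return np.diff(cdf)

# Example: 4 ordered classes, so 3 cutpoints.
probs = ontram_probs(np.array([-1.0, 0.5, 2.0]), eta_img=0.3, eta_tab=-0.1)
```

Because the cutpoints are increasing and the link is a CDF, the resulting probabilities are positive and sum to one; interpretability comes from the tabular shift `eta_tab` acting as a log-odds effect shared across all classes.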
Appears in collections: Publications School of Engineering

Files in this item:
There are no files associated with this item.
Cite as: Kook, L., Herzog, L., Hothorn, T., Dürr, O., & Sick, B. (2022). Deep and interpretable regression models for ordinal outcomes. Pattern Recognition, 122, 108263. https://doi.org/10.1016/j.patcog.2021.108263


All items in this repository are protected by copyright, unless otherwise indicated.