Please use this identifier to cite or link to this item:
https://doi.org/10.21256/zhaw-23318
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Amirian, Mohammadreza | - |
dc.contributor.author | Montoya, Javier | - |
dc.contributor.author | Gruss, Jonathan | - |
dc.contributor.author | Stebler, Yves D. | - |
dc.contributor.author | Bozkir, Ahmet Selman | - |
dc.contributor.author | Calandri, Marco | - |
dc.contributor.author | Schwenker, Friedhelm | - |
dc.contributor.author | Stadelmann, Thilo | - |
dc.date.accessioned | 2021-10-29T08:13:45Z | - |
dc.date.available | 2021-10-29T08:13:45Z | - |
dc.date.issued | 2021-10 | - |
dc.identifier.uri | https://digitalcollection.zhaw.ch/handle/11475/23318 | - |
dc.description.abstract | With the spread of COVID-19 over the world, the need arose for fast and precise automatic triage mechanisms to decelerate the spread of the disease by reducing human efforts, e.g. for image-based diagnosis. Although the literature has shown promising efforts in this direction, reported results do not consider the variability of CT scans acquired under varying circumstances, thus rendering resulting models unfit for use on data acquired using e.g. different scanner technologies. While COVID-19 diagnosis can now be done efficiently using PCR tests, this use case exemplifies the need for a methodology to overcome data variability issues in order to make medical image analysis models more widely applicable. In this paper, we explicitly address the variability issue using the example of COVID-19 diagnosis and propose a novel generative approach that aims at erasing the differences induced by e.g. the imaging technology while simultaneously introducing minimal changes to the CT scans through leveraging the idea of deep autoencoders. The proposed preprocessing architecture (PrepNet) (i) is jointly trained on multiple CT scan datasets and (ii) is capable of extracting improved discriminative features for improved diagnosis. Experimental results on three public datasets (SARS-COVID-2, UCSD COVID-CT, MosMed) show that our model improves cross-dataset generalization by up to 11.84 percentage points despite a minor drop in within-dataset performance. | de_CH |
dc.language.iso | en | de_CH |
dc.publisher | ZHAW Zürcher Hochschule für Angewandte Wissenschaften | de_CH |
dc.rights | Licence according to publishing contract | de_CH |
dc.subject | Adaptive preprocessing | de_CH |
dc.subject | Domain adaptation | de_CH |
dc.subject | Autoencoder | de_CH |
dc.subject.ddc | 610.28: Biomedizin, Biomedizinische Technik | de_CH |
dc.title | PrepNet : a convolutional auto-encoder to homogenize CT scans for cross-dataset medical image analysis | de_CH |
dc.type | Konferenz: Paper | de_CH |
dcterms.type | Text | de_CH |
zhaw.departement | School of Engineering | de_CH |
zhaw.organisationalunit | Centre for Artificial Intelligence (CAI) | de_CH |
zhaw.organisationalunit | Institut für Informatik (InIT) | de_CH |
dc.identifier.doi | 10.21256/zhaw-23318 | - |
zhaw.conference.details | 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, China, 23-25 October 2021 | de_CH |
zhaw.funding.eu | No | de_CH |
zhaw.originated.zhaw | Yes | de_CH |
zhaw.publication.status | acceptedVersion | de_CH |
zhaw.publication.review | Peer review (Publikation) | de_CH |
zhaw.title.proceedings | Proceedings of CISP-BMEI’21 | de_CH |
zhaw.webfeed | Machine Perception and Cognition | de_CH |
zhaw.webfeed | Datalab | de_CH |
zhaw.webfeed | Digital Health Lab | de_CH |
zhaw.webfeed | ZHAW digital | de_CH |
zhaw.funding.zhaw | Synthetic data generation of CoVID-19 CT/X-rays images for enabling fast triage of healthy vs. unhealthy patients | de_CH |
zhaw.funding.zhaw | Standardized Data and Modeling for AI-based CoVID-19 Diagnosis Support on CT Scans (SDMCT) | de_CH |
zhaw.author.additional | No | de_CH |
zhaw.display.portrait | Yes | de_CH |
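The abstract describes PrepNet as a convolutional auto-encoder that is jointly trained on multiple CT scan datasets and reconstructs scans with minimal changes, erasing scanner-induced appearance differences. As a rough illustration of that idea only, the sketch below shows a minimal convolutional auto-encoder with a reconstruction ("minimal change") objective; all layer sizes, names, and the loss choice are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch of the auto-encoder idea behind PrepNet:
# encode a CT slice into a compact code, then decode a homogenized
# slice of the same size. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn


class PrepNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the single-channel CT slice.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct a slice of the original resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


model = PrepNetSketch()
batch = torch.rand(4, 1, 128, 128)  # four dummy single-channel CT slices
out = model(batch)                   # homogenized reconstructions
# A reconstruction loss keeps changes to the scan minimal; joint training
# over several datasets would push the shared weights to erase
# scanner-specific differences.
loss = nn.functional.mse_loss(out, batch)
```

In the paper's setting, this preprocessing network would be trained jointly across the datasets so that a downstream diagnosis classifier sees homogenized inputs; the classifier itself is omitted here.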
Appears in collections: | Publikationen School of Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
2021_Amirian-etal_PrepNet_CISP-BMEI.pdf | Accepted Version | 1.83 MB | Adobe PDF | View/Open |
Amirian, M., Montoya, J., Gruss, J., Stebler, Y. D., Bozkir, A. S., Calandri, M., Schwenker, F., & Stadelmann, T. (2021, October). PrepNet : a convolutional auto-encoder to homogenize CT scans for cross-dataset medical image analysis. Proceedings of CISP-BMEI’21. https://doi.org/10.21256/zhaw-23318