Publication type: Conference paper
Type of review: Peer review (publication)
Title: PrepNet: a convolutional auto-encoder to homogenize CT scans for cross-dataset medical image analysis
Authors: Amirian, Mohammadreza
Montoya, Javier
Gruss, Jonathan
Stebler, Yves D.
Bozkir, Ahmet Selman
Calandri, Marco
Schwenker, Friedhelm
Stadelmann, Thilo
et al.: No
DOI: 10.21256/zhaw-23318
Proceedings: Proceedings of CISP-BMEI’21
Conference details: 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, China, 23-25 October 2021
Issue Date: Oct-2021
Publisher / Ed. Institution: ZHAW Zürcher Hochschule für Angewandte Wissenschaften
Language: English
Subjects: Adaptive preprocessing; Domain adaptation; Autoencoder
Subject (DDC): 610.28: Biomedicine, biomedical engineering
Abstract: With the spread of COVID-19 across the world, the need arose for fast and precise automatic triage mechanisms to decelerate the spread of the disease by reducing human effort, e.g., for image-based diagnosis. Although the literature shows promising efforts in this direction, reported results do not account for the variability of CT scans acquired under varying circumstances, rendering the resulting models unfit for use on data acquired with, e.g., different scanner technologies. While COVID-19 diagnosis can now be performed efficiently using PCR tests, this use case exemplifies the need for a methodology to overcome data variability issues in order to make medical image analysis models more widely applicable. In this paper, we explicitly address the variability issue using the example of COVID-19 diagnosis and propose a novel generative approach that aims at erasing the differences induced by, e.g., the imaging technology while introducing minimal changes to the CT scans, leveraging the idea of deep autoencoders. The proposed preprocessing architecture (PrepNet) (i) is jointly trained on multiple CT scan datasets and (ii) is capable of extracting improved discriminative features for improved diagnosis. Experimental results on three public datasets (SARS-CoV-2, UCSD COVID-CT, MosMed) show that our model improves cross-dataset generalization by up to 11.84 percentage points despite a minor drop in within-dataset performance.
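The record describes PrepNet only at a high level. As a rough, hedged illustration of the underlying idea of a shape-preserving convolutional autoencoder (not the authors' actual architecture; all layer sizes, weights, and function names here are hypothetical), a minimal NumPy sketch of an encoder/decoder that maps a CT slice back to its original geometry might look like this:

```python
import numpy as np

def conv2d(x, w, stride=1, pad=1):
    # x: (C_in, H, W), w: (C_out, C_in, k, k); zero-padded convolution
    c_out, c_in, k, _ = w.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H = (xp.shape[1] - k) // stride + 1
    W = (xp.shape[2] - k) // stride + 1
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                patch = xp[:, i * stride:i * stride + k, j * stride:j * stride + k]
                out[o, i, j] = np.sum(patch * w[o])
    return out

def upsample2(x):
    # nearest-neighbour upsampling by a factor of 2 in both spatial dims
    return x.repeat(2, axis=1).repeat(2, axis=2)

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
# Hypothetical weights: encoder 1 -> 8 channels with stride-2 downsampling,
# decoder mirrors it back to a single channel. Untrained, for shape only.
w_enc = rng.normal(scale=0.1, size=(8, 1, 3, 3))
w_dec = rng.normal(scale=0.1, size=(1, 8, 3, 3))

def prepnet_like(ct_slice):
    z = relu(conv2d(ct_slice, w_enc, stride=2))   # bottleneck: (8, H/2, W/2)
    return conv2d(upsample2(z), w_dec, stride=1)  # reconstruction: (1, H, W)

ct = rng.normal(size=(1, 16, 16))  # stand-in for a normalized CT slice
out = prepnet_like(ct)
print(out.shape)  # the "homogenized" output keeps the input geometry
```

In the paper's setting, such a network would be trained jointly on multiple CT datasets with a reconstruction-style objective so that scanner-specific differences are suppressed while the anatomical content is preserved; the toy forward pass above only illustrates that the output retains the input's spatial dimensions.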
Fulltext version: Accepted version
License (according to publishing contract): Licence according to publishing contract
Departement: School of Engineering
Organisational Unit: Centre for Artificial Intelligence (CAI)
Institute of Applied Information Technology (InIT)
Published as part of the ZHAW project: Synthetic data generation of CoVID-19 CT/X-rays images for enabling fast triage of healthy vs. unhealthy patients
Standardized Data and Modeling for AI-based CoVID-19 Diagnosis Support on CT Scans (SDMCT)
Appears in collections: Publikationen School of Engineering

Files in This Item:
File: 2021_Amirian-etal_PrepNet_CISP-BMEI.pdf (Accepted Version, 1.83 MB, Adobe PDF)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.