Publication type: Article in scientific journal
Type of review: Peer review (publication)
Title: Unsupervised domain adaptation for vertebrae detection and identification in 3D CT volumes using a domain sanity loss
Authors: Sager, Pascal
Salzmann, Sebastian
Burn, Felice
Stadelmann, Thilo
et al.: No
DOI: 10.3390/jimaging8080222
Published in: Journal of Imaging
Volume(Issue): 8
Issue: 8
Page(s): 222
Issue Date: 19-Aug-2022
Publisher / Ed. Institution: MDPI
ISSN: 2313-433X
Language: English
Subjects: Unsupervised domain adaptation; Semi-supervised learning; Vertebrae detection; Vertebrae identification; Transfer learning; Semantic segmentation; Data centrism; Deep learning
Subject (DDC): 006: Special computer methods
616: Internal medicine and diseases
Abstract: A variety of medical computer vision applications analyze 2D slices of computed tomography (CT) scans, whereas axial slices from the body trunk region are usually identified based on their relative position to the spine. A limitation of such systems is that either the correct slices must be extracted manually or labels of the vertebrae are required for each CT scan to develop an automated extraction system. In this paper, we propose an unsupervised domain adaptation (UDA) approach for vertebrae detection and identification based on a novel Domain Sanity Loss (DSL) function. With UDA, the model’s knowledge learned on a publicly available (source) data set can be transferred to the target domain without using target labels, where the target domain is defined by the specific setup (CT modality, study protocols, applied pre- and post-processing) at the point of use (e.g., a specific clinic with its specific CT study protocols). With our approach, a model is trained on the source and target data set in parallel. The model optimizes a supervised loss for labeled samples from the source domain and the DSL loss function based on domain-specific “sanity checks” for samples from the unlabeled target domain. Without using labels from the target domain, we are able to identify vertebra centroids with an accuracy of 72.8%. By adding only ten target labels during training, the accuracy increases to 89.2%, which is on par with the current state of the art for fully supervised learning while using about 20 times fewer labels. Thus, our model can be used to extract 2D slices from 3D CT scans on arbitrary data sets fully automatically without requiring an extensive labeling effort, contributing to the clinical adoption of medical imaging by hospitals.
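The training scheme described in the abstract, a supervised loss on labeled source samples combined with the Domain Sanity Loss on unlabeled target samples, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the weighting factor `lam`, and the simple monotone-ordering "sanity check" are all assumptions introduced here for clarity.

```python
def supervised_loss(pred_centroids, true_centroids):
    # Mean squared error between predicted and labeled vertebra centroid
    # positions; applies only to the labeled source-domain batch.
    return sum((p - t) ** 2 for p, t in zip(pred_centroids, true_centroids)) / len(true_centroids)

def domain_sanity_loss(pred_centroids):
    # Hypothetical "sanity check": along the spine, consecutive vertebra
    # centroids should appear in monotonically increasing axial order.
    # Violations are penalized; no target-domain labels are needed.
    penalty = 0.0
    for a, b in zip(pred_centroids, pred_centroids[1:]):
        penalty += max(0.0, a - b)  # positive only when ordering is violated
    return penalty

def combined_loss(src_pred, src_true, tgt_pred, lam=0.5):
    # Source batch contributes the supervised term, target batch the
    # unsupervised sanity term; lam is an assumed weighting hyperparameter.
    return supervised_loss(src_pred, src_true) + lam * domain_sanity_loss(tgt_pred)
```

During training, both terms are minimized jointly, so the model fits the source labels while its target-domain predictions are pushed toward anatomically plausible configurations.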
Fulltext version: Published version
License (according to publishing contract): CC BY 4.0: Attribution 4.0 International
Departement: School of Engineering
Organisational Unit: Centre for Artificial Intelligence (CAI)
Appears in collections: Publikationen School of Engineering

Files in This Item:
File: 2022_Sager-etal_Unsupervised-domain-adaptation-vertebrae-detection-3D-CT.pdf
Size: 32.91 MB
Format: Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.