Automatic upper airway segmentation in static and dynamic MRI via anatomy-guided convolutional neural networks

Lipeng Xie, Jayaram K. Udupa, Yubing Tong, Drew A. Torigian, Zihan Huang, Rachel M. Kogan, David Wootton, Kok R. Choy, Sanghun Sin, Mark E. Wagshul, Raanan Arens

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Purpose: Upper airway segmentation on MR images is a prerequisite step for quantitatively studying the anatomical structure and function of the upper airway and surrounding tissues. However, the complex variability in intensity and shape of the anatomical structures, together with the different modes of image acquisition commonly used in this application, makes automatic upper airway segmentation challenging. In this paper, we develop and test a comprehensive deep learning-based segmentation system for MR images to address this problem.

Materials and Methods: Our study utilizes both static and dynamic MRI data sets: 58 axial static 3D MRI studies, 22 mid-retropalatal dynamic 2D MRI studies, 21 mid-retroglossal dynamic 2D MRI studies, 36 mid-sagittal dynamic 2D MRI studies, and 23 isotropic dynamic 3D MRI studies, involving a total of 160 subjects and over 20,000 MRI slices. Samples of the static and 2D dynamic MRI data sets were randomly divided into training, validation, and test sets in an approximate ratio of 5:2:3. Because the variability of the annotated data among the 3D dynamic MRI studies was greater than in the other data sets, we increased the proportion of training data for these studies to improve the robustness of the model. We designed a unified framework consisting of the following procedures. For static MRI, a generalized region-of-interest (GROI) strategy is applied to the axial data sets to localize the nasal cavity and the remaining portion of the upper airway as two separate subobjects. The two subobjects are then segmented by two separate 2D U-Nets, and the two segmentation results are combined to form the whole upper airway structure. The GROI strategy is also applied to the other MRI modes. To minimize false-positive and false-negative rates in the segmentation results, a novel loss function based explicitly on these rates is employed to train the segmentation networks. An inter-reader study is conducted to compare the performance of our system with human variability in ground-truth (GT) segmentation of these challenging structures.

Results: The proposed approach yielded mean Dice coefficients of 0.84±0.03, 0.89±0.13, 0.84±0.07, and 0.86±0.05 for static 3D MRI, mid-retropalatal/mid-retroglossal 2D dynamic MRI, mid-sagittal 2D dynamic MRI, and isotropic dynamic 3D MRI, respectively. The quantitative results show excellent agreement with manual delineations. The inter-reader study demonstrates that the segmentation performance of our approach is statistically indistinguishable from manual segmentation, given the inter-reader variability in GT.

Conclusions: The proposed method can be used for routine upper airway segmentation from static and dynamic MR images with high accuracy and efficiency. It also has the potential to be employed in other dynamic MRI-related applications, such as lung or heart segmentation.
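The abstract summarizes the GROI-based workflow only at a high level; implementation details are in the full paper. The sketch below is a hypothetical illustration of that workflow, not the authors' code: `crop_to_roi`, `net_nasal`, and `net_pharyngeal` are placeholder names, and the two bounding boxes are assumed to come from the GROI localization step.

```python
import numpy as np

def crop_to_roi(volume, roi):
    """roi is a (z0, z1, y0, y1, x0, x1) bounding box produced by the GROI step."""
    z0, z1, y0, y1, x0, x1 = roi
    return volume[z0:z1, y0:y1, x0:x1]

def segment_upper_airway(volume, roi_nasal, roi_pharyngeal, net_nasal, net_pharyngeal):
    """Segment the two subobjects inside their ROIs and merge them into one mask."""
    mask = np.zeros(volume.shape, dtype=np.uint8)
    for roi, net in ((roi_nasal, net_nasal), (roi_pharyngeal, net_pharyngeal)):
        sub = crop_to_roi(volume, roi)
        # Slice-by-slice 2D segmentation of the cropped subvolume (each net maps a
        # 2D slice to a foreground-probability map of the same size).
        sub_prob = np.stack([net(sl) for sl in sub], axis=0)
        z0, z1, y0, y1, x0, x1 = roi
        # Union of the two thresholded subobject masks gives the whole upper airway.
        mask[z0:z1, y0:y1, x0:x1] |= (sub_prob > 0.5).astype(np.uint8)
    return mask
```

In practice, the two `net` arguments would be the trained 2D U-Nets for the nasal cavity and the remaining portion of the upper airway, respectively.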
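The loss function is described only as being based explicitly on false-positive and false-negative rates; its exact form is given in the paper, not in the abstract. A minimal PyTorch-style sketch of one plausible formulation, with a hypothetical weighting parameter `alpha`, is shown below.

```python
import torch

def fp_fn_rate_loss(pred, target, alpha=0.5, eps=1e-7):
    """Illustrative loss penalizing soft false-positive and false-negative rates.

    pred:   predicted foreground probabilities, shape (N, H, W), values in [0, 1]
    target: binary ground-truth masks of the same shape
    alpha:  hypothetical weight balancing the two rate terms
    """
    pred = pred.reshape(pred.shape[0], -1)
    target = target.reshape(target.shape[0], -1)

    fp = (pred * (1.0 - target)).sum(dim=1)           # soft false positives
    fn = ((1.0 - pred) * target).sum(dim=1)           # soft false negatives
    fp_rate = fp / ((1.0 - target).sum(dim=1) + eps)  # fraction of background mislabeled
    fn_rate = fn / (target.sum(dim=1) + eps)          # fraction of airway missed

    return (alpha * fp_rate + (1.0 - alpha) * fn_rate).mean()
```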
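For reference, the reported accuracy figures are mean Dice coefficients, DSC = 2|P ∩ G| / (|P| + |G|) for a predicted mask P and ground-truth mask G, which can be computed as follows.

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks of equal shape."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)
```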

Original language: English (US)
Pages (from-to): 324-342
Number of pages: 19
Journal: Medical Physics
Volume: 49
Issue number: 1
DOIs
State: Published - Jan 2022

ASJC Scopus subject areas

  • Biophysics
  • Radiology, Nuclear Medicine and Imaging
