Context-guided fully convolutional networks for joint craniomaxillofacial bone segmentation and landmark digitization

Jun Zhang, Mingxia Liu, Li Wang, Si Chen, Peng Yuan, Jianfu Li, Steve Guo Fang Shen, Zhen Tang, Ken Chung Chen, James J. Xia, Dinggang Shen

Research output: Contribution to journal › Article

Abstract

Cone-beam computed tomography (CBCT) scans are commonly used in diagnosing and planning surgical or orthodontic treatment to correct craniomaxillofacial (CMF) deformities. Based on CBCT images, it is clinically essential to generate an accurate 3D model of CMF structures (e.g., midface and mandible) and digitize anatomical landmarks. This process often involves two tasks: bone segmentation and anatomical landmark digitization. Because landmarks usually lie on the boundaries of segmented bone regions, the tasks of bone segmentation and landmark digitization could be highly associated. Also, the spatial context information (e.g., displacements from voxels to landmarks) in CBCT images is intuitively important for accurately indicating the spatial association between voxels and landmarks. However, most existing studies simply treat bone segmentation and landmark digitization as two standalone tasks without considering their inherent relationship, and rarely take advantage of the spatial context information contained in CBCT images. To address these issues, we propose a Joint bone Segmentation and landmark Digitization (JSD) framework via context-guided fully convolutional networks (FCNs). Specifically, we first utilize displacement maps to model the spatial context information in CBCT images, where each element in the displacement map denotes the displacement from a voxel to a particular landmark. An FCN is learned to construct the mapping from the input image to its corresponding displacement maps. Using the learned displacement maps as guidance, we further develop a multi-task FCN model to perform bone segmentation and landmark digitization jointly. We validate the proposed JSD method on 107 subjects, and the experimental results demonstrate that our method is superior to state-of-the-art approaches in both tasks of bone segmentation and landmark digitization.
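The displacement maps described in the abstract can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal NumPy example, assuming voxel-grid landmark coordinates, of how a per-landmark map can store the 3D displacement from every voxel to that landmark (the regression target the FCN would learn).

```python
import numpy as np


def displacement_maps(shape, landmarks):
    """Compute per-landmark displacement maps for a 3D volume.

    shape: (D, H, W) size of the CBCT volume.
    landmarks: (K, 3) array of landmark voxel coordinates.
    Returns an array of shape (K, 3, D, H, W), where the entry
    [k, :, z, y, x] is the 3D displacement vector from voxel
    (z, y, x) to landmark k.
    """
    # Voxel coordinate grid of shape (3, D, H, W).
    grid = np.stack(
        np.meshgrid(
            np.arange(shape[0]),
            np.arange(shape[1]),
            np.arange(shape[2]),
            indexing="ij",
        ),
        axis=0,
    ).astype(np.float32)

    # Broadcast: (K, 3, 1, 1, 1) - (1, 3, D, H, W) -> (K, 3, D, H, W).
    lm = np.asarray(landmarks, dtype=np.float32)[:, :, None, None, None]
    return lm - grid[None]


# Example: one landmark at voxel (1, 2, 3) in a tiny 4x5x6 volume.
maps = displacement_maps((4, 5, 6), np.array([[1, 2, 3]]))
# The displacement at the landmark's own voxel is the zero vector.
```

Note that the displacement at a landmark's own location is zero and grows linearly with distance, so the maps encode both the direction and distance to each landmark at every voxel, which is the spatial context the framework exploits.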

Original language: English
Article number: 101621
Journal: Medical Image Analysis
Volume: 60
DOI: 10.1016/j.media.2019.101621
Publication status: Published - Feb 2020

All Science Journal Classification (ASJC) codes

  • Radiological and Ultrasound Technology
  • Radiology, Nuclear Medicine and Imaging
  • Computer Vision and Pattern Recognition
  • Health Informatics
  • Computer Graphics and Computer-Aided Design


Cite this

    Zhang, J., Liu, M., Wang, L., Chen, S., Yuan, P., Li, J., Shen, S. G. F., Tang, Z., Chen, K. C., Xia, J. J., & Shen, D. (2020). Context-guided fully convolutional networks for joint craniomaxillofacial bone segmentation and landmark digitization. Medical Image Analysis, 60, [101621]. https://doi.org/10.1016/j.media.2019.101621