Paper ID | SS-MIA.4
Paper Title | A TEACHER-STUDENT LEARNING BASED ON COMPOSED GROUND-TRUTH IMAGES FOR ACCURATE CEPHALOMETRIC LANDMARK DETECTION
Authors | Yu Song, Ritsumeikan University, Japan; Xu Qiao, Shandong University, China; Yutaro Iwamoto, Yen-Wei Chen, Ritsumeikan University, Japan
Session | SS-MIA: Special Session: Deep Learning and Precision Quantitative Imaging for Medical Image Analysis
Location | Area A
Session Time | Wednesday, 22 September, 14:30 - 16:00
Presentation Time | Wednesday, 22 September, 14:30 - 16:00
Presentation | Poster
Topic | Special Sessions: Deep Learning and Precision Quantitative Imaging for Medical Image Analysis
Abstract | Computer-aided automatic cephalometric landmark localization has been an active research topic since the last century. Recently proposed deep learning-based methods have made great contributions to this field. Among them, convolutional neural network (CNN)-based regression is widely used, where ground-truth (GT) information is mainly used in the loss function, minimizing the difference between the predicted landmark locations and the ground-truth locations through backpropagation. However, given the limited number of annotated cephalometric images, we believe performance can be further improved by making better use of the ground-truth information. In this paper, we propose a teacher-student learning method that uses GT images for accurate cephalometric landmark detection. We first train a detection model on images composed from GT landmarks; this model serves as the teacher. The teacher model then guides a student model, trained on the original images, by transferring useful features. We believe the features of GT images and original images share a similar domain distribution, since both represent the same anatomical structure. We validate our method on a public grand-challenge dataset, where it outperforms state-of-the-art methods.
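To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of the two ideas it describes: composing an input image from GT landmarks, and guiding a student (fed original radiographs) with features from a teacher (fed GT-composed images). Everything here is an assumption for illustration, not the paper's implementation: the Gaussian-blob composition, the toy `SmallDetector` backbone, and the feature-matching weight `lambda_feat` are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def compose_gt_image(landmarks, height, width, sigma=3.0):
    """Render GT landmarks as one image of Gaussian blobs
    (hypothetical composition scheme; the paper's may differ)."""
    ys = torch.arange(height).view(-1, 1).float()
    xs = torch.arange(width).view(1, -1).float()
    img = torch.zeros(height, width)
    for (x, y) in landmarks:                      # pixel coordinates
        img += torch.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return img.clamp(max=1.0).unsqueeze(0)        # shape (1, H, W)

class SmallDetector(nn.Module):
    """Toy CNN standing in for both teacher and student backbones."""
    def __init__(self, n_landmarks):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, n_landmarks, 1)  # per-landmark heatmaps

    def forward(self, x):
        feat = self.features(x)
        return self.head(feat), feat

def student_step(student, teacher, original_img, gt_img, gt_heatmaps,
                 optimizer, lambda_feat=0.5):
    """One training step: heatmap regression on the original image,
    plus a feature-matching term toward the frozen teacher."""
    with torch.no_grad():
        _, t_feat = teacher(gt_img)               # teacher sees the GT-composed image
    pred, s_feat = student(original_img)          # student sees the radiograph
    loss = (F.mse_loss(pred, gt_heatmaps)
            + lambda_feat * F.mse_loss(s_feat, t_feat))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because teacher and student share an architecture here, their feature maps align trivially; in practice a projection layer or a different alignment loss may be needed, and the published method should be consulted for the actual backbone and transfer objective.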