Paper ID | BIO-1.3
Paper Title | AKFNET: AN ANATOMICAL KNOWLEDGE EMBEDDED FEW-SHOT NETWORK FOR MEDICAL IMAGE SEGMENTATION
Authors | Yanan Wei, Jiang Tian, Cheng Zhong, Zhongchao Shi, Lenovo, China
Session | BIO-1: Biomedical Signal Processing 1
Location | Area C
Session Time | Monday, 20 September, 13:30 - 15:00
Presentation Time | Monday, 20 September, 13:30 - 15:00
Presentation | Poster
Topic | Biomedical Signal Processing: Medical image analysis
Abstract | Automated organ segmentation in CT scans is an essential prerequisite for many clinical applications, such as computer-aided diagnosis and intervention. As medical data annotation requires massive human labor from experienced radiologists, effectively improving segmentation performance with limited annotated training data remains a challenging problem. Few-shot learning imitates the human learning process and is a promising way to overcome this challenge. In this paper, we propose a novel anatomical knowledge embedded few-shot network (AKFNet), in which an anatomical knowledge embedded support unit (AKSU) is carefully designed to embed the anatomical priors from support images into our model. Moreover, a similarity guidance alignment unit (SGAU) is proposed to impose a mutual alignment between the support and query sets. As a result, AKFNet fully exploits anatomical knowledge and shows good learning capability. Without bells and whistles, AKFNet outperforms state-of-the-art methods with a 0.84-1.76% Dice increase. Transfer learning experiments further verify its learning capability.
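For readers unfamiliar with the support/query setup the abstract refers to, the sketch below illustrates a generic prototype-based few-shot segmentation forward pass in PyTorch, where a support image and its organ mask produce a prototype that guides segmentation of a query image via a similarity map. This is only a minimal illustration of the general technique; it is not the authors' AKFNet, AKSU, or SGAU implementation, and all names (`FewShotSegSketch`, `masked_average_pool`, `feat_ch`) are hypothetical.

```python
# Illustrative sketch of prototype-based few-shot segmentation (NOT the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def masked_average_pool(features, mask):
    """Pool support features inside the organ mask to form a class prototype."""
    # features: (B, C, H, W); mask: (B, 1, H, W) with values in {0, 1}
    mask = F.interpolate(mask, size=features.shape[-2:], mode="nearest")
    proto = (features * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)
    return proto  # (B, C)


class FewShotSegSketch(nn.Module):
    def __init__(self, in_ch=1, feat_ch=64, num_classes=2):
        super().__init__()
        # Shared encoder for support and query images (stand-in for a real backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder consumes query features concatenated with the similarity guidance map.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_ch + 1, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, num_classes, 1),
        )

    def forward(self, support_img, support_mask, query_img):
        f_s = self.encoder(support_img)                      # support features
        f_q = self.encoder(query_img)                        # query features
        proto = masked_average_pool(f_s, support_mask)       # organ prototype from support
        # Cosine similarity between the prototype and every query location,
        # used here as a rough stand-in for similarity-based guidance.
        sim = F.cosine_similarity(f_q, proto[:, :, None, None], dim=1, eps=1e-6)
        logits = self.decoder(torch.cat([f_q, sim.unsqueeze(1)], dim=1))
        return logits                                        # (B, num_classes, H, W)


if __name__ == "__main__":
    model = FewShotSegSketch()
    s_img = torch.randn(2, 1, 64, 64)                        # support CT slices
    s_msk = (torch.rand(2, 1, 64, 64) > 0.5).float()         # support organ masks
    q_img = torch.randn(2, 1, 64, 64)                        # query CT slices
    print(model(s_img, s_msk, q_img).shape)                  # torch.Size([2, 2, 64, 64])
```

The paper's contribution lies in how the support information is injected (anatomical priors via AKSU) and how support and query sets are mutually aligned (SGAU); the sketch only shows the shared-encoder, prototype-and-similarity skeleton those units build on.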