Paper ID: SS-MIA.7
Paper Title: SepUnet: Depthwise Separable Convolution Integrated U-Net for MRI Reconstruction
Authors: Soheil Zabihi, Elahe Rahimian, Concordia University, Canada; Amir Asif, York University, Canada; Arash Mohammadi, Concordia University, Canada
Session: SS-MIA: Special Session: Deep Learning and Precision Quantitative Imaging for Medical Image Analysis
Location: Area A
Session Time: Wednesday, 22 September, 14:30 - 16:00
Presentation Time: Wednesday, 22 September, 14:30 - 16:00
Presentation: Poster
Topic: Special Sessions: Deep Learning and Precision Quantitative Imaging for Medical Image Analysis
Abstract: Accelerating the Magnetic Resonance Imaging (MRI) acquisition process is a critical and challenging medical imaging problem, as basic reconstructions obtained from the undersampled k-space often exhibit blurring or aliasing artifacts. Despite its significance and recent advancements in the field of deep neural networks (DNNs), the development of deep learning-based MRI reconstruction algorithms has not yet flourished due to the unavailability of large public datasets. The recently introduced large-scale fastMRI dataset is poised to change this state of affairs; however, existing DNN solutions developed on fastMRI require learning a large number of parameters, which limits their practical application given the strict low-latency requirements of real-time MRI acquisition. In this paper, we aim to address this drawback by reducing the computational cost associated with the single-coil reconstruction task. More specifically, the paper proposes a novel deep model, referred to as the SepUnet architecture, that achieves a significant reduction in the required number of parameters while maintaining high accuracy. Performance of the proposed SepUnet architecture is evaluated on the official fastMRI test dataset, illustrating accuracy improvements over its published counterparts while requiring a significantly smaller number of trainable parameters (i.e., the SepUnet architecture is much faster and lighter than its counterparts).
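The parameter saving behind the abstract's claim comes from factoring each standard convolution into a depthwise step followed by a 1x1 pointwise step. A minimal sketch of the resulting parameter counts, assuming a typical 3x3 U-Net block; the channel sizes below are illustrative and not taken from the SepUnet paper:

```python
# Parameter counts for a k x k convolution with c_in input and c_out output
# channels (biases ignored). A depthwise separable convolution replaces the
# standard convolution with a per-channel k x k depthwise filter plus a
# 1 x 1 pointwise convolution that mixes channels.

def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    return c_in * c_out * k * k

def separable_conv_params(c_in: int, c_out: int, k: int) -> int:
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 convolution across channels
    return depthwise + pointwise

# Illustrative U-Net-style block: 3x3 kernels, 64 -> 128 channels.
std = standard_conv_params(64, 128, 3)    # 73,728 parameters
sep = separable_conv_params(64, 128, 3)   # 576 + 8,192 = 8,768 parameters
print(std, sep, round(std / sep, 1))      # roughly an 8.4x reduction here
```

The ratio grows with kernel size and channel width, which is why swapping separable convolutions into every U-Net block yields a much lighter model at comparable accuracy.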