Paper ID | BIO-2.10
Paper Title | ROBUST AND INTERPRETABLE CONVOLUTIONAL NEURAL NETWORKS TO DETECT GLAUCOMA IN OPTICAL COHERENCE TOMOGRAPHY IMAGES
Authors | Kaveri Thakoor, Sharath Koorathota, Donald Hood, Paul Sajda, Columbia University, United States
Session | BIO-2: Biomedical Signal Processing 2
Location | Area D
Session Time | Tuesday, 21 September, 08:00 - 09:30
Presentation Time | Tuesday, 21 September, 08:00 - 09:30
Presentation | Poster
Topic | Biomedical Signal Processing: Medical image analysis
Abstract | Recent studies suggest that deep learning systems can achieve performance on par with medical experts in the diagnosis of disease. A prime example is in the field of ophthalmology, where convolutional neural networks (CNNs) have been used to detect retinal diseases. However, this type of artificial intelligence (AI) has yet to be adopted clinically, due to questions about the robustness of the algorithms on datasets collected at new clinical sites and the lack of explainability of AI-based predictions. We develop CNN architectures that demonstrate robust detection of glaucoma in optical coherence tomography (OCT) images and use Testing with Concept Activation Vectors (TCAV) to infer which image concepts CNNs rely on to generate predictions. Furthermore, we compare TCAV results to the eye fixations of clinicians to identify common decision-making features used by both AI and human experts. We find that CNN ensemble learning yields end-to-end deep learning models with superior robustness compared to previous hybrid models, and the TCAV/eye-fixation comparison suggests the importance of OCT report sub-images that are consistent with the areas of interest fixated upon by OCT experts when detecting glaucoma. The pipeline described here for evaluating CNN robustness and validating model interpretability offers a standardized protocol for the acceptance of new AI tools in the clinic.
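The TCAV procedure the abstract relies on can be sketched in general terms: a concept activation vector (CAV) is the weight vector of a linear classifier trained to separate a concept's activations from random activations at a chosen network layer, and the TCAV score is the fraction of class inputs whose class logit increases along the CAV direction. The minimal sketch below uses synthetic activation and gradient arrays as stand-ins (the paper's actual OCT features, layers, and concepts are not reproduced here); it is an illustration of the generic TCAV computation, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins for layer activations at some CNN layer:
# 50 examples of a visual concept vs. 50 random counterexamples.
concept_acts = rng.normal(loc=1.0, size=(50, 16))
random_acts = rng.normal(loc=0.0, size=(50, 16))

# 1) Fit a linear classifier separating concept from random activations;
#    the CAV is its (unit-normalized) weight vector.
X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(50), np.zeros(50)])
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)

# 2) TCAV score: fraction of class inputs whose class-logit gradient
#    (w.r.t. the layer activations) has a positive component along the CAV.
#    Here the gradients are synthetic placeholders.
grads = rng.normal(loc=0.5, size=(30, 16))
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")
```

A score near 1 indicates the concept consistently pushes predictions toward the class; in the paper's setting, such scores are what get compared against clinicians' eye-fixation regions on OCT reports.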