Paper ID | SS-EDNN.2
Paper Title | EXPLAINING DEEP MODELS THROUGH FORGETTABLE LEARNING DYNAMICS
Authors | Ryan Benkert, Oluwaseun Joseph Aribido, Ghassan AlRegib; Georgia Institute of Technology, United States
Session | SS-EDNN: Special Session: Explainable Deep Neural Networks for Image/Video Processing
Location | Area B
Session Time | Wednesday, 22 September, 14:30 - 16:00
Presentation Time | Wednesday, 22 September, 14:30 - 16:00
Presentation | Poster
Topic | Special Sessions: Explainable Deep Neural Networks for Image/Video Processing
Abstract | Even though deep neural networks have shown tremendous success in countless applications, explaining model behavior and predictions remains an open research problem. In this paper, we address this issue with a simple yet effective method that analyzes the learning dynamics of deep neural networks in semantic segmentation tasks. Specifically, we visualize learning behavior during training by tracking how often samples are learned and forgotten in subsequent training epochs. This allows us to derive information about each sample's proximity to the class decision boundary and to identify regions that pose a particular challenge to the model. Building on this observation, we present a novel segmentation method that actively uses this information to alter the data representation within the model by increasing the variety of difficult regions. Finally, we show that our method consistently reduces the number of frequently forgotten regions, and we further evaluate the method in light of segmentation performance.
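The learned/forgotten bookkeeping the abstract describes can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch example, not the authors' implementation: a sample counts as "forgotten" when it was predicted correctly in one epoch and incorrectly in a later one, and samples accumulating many such events are taken to lie near the class decision boundary. All names (`model`, `loader`, the per-sample 0.5 correctness threshold used to reduce per-pixel segmentation accuracy to a single flag) are assumptions; the paper's actual per-region tracking is not reproduced here.

```python
# Minimal sketch of tracking per-sample "forgetting events" across epochs.
# Hypothetical setup: `loader` is assumed to yield (images, labels, ids),
# where `ids` are stable integer sample identifiers.
import torch

def track_forgetting(model, loader, optimizer, criterion, num_epochs, device="cpu"):
    prev_correct = {}    # sample id -> was it predicted correctly last epoch?
    forget_counts = {}   # sample id -> number of learned->forgotten transitions

    model.to(device)
    for epoch in range(num_epochs):
        for images, labels, ids in loader:
            images, labels = images.to(device), labels.to(device)

            optimizer.zero_grad()
            logits = model(images)          # (B, C, H, W) for segmentation
            loss = criterion(logits, labels)
            loss.backward()
            optimizer.step()

            # Per-pixel correctness, reduced to one flag per sample
            # (assumption: mean pixel accuracy > 0.5 counts as "learned").
            preds = logits.argmax(dim=1)                      # (B, H, W)
            correct = (preds == labels).flatten(1).float()    # (B, H*W)
            per_sample = correct.mean(dim=1) > 0.5            # (B,)

            for sid, is_correct in zip(ids.tolist(), per_sample.tolist()):
                was_correct = prev_correct.get(sid, False)
                if was_correct and not is_correct:
                    # Learned earlier, misclassified now: a forgetting event.
                    forget_counts[sid] = forget_counts.get(sid, 0) + 1
                prev_correct[sid] = is_correct

    return forget_counts  # frequently forgotten samples ~ hard/boundary samples
```

The returned counts could then be visualized, or used to upweight or augment frequently forgotten samples, which loosely mirrors the abstract's idea of increasing the variety of difficult regions.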