Paper ID | SS-EDNN.4
Paper Title | Class Specific Interpretability in CNN Using Causal Analysis
Authors | Ankit Yadu, Suhas P K, Samsung Research Institute India, Bangalore, India; Neelam Sinha, International Institute of Information Technology, Bangalore, India
Session | SS-EDNN: Special Session: Explainable Deep Neural Networks for Image/Video Processing
Location | Area B
Session Time | Wednesday, 22 September, 14:30 - 16:00
Presentation Time | Wednesday, 22 September, 14:30 - 16:00
Presentation | Poster
Topic | Special Sessions: Explainable Deep Neural Networks for Image/Video Processing
Abstract | A singular problem that mars the wide applicability of machine learning (ML) models is the lack of generalizability and interpretability. The ML community is increasingly working on bridging this gap, prominently through methods that study the causal significance of features, with techniques such as the Average Causal Effect (ACE). In this paper, our objective is to utilize the causal analysis framework to measure the significance level of features in a binary classification task. Towards this, we propose a novel ACE-based metric called "Absolute area under ACE (A-ACE)", which computes the area under the absolute value of the ACE across the permissible levels of intervention. The performance of the proposed metric is illustrated on the MNIST data set (~42,000 images) by considering pair-wise binary classification problems. The computed metric values are found to be highest (peaks 50% higher than at other locations) at precisely those locations that human intuition would mark as distinguishing regions. The method thus yields a quantifiable metric that represents the distinction between the classes learnt by the model; this metric aids in the visual explanation of the model's predictions and thereby makes the model more trustworthy.
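
The abstract gives only a verbal definition of A-ACE (the area under |ACE| across intervention levels). The sketch below is a minimal illustration of that idea, not the paper's implementation: it assumes a pixel-level intervention do(x_i = alpha) and estimates ACE(alpha) as the interventional expectation E[y | do(x_i = alpha)] minus its mean over alpha, a common baseline choice in causal-attribution work. The names `model_fn`, `images`, `pixel_idx`, and `alphas` are hypothetical.

```python
import numpy as np

def ace_curve(model_fn, images, pixel_idx, alphas):
    """Estimate ACE(alpha) for one pixel (assumed formulation).

    model_fn: maps a batch of flattened images to class-1 probabilities.
    images:   array of shape (n_images, n_pixels) used to average the effect.
    alphas:   permissible intervention levels, e.g. np.linspace(0.0, 1.0, 50).
    """
    curve = []
    for alpha in alphas:
        intervened = images.copy()
        intervened[:, pixel_idx] = alpha          # do(x_i = alpha)
        curve.append(model_fn(intervened).mean())  # E[y | do(x_i = alpha)]
    curve = np.asarray(curve)
    return curve - curve.mean()                    # subtract baseline (assumption)

def a_ace(model_fn, images, pixel_idx, alphas):
    """Absolute area under the ACE curve over the intervention levels,
    computed with the trapezoidal rule."""
    ace = np.abs(ace_curve(model_fn, images, pixel_idx, alphas))
    return np.sum(0.5 * (ace[1:] + ace[:-1]) * np.diff(alphas))
```

With a trained pair-wise binary classifier, a per-pixel saliency map in the spirit of the paper could then be obtained by evaluating `a_ace` for every pixel index and reshaping the results to the 28x28 MNIST grid.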