Paper ID | MLR-APPL-IP-6.3
Paper Title | KNOWLEDGE TRANSFERRED FINE-TUNING FOR ANTI-ALIASED CONVOLUTIONAL NEURAL NETWORK IN DATA-LIMITED SITUATION
Authors | Satoshi Suzuki, Shoichiro Takeda, Ryuichi Tanida, Hideaki Kimata, NTT Corporation, Japan; Hayaru Shouno, University of Electro-Communications, Japan
Session | MLR-APPL-IP-6: Machine learning for image processing 6
Location | Area E
Session Time | Tuesday, 21 September, 15:30 - 17:00
Presentation Time | Tuesday, 21 September, 15:30 - 17:00
Presentation | Poster
Topic | Applications of Machine Learning: Machine learning for image processing
IEEE Xplore Open Preview | Available in IEEE Xplore
Abstract | Anti-aliased convolutional neural networks (CNNs) introduce blur filters into intermediate representations of CNNs to achieve high accuracy. A promising way to build a new anti-aliased CNN is to fine-tune a pre-trained CNN, which can easily be found online, with blur filters. However, the blur filters drastically degrade the pre-trained representation, so fine-tuning must rebuild the representation from massive training data. If the training data is limited, fine-tuning therefore cannot work well, because it overfits to the limited data. To tackle this problem, this paper proposes "knowledge transferred fine-tuning." On the basis of the idea of knowledge transfer, our method transfers knowledge from intermediate representations in the pre-trained CNN to the anti-aliased CNN during fine-tuning. We transfer only essential knowledge, using a pixel-level loss that transfers detailed knowledge and a global-level loss that transfers coarse knowledge. Experimental results demonstrate that our method significantly outperforms simple fine-tuning.
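The two transfer losses named in the abstract can be illustrated with a minimal pure-Python sketch on toy 2-D feature maps. This is only an interpretation of the abstract, not the paper's exact formulation: the function names, the use of mean squared error, the global-average-pooling reduction, and the weighting factor `lam` are all assumptions introduced here for illustration.

```python
def pixel_loss(teacher, student):
    """Pixel-level loss (assumed MSE): penalizes the teacher-student
    difference at every spatial position, transferring detailed knowledge."""
    n = 0
    total = 0.0
    for t_row, s_row in zip(teacher, student):
        for t, s in zip(t_row, s_row):
            total += (t - s) ** 2
            n += 1
    return total / n

def global_loss(teacher, student):
    """Global-level loss (assumed): compares spatially averaged activations
    (global average pooling), transferring only coarse knowledge."""
    def gap(fmap):
        vals = [v for row in fmap for v in row]
        return sum(vals) / len(vals)
    return (gap(teacher) - gap(student)) ** 2

def transfer_loss(teacher, student, lam=1.0):
    """Combined transfer objective; the weight `lam` is a hypothetical
    hyperparameter balancing detailed vs. coarse knowledge."""
    return pixel_loss(teacher, student) + lam * global_loss(teacher, student)

# Toy 2x2 feature maps standing in for intermediate CNN representations.
teacher = [[1.0, 2.0], [3.0, 4.0]]
student = [[1.0, 2.0], [3.0, 2.0]]
print(transfer_loss(teacher, student))  # pixel term 1.0 + global term 0.25
```

In practice these losses would be computed on intermediate feature tensors of the pre-trained (teacher) and anti-aliased (student) CNNs during fine-tuning and added to the task loss; the sketch only fixes the arithmetic of the two terms.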