Paper ID | MLR-APPL-IP-1.1
Paper Title | Multi-Scale Feature Guided Low-Light Image Enhancement
Authors | Lanqing Guo, Renjie Wan, Nanyang Technological University, Singapore; Guan-Ming Su, Dolby Laboratories, United States; Alex C. Kot, Bihan Wen, Nanyang Technological University, Singapore
Session | MLR-APPL-IP-1: Machine learning for image processing 1
Location | Area E
Session Time | Monday, 20 September, 13:30 - 15:00
Presentation Time | Monday, 20 September, 13:30 - 15:00
Presentation | Poster
Topic | Applications of Machine Learning: Machine learning for image processing
Abstract | Low-light image enhancement aims at enlarging the intensity of image pixels to better match human perception and to improve the performance of subsequent vision tasks. While it is relatively easy to brighten a globally low-light image, the lighting conditions of realistic scenes are usually non-uniform and complex; e.g., some images may contain both bright and extremely dark regions, with or without rich features and information. Without proper guidance, existing methods often generate abnormal enhancement results with over-exposure artifacts. To tackle this challenge, we propose a multi-scale feature guided attention mechanism in the deep generator, which can effectively perform spatially varying light enhancement. The attention map is fused from both the gray map and the extracted feature map of the input image, so as to focus more on dark and informative regions. Our baseline is an unsupervised generative adversarial network, which can be trained without any low/normal-light image pairs. Experimental results demonstrate that our method surpasses state-of-the-art alternatives in both visual quality and the performance of subsequent object detection.
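The fusion described in the abstract, combining a gray (illumination) map with a feature map into one spatial attention map, can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the max-channel illumination estimate, the local-variance "informativeness" proxy (standing in for the paper's learned multi-scale features), and the `alpha` blending weight are all illustrative assumptions.

```python
import numpy as np

def gray_map(img):
    """Illumination prior: darker pixels receive higher attention.
    img: H x W x 3 array with values in [0, 1]."""
    luminance = img.max(axis=2)   # max-channel illumination estimate
    return 1.0 - luminance        # invert so dark regions score high

def feature_map(img, k=3):
    """Crude 'informativeness' proxy: local intensity variance in a
    k x k window (a stand-in for learned multi-scale features)."""
    gray = img.mean(axis=2)
    pad = k // 2
    padded = np.pad(gray, pad, mode="edge")
    h, w = gray.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].var()
    m = out.max()
    return out / m if m > 0 else out  # normalize to [0, 1]

def fused_attention(img, alpha=0.5):
    """Blend the two maps into a single attention map in [0, 1];
    alpha weights the illumination prior against the feature proxy."""
    return alpha * gray_map(img) + (1.0 - alpha) * feature_map(img)

# Toy image: extremely dark left half, bright right half.
img = np.zeros((8, 8, 3))
img[:, 4:] = 0.9
att = fused_attention(img)
```

On this toy image, the dark left half gets higher attention than the bright right half, which is the spatially varying behavior the abstract motivates.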