Paper ID | TEC-1.1
Paper Title | CONTEXTUAL COLORIZATION AND DENOISING FOR LOW-LIGHT ULTRA HIGH RESOLUTION SEQUENCES
Authors | Nantheera Anantrasirichai, David Bull, University of Bristol, United Kingdom
Session | TEC-1: Restoration and Enhancement 1
Location | Area G
Session Time | Tuesday, 21 September, 13:30 - 15:00
Presentation Time | Tuesday, 21 September, 13:30 - 15:00
Presentation | Poster
Topic | Image and Video Processing: Restoration and enhancement
Abstract | Low-light image sequences generally suffer from spatio-temporally incoherent noise, flicker and blurring of moving objects. These artefacts significantly reduce visual quality, and post-processing is usually needed to achieve acceptable quality. Most state-of-the-art enhancement methods based on machine learning require ground-truth data, which is not usually available for naturally captured low-light sequences. We tackle these problems with an unpaired-learning method that offers simultaneous colorization and denoising. Our approach is an adaptation of the CycleGAN structure. To overcome the excessive memory requirements associated with ultra high resolution content, we propose a multiscale patch-based framework, capturing both local and contextual features. Additionally, an adaptive temporal smoothing technique is employed to remove flickering artefacts. Experimental results show that our method outperforms existing approaches in terms of subjective quality and that it is robust to variations in brightness levels and noise.
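The abstract gives no implementation detail, but the multiscale patch-based idea can be illustrated with a minimal Python/NumPy sketch. Everything below is an assumption, not the authors' implementation: `enhance_patch`, the patch size, stride, downsampling factor and the 50/50 fusion are hypothetical placeholders chosen only to show how overlapping local patches and a downsampled contextual pass could be combined so that a full ultra-high-resolution frame never has to be processed in one piece.

```python
import numpy as np

def enhance_patch(patch):
    """Stand-in for a learned generator (e.g. a CycleGAN-style network);
    here it simply passes the patch through unchanged."""
    return patch.astype(np.float64)

def process_multiscale(frame, patch_size=256, stride=192, down_factor=4):
    """Illustrative multiscale patch pipeline: a coarse pass on a
    downsampled copy supplies global context, a fine pass runs on
    overlapping full-resolution patches, and overlapping outputs are
    averaged. Assumes frame is an H x W x C array with H, W >= patch_size."""
    h, w, _ = frame.shape

    # Coarse/contextual pass on a subsampled copy of the whole frame.
    coarse = enhance_patch(frame[::down_factor, ::down_factor])
    coarse_up = np.repeat(np.repeat(coarse, down_factor, axis=0),
                          down_factor, axis=1)[:h, :w]

    out = np.zeros(frame.shape, dtype=np.float64)
    weight = np.zeros((h, w, 1), dtype=np.float64)

    # Fine pass over overlapping patches, keeping per-call memory bounded.
    ys = list(range(0, h - patch_size + 1, stride))
    xs = list(range(0, w - patch_size + 1, stride))
    if ys[-1] != h - patch_size:
        ys.append(h - patch_size)
    if xs[-1] != w - patch_size:
        xs.append(w - patch_size)
    for y in ys:
        for x in xs:
            patch = frame[y:y + patch_size, x:x + patch_size]
            out[y:y + patch_size, x:x + patch_size] += enhance_patch(patch)
            weight[y:y + patch_size, x:x + patch_size] += 1.0
    fine = out / weight

    # Simple placeholder fusion of local detail and global context.
    return 0.5 * fine + 0.5 * coarse_up
```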
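Similarly, the adaptive temporal smoothing mentioned in the abstract can be sketched as a per-pixel running average whose strength adapts to local motion. The weighting rule and the parameters `base_alpha` and `motion_scale` below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def temporal_smooth(frames, base_alpha=0.7, motion_scale=10.0):
    """Hedged sketch of adaptive temporal smoothing: a running average
    whose per-pixel blend weight shrinks where the inter-frame difference
    is large. Assumes frames is a list of H x W x C arrays."""
    frames = [f.astype(np.float64) for f in frames]
    smoothed = [frames[0]]
    for cur in frames[1:]:
        prev = smoothed[-1]
        # Per-pixel motion estimate from the mean absolute channel difference.
        diff = np.abs(cur - prev).mean(axis=-1, keepdims=True)
        # Small difference -> alpha near base_alpha (smooth heavily);
        # large difference -> alpha near zero (trust the current frame).
        alpha = base_alpha * np.exp(-diff / motion_scale)
        smoothed.append(alpha * prev + (1.0 - alpha) * cur)
    return smoothed
```

In this sketch, static regions with small inter-frame differences are smoothed heavily, which suppresses flicker, while fast-changing regions fall back to the current frame to avoid ghosting of moving objects.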