Paper ID | SMR-2.8
Paper Title | CONTEXT-AWARE CANDIDATES FOR IMAGE CROPPING
Authors | Tianpei Lian, Zhiguo Cao, Ke Xian, Zhiyu Pan, Huazhong University of Science and Technology, China; Weicai Zhong, Huawei Technologies CO., LTD., China
Session | SMR-2: Perception and Quality Models
Location | Area F
Session Time | Wednesday, 22 September, 14:30 - 16:00
Presentation Time | Wednesday, 22 September, 14:30 - 16:00
Presentation | Poster
Topic | Image and Video Sensing, Modeling, and Representation: Perception and quality models for images & video
Abstract | Image cropping aims to enhance the aesthetic quality of a given image by removing unwanted areas. Existing image cropping methods can be divided into two groups: candidate-based and candidate-free methods. For candidate-based methods, dense predefined candidate boxes can indeed cover good crops, but the many candidates with low aesthetic quality may disturb the subsequent judgment and lead to an undesirable result. For candidate-free methods, the cropping box is obtained directly from certain prior knowledge; however, relying on a single box is not stable enough given the subjectivity of image cropping. To combine the advantages of both groups and overcome their shortcomings, we need fewer but more representative candidate boxes. To this end, we propose FCRNet, a fully convolutional regression network that predicts several context-aware cropping boxes in an ensemble manner as candidates. A multi-task loss is employed to supervise the generation of candidates. Unlike previous candidate-based works, FCRNet outputs a small number of context-aware candidates without any predefined boxes, and the final result is selected from these candidates by an aesthetic evaluation network or even by manual selection. Extensive experiments show the superiority of our context-aware candidate-based method over state-of-the-art approaches.
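The abstract describes a two-stage pipeline: a fully convolutional regression network produces a small set of candidate crop boxes, and an aesthetic evaluation network (or a human) selects the final crop. The sketch below illustrates that overall flow only; the actual FCRNet architecture, the number of candidates, the multi-task loss, and the scoring network are not specified in this abstract, so all layer choices and names here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class CandidateRegressionHead(nn.Module):
    """Illustrative fully convolutional head that regresses K candidate
    crop boxes from a backbone feature map. The channel sizes and K are
    assumptions, not the paper's actual FCRNet configuration."""

    def __init__(self, in_channels: int = 512, num_candidates: int = 4):
        super().__init__()
        self.num_candidates = num_candidates
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # one (x1, y1, x2, y2) prediction per candidate box
            nn.Conv2d(256, num_candidates * 4, kernel_size=1),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) feature map from a convolutional backbone
        out = self.conv(feat)                  # (B, K*4, H, W)
        out = out.mean(dim=(2, 3))             # spatially average -> (B, K*4)
        # squash to [0, 1] so boxes are expressed in normalized image coordinates
        boxes = torch.sigmoid(out).view(-1, self.num_candidates, 4)
        return boxes                           # (B, K, 4)


def select_best_crop(boxes: torch.Tensor, scores: torch.Tensor) -> torch.Tensor:
    """Pick, per image, the candidate box with the highest aesthetic score.
    The abstract notes the selection may also be done manually."""
    best = scores.argmax(dim=1)                               # (B,)
    return boxes[torch.arange(boxes.size(0)), best]           # (B, 4)


# Minimal usage example with random tensors standing in for real
# backbone features and aesthetic scores.
if __name__ == "__main__":
    head = CandidateRegressionHead(in_channels=512, num_candidates=4)
    feats = torch.randn(2, 512, 16, 16)        # hypothetical backbone output
    candidate_boxes = head(feats)              # (2, 4, 4)
    aesthetic_scores = torch.randn(2, 4)       # hypothetical scorer output
    final_crops = select_best_crop(candidate_boxes, aesthetic_scores)
    print(final_crops.shape)                   # torch.Size([2, 4])
```

The design point this sketch tries to capture is that the regression head emits only a handful of candidates rather than densely enumerating predefined boxes, which keeps the downstream aesthetic ranking from being dominated by low-quality crops.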