Paper ID | MLR-APPL-IP-5.8
Paper Title | GENERATING ANNOTATED HIGH-FIDELITY IMAGES CONTAINING MULTIPLE COHERENT OBJECTS
Authors | Bryan Cardenas Guevara, Devanshu Arya, Deepak K. Gupta, University of Amsterdam, Netherlands
Session | MLR-APPL-IP-5: Machine learning for image processing 5
Location | Area E
Session Time | Tuesday, 21 September, 13:30 - 15:00
Presentation Time | Tuesday, 21 September, 13:30 - 15:00
Presentation | Poster
Topic | Applications of Machine Learning: Machine learning for image processing
Abstract | Recent developments in generative models have enabled the generation of diverse and high-fidelity images. In particular, layout-to-image generation models have gained significant attention for their ability to generate realistic and complex images containing distinct objects. These models are generally conditioned on either semantic layouts or textual descriptions. However, unlike for natural images, such auxiliary information can be extremely hard to provide in domains such as biomedical imaging and remote sensing. In this work, we propose a multi-object generation framework that can synthesize images with multiple objects without explicitly requiring their contextual information during the generation process. Based on a vector-quantized variational autoencoder (VQ-VAE) backbone, our model learns to preserve spatial coherency within an image as well as semantic coherency through the use of powerful autoregressive priors. An advantage of our approach is that the generated samples are accompanied by object-level annotations. The efficacy of our approach is demonstrated on medical imaging datasets, where we show that augmenting the training set with the generated samples improves the performance of existing models.
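
For readers unfamiliar with the VQ-VAE backbone mentioned in the abstract, the sketch below illustrates the vector-quantization step that produces the discrete latent map over which an autoregressive prior can be trained. This is a minimal, generic PyTorch illustration under assumed settings (class name `VectorQuantizer`, codebook size 512, code dimension 64, commitment cost 0.25 are all hypothetical), not the authors' released implementation.

```python
# Illustrative vector-quantization layer in the style of a VQ-VAE.
# NOTE: hyperparameters and class/variable names here are assumptions for
# illustration only; they are not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, code_dim=64, commitment_cost=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.commitment_cost = commitment_cost

    def forward(self, z_e):
        # z_e: encoder output of shape (B, C, H, W), where C == code_dim.
        b, c, h, w = z_e.shape
        flat = z_e.permute(0, 2, 3, 1).reshape(-1, c)  # (B*H*W, C)
        # Squared Euclidean distance to every codebook entry.
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        indices = dist.argmin(dim=1)                   # discrete latent codes
        z_q = (self.codebook(indices)
               .view(b, h, w, c).permute(0, 3, 1, 2))
        # VQ-VAE objective terms: codebook loss + commitment loss.
        loss = (F.mse_loss(z_q, z_e.detach())
                + self.commitment_cost * F.mse_loss(z_q.detach(), z_e))
        # Straight-through estimator: gradients flow from decoder to encoder.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices.view(b, h, w), loss
```

In this kind of pipeline, the returned index map is the discrete representation that a powerful autoregressive prior (e.g., a PixelCNN- or Transformer-style model) is fitted to for sampling, and it is also the level at which per-position (and hence object-level) labels can plausibly be attached; the exact mechanism used by the authors is described in the full paper.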