Improved ArtGAN for Conditional Synthesis of Natural Image and Artwork

From MaRDI portal
Publication:4617840

DOI: 10.1109/TIP.2018.2866698
zbMATH Open: 1409.94581
DBLP: journals/tip/TanCAT19
arXiv: 1708.09533
Wikidata: Q91099121
Scholia: Q91099121
MaRDI QID: Q4617840
FDO: Q4617840


Authors: Wei Ren Tan, Chee Seng Chan, Kiyoshi Tanaka, Hernán E. Aguirre


Publication date: 7 February 2019

Published in: IEEE Transactions on Image Processing

Abstract: This paper proposes a series of new approaches to improve the Generative Adversarial Network (GAN) for conditional image synthesis; we name the proposed model ArtGAN. One of the key innovations of ArtGAN is that the gradient of the loss function w.r.t. the label (randomly assigned to each generated image) is back-propagated from the categorical discriminator to the generator. With this feedback from the label information, the generator learns more efficiently and generates images of better quality. Inspired by recent works, an autoencoder is incorporated into the categorical discriminator for additional complementary information. Last but not least, we introduce a novel strategy to improve image quality. In the experiments, we evaluate ArtGAN on CIFAR-10 and STL-10 via ablation studies. The empirical results show that our proposed model outperforms the state-of-the-art results on CIFAR-10 in terms of Inception score. Qualitatively, we demonstrate that ArtGAN generates plausible-looking images on Oxford-102 and CUB-200, and can draw realistic artworks conditioned on style, artist, and genre. The source code and models are available at: https://github.com/cs-chan/ArtGAN
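The label-feedback idea in the abstract can be illustrated with a minimal numpy sketch of a categorical (K+1)-class objective: the discriminator classifies real images into their K true classes and generated images into an extra "fake" class, while the generator is trained to make each generated image match its randomly assigned label, so gradients carrying label information flow back to it. The function names, the K+1 formulation, and the unweighted sum of loss terms are illustrative assumptions for this sketch, not the authors' exact implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def categorical_d_loss(logits_real, y_real, logits_fake, k):
    """Discriminator objective (illustrative): push real samples toward
    their true class in 0..k-1 and generated samples toward the extra
    'fake' class with index k."""
    p_real = softmax(logits_real)
    p_fake = softmax(logits_fake)
    loss_real = -np.log(p_real[np.arange(len(y_real)), y_real]).mean()
    loss_fake = -np.log(p_fake[:, k]).mean()
    return loss_real + loss_fake

def generator_loss(logits_fake, y_assigned):
    """Generator objective (illustrative): make the discriminator assign
    each generated image to its randomly drawn label y_assigned, so the
    gradient of this loss w.r.t. the label information reaches the
    generator through backpropagation."""
    p = softmax(logits_fake)
    return -np.log(p[np.arange(len(y_assigned)), y_assigned]).mean()

# Toy usage with k = 3 real classes (fake class has index 3).
rng = np.random.default_rng(0)
logits_real = rng.normal(size=(4, 4))
y_real = rng.integers(0, 3, size=4)
logits_fake = rng.normal(size=(4, 4))
y_assigned = rng.integers(0, 3, size=4)   # labels drawn at random per generated image
d_loss = categorical_d_loss(logits_real, y_real, logits_fake, k=3)
g_loss = generator_loss(logits_fake, y_assigned)
```

In a full training loop these scalars would be differentiated through the networks by an autodiff framework; the sketch only shows how the randomly assigned label enters the generator's loss.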


Full work available at URL: https://arxiv.org/abs/1708.09533



Cited In (1)
