This Is Auburn: Electronic Theses and Dissertations


Understanding and Improving Generative Adversarial Networks


Metadata (field: value):

dc.contributor.advisor: Nguyen, Anh
dc.contributor.author: Li, Qi
dc.date.accessioned: 2020-04-06T18:29:37Z
dc.date.available: 2020-04-06T18:29:37Z
dc.date.issued: 2020-04-06
dc.identifier.uri: http://hdl.handle.net/10415/7100
dc.description.abstract: Generative Adversarial Networks (GANs) have been in the spotlight in machine learning for several years. In particular, their ability to learn a data distribution in an unsupervised fashion has led GANs to be applied to tasks such as image generation, image style transfer, image attribute manipulation, and similar domains in computer vision. Despite the huge success of GANs, the difficult and unstable training process still limits their real-world applications. Mode collapse is a well-known byproduct of unstable GAN training. We propose to improve the sample diversity of a pre-trained class-conditional generator by modifying its class embeddings in the direction of maximizing the log-probability outputs of a classifier pre-trained on the same dataset. We improved the sample diversity of state-of-the-art ImageNet BigGANs at both 128 × 128 and 256 × 256 resolutions. By replacing the embeddings, we can also synthesize plausible images for Places365 using a BigGAN pre-trained on ImageNet. (en_US)
dc.subject: Computer Science and Software Engineering (en_US)
dc.title: Understanding and Improving Generative Adversarial Networks (en_US)
dc.type: Master's Thesis (en_US)
dc.embargo.status: NOT_EMBARGOED (en_US)
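The abstract describes optimizing a pre-trained generator's class embedding by gradient ascent on the log-probability a pre-trained classifier assigns to the target class. The following is a minimal sketch of that idea using toy linear stand-ins for the generator and classifier; all names, weights, and dimensions here are illustrative assumptions, not the thesis's actual BigGAN or ImageNet classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-ins: a linear "generator" mapping a class
# embedding to an image vector, and a linear-softmax "classifier".
EMB_DIM, IMG_DIM, N_CLASSES = 8, 16, 5
W_g = rng.normal(size=(IMG_DIM, EMB_DIM))    # generator weights (frozen)
W_c = rng.normal(size=(N_CLASSES, IMG_DIM))  # classifier weights (frozen)

def log_prob(emb, y):
    """Log-probability the classifier assigns to class y for G(emb)."""
    logits = W_c @ (W_g @ emb)
    logits = logits - logits.max()           # numerical stability
    return logits[y] - np.log(np.exp(logits).sum())

def update_embedding(emb, y, lr=0.05, steps=100):
    """Gradient ascent on log p(y | G(emb)), updating only the embedding.

    Both networks stay frozen; only the class embedding moves, mirroring
    the abstract's strategy of modifying embeddings of a pre-trained GAN.
    """
    for _ in range(steps):
        logits = W_c @ (W_g @ emb)
        logits = logits - logits.max()
        p = np.exp(logits) / np.exp(logits).sum()
        onehot = np.eye(N_CLASSES)[y]
        grad = W_g.T @ (W_c.T @ (onehot - p))  # d log p_y / d emb
        emb = emb + lr * grad
    return emb

y = 3
emb0 = rng.normal(size=EMB_DIM)
emb1 = update_embedding(emb0, y)
print(log_prob(emb0, y), log_prob(emb1, y))  # second value is larger
```

In the thesis's setting the gradient would come from backpropagation through the real generator and classifier rather than this closed-form linear case, but the update loop has the same shape.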

