Resources for learning about Generative Models, organized into three sections: Overview, Autoencoders, and Generative Adversarial Networks. I update the list on an ongoing basis. See Neural Networks for general resources on deep learning.
Overview
- OpenAI intro: Brief and clear overview of the different approaches to generative models.
Autoencoders
- Stanford tutorial on autoencoders.
- Variational Autoencoders: Great introduction to the topic by Fast Forward Labs.
- The Unreasonable Confusion of Variational Autoencoders: Explains VAEs from both a deep learning and a graphical models perspective and bridges the perceived gap between the two.
Generative Adversarial Networks (GANs)
- Introduction to Generative Adversarial Networks: Nice introduction with interesting animations of the training process.
- Generative Adversarial Nets: The paper where it all began. The objective is for a neural network to learn to model a complex probability distribution, for example a set of images. Inspired by game theory, GANs consist of two competing networks: a generator, which tries to create examples that are indistinguishable from the desired distribution, and a discriminator, which tries to distinguish real examples from fake (generated) ones. A minimal code sketch of this training loop appears after this list.
- Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks: Also known as DCGAN, this is one of the most stable and easy to use GAN variants. Unlike many other models, DCGAN tends to work out of the box.
- Improved techniques for training GANs: GANs are notoriously tricky to train. This paper offers lessons learned from some of the field's leading practitioners.
- Also see the useful and concise GAN hacks.
- Do GANs actually do distribution learning?: In this blog post and the accompanying paper, Sanjeev Arora and Yi Zhang question whether GANs actually learn the distribution of the data they are modeling. They also make a really inventive use of the birthday paradox to estimate the size of the learned distribution.
- Instance noise: A trick for stabilizing GAN training: An interesting theoretical take from Ferenc Huszár on why GAN training can fail, along with his proposed remedy.
- A selection of important GAN architectures.
- The GAN Zoo: Literally all the GANs!
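For readers who want to see the generator/discriminator game described in the Generative Adversarial Nets entry above as running code, here is a minimal sketch of a GAN training loop. It uses PyTorch and a toy task (matching a 1-D Gaussian); the layer sizes, learning rates, and the non-saturating generator loss are illustrative defaults of my choosing, not settings taken from any of the papers listed.

```python
# Minimal, illustrative GAN training loop (PyTorch). Toy task: make the
# generator's outputs match a 1-D Gaussian. All hyperparameters are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps noise z to a sample. Discriminator: outputs P(input is real).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # The "true" data distribution the generator should learn: N(3, 0.5).
    return 3.0 + 0.5 * torch.randn(n, 1)

for step in range(2000):
    # Discriminator update: push real examples toward 1, generated toward 0.
    x_real = real_batch()
    x_fake = G(torch.randn(64, 8)).detach()  # don't backprop into G here
    d_loss = bce(D(x_real), torch.ones(64, 1)) + bce(D(x_fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator output 1 on fakes
    # (the non-saturating generator loss).
    x_fake = G(torch.randn(64, 8))
    g_loss = bce(D(x_fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# The mean of generated samples should drift toward 3.0.
print(G(torch.randn(1000, 8)).mean().item())
```

The structural point is the alternation: the discriminator is updated to separate real from generated samples, then the generator is updated to fool the current discriminator. None of the stabilization tricks from the resources above (DCGAN architecture choices, instance noise, etc.) are included here.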
I am always looking to learn more. Please send suggestions or comments to contact [at] learningmachinelearning [dot] org