Latent Variable Models and Beyond: An Exploration of Variational Autoencoders

The notebook below is an introduction to variational autoencoders (VAEs) from a probabilistic perspective. Rather than taking the traditional autoencoder route to VAEs, we approach them as generative models, with an emphasis on latent variable models (LVMs), variational inference (VI), and deep learning.
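
To make the generative-modeling view concrete, here is the standard formulation in the usual notation (a brief sketch, not taken from the notes themselves): an LVM posits latent variables z with prior p(z) and a decoder p_theta(x | z), and because the marginal likelihood is intractable, VI introduces an encoder q_phi(z | x) and maximizes the evidence lower bound (ELBO):

\[
\log p_\theta(x) \;=\; \log \int p_\theta(x \mid z)\, p(z)\, dz
\;\ge\; \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]}_{\text{reconstruction}}
\;-\; \underbrace{\mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big)}_{\text{regularization}}.
\]

A VAE parameterizes both q_phi and p_theta with neural networks and trains them jointly on this objective.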

Figure 1: A high-dimensional rotating MNIST digit '3' and its underlying latent representation, which captures the angle at which the '3' is observed.
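
To show how these pieces fit together in practice, here is a minimal, self-contained VAE sketch in PyTorch. It is illustrative only, not the notebook's code: the 784-dimensional input, 2-dimensional latent space, layer sizes, and Bernoulli likelihood are assumptions chosen to mirror the flattened-MNIST setting of Figure 1.

# Minimal VAE sketch (illustrative; architecture and sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=2, hidden=400):
        super().__init__()
        # Encoder: amortized inference network for q_phi(z | x).
        self.enc = nn.Linear(x_dim, hidden)
        self.enc_mu = nn.Linear(hidden, z_dim)
        self.enc_logvar = nn.Linear(hidden, z_dim)
        # Decoder: likelihood model p_theta(x | z).
        self.dec = nn.Linear(z_dim, hidden)
        self.dec_out = nn.Linear(hidden, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.enc_mu(h), self.enc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps the sample differentiable w.r.t. phi.
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def decode(self, z):
        h = F.relu(self.dec(z))
        return torch.sigmoid(self.dec_out(h))  # Bernoulli mean per pixel

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


def negative_elbo(x, x_recon, mu, logvar):
    # Reconstruction term: E_q[log p_theta(x | z)] under a Bernoulli likelihood.
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # KL(q_phi(z | x) || N(0, I)) in closed form for a diagonal Gaussian.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl  # minimizing this maximizes the ELBO


# Example usage on a random batch standing in for flattened MNIST digits.
model = VAE()
x = torch.rand(32, 784)
x_recon, mu, logvar = model(x)
loss = negative_elbo(x, x_recon, mu, logvar)
loss.backward()

The reparameterization trick is what lets gradients flow through the sampling step; this is the key point that distinguishes VAE training from a plain autoencoder.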

Notebook: Variational Autoencoders and Variational Inference (Notes_VAE_VI.pdf)

You can click on the notebook above to view my full notes on VAEs and VI. Here are some key takeaways that I think are especially important:

Some great resources on VAEs and VI:


If you have more questions about VAEs, VI, or deep learning, you can leave your comments and questions in this form!