axeloh/made-pytorch


MADE (Masked Autoencoder Distribution Estimation)

A simple PyTorch implementation of Masked Autoencoder Distribution Estimation (MADE) for binary image datasets. Based on MADE: Masked Autoencoder for Distribution Estimation by Germain et al., and inspired by karpathy. It is by no means intended to reproduce the original results in the paper.

MADE

MADE is an autoregressive generative model that treats image generation as a sequence of pixel generations. It takes a regular autoencoder and tweaks it so that its output units predict the n conditional distributions instead of reconstructing the n inputs (as a regular autoencoder does). It does this by masking the MLP connections so that the k-th output unit depends only on the previous k-1 inputs, see figure below.
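The repository's own layer code isn't shown here, but the masking idea can be sketched in a few lines. Assuming PyTorch, the sketch below (names like `MaskedLinear`, `set_mask`, and the degree variables are illustrative, not the repo's API) builds masks the way the paper describes: each input gets a degree 1..n, each hidden unit a random degree in [1, n-1], and a connection survives only if the downstream degree is greater than or equal to (hidden layer) or strictly greater than (output layer) the upstream one.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Linear):
    """Linear layer whose weights are elementwise-multiplied by a fixed binary mask."""
    def __init__(self, in_features, out_features):
        super().__init__(in_features, out_features)
        self.register_buffer("mask", torch.ones(out_features, in_features))

    def set_mask(self, mask):
        self.mask.data.copy_(torch.as_tensor(mask, dtype=self.mask.dtype))

    def forward(self, x):
        # Masked connections contribute nothing, enforcing the autoregressive order.
        return nn.functional.linear(x, self.mask * self.weight, self.bias)

# Mask construction for one hidden layer (toy sizes for illustration):
n = 4                                              # input dimension
m_in = torch.arange(1, n + 1)                      # input degrees 1..n
m_hidden = torch.randint(1, n, (8,))               # 8 hidden units, degrees in [1, n-1]
hidden_mask = (m_hidden[:, None] >= m_in[None, :]).float()
output_mask = (m_in[:, None] > m_hidden[None, :]).float()  # strict: output k sees only inputs < k
```

Multiplying the two masks gives the net input-to-output connectivity, which is strictly lower-triangular: output k is connected only to inputs 1..k-1, exactly the property the text describes.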

More formally, given a binary image of height H and width W, we flatten it into a binary vector x = (x_1, ..., x_n) with n = H*W, which MADE uses to model the joint distribution via the chain rule:

p(x) = p(x_1) p(x_2 | x_1) ... p(x_n | x_1, ..., x_{n-1})
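In practice, for binary pixels each conditional is a Bernoulli, so the image's negative log-likelihood is just a sum of per-pixel binary cross-entropies on the network's output logits. A minimal sketch (the `made` module here is a plain stand-in linear layer, not the repo's masked network):

```python
import torch
import torch.nn.functional as F

H, W = 28, 28
# A flattened binary "image" with batch dimension 1.
x = torch.bernoulli(torch.full((1, H * W), 0.5))

# Stand-in for a MADE network: the k-th output is the logit of p(x_k = 1 | x_{<k}).
made = torch.nn.Linear(H * W, H * W)
logits = made(x)

# NLL of the image = sum over pixels of Bernoulli negative log-likelihoods.
nll = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
```

Training MADE amounts to minimizing this quantity averaged over the dataset.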

As with most autoregressive models, evaluating the likelihood is cheap (a single forward pass), while sampling must be done iteratively, one pixel at a time, and is thus linear in the number of pixels.
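The iterative sampling procedure can be sketched as follows, assuming any network whose k-th output logit depends only on inputs before k (a plain linear layer stands in for the masked model here; the `sample` helper is illustrative, not the repo's function):

```python
import torch

@torch.no_grad()
def sample(made, n_pixels, n_samples=4):
    """Sample images pixel by pixel: one forward pass per pixel, so O(n) passes."""
    x = torch.zeros(n_samples, n_pixels)
    for k in range(n_pixels):
        logits = made(x)                      # with masked layers, logits[:, k] uses only x[:, :k]
        probs = torch.sigmoid(logits[:, k])
        x[:, k] = torch.bernoulli(probs)      # draw pixel k, then condition on it next iteration
    return x

made = torch.nn.Linear(16, 16)  # stand-in model; a real MADE would use masked layers
imgs = sample(made, 16)
```

Each iteration re-runs the full network, which is why sampling is much slower than likelihood evaluation.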

Generating samples

Below are samples generated by MADE (for the shapes and MNIST datasets) after training for 10 epochs:

Shape dataset MADE samples
MNIST dataset MADE samples
