State-of-the-Art VQ-VAE from Gaussian VAE without Training!
Updated Jan 3, 2026 - Python
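The "without training" claim above refers to adding a VQ-VAE-style discrete bottleneck on top of an already-trained Gaussian VAE. A minimal sketch of the nearest-neighbor quantization step, with a hypothetical `encode` stub standing in for the trained encoder and a codebook sampled from the N(0, I) prior (both are illustrative assumptions, not this repo's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, dim=8):
    # Stand-in for a trained Gaussian VAE encoder: draws "latents"
    # from a standard normal, matching the VAE's N(0, I) prior.
    return rng.standard_normal((len(x), dim))

# Build a codebook with no training by sampling K vectors from the prior.
K, dim = 16, 8
codebook = rng.standard_normal((K, dim))

def quantize(z, codebook):
    """Nearest-neighbor vector quantization: snap each latent to its closest code."""
    # Squared Euclidean distance from every latent to every codebook entry.
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    return codebook[idx], idx

z = encode(range(4))          # (4, 8) continuous latents
zq, idx = quantize(z, codebook)
print(zq.shape, idx.shape)    # (4, 8) (4,)
```

The quantized latents `zq` would then be passed to the VAE decoder unchanged; the discrete indices `idx` are what a downstream prior (e.g. an autoregressive model) would be fit on.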
Thesis Project: A multimodal transformer-based generative model that creates listener avatars conditioned on personality traits, producing realistic non-verbal responses (facial expressions, body and hand gestures) during dyadic conversations. Built with PyTorch and trained on the UDIVA dataset, achieving state-of-the-art FID/P-FID performance.
A hands‑on autoencoder lab (AE, VAE, CVAE, VQ‑VAE) for MNIST, Fashion‑MNIST, and CIFAR‑10, with training scripts, visualizations, and a Streamlit demo to compare reconstructions and sampling.
Uses federated learning to train autoencoders and their variants in PyTorch.