This project implements several generative models for image generation from scratch in PyTorch:

- Variational Autoencoder (VAE)
- Generative Adversarial Network (GAN)
- Conditional GAN (C-GAN)
- Denoising Diffusion Probabilistic Model (DDPM)
- Conditional DDPM (C-DDPM)

The primary objective is to compare the performance of these models, with a particular focus on DDPM. All models are trained and evaluated on the Fashion-MNIST dataset, which consists of grayscale images of clothing items across 10 categories. The project examines the quality of the generated images, training stability, and convergence behavior of each approach. By analyzing these factors, the study highlights the strengths and weaknesses of diffusion models relative to traditional generative techniques, and serves as a foundational step toward understanding and improving diffusion-based generative models for image synthesis.
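Since DDPM is the focus of the comparison, it may help to recall its core mechanism: the forward process gradually adds Gaussian noise to an image, and x_t can be sampled from x_0 in closed form. Below is a minimal, hypothetical PyTorch sketch of that forward (noising) step, assuming a linear beta schedule as in the original DDPM paper; the names (`q_sample`, `alpha_bars`, the schedule endpoints) are illustrative and not taken from this repository's code.

```python
import torch

# Assumed linear noise schedule over T diffusion steps.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative product, shrinks toward 0

def q_sample(x0, t, noise=None):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)."""
    if noise is None:
        noise = torch.randn_like(x0)
    ab = alpha_bars[t].view(-1, 1, 1, 1)  # broadcast over channel/spatial dims
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise

# Example: noise a batch of Fashion-MNIST-sized images (1 x 28 x 28).
x0 = torch.rand(8, 1, 28, 28)
t = torch.randint(0, T, (8,))
xt = q_sample(x0, t)
```

At small `t` the output stays close to the input image; as `t` approaches `T`, `alpha_bars[t]` approaches zero and the sample becomes nearly pure Gaussian noise, which is what the reverse (denoising) network is trained to invert.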
MAmin-y/Image-Generative-Models
