# Welcome to the Era of Generative AI! 🚀
This repository introduces generative artificial intelligence (Generative AI): models that create lifelike images, human-like text, and more. The material is organized into three core parts, outlined below.
- About This Repository
- What to Expect
- Part 1: Foundations of Generative AI
- Part 2: Large Language Models (LLMs)
- Part 3: Advanced Concepts and Applications
- Contribution Guidelines
- References
- Reading Materials
## About This Repository

A comprehensive guide to Generative AI, covering models, architectures, and applications through code samples, tutorials, and research summaries.
## What to Expect

- Code Samples: Practical implementations (code/).
- Tutorials: Step-by-step guides (tutorials/).
- Research: Key paper summaries (papers/).
## Part 1: Foundations of Generative AI

Learn the basics of the main generative model families; a minimal code sketch for each follows the list:
- Generative Adversarial Networks (GANs): Generator vs. discriminator (code/generative_models/gans/).
- Variational Autoencoders (VAEs): Latent space sampling (code/generative_models/vaes/).
- Diffusion Models: Iterative denoising (code/generative_models/diffusion/).
Details: part1_foundations.md
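A minimal PyTorch sketch of the adversarial idea behind GANs: a generator learns to produce samples that a discriminator can no longer tell apart from real data. The toy 1-D data, network sizes, and hyperparameters are illustrative placeholders, not the implementation in code/generative_models/gans/.

```python
# Minimal GAN training loop on toy 1-D data (illustrative only).
import torch
import torch.nn as nn

def sample_real(batch_size):
    # "Real" data: samples from a Gaussian the generator must learn to imitate.
    return 3.0 + 0.5 * torch.randn(batch_size, 1)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: real samples labeled 1, generated samples labeled 0.
    real = sample_real(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator output 1 on generated samples.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, 8)).detach().squeeze())  # should approach mean 3.0
```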
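A compact sketch of VAE latent space sampling: the encoder outputs a mean and log-variance, a latent vector is drawn via the reparameterization trick, and the decoder reconstructs the input. The dimensions and toy data are assumptions for illustration only.

```python
# Minimal VAE on 8-D toy vectors: encode to a 2-D latent, sample, decode (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(8, 4)   # outputs mean (2) and log-variance (2)
        self.decoder = nn.Linear(2, 8)

    def forward(self, x):
        mu, log_var = self.encoder(x).chunk(2, dim=-1)
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * log_var) * eps   # reparameterization trick: sample the latent
        return self.decoder(z), mu, log_var

vae = TinyVAE()
x = torch.randn(16, 8)
recon, mu, log_var = vae(x)

# ELBO-style loss: reconstruction error + KL divergence to the standard normal prior.
kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
loss = F.mse_loss(recon, x) + kl
print(loss.item())
```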
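A sketch of the iterative denoising loop that diffusion models use at sampling time (DDPM-style): starting from pure noise, a noise-prediction network is applied repeatedly to recover a sample. The linear noise schedule and the untrained stand-in predictor are placeholders; only the structure of the loop is the point.

```python
# DDPM-style reverse diffusion: iteratively denoise pure noise (loop structure only).
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)       # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample(noise_predictor, shape=(1, 2)):
    x = torch.randn(shape)                  # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = noise_predictor(x, t)         # network's estimate of the noise in x at step t
        coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])           # remove predicted noise
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn(shape)  # re-inject scheduled noise
    return x

# An untrained stand-in predictor; a real model would be a trained U-Net or Transformer.
dummy_predictor = lambda x, t: torch.zeros_like(x)
print(sample(dummy_predictor))
```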
## Part 2: Large Language Models (LLMs)

Explore LLMs and Transformers; a sketch of the attention mechanism follows the list:
- Key Models: GPT-3/4, BERT, T5 (code/llms/).
- Transformer Architecture: Attention, positional encoding (papers/attention_is_all_you_need.md).
Details: part2_llms.md
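At the heart of the Transformer is scaled dot-product attention: each query position computes a weighted sum over all value vectors, with weights given by query-key similarity. The tensor shapes below are chosen purely for illustration.

```python
# Scaled dot-product attention, the core operation of the Transformer (shapes are illustrative).
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_k)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # similarity of every query to every key
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)             # attention distribution over positions
    return weights @ v                              # weighted sum of value vectors

q = k = v = torch.randn(1, 2, 5, 16)  # batch=1, 2 heads, 5 tokens, head dim 16
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 2, 5, 16])
```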
## Part 3: Advanced Concepts and Applications

Discover advanced topics; a minimal RAG sketch follows the list:
- Retrieval-Augmented Generation (RAG): Enhance generation with external knowledge (code/rag/).
- Prompt Engineering: Optimize LLM outputs (tutorials/prompt_engineering.md).
- Multimodal Applications: Text, image, audio integration (code/multimodal/).
- LangChain: Build LLM-powered apps (code/langchain/).
Details: part3_applications.md
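As a concrete illustration of the RAG pattern, the sketch below retrieves the most relevant documents with TF-IDF similarity and splices them into a prompt. The tiny in-memory corpus and the omitted generation step are placeholders; a real pipeline would use embeddings, a vector store, and an actual LLM call, as in code/rag/.

```python
# Minimal RAG pattern: retrieve relevant documents, then build an augmented prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny in-memory "knowledge base"; a real system would use embeddings and a vector database.
documents = [
    "GANs pit a generator against a discriminator.",
    "Diffusion models generate data by iteratively denoising random noise.",
    "RAG augments a language model prompt with retrieved documents.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query, k=2):
    # Return the k documents most similar to the query (TF-IDF cosine similarity).
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

# The resulting prompt would then be sent to any LLM; the generation call itself is omitted here.
print(build_prompt("How do diffusion models generate images?"))
```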
## Contribution Guidelines

Join us! See CONTRIBUTING.md for details.