Welcome to the Generative AI Engineering with LLMs specialization! This advanced program is designed for junior to mid-level professionals who already understand machine learning, neural networks, and programming in Python. Over the course of this specialization, you will explore the world of large language models (LLMs) and generative AI, equipping yourself with the knowledge and skills necessary to build cutting-edge natural language processing (NLP) applications using PyTorch.
This specialization is designed for:
- Machine Learning Engineers and Data Scientists looking to expand their expertise in generative AI and LLMs.
- Software Developers who are interested in building NLP-based applications and want to integrate AI into their projects.
- AI enthusiasts with foundational knowledge in Python and neural networks, eager to explore advanced techniques in AI and machine learning.
- Professionals in AI/ML roles who wish to deepen their understanding of transformers, fine-tuning, and NLP applications.
- Academic and research professionals aiming to implement generative AI models in their work or studies.
Upon completing this specialization, you will be able to:
- Explain the architectures and models used in generative AI for NLP, including tokenization and data preparation techniques.
- Implement various generative AI models, such as skip-gram, continuous bag of words (CBOW), sequence-to-sequence, and recurrent neural network (RNN)-based models, using PyTorch.
- Utilize transformer-based models for text classification, translation, and sequence generation, and apply fine-tuning techniques using frameworks like Hugging Face and LangChain.
- Design and develop advanced generative AI applications, including a question-answering NLP system, by applying LLMs and generative AI concepts.
This specialization comprises seven comprehensive courses, each designed to build upon the knowledge acquired in the previous one. Here’s a brief overview of what you will learn:
- Explore the significance of generative AI in various domains.
- Learn about different generative AI architectures, such as RNNs, transformers, variational autoencoders (VAEs), generative adversarial networks (GANs), and diffusion models.
- Understand how LLMs like GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), and T5 are utilized in NLP applications.
- Implement tokenization and create NLP data loaders using PyTorch (a minimal sketch of such a pipeline appears after this overview).
- Dive into the fundamentals of language understanding and NLP.
- Learn about models such as N-Gram, Word2Vec, and sequence-to-sequence models.
- Build and train simple language models and explore metrics for evaluating the quality of generated text.
- Understand transformer-based models, focusing on positional encoding and attention mechanisms.
- Learn to apply these models for text classification and language translation using GPT and BERT.
- Explore fine-tuning techniques like parameter-efficient fine-tuning (PEFT) using LoRA and QLoRA.
- Learn to work with pre-trained transformers and fine-tune them for specific tasks using PyTorch and Hugging Face (a second sketch after this overview illustrates a parameter-efficient setup).
- Delve into instruction tuning, reward modeling, and reinforcement learning from human feedback (RLHF).
- Learn to implement direct preference optimization (DPO) with Hugging Face.
- Learn how retrieval-augmented generation (RAG) is used in NLP applications.
- Explore in-context learning, advanced prompt engineering, and the LangChain framework for building AI agents.
- Apply your acquired skills in a capstone project focused on developing a QA bot using LangChain, watsonx, and Gradio.
Note: This specialization includes practice assessments and optional activities that may require the use of paid services or models.
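To give a concrete flavor of the hands-on work, here is a minimal sketch of the kind of tokenization and data-loading pipeline introduced early in the specialization. It uses only core PyTorch utilities; the toy corpus, vocabulary, and class names are illustrative assumptions, not course material.

```python
# A minimal, illustrative sketch (not course code): whitespace tokenization,
# a toy vocabulary, and a padded PyTorch DataLoader.
import torch
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pad_sequence

sentences = ["the cat sat on the mat", "the dog chased the cat"]  # toy corpus

# Build a word-level vocabulary; index 0 is reserved for padding.
vocab = {tok: i + 1 for i, tok in enumerate(sorted({t for s in sentences for t in s.split()}))}

class ToyTextDataset(Dataset):
    """Encodes each sentence as a tensor of token indices."""
    def __init__(self, texts, vocab):
        self.encoded = [torch.tensor([vocab[t] for t in s.split()]) for s in texts]

    def __len__(self):
        return len(self.encoded)

    def __getitem__(self, idx):
        return self.encoded[idx]

def collate(batch):
    # Pad variable-length sequences so they stack into a single batch tensor.
    return pad_sequence(batch, batch_first=True, padding_value=0)

loader = DataLoader(ToyTextDataset(sentences, vocab), batch_size=2, collate_fn=collate)
for batch in loader:
    print(batch.shape)  # e.g. torch.Size([2, 6])
```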
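In the same spirit, here is a hedged sketch of what parameter-efficient fine-tuning of a pre-trained transformer can look like with the Hugging Face Transformers and PEFT libraries. The checkpoint name, label count, and LoRA settings are illustrative assumptions rather than the specialization's own configuration.

```python
# A minimal, illustrative sketch (not course code): wrap a pre-trained BERT
# classifier with a LoRA adapter so that only a small set of parameters is trained.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Checkpoint and label count are assumptions chosen for illustration.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# LoRA configuration: low-rank adapters are injected into the attention
# query/value projections while the base model weights stay frozen.
lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query", "value"],
)

peft_model = get_peft_model(model, lora_cfg)
peft_model.print_trainable_parameters()  # only a small fraction of weights is trainable
```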
Throughout the specialization, you will have access to a wide range of learning resources, including:
- Video lectures: Comprehensive videos that explain the core concepts and techniques.
- Hands-on labs: Practical labs to apply the concepts learned in real-world scenarios.
- Readings and quizzes: Supplementary readings and quizzes to reinforce your understanding.
- Capstone project: A final project to showcase your skills in building a generative AI application.
The courses in this specialization offer a variety of learning assets, such as videos, readings, discussion prompts, practice and graded quizzes, and a peer-reviewed graded assignment.
- The videos and readings deliver the core instruction, while discussion prompts encourage interactive learning and give you a chance to engage with the course staff and your peers.
- Practice quizzes at the end of each module will test your understanding of what you learned, and the final graded quiz will assess your conceptual understanding of the course.
- A peer-reviewed assignment at the end of the course serves as a final project that helps you consolidate what you have learned. Your submission will be evaluated by your peers, and you will, in turn, review and grade their submissions.
The specialization uses simple, easy-to-understand language to explain the important concepts without relying on technical jargon.
To derive maximum learning from this specialization, actively participate in and complete the various learning engagements offered across the seven courses.
Good luck!