This repository contains the learning materials, lessons, and code examples for Week 1 of the LLM Engineering and Deployment Certification program offered by Ready Tensor.
The LLM Engineering and Deployment Certification is a 9-week, project-based program that teaches the core skills employers expect from LLM engineers: dataset preparation, LoRA/QLoRA fine-tuning, model evaluation, optimization, and scalable deployment. The program is designed for technical professionals who want to go beyond prompting and orchestration to work directly at the model layer.
Program Details:
- Duration: 9 weeks (self-paced)
- Format: Project-based with hands-on lessons and capstone projects
- Certificate: Certified LLM Engineer upon completion
For complete program information, enrollment details, and prerequisites, visit the official program page.
Week 1 establishes the foundational knowledge needed for LLM engineering work. The lessons cover:
- Understanding LLM architectures and model types
- Navigating the LLM ecosystem (frontier vs. open-source models)
- Deciding when to use fine-tuning vs. alternative approaches
- Understanding the fine-tuning project workflow
- Choosing between custom code and managed service implementations
- Selecting base models using infrastructure constraints and benchmarks
- Setting up development environments with Google Colab
- Evaluating models using the Hugging Face Leaderboard
rt-llm-eng-cert-week1/
│
├── code/
│ ├── lesson1/
│ │ └── types_of_llms.ipynb
│ ├── lesson2/
│ │ └── frontier_vs_open_source.ipynb
│ └── lesson7/
│ └── llm_evaluation.ipynb
│
├── lessons/
│ ├── lesson1/
│ │ ├── w1-l1.md
│ │ └── [supporting images]
│ ├── lesson2/
│ │ ├── w1-l2.md
│ │ └── [supporting images]
│ ├── lesson3/
│ │ ├── w1-l3.md
│ │ └── [supporting images]
│ ├── lesson4/
│ │ ├── w1-l4.md
│ │ └── [supporting images]
│ ├── lesson5/
│ │ ├── w1-l5.md
│ │ └── [supporting images]
│ ├── lesson6/
│ │ ├── w1-l6.md
│ │ └── [supporting images]
│ ├── lesson7/
│ │ └── w1-l7.md
│ └── lesson8/
│ └── w1-l8.md
│
├── LICENSE
└── README.md
code/
Contains Jupyter notebooks and Python scripts with hands-on examples and exercises for select lessons. These provide practical implementations of concepts covered in the lesson materials.
lessons/
Contains the main lesson content in Markdown format, organized by lesson number. Each lesson directory includes:
- The primary lesson file (w1-lX.md)
- Supporting images, diagrams, and visual aids referenced in the lessons
Lesson 1: Discover the transformer architecture that powers modern LLMs and understand why decoder-only models dominate today's AI landscape. Learn the fundamental building blocks behind ChatGPT and similar systems.
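For a concrete feel of what a decoder-only model does, the snippet below generates text with the Hugging Face transformers pipeline. It is a minimal sketch; the gpt2 model id is chosen only because it is small and ungated, not because the lessons use it.

```python
# Minimal sketch: autoregressive text generation with a small
# decoder-only model. "gpt2" is used purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models are"
outputs = generator(prompt, max_new_tokens=30)

print(outputs[0]["generated_text"])
```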
Lesson 2: Navigate the LLM ecosystem by understanding the differences between frontier models (GPT-4, Claude) and open-source alternatives (Llama, Mistral). Explore training variants from base models to instruction-tuned and reasoning-optimized versions.
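One practical difference between base and instruction-tuned variants is the chat template an instruct checkpoint ships with. The sketch below assumes an illustrative open-source instruct model id; any instruction-tuned checkpoint with a chat template behaves similarly.

```python
# Minimal sketch: rendering a conversation with an instruction-tuned
# model's chat template. The model id is illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [{"role": "user", "content": "Explain LoRA in one sentence."}]

# A base (non-instruct) model has no chat template to apply.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```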
Lesson 3: Learn when to use prompting, RAG (Retrieval-Augmented Generation), or fine-tuning for your specific use case. Develop a decision framework to choose the most effective and cost-efficient approach for customizing LLM behavior.
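The lesson builds the actual decision framework; purely as an illustration of the kind of heuristic it leads to, a sketch might look like this. The conditions and the 1,000-example threshold below are assumptions, not the lesson's criteria.

```python
# Illustrative heuristic only -- the lesson develops the real framework.
def choose_adaptation_strategy(
    needs_private_or_fresh_knowledge: bool,
    needs_new_style_or_behavior: bool,
    labeled_examples_available: int,
) -> str:
    """Rough prompting / RAG / fine-tuning decision sketch."""
    if needs_private_or_fresh_knowledge:
        # Knowledge gaps are usually cheaper to close with retrieval.
        return "RAG"
    if needs_new_style_or_behavior and labeled_examples_available >= 1000:
        # Behavioral changes backed by enough data can justify fine-tuning.
        return "fine-tuning"
    # Otherwise start with the cheapest option and iterate on prompts.
    return "prompting"


print(choose_adaptation_strategy(False, True, 5000))  # fine-tuning
```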
Lesson 4: Master the complete fine-tuning pipeline from selecting a base model through preparing structured data, training, and deployment. Understand the key stages, decisions, and considerations at each step of the workflow.
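As a preview of one stage in that pipeline, the sketch below loads a base model and wraps it with a LoRA adapter using peft. The model id and hyperparameters are placeholders; data preparation, training, and deployment follow as described above.

```python
# Sketch of the model-preparation stage only: attach a LoRA adapter
# to a base model. Model id and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,              # rank of the low-rank update matrices
    lora_alpha=16,    # scaling applied to the adapter output
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)

# Only a tiny fraction of parameters is trainable -- the point of LoRA.
model.print_trainable_parameters()
```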
Lesson 5: Compare the artisan's approach (custom code with Hugging Face libraries) versus the pragmatist's approach (managed platforms like AWS Bedrock). Learn which path aligns with your team's expertise, timeline, and strategic goals.
Lesson 6: Develop a systematic framework for choosing base models based on infrastructure constraints and benchmark performance. Learn to read leaderboards wisely, use tools like Vellum and Chatbot Arena, and avoid common pitfalls in model selection.
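A useful first-pass infrastructure check is a back-of-the-envelope memory estimate: parameter count times bytes per parameter, plus overhead for activations and the KV cache. The sketch below uses an assumed 20% overhead factor for illustration; real requirements depend on batch size, sequence length, and the serving stack.

```python
# Rough VRAM estimate for inference. The overhead factor is an assumption.
def estimate_inference_vram_gb(
    num_params_billions: float,
    bytes_per_param: float = 2.0,   # fp16/bf16; ~1.0 for int8, ~0.5 for 4-bit
    overhead_factor: float = 1.2,   # activations, KV cache, framework overhead
) -> float:
    weights_gb = num_params_billions * bytes_per_param  # 1B params * 1 byte ~= 1 GB
    return weights_gb * overhead_factor


# Example: a 7B model in fp16 needs roughly 17 GB of GPU memory.
print(f"{estimate_inference_vram_gb(7):.1f} GB")
```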
Lesson 7: Set up your zero-configuration development environment with Google Colab. Understand its capabilities, limitations, subscription tiers, and best practices for reliable LLM experimentation with cloud GPUs.
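Once a GPU runtime is attached in Colab (Runtime → Change runtime type), a quick sanity check confirms that PyTorch can actually see it; a minimal sketch:

```python
# Sanity check that the Colab runtime exposes a usable GPU to PyTorch.
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"Total GPU memory: {total_gb:.1f} GB")
else:
    print("No GPU detected -- check the Colab runtime type.")
```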
Lesson 8: Learn what the six core benchmarks (IFEval, BBH, MATH, GPQA, MuSR, MMLU-PRO) actually measure and how to reproduce leaderboard results locally using lm-evaluation-harness. Develop the skills to evaluate models independently and objectively.
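As a preview of reproducing results locally, the sketch below uses lm-evaluation-harness's Python entry point. Exact task names and arguments vary across harness versions, so treat the model id and the mmlu task as placeholders rather than the lesson's exact setup.

```python
# Illustrative sketch of a local benchmark run with lm-evaluation-harness
# (pip install lm-eval). Task names and API details vary by version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                    # Hugging Face backend
    model_args="pretrained=gpt2",  # placeholder model id
    tasks=["mmlu"],                # placeholder task name
    batch_size=8,
)

# Per-task metrics (accuracy, etc.) are under results["results"].
print(results["results"])
```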
Before working through these materials, you should have:
- Intermediate Python programming skills (functions, classes, modules)
- Familiarity with Hugging Face, PyTorch, or similar ML frameworks
- Experience with LLM APIs and foundational NLP concepts
- Understanding of basic ML workflows (training loops, evaluation, model saving)
- Comfort with command-line environments and package management
git clone https://github.com/readytensor/rt-llm-eng-cert-week1.git
cd rt-llm-eng-cert-week1
Navigate to the lessons/ directory and open the lesson files in your preferred Markdown reader or IDE. Lessons are designed to be read sequentially from Lesson 1 through Lesson 8.
Code examples are provided in the code/ directory. You can run Jupyter notebooks locally or upload them to Google Colab for cloud-based execution.
All code examples are designed to run in Google Colab, which provides free GPU access for experimentation. To use Colab:
- Go to Google Colab
- Upload the notebook from the code/ directory
- Follow the instructions in Lesson 7 for best practices on using Colab effectively
- Official Program Page: LLM Engineering and Deployment Certification
- Ready Tensor Platform: app.readytensor.ai
- Community Support: Join the Ready Tensor Discord for discussions and support
See the LICENSE file for details on usage rights and restrictions.
Ready Tensor is a platform dedicated to advancing practical AI engineering education through hands-on, project-based certification programs. The platform focuses on teaching production-relevant skills using real-world workflows and industry-standard tools.
Copyright: Ready Tensor, Inc.