A comprehensive full-day workshop covering the fundamentals and advanced concepts of Large Language Models (LLMs), from theoretical foundations to practical implementation and deployment strategies.
> **Note:** Visit the workshop website to navigate with ease.
📅 Date: August 23rd 2025
🎟️ Registration: Workshop Link
## Workshop Outline

- Fundamentals of text representation
- Contextual embeddings using transformers
- Internals of the transformer architecture: the attention mechanism, embeddings, and the core components that make up large language models
- Hugging Face pipelines for the different tasks a language model can handle: classification, text generation, etc. (see the sketch after this list)
- Fine-tuning a pretrained GPT-2 for code generation
- LLM optimizations:
  - PEFT
  - Quantization/LoRA
  - Instruction tuning
- LLM alignment/performance tuning using RLHF/PPO
- RAG
- LangChain
- DSPy
- Tool/function calling
- MCP
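
To give a flavour of the pipelines item above, here is a minimal sketch of two Hugging Face pipelines. It assumes the `transformers` package is installed; the models shown are illustrative defaults, not necessarily the ones used in the workshop notebooks.

```python
# Minimal sketch: Hugging Face pipelines for two common tasks.
# Assumes `pip install transformers torch`; model choices are illustrative.
from transformers import pipeline

# Text classification: uses a default sentiment model downloaded on first run.
classifier = pipeline("sentiment-analysis")
print(classifier("This workshop looks great!"))

# Text generation with GPT-2, the same base model fine-tuned later for code generation.
generator = pipeline("text-generation", model="gpt2")
print(generator("def hello_world():", max_new_tokens=20))
```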
## Setup Instructions
Before attending the workshop, please ensure you have the following:

- An account on huggingface.co
- An account on github.com
- LLM API keys (a quick sanity check follows below):
  - OpenAI/Gemini/Claude, OR
  - Ollama for local LLMs
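
You can verify that your API keys are visible to your environment with a small check like the one below; the variable names shown are the providers' common defaults and are assumptions here, so adjust them to match your setup (skip this if you are using Ollama locally).

```python
# Minimal sketch: confirm LLM API keys are exported in your environment.
# The variable names are the providers' usual defaults (an assumption here);
# rename them to whatever your setup actually uses.
import os

for key in ("OPENAI_API_KEY", "GOOGLE_API_KEY", "ANTHROPIC_API_KEY"):
    print(f"{key}: {'set' if os.environ.get(key) else 'missing'}")
```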
```bash
# Clone the repository
git clone https://github.com/raghavbali/mastering_llms_workshop_dhs2025.git
cd mastering_llms_workshop_dhs2025
```

- Notebooks are self-contained for quick setup
- Modules are designed for low-resource setups and are Colab-compatible
## Prerequisites
- Familiarity with Python, PyTorch, and the broader Python ecosystem
- Understanding of neural networks and deep learning concepts
A huge round of thanks to the amazing teams at:
If you use materials from this workshop in your research or projects, please cite:
```bibtex
@misc{mastering_llms_workshop_2025,
  title={Mastering LLMs: Training, Fine-Tuning, and Best Practices},
  author={Raghav Bali},
  year={2025},
  url={https://github.com/raghavbali/mastering_llms_workshop_dhs2025},
  note={Workshop materials for DHS 2025}
}
```

For questions about the workshop content or materials, please open an issue in this repository.
Author:
- 💼 LinkedIn: https://www.linkedin.com/in/baliraghav
- 🌐 Personal Website: https://raghavbali.github.io/