# LangGraph Mastery: From Simple Workflows to Advanced AI Systems

A comprehensive, hands-on tutorial that teaches all LangGraph capabilities through a progressive project that evolves from a basic local LLM-powered assistant to a sophisticated AI research system with human-in-the-loop capabilities, memory, and advanced orchestration.

## 🎯 Project Overview

This tutorial builds a Research Assistant AI that can:

  • Answer questions using local Ollama LLMs and knowledge bases
  • Maintain conversation memory and context
  • Handle complex multi-step research tasks
  • Collaborate with humans through approval workflows
  • Recover from errors gracefully
  • Scale across multiple agents and tools
  • Run completely offline and private - no external API calls required

**Key Learning Philosophy:** Each chapter builds upon the previous one, enhancing the same codebase rather than starting from scratch. By the end, you'll have a production-ready AI system showcasing all LangGraph capabilities.

## 📚 Learning Path

### Chapter 1: Foundation - Simple Linear Workflow with Local LLM

**What you'll build:** Basic Q&A bot with local Ollama LLM and knowledge base

**LangGraph concepts:** Nodes, edges, state management, basic graphs, local LLM integration

```
# Simple flow: Question → Knowledge Search → Local LLM Query → Format Answer
User Question → Search Knowledge Base → Query Ollama → Format Response
```

Features introduced:

  • Basic StateGraph creation
  • Simple node functions
  • Linear workflow execution
  • State passing between nodes
  • Local LLM integration with Ollama
  • Knowledge base search and context provision
  • Privacy-first AI development

### Chapter 2: Adding Intelligence - Conditional Logic

**Enhancement:** Smart routing based on question type

**LangGraph concepts:** Conditional edges, decision nodes

```
# Enhanced flow with routing
Question → Classify Intent → [Local LLM | Knowledge Base | Web Search*] → Response
```

New features:

  • Conditional routing with `add_conditional_edges()`
  • Intent classification node
  • Multiple execution paths
  • Dynamic workflow decisions
  • *Optional web search integration for comparison
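The routing decision itself is just a function from state to a branch name. Here is a framework-free sketch of that idea, using an illustrative keyword classifier and stub handlers; in LangGraph the same decision function would be handed to `add_conditional_edges()`.

```python
def classify_intent(question: str) -> str:
    # Toy classifier: route on keywords. A real graph might ask the LLM instead.
    q = question.lower()
    if any(w in q for w in ("latest", "today", "news")):
        return "web_search"
    if any(w in q for w in ("document", "knowledge", "docs")):
        return "knowledge_base"
    return "llm"


# Stub handlers standing in for the three execution paths.
handlers = {
    "llm": lambda q: f"LLM answers: {q}",
    "knowledge_base": lambda q: f"KB lookup for: {q}",
    "web_search": lambda q: f"Web search for: {q}",
}


def route(question: str) -> str:
    # Dispatch to whichever path the classifier picked.
    return handlers[classify_intent(question)](question)
```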

### Chapter 3: Tool Integration - Multiple LLMs and Sources

**Enhancement:** Multiple LLM models and document processing

**LangGraph concepts:** Tool calling, error handling, model switching

```
# Multi-model integration
Question → Route → [Ollama Models | Document Analysis | Optional APIs] → Synthesize → Response
```

New features:

  • Multiple Ollama model integrations (different sizes/specializations)
  • Document upload and processing with local models
  • Tool error handling and fallbacks
  • Result synthesis from multiple local sources
  • Optional external API integration for comparison
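The fallback pattern behind "tool error handling and fallbacks" can be sketched without any framework: try each model caller in order and move to the next on failure. The model names and stub callers below are illustrative, not the tutorial's actual code.

```python
def call_with_fallback(callers, prompt):
    # Try each (name, caller) pair in order; fall back to the next on error.
    errors = []
    for name, caller in callers:
        try:
            return name, caller(prompt)
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all models failed: {errors}")


def big_model(prompt):
    # Stub for a large Ollama model that happens to be unavailable.
    raise ConnectionError("model not loaded")


def small_model(prompt):
    # Stub for a smaller, always-loaded fallback model.
    return f"small-model answer to: {prompt}"


name, answer = call_with_fallback(
    [("llama3.2:latest", big_model), ("llama3.2:3b", small_model)],
    "Summarize this paper",
)
```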

### Chapter 4: Memory and Context - Persistent Conversations

**Enhancement:** Conversation history and contextual responses

**LangGraph concepts:** Persistent state, memory management

```
# Context-aware conversations
[Previous Context] → Question → Enhanced Routing → Tools → Context Update → Response
```

New features:

  • Conversation memory implementation
  • Context-aware question processing
  • State persistence between sessions
  • Follow-up question handling
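A minimal sketch of the memory idea: keep the last N turns and expose them as context for the next question. This is a plain-Python illustration; the chapter itself persists state with LangGraph checkpointing rather than this hypothetical class.

```python
from collections import deque


class ConversationMemory:
    def __init__(self, max_turns: int = 5):
        # Bounded history: the oldest turn drops off automatically.
        self.turns = deque(maxlen=max_turns)

    def add(self, question: str, answer: str) -> None:
        self.turns.append((question, answer))

    def context(self) -> str:
        # Rendered history to prepend to the next prompt.
        return "\n".join(f"Q: {q}\nA: {a}" for q, a in self.turns)


memory = ConversationMemory(max_turns=2)
memory.add("What is LangGraph?", "A graph framework for LLM apps.")
memory.add("Who maintains it?", "The LangChain team.")
memory.add("Is it open source?", "Yes.")  # evicts the oldest turn
```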

### Chapter 5: Human-in-the-Loop - Collaborative AI

**Enhancement:** Human approval for sensitive operations

**LangGraph concepts:** Interrupts, human feedback, approval workflows

```
# Human collaboration workflow
Question → Plan → [Auto Execute | Request Approval] → Execute → Human Review → Finalize
```

New features:

  • Human approval nodes with `interrupt_before`
  • Approval workflow implementation
  • Human feedback integration
  • Sensitive operation detection
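The approval gate can be sketched as: detect a sensitive plan, then block execution until a human decision arrives. In LangGraph the pause itself comes from compiling with `interrupt_before`; this illustrative sketch instead injects the decision as a plain callback, and the sensitive-term list is hypothetical.

```python
SENSITIVE_TERMS = ("delete", "send email", "purchase")


def needs_approval(plan: str) -> bool:
    # Flag plans that touch a sensitive operation.
    return any(term in plan.lower() for term in SENSITIVE_TERMS)


def execute_plan(plan: str, ask_human) -> str:
    # Sensitive plans only run if the human callback approves them.
    if needs_approval(plan) and not ask_human(plan):
        return f"rejected: {plan}"
    return f"executed: {plan}"
```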

### Chapter 6: Advanced Orchestration - Multi-Agent Collaboration

**Enhancement:** Specialized agents working together

**LangGraph concepts:** Sub-graphs, agent delegation, complex orchestration

```
# Multi-agent system
Main Agent → Task Analysis → [Research Agent | Analysis Agent | Writing Agent] → Coordination → Final Output
```

New features:

  • Multiple specialized agents
  • Sub-graph implementation
  • Agent-to-agent communication
  • Task delegation and coordination
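The delegation shape can be shown with three stub agents: the coordinator passes a task through research, analysis, and writing, threading each output into the next. This is a deliberately simplified, framework-free sketch; the chapter implements each stage as a sub-graph rather than these hypothetical functions.

```python
def research_agent(task: str) -> str:
    # Stub: would gather sources with a local LLM and tools.
    return f"findings on {task}"


def analysis_agent(findings: str) -> str:
    # Stub: would distill and evaluate the findings.
    return f"analysis of {findings}"


def writing_agent(analysis: str) -> str:
    # Stub: would draft the final report.
    return f"report: {analysis}"


def coordinator(task: str) -> str:
    # Delegation: each specialist consumes the previous one's output.
    return writing_agent(analysis_agent(research_agent(task)))
```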

### Chapter 7: Error Handling and Recovery - Robust Systems

**Enhancement:** Comprehensive error handling and self-recovery

**LangGraph concepts:** Error nodes, retry logic, fallback strategies

```
# Robust error handling
Any Node → [Success | Error] → [Continue | Retry | Fallback | Human Escalation] → Recovery
```

New features:

  • Comprehensive error handling
  • Automatic retry mechanisms
  • Fallback strategies
  • Human escalation for critical failures
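The retry mechanism above amounts to: re-run a flaky node with exponential backoff, then re-raise so a fallback or human-escalation path can take over. A minimal sketch, with a stub node that fails twice before succeeding:

```python
import time


def with_retry(fn, attempts=3, base_delay=0.01):
    # Re-run fn with exponential backoff; re-raise after the final attempt.
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # escalate to fallback / human review
            time.sleep(base_delay * 2 ** attempt)


calls = {"n": 0}


def flaky_node():
    # Stub node that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("LLM timed out")
    return "recovered"


result = with_retry(flaky_node)
```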

### Chapter 8: Streaming and Real-time - Live Interactions

**Enhancement:** Real-time streaming responses and live updates

**LangGraph concepts:** Streaming execution, real-time updates

```
# Streaming workflow
Question → Stream Planning → Stream Execution → Live Updates → Final Response
```

New features:

  • Streaming response generation
  • Real-time progress updates
  • Live workflow visualization
  • Asynchronous processing
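Streaming boils down to yielding chunks as they are produced instead of returning one final string; consumers see partial output immediately, which is the same pattern LangGraph's streaming execution gives the caller. A stub generator stands in for token-by-token LLM output here:

```python
def stream_tokens(text: str, chunk_size: int = 4):
    # Stub: yield the answer in small chunks, like tokens from an LLM.
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]


# The caller can render each chunk live, then keep the joined final answer.
chunks = list(stream_tokens("LangGraph streams updates"))
answer = "".join(chunks)
```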

### Chapter 9: Advanced State Management - Complex Data Flows

**Enhancement:** Sophisticated state handling and data transformation

**LangGraph concepts:** Custom state classes, state transformations, parallel processing

```
# Complex state management
Multi-Input → Parallel Processing → State Merging → Complex Transformations → Output
```

New features:

  • Custom state classes
  • Parallel node execution
  • State transformation functions
  • Complex data flow patterns
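The parallel-then-merge shape can be sketched with a thread pool: run independent branch functions concurrently, then merge their partial state updates back into one state. The two branch functions are stubs for illustration; inside a graph, LangGraph performs the fan-out and merge itself.

```python
from concurrent.futures import ThreadPoolExecutor


def run_parallel(branches, state):
    # Run each branch concurrently against the same input state.
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda fn: fn(state), branches))
    # Merge the partial updates into a new state dict.
    merged = dict(state)
    for partial in partials:
        merged.update(partial)
    return merged


def summarize(state):
    # Stub branch: produce a summary field.
    return {"summary": f"summary of {state['topic']}"}


def keyword_extract(state):
    # Stub branch: produce a keywords field.
    return {"keywords": [state["topic"]]}


merged = run_parallel([summarize, keyword_extract], {"topic": "ollama"})
```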

### Chapter 10: Production Deployment - Scalable AI Systems

**Enhancement:** Production-ready deployment with monitoring

**LangGraph concepts:** Deployment patterns, monitoring, scaling

```
# Production system
Load Balancer → Multiple Graph Instances → Monitoring → Logging → Analytics
```

New features:

  • Docker containerization
  • Kubernetes deployment manifests
  • Monitoring and observability
  • Performance optimization
  • Scaling strategies

## 🚀 Quick Start

### Prerequisites

```bash
# Python 3.9+
python --version

# Install Ollama (visit https://ollama.ai for your platform)
# macOS/Linux:
curl -fsSL https://ollama.ai/install.sh | sh

# Install Python dependencies
pip install langgraph langchain-ollama ollama python-dotenv streamlit
```

### Environment Setup

```bash
# Clone and set up
git clone <your-repo>
cd langgraph-mastery

# Start the Ollama server
ollama serve

# Pull the model (in another terminal)
ollama pull llama3.2:latest

# Set up the environment
cp env.example .env
# Defaults should work, but you can customize Ollama settings

# Install all dependencies
pip install -r requirements.txt
```

### Run Chapter 1

```bash
# Quick setup verification
python setup_chapter1.py

# Start with the basics
python chapter_01_foundation/basic_workflow.py

# Or run the demo
python chapter_01_foundation/demo.py
```

๐Ÿ“ Project Structure

langgraph-mastery/
โ”œโ”€โ”€ README.md                          # This file
โ”œโ”€โ”€ requirements.txt                   # Python dependencies
โ”œโ”€โ”€ env.example                        # Environment variables template
โ”œโ”€โ”€ OLLAMA_SETUP.md                    # Comprehensive Ollama setup guide
โ”œโ”€โ”€ docker-compose.yml                 # Local development setup
โ”œโ”€โ”€ kubernetes/                        # K8s deployment manifests
โ”‚   โ”œโ”€โ”€ namespace.yaml
โ”‚   โ”œโ”€โ”€ deployment.yaml
โ”‚   โ””โ”€โ”€ service.yaml
โ”œโ”€โ”€ setup_chapter1.py                  # Chapter 1 setup verification script
โ”œโ”€โ”€ app.py                             # Main Streamlit application
โ”œโ”€โ”€ shared/                            # Shared utilities and components
โ”‚   โ”œโ”€โ”€ __init__.py
โ”‚   โ”œโ”€โ”€ state.py                       # State management classes
โ”‚   โ”œโ”€โ”€ tools.py                       # Tool implementations (Ollama, Knowledge Base)
โ”‚   โ”œโ”€โ”€ agents.py                      # Agent definitions
โ”‚   โ””โ”€โ”€ utils.py                       # Helper functions
โ”œโ”€โ”€ chapter_01_foundation/
โ”‚   โ”œโ”€โ”€ README.md                      # Chapter-specific instructions
โ”‚   โ”œโ”€โ”€ basic_workflow.py              # Simple linear workflow
โ”‚   โ”œโ”€โ”€ state_management.py            # Basic state handling
โ”‚   โ””โ”€โ”€ tests/                         # Unit tests
โ”œโ”€โ”€ chapter_02_conditional/
โ”‚   โ”œโ”€โ”€ README.md
โ”‚   โ”œโ”€โ”€ conditional_routing.py         # Enhanced workflow with routing
โ”‚   โ”œโ”€โ”€ intent_classifier.py          # Question classification
โ”‚   โ””โ”€โ”€ tests/
โ”œโ”€โ”€ chapter_03_tools/
โ”‚   โ”œโ”€โ”€ README.md
โ”‚   โ”œโ”€โ”€ multi_tool_integration.py     # Multiple tool usage
โ”‚   โ”œโ”€โ”€ tool_implementations.py       # Custom tools
โ”‚   โ””โ”€โ”€ tests/
โ”œโ”€โ”€ chapter_04_memory/
โ”‚   โ”œโ”€โ”€ README.md
โ”‚   โ”œโ”€โ”€ persistent_memory.py          # Conversation memory
โ”‚   โ”œโ”€โ”€ context_management.py         # Context handling
โ”‚   โ””โ”€โ”€ tests/
โ”œโ”€โ”€ chapter_05_human_loop/
โ”‚   โ”œโ”€โ”€ README.md
โ”‚   โ”œโ”€โ”€ approval_workflow.py          # Human-in-the-loop
โ”‚   โ”œโ”€โ”€ human_interface.py            # Human interaction
โ”‚   โ””โ”€โ”€ tests/
โ”œโ”€โ”€ chapter_06_multi_agent/
โ”‚   โ”œโ”€โ”€ README.md
โ”‚   โ”œโ”€โ”€ agent_orchestration.py        # Multi-agent coordination
โ”‚   โ”œโ”€โ”€ specialized_agents.py         # Individual agents
โ”‚   โ””โ”€โ”€ tests/
โ”œโ”€โ”€ chapter_07_error_handling/
โ”‚   โ”œโ”€โ”€ README.md
โ”‚   โ”œโ”€โ”€ robust_workflows.py           # Error handling
โ”‚   โ”œโ”€โ”€ recovery_strategies.py        # Recovery mechanisms
โ”‚   โ””โ”€โ”€ tests/
โ”œโ”€โ”€ chapter_08_streaming/
โ”‚   โ”œโ”€โ”€ README.md
โ”‚   โ”œโ”€โ”€ streaming_responses.py        # Real-time streaming
โ”‚   โ”œโ”€โ”€ live_updates.py               # Progress tracking
โ”‚   โ””โ”€โ”€ tests/
โ”œโ”€โ”€ chapter_09_advanced_state/
โ”‚   โ”œโ”€โ”€ README.md
โ”‚   โ”œโ”€โ”€ complex_state.py              # Advanced state management
โ”‚   โ”œโ”€โ”€ parallel_processing.py        # Concurrent execution
โ”‚   โ””โ”€โ”€ tests/
โ””โ”€โ”€ chapter_10_production/
    โ”œโ”€โ”€ README.md
    โ”œโ”€โ”€ production_app.py              # Production-ready application
    โ”œโ”€โ”€ monitoring.py                  # Observability
    โ”œโ”€โ”€ Dockerfile                     # Container definition
    โ””โ”€โ”€ tests/

## 🎮 Interactive Learning Experience

### Web Interface

Each chapter includes a Streamlit web interface for interactive exploration:

```bash
# Run a specific chapter (arguments after -- go to the app, not to Streamlit)
streamlit run app.py -- --chapter 3

# Compare chapters
streamlit run app.py -- --compare "1,3,5"

# Full application (Chapter 10)
streamlit run app.py -- --production
```

### Command Line Tools

```bash
# Test individual components
python -m chapter_03.tools.test_ollama_models

# Run a workflow with custom input
python -m chapter_05.approval_workflow --input "Research quantum computing applications"

# Performance benchmarking
python -m chapter_10.benchmark --iterations 100

# Ollama model management
ollama list                    # Show installed models
ollama pull llama3.2:3b       # Pull a different model size
ollama rm old-model           # Remove unused models
```

## 🧪 Testing Strategy

Each chapter includes comprehensive tests:

```bash
# Run all tests
pytest

# Test a specific chapter
pytest chapter_03/tests/

# Integration tests
pytest tests/integration/

# Performance tests
pytest tests/performance/ --benchmark-only
```

## 📊 Progress Tracking

### Learning Checkpoints

  • Chapter 1: Create basic linear workflow
  • Chapter 2: Implement conditional routing
  • Chapter 3: Integrate multiple tools
  • Chapter 4: Add persistent memory
  • Chapter 5: Build human approval system
  • Chapter 6: Create multi-agent system
  • Chapter 7: Implement error recovery
  • Chapter 8: Add streaming capabilities
  • Chapter 9: Master advanced state management
  • Chapter 10: Deploy production system

### Skills Mastered

By completion, you'll master:

  • ✅ LangGraph fundamentals and architecture
  • ✅ State management and data flow
  • ✅ Conditional logic and routing
  • ✅ Tool integration and API usage
  • ✅ Memory and context management
  • ✅ Human-AI collaboration patterns
  • ✅ Multi-agent orchestration
  • ✅ Error handling and recovery
  • ✅ Streaming and real-time processing
  • ✅ Production deployment and scaling

## 🔧 Development Setup

### Local Development

```bash
# Development server with hot reload
streamlit run app.py --server.runOnSave true

# Debug mode
python -m debugpy --listen 5678 --wait-for-client app.py
```

### Docker Development

```bash
# Build and run locally
docker-compose up --build

# Development with volume mounts
docker-compose -f docker-compose.dev.yml up
```

### Kubernetes Local Testing

```bash
# Apply namespace and resources
kubectl apply -f kubernetes/

# Port-forward for local access
kubectl port-forward service/research-assistant 8501:80

# View logs
kubectl logs -f deployment/research-assistant
```

## 📈 Advanced Features

### Chapter 6+ Exclusive Features

  • **Multi-Agent Coordination**: Specialized local LLM agents for research, analysis, and writing
  • **Dynamic Model Selection**: The system chooses the optimal Ollama model based on task complexity
  • **Adaptive Workflows**: Graphs that modify themselves based on performance
  • **Human Collaboration**: Seamless human-AI teamwork with approval workflows
  • **Error Recovery**: Self-healing systems that recover from local LLM failures
  • **Performance Optimization**: Parallel processing, model caching, and load balancing

### Production Features (Chapter 10)

  • **Horizontal Scaling**: Multiple Ollama instances with load balancing
  • **Monitoring Dashboard**: Real-time performance and usage analytics
  • **A/B Testing**: Compare different model configurations
  • **Resource Management**: Intelligent model loading and unloading
  • **Privacy Compliance**: Fully local processing for sensitive data
  • **Audit Logging**: Complete interaction history without external dependencies

๐Ÿค Contributing

We welcome contributions! Please see our Contributing Guide for details.

Adding New Chapters

  1. Follow the established structure in chapter_XX/
  2. Ensure backward compatibility with previous chapters
  3. Include comprehensive tests and documentation
  4. Update the main application to support the new chapter

## 📚 Additional Resources

- LangGraph Documentation
- Related Technologies
- Ollama Resources

๐Ÿ† Certification

Complete all chapters and pass the final assessment to earn your LangGraph Mastery Certificate!

Assessment Criteria

  • All chapter checkpoints completed โœ…
  • Final project demonstrates all learned concepts โœ…
  • Code quality meets production standards โœ…
  • Successful deployment to Kubernetes โœ…

## 🚀 Ready to Start?

Begin your LangGraph mastery journey with local LLMs:

```bash
# 1. Start Ollama
ollama serve

# 2. Pull the model (in another terminal)
ollama pull llama3.2:latest

# 3. Verify your setup
python setup_chapter1.py

# 4. Start learning!
cd chapter_01_foundation
python basic_workflow.py
```

**Happy Learning!** 🎉


## 🔒 Privacy & Cost Benefits

### Why Local LLMs with Ollama?

  • ✅ **Complete Privacy**: No data sent to external services
  • ✅ **Zero API Costs**: No per-token charges or rate limits
  • ✅ **Offline Capable**: Works without an internet connection
  • ✅ **Full Control**: Choose your models and update schedules
  • ✅ **Production Ready**: Deploy anywhere without external dependencies
  • ✅ **Learning Focused**: Understand AI systems from the ground up

This project is designed to take you from LangGraph beginner to expert through hands-on, incremental learning with privacy-first local LLMs. Each chapter builds real capabilities while teaching core concepts. By the end, you'll have both deep knowledge and a production-ready AI system that runs completely under your control.
