πŸ€– AI Agents & LangGraph Projects

This repository showcases multiple agent systems and LangGraph workflow examples built with modern AI tooling. All projects are fully documented in English with comprehensive examples and usage instructions.

πŸ“Έ Screenshots

πŸŽ₯ YouTube QA Agent

YouTube QA - System Diagram
System Diagram

YouTube QA - Home & Configuration
Home & Configuration

YouTube QA - URL Input & Processing
URL Input & Processing

YouTube QA - Q&A and Results
Q&A Interface and Results

🀝 A2A-Agent (Multi-Agent Demo)

A2A-Agent - Screen 1
Screen 1

A2A-Agent - Screen 2
Screen 2

A2A-Agent - Screen 3
Screen 3

A2A-Agent - Screen 4
Screen 4


🎯 Featured Projects

πŸŽ₯ YouTube Video QA Agent

The most advanced project: extracts transcripts from YouTube videos and enables smart question-answering with a modern UI.

✨ Features:

  • 🎬 YouTube Processing: automatic transcript extraction
  • 🧠 Multi-LLM Support: LM Studio (local) + Google Gemini 2.5
  • πŸ’‘ Key Ideas Extraction: 3–5 core takeaways
  • 🌐 Modern Streamlit UI: web interface with embedded player
  • πŸ” Vector Search: FAISS-based fast retrieval
  • 🌎 Full English documentation
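
How the pieces fit together, as a hedged sketch (function names here are illustrative; the actual logic lives in youtube_qa_agent.py):

from youtube_transcript_api import YouTubeTranscriptApi
import faiss
import numpy as np

def fetch_transcript(video_id):
    # Each caption segment has "text", "start" and "duration" keys
    # (classic API; newer releases also offer an instance-based fetch())
    segments = YouTubeTranscriptApi.get_transcript(video_id)
    return [s["text"] for s in segments]

def build_index(chunks, embed):
    # embed: any callable mapping a list of strings to an (n, d) float32 array
    vectors = np.asarray(embed(chunks), dtype="float32")
    index = faiss.IndexFlatL2(vectors.shape[1])
    index.add(vectors)
    return index

def retrieve(index, chunks, embed, question, k=4):
    query = np.asarray(embed([question]), dtype="float32")
    _, ids = index.search(query, k)
    return [chunks[i] for i in ids[0]]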

πŸš€ Quickstart:

cd "Youtube Video - RAG - Agent"
streamlit run streamlit_app.py

πŸ“– Detailed Guide β†’


πŸ“Š Sequential Agent - CSV Data Analysis

A comprehensive multi-agent workflow for CSV data analysis using Gemini Code Execution. Executes real Python code for statistical analysis, visualization, and anomaly detection.

✨ Features:

  • πŸ“‚ Data Loading Agent: Reads and validates CSV files
  • πŸ” Analysis Agent: Structure analysis with Google Search integration
  • πŸ’» Code Generation Agent: Generates and executes Python code with Gemini Code Execution
  • πŸ”§ Error Correction Agent: Automatically fixes and retries failed code
  • πŸ“ˆ Visualization Agent: Creates charts with Matplotlib/Seaborn
  • 🚨 Anomaly Detection Agent: Identifies outliers using Z-score and IQR
  • πŸ”Ž Insight Agent: Extracts deep insights with Google Search
  • πŸ’‘ Recommendation Agent: Generates actionable recommendations
  • πŸ“Š Final Report Agent: Creates comprehensive executive summary

πŸš€ Quickstart:

cd "Sequential Agent"
python langchain_seq.py

Configuration: Set your Gemini API key in langchain_seq.py:

GEMINI_API_KEY = "your_api_key_here"

Workflow:

  1. Load CSV file
  2. Analyze data structure
  3. Generate and execute analysis code
  4. Fix errors (if any)
  5. Create visualizations
  6. Detect anomalies
  7. Extract insights
  8. Generate recommendations
  9. Create final report
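
For reference, the kind of Z-score / IQR check that step 6 asks Gemini to generate looks roughly like this (illustrative sketch; the real code is produced at run time):

import pandas as pd

def detect_outliers(series: pd.Series, z_thresh: float = 3.0):
    # Z-score rule: flag values more than z_thresh standard deviations from the mean
    z = (series - series.mean()) / series.std()
    z_outliers = series[z.abs() > z_thresh]

    # IQR rule: flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
    q1, q3 = series.quantile(0.25), series.quantile(0.75)
    iqr = q3 - q1
    iqr_outliers = series[(series < q1 - 1.5 * iqr) | (series > q3 + 1.5 * iqr)]
    return z_outliers, iqr_outliers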

🀝 A2A-Agent (Multi-Agent Demo)

Provides a simple multi-agent flow (MathAgent, WriterAgent) with an orchestrator, powered by LM Studio's OpenAI-compatible server.

✨ Features:

  • Math Agent: Performs mathematical calculations
  • Writer Agent: Generates text content
  • Orchestrator: Coordinates agent communication
  • LM Studio integration for local LLM support
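
Each agent talks to LM Studio through its OpenAI-compatible endpoint. A minimal sketch of that call (the URL and model below are the defaults used elsewhere in this README; adjust to your setup):

from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:1234/v1", api_key="lm-studio")

def ask(system_prompt, user_message):
    response = client.chat.completions.create(
        model="google/gemma-3n-e4b",  # any model loaded in LM Studio works
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content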

πŸš€ Quickstart:

cd A2A-Agent

# Run in separate terminals
python math_agent.py
python writer_agent.py
python orchestrator.py

πŸ“– A2A-Agent Docs β†’


πŸ”§ Agent Frameworks & Tools

πŸ“š LangGraph Examples

Examples built with the LangGraph library for building stateful, multi-actor applications.

1. Basic Flow (langraph_basic.py)

Basic loop: user message β†’ LLM β†’ repeat (stops if response contains "done")

flowchart LR
    U[Message] --> LLM[llm_node]
    LLM --> C{is "done" included?}
    C -->|No| LLM
    C -->|Yes / MAX_TURN| E[End]
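
A condensed sketch of the same loop in LangGraph (simplified; see langraph_basic.py for the full script with retries and the MAX_TURN guard):

from langgraph.graph import StateGraph, MessagesState, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(base_url="http://127.0.0.1:1234/v1",
                 api_key="lm-studio", model="google/gemma-3n-e4b")

def llm_node(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

def route(state: MessagesState):
    # Stop once the latest response contains "done"
    return END if "done" in state["messages"][-1].content.lower() else "llm_node"

builder = StateGraph(MessagesState)
builder.add_node("llm_node", llm_node)
builder.set_entry_point("llm_node")
builder.add_conditional_edges("llm_node", route)
graph = builder.compile()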

2. Thread / Memory (langraph_stream_memory.py)

Thread-based memory with InMemorySaver (thread_id isolates conversation history)

flowchart TB
    subgraph T1[Thread 1]
        Name[My name is Will] --> G1[Graph]
        G1 --> M1[(Memory)]
        M1 --> A1[Answer 1]
        A1 --> Recall[Do you remember my name?]
        Recall --> G1
    end
    subgraph T2[Thread 2]
        Recall2[Do you remember my name?] --> G2[Graph]
        G2 --> M2[(Memory)]
        M2 --> A2[Answer 2]
    end
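
Thread isolation comes down to compiling the graph with a checkpointer and passing a thread_id per conversation; roughly (reusing the builder from the previous sketch):

from langgraph.checkpoint.memory import InMemorySaver

graph = builder.compile(checkpointer=InMemorySaver())

# Thread 1 keeps its own history...
graph.invoke({"messages": [("user", "My name is Will")]},
             config={"configurable": {"thread_id": "1"}})
graph.invoke({"messages": [("user", "Do you remember my name?")]},
             config={"configurable": {"thread_id": "1"}})

# ...while thread 2 starts from a clean slate
graph.invoke({"messages": [("user", "Do you remember my name?")]},
             config={"configurable": {"thread_id": "2"}})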

3. Persona Branching (langraph_branch_personas.py)

Run the same prompt across different personas, then compare results (diff modes)

flowchart LR
    P[Prompt] --> F1[Warm persona]
    P --> F2[Formal persona]
    P --> F3[Instructor persona]
    P --> F4[Skeptical persona]
    F1 --> R1[Answer 1]
    F2 --> R2[Answer 2]
    F3 --> R3[Answer 3]
    F4 --> R4[Answer 4]
    R1 --> COL[Summary Table]
    R2 --> COL
    R3 --> COL
    R4 --> COL
    COL --> DIFF[Diff Analysis]
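
Conceptually the fan-out is the same prompt sent under different system personas, with the answers collected and diffed; a minimal sketch using difflib (persona wording here is illustrative):

import difflib

PERSONAS = {
    "warm": "You are a warm, encouraging assistant.",
    "formal": "You respond in a formal, precise tone.",
    "instructor": "You explain step by step like a teacher.",
    "skeptical": "You question assumptions before answering.",
}

def run_personas(prompt, ask):
    # ask(system_prompt, user_message) -> str: any LLM-calling helper
    return {name: ask(system, prompt) for name, system in PERSONAS.items()}

def unified_diff(a, b):
    return "\n".join(difflib.unified_diff(a.splitlines(), b.splitlines(), lineterm=""))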

4. Dynamic Temperature (langraph_dynamic_temperature.py)

Classify prompt type and select temperature automatically; optional comparison vs fixed temp

flowchart LR
    P2[Prompt] --> CLS[Heuristic Classification]
    CLS --> DYN[LLM dynamic]
    P2 --> FIX[LLM fixed]
    DYN --> CMP[Comparison]
    FIX --> CMP
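
The heuristic classification amounts to mapping the prompt type to a sampling temperature before the call; roughly (keywords and values are placeholders, the script's classifier is richer):

def pick_temperature(prompt: str) -> float:
    p = prompt.lower()
    if any(w in p for w in ("translate", "summarize", "extract", "fix")):
        return 0.2   # deterministic tasks
    if any(w in p for w in ("story", "poem", "brainstorm", "creative")):
        return 0.9   # open-ended generation
    return 0.6       # general questions

# The dynamic branch calls the LLM with the selected temperature,
# while the fixed branch uses --fixed-temperature for comparison.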

πŸš€ Quickstart:

cd Langraph

# Set environment variables
set LG_BASE_URL=http://127.0.0.1:1234/v1
set LG_API_KEY=lm-studio
set LG_MODEL=google/gemma-3n-e4b

# Run examples
python langraph_basic.py
python langraph_stream_memory.py
python langraph_branch_personas.py --prompt "Write a short motivational sentence"
python langraph_dynamic_temperature.py --prompt "Translate to French" --compare

Features:

  • Configurable via env vars (model, base URL, API key)
  • Retry for transient failures
  • Proper role mapping (user / assistant / system / tool)
  • Maximum turn limit (prevents infinite loops)
  • Logging for observability

πŸ› οΈ Tool Calling From Scratch

Educational project demonstrating tool calling with Google's Gemini AI. Shows both manual (educational) and production (recommended) approaches.

✨ Features:

  • Manual Approach: Shows how tool calling works under the hood (5-step process)
  • Production Approach: Uses native Gemini API for robust tool calling
  • Real Working Tools:
    • google_search - Web search using DuckDuckGo
    • scrape_url - Web scraping with BeautifulSoup
    • get_current_weather - Weather data from Open-Meteo API
    • calculate_math - Safe mathematical expression evaluation
    • get_current_time - Real time for any timezone
    • wikipedia_search - Wikipedia article summaries
    • get_exchange_rate - Real-time currency exchange rates
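
The manual approach in a nutshell: describe the tools to the model, have it emit a structured call, dispatch that call in Python, and feed the result back for the final answer. A hedged sketch of the dispatch step (the JSON contract and the placeholder tool bodies are assumptions of this sketch):

import json

TOOLS = {
    "get_current_time": lambda tz: f"(current time in {tz})",    # placeholder body
    "calculate_math":   lambda expr: f"(safe eval of {expr})",   # placeholder body
}

def dispatch(model_output: str) -> str:
    # Expect the model to reply with e.g. {"tool": "calculate_math", "args": ["2 + 2"]}
    call = json.loads(model_output)
    result = TOOLS[call["tool"]](*call["args"])
    # The result is sent back to the model so it can compose the final answer
    return result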

πŸš€ Quickstart:

cd "Tool Calling From Scratch"

# Set API key
set GEMINI_API_KEY=your_api_key_here

# Run application
python app.py

Menu Options:

  • 1 - Manual Tool Calling Demo (Educational)
  • 2 - Production Tool Calling Demo (Recommended)
  • 3 - Interactive Mode (Chat with the AI)
  • 4 - Run All Demos

πŸ“– Detailed Docs β†’


⚑ Groq - Mixture of Agents

Advanced agent system using Groq API with rate limit management, ReAct agent pattern, and web search capabilities.

✨ Features:

  • Groq API integration with free tier optimization
  • Rate limit management (TPM/RPM tracking)
  • ReAct agent pattern (Reasoning + Acting)
  • DuckDuckGo web search integration
  • Rich console output for better readability
  • Conversation memory management
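
The rate-limit bookkeeping is essentially a sliding window over recent requests and tokens; a minimal sketch (the limits below are placeholders, check your Groq tier):

import time
from collections import deque

class RateLimiter:
    def __init__(self, rpm=30, tpm=6000):
        self.rpm, self.tpm = rpm, tpm
        self.events = deque()  # (timestamp, tokens) pairs from the last 60 seconds

    def _trim(self, now):
        while self.events and now - self.events[0][0] > 60:
            self.events.popleft()

    def wait(self, tokens):
        # Block until sending `tokens` more tokens stays under both RPM and TPM
        self._trim(time.time())
        while len(self.events) >= self.rpm or sum(t for _, t in self.events) + tokens > self.tpm:
            time.sleep(1)
            self._trim(time.time())
        self.events.append((time.time(), tokens))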

πŸš€ Quickstart:

cd "Groq - Mixture of Agents"

# Set API key
set GROQ_API_KEY=your_api_key_here

# Run agent
python advanced_agents.py

Files:

  • advanced_agents.py - Main agent implementation
  • duckduckgo_agent.py - Web search agent
  • app.ipynb - Jupyter notebook examples

πŸ—„οΈ Database & Storage Agents

πŸƒ MongoDB SQL Talk

Intelligent agent that lets you interact with MongoDB databases in natural language. Supports dynamic collection detection and automatic schema analysis.

✨ Features:

  • Natural Language Understanding: "find users whose name is Ahmet"
  • Dynamic Collection Detection: Works with any collection name
  • Smart Data Insertion: "add a new user"
  • Automatic Schema Analysis: Detects existing fields
  • Web Interface: User-friendly modern web UI
  • LM Studio Integration: Local LLM support
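
Collection detection and schema analysis boil down to a couple of PyMongo calls; sketched roughly (the database name is an assumption):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]  # database name is an assumption

def list_collections():
    return db.list_collection_names()

def infer_schema(collection_name, sample_size=20):
    # Union of field names across a small sample of documents
    fields = set()
    for doc in db[collection_name].find().limit(sample_size):
        fields.update(doc.keys())
    return sorted(fields)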

πŸš€ Quickstart:

cd "Mongodb SQL Talk"

# Start MongoDB and LM Studio first
python mongodb-langchain-agent-clean.py

Open http://localhost:5000 in your browser.

Example Queries:

  • "list collections"
  • "show the first 5 records in the users table"
  • "find users whose name is Ahmet"
  • "add a user: name Mehmet, surname Kaya, age 30"
  • "how many users are there?"

πŸ“– Detailed Docs β†’


πŸ–₯️ Local LLM Integration

πŸ¦™ Ollama

Web search integration with Ollama local LLM for enhanced agent capabilities.

✨ Features:

  • Ollama local LLM integration
  • Web search capabilities
  • Simple agent implementation
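
The core pattern is a DuckDuckGo search whose results are handed to the local model as context; a minimal sketch (the model name is an assumption):

import ollama
from duckduckgo_search import DDGS

def answer_with_search(question, model="llama3"):
    hits = DDGS().text(question, max_results=5)
    context = "\n".join(h["body"] for h in hits)
    response = ollama.chat(model=model, messages=[
        {"role": "system", "content": "Answer using these search results:\n" + context},
        {"role": "user", "content": question},
    ])
    return response["message"]["content"]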

πŸš€ Quickstart:

cd Ollama

# Start Ollama first
ollama serve

# Run agent
python web_search.py

πŸ“¦ Agno

Collection of advanced agent projects including RAG agents, SQLite storage, structured output, and Ollama integration.

✨ Features:

  • RAG (Retrieval-Augmented Generation) agent
  • SQLite storage integration
  • Structured output generation
  • Ollama local LLM support
  • CSV analysis capabilities

πŸš€ Quickstart:

cd Agno

# Install dependencies
pip install -r requirements_rag.txt  # For RAG agent
pip install -r requirements_advanced.txt  # For advanced features

# Run specific agent
python ollama-rag-agent.py
python csv_analysis.py
python Structured-output.py

Files:

  • ollama-rag-agent.py - RAG agent with Ollama
  • csv_analysis.py - CSV data analysis
  • sqlite-storage.py - SQLite storage integration
  • Structured-output.py - Structured output generation
  • app.py - Main application

πŸŽ“ Specialized Agents

🐍 Phidata-Agent

Python execution agent using Phidata framework for code execution and agent management.

✨ Features:

  • Python code execution
  • Phidata framework integration
  • Agent orchestration

πŸš€ Quickstart:

cd Phidata-Agent
python python-execute-agent.py

πŸ”¬ AgentScope

Agent framework example using AgentScope for multi-agent systems.

✨ Features:

  • AgentScope framework integration
  • Multi-agent communication
  • Agent orchestration

πŸš€ Quickstart:

cd AgentScope
python agentscope_example.py

🐝 BeeAI Framework

FastAPI-based agent framework with web interface for building agent applications.

✨ Features:

  • FastAPI backend
  • Modern web interface
  • Agent management UI
  • RESTful API
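
The basic shape of a FastAPI agent endpoint (the route and payload are illustrative; see fastapi_app.py for the actual API):

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(req: ChatRequest):
    # Forward the message to the agent backend and return its reply
    reply = f"(agent reply to: {req.message})"  # placeholder for the real agent call
    return {"reply": reply}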

πŸš€ Quickstart:

cd "BeeAI Framework"

# Run FastAPI app
python fastapi_app.py

# Or run Flask app
python app.py

Open http://localhost:8000 (FastAPI) or http://localhost:5000 (Flask) in your browser.


🧩 General

General AI agent system with customizable agent configurations.

✨ Features:

  • Configurable agent system
  • Multiple agent types
  • Extensible architecture

πŸš€ Quickstart:

cd General
pip install -r requirements.txt
python ai_agent_system.py

πŸ“ Project Structure

Agents-Notebooks/
β”œβ”€β”€ πŸŽ₯ Youtube Video - RAG - Agent/      # Main project (Streamlit UI)
β”‚   β”œβ”€β”€ streamlit_app.py                 # Web interface
β”‚   β”œβ”€β”€ youtube_qa_agent.py              # Core agent logic
β”‚   └── README_youtube_qa.md             # Detailed documentation
β”‚
β”œβ”€β”€ πŸ“Š Sequential Agent/                  # CSV Analysis Multi-Agent
β”‚   β”œβ”€β”€ langchain_seq.py                 # Main workflow
β”‚   └── monthly-car-sales.csv            # Example data
β”‚
β”œβ”€β”€ πŸ”§ Langraph/                         # LangGraph examples
β”‚   β”œβ”€β”€ langraph_basic.py               # Basic flow
β”‚   β”œβ”€β”€ langraph_stream_memory.py       # Threaded memory
β”‚   β”œβ”€β”€ langraph_branch_personas.py     # Persona branching
β”‚   └── langraph_dynamic_temperature.py  # Dynamic temperature
β”‚
β”œβ”€β”€ 🀝 A2A-Agent/                        # Multi-agent demo (LM Studio)
β”‚   β”œβ”€β”€ orchestrator.py                  # Simple orchestrator
β”‚   β”œβ”€β”€ math_agent.py                    # Math agent
β”‚   β”œβ”€β”€ writer_agent.py                  # Writing agent
β”‚   β”œβ”€β”€ embedding_agent.py               # Embedding helpers
β”‚   β”œβ”€β”€ ui_streamlit.py                  # Optional UI
β”‚   └── common.py                        # Shared helpers
β”‚
β”œβ”€β”€ πŸ› οΈ Tool Calling From Scratch/        # Tool calling examples
β”‚   β”œβ”€β”€ app.py                           # Main application
β”‚   β”œβ”€β”€ simple_tool_calling.py           # Simple implementation
β”‚   └── README.md                        # Documentation
β”‚
β”œβ”€β”€ ⚑ Groq - Mixture of Agents/         # Groq API agents
β”‚   β”œβ”€β”€ advanced_agents.py               # Main agent
β”‚   β”œβ”€β”€ duckduckgo_agent.py              # Web search agent
β”‚   └── app.ipynb                        # Jupyter notebook
β”‚
β”œβ”€β”€ πŸ—„οΈ Mongodb SQL Talk/                 # MongoDB agent
β”‚   β”œβ”€β”€ mongodb-langchain-agent-clean.py # Main application
β”‚   β”œβ”€β”€ templates/                       # Web UI templates
β”‚   β”œβ”€β”€ static/                          # Static files
β”‚   └── README.md                        # Documentation
β”‚
β”œβ”€β”€ πŸ¦™ Ollama/                           # Ollama integration
β”‚   β”œβ”€β”€ web_search.py                    # Web search agent
β”‚   └── web-search.py                    # Alternative implementation
β”‚
β”œβ”€β”€ πŸ“¦ Agno/                             # Advanced agents
β”‚   β”œβ”€β”€ ollama-rag-agent.py              # RAG agent
β”‚   β”œβ”€β”€ csv_analysis.py                  # CSV analysis
β”‚   β”œβ”€β”€ sqlite-storage.py                # SQLite storage
β”‚   β”œβ”€β”€ Structured-output.py             # Structured output
β”‚   └── app.py                           # Main app
β”‚
β”œβ”€β”€ 🐍 Phidata-Agent/                    # Phidata framework
β”‚   └── python-execute-agent.py          # Python execution agent
β”‚
β”œβ”€β”€ πŸ”¬ AgentScope/                       # AgentScope framework
β”‚   └── agentscope_example.py            # Example implementation
β”‚
β”œβ”€β”€ 🐝 BeeAI Framework/                  # FastAPI framework
β”‚   β”œβ”€β”€ fastapi_app.py                   # FastAPI application
β”‚   β”œβ”€β”€ app.py                           # Flask application
β”‚   └── static/                          # Web interface
β”‚
β”œβ”€β”€ 🧩 General/                          # General agent system
β”‚   β”œβ”€β”€ ai_agent_system.py               # Main system
β”‚   └── requirements.txt                 # Dependencies
β”‚
└── requirements.txt                     # Shared dependencies

πŸš€ Quickstart Guide

Prerequisites

  • Python 3.8+
  • Virtual environment (recommended)
  • API keys (as needed for each project):
    • Gemini API key (for YouTube QA, Sequential Agent, Tool Calling)
    • Groq API key (for Groq agents)
    • LM Studio (for local LLM projects)

Installation

  1. Clone the repository:
git clone <repository-url>
cd Agents-Notebooks
  2. Create virtual environment:
python -m venv venv
venv\Scripts\activate  # Windows
# or
source venv/bin/activate  # Linux/Mac
  3. Install dependencies:
pip install -r requirements.txt

Environment Variables

Windows (cmd.exe):

set GEMINI_API_KEY=your_api_key_here
set GROQ_API_KEY=your_api_key_here
set LG_BASE_URL=http://127.0.0.1:1234/v1
set LG_API_KEY=lm-studio
set LG_MODEL=google/gemma-3n-e4b

PowerShell:

$env:GEMINI_API_KEY="your_api_key_here"
$env:GROQ_API_KEY="your_api_key_here"

Linux/Mac:

export GEMINI_API_KEY=your_api_key_here
export GROQ_API_KEY=your_api_key_here

πŸ“‹ Project Details

LangGraph Scripts

  1. langraph_basic.py – Basic loop: user message β†’ LLM β†’ repeat (stops if response contains "done")
  2. langraph_stream_memory.py – Thread-based memory with InMemorySaver (thread_id isolates conversation history)
  3. langraph_branch_personas.py – Run the same prompt across different personas, then compare results (diff modes)
  4. langraph_dynamic_temperature.py – Classify prompt type and select temperature automatically; optional comparison vs fixed temp

Persona Branching Example

Diff Modes (--diff-mode):

  • unified: Classic line-based
  • side: Side-by-side
  • words: Word-level
  • all: All of the above

Other Flags:

  • --no-diff: Skip diffs (only summary)
  • --strict-turkish: Warn if non-Turkish text leaks into the output
  • --max-preview-chars N: Summary clipping length

Example:

python langraph_branch_personas.py --prompt "Write a short motivational sentence" --diff-mode side --strict-turkish

Dynamic Temperature Example

Flags:

  • --show-rationale: Print classification rationale
  • --compare: Compare dynamic vs fixed
  • --fixed-temperature 0.7: Fixed value for comparison

Example:

python langraph_dynamic_temperature.py --prompt "Write a short motivational sentence" --show-rationale --compare

🌟 Roadmap

πŸŽ₯ YouTube QA Agent

  • Streamlit UI
  • Key Ideas extraction
  • Multi-LLM support
  • A2A protocol integration
  • Video timeline navigation
  • Export features (PDF/Word)
  • Multi-language support

πŸ“Š Sequential Agent

  • Multi-agent workflow
  • Code execution with Gemini
  • Error correction
  • Visualization
  • Anomaly detection
  • Streamlit UI
  • Export reports (PDF/Excel)
  • Real-time analysis

πŸ”§ LangGraph Examples

  • Persistent memory (SQLite / file)
  • Vector memory & summarization
  • JSON/CSV logging
  • FastAPI interface
  • Load personas from external YAML

πŸ› οΈ Tool Calling From Scratch

  • Manual tool calling
  • Production tool calling
  • More tool examples
  • Tool composition examples
  • Async tool calling

⚑ Groq - Mixture of Agents

  • Rate limit management
  • ReAct agent pattern
  • Advanced agent orchestration
  • Agent communication protocols
  • Multi-agent collaboration

🀝 Contributing

How to contribute

  1. Fork and create a feature branch
  2. Commit your changes
  3. Open a Pull Request
  4. Open issues for feature ideas

Areas for contribution

  • Bug fixes
  • New features
  • Documentation
  • UI/UX improvements
  • Testing
  • Performance optimization

Dev environment

  • Python 3.8+
  • Use a virtual environment
  • Code formatting: Black, isort
  • Follow PEP 8 style guide

🏷️ Tech Stack

Backend

  • Python 3.8+
  • LangGraph - Stateful, multi-actor applications
  • LangChain - LLM application framework
  • FastAPI - Modern web framework
  • Flask - Lightweight web framework
  • Streamlit - Rapid web app development

LLM Providers

  • Google Gemini - Advanced AI models
  • Groq - Fast inference API
  • LM Studio - Local LLM support
  • Ollama - Local LLM runner

Databases & Storage

  • MongoDB - NoSQL database
  • SQLite - Lightweight database
  • FAISS - Vector similarity search
  • Pandas - Data manipulation

Tools & Utilities

  • BeautifulSoup - Web scraping
  • DuckDuckGo - Web search
  • Rich - Rich text and beautiful formatting
  • PyTube - YouTube video processing
  • YouTube Transcript API - Transcript extraction

πŸ“ Notes

Environment Variables

  • Windows cmd.exe: set VARIABLE=value
  • PowerShell: $env:VARIABLE="value"
  • Linux/Mac: export VARIABLE="value"

API Keys

Gemini and Groq API keys are only required for the projects that use them (see Prerequisites above); prefer setting them through environment variables as shown in this guide.

Local LLM Setup

  1. Download and install LM Studio
  2. Load a model (e.g., Gemma, Qwen)
  3. Start the server on port 1234
  4. Set environment variables accordingly
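
To confirm the server is reachable before running the examples, you can list the loaded models through the OpenAI-compatible endpoint:

from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:1234/v1", api_key="lm-studio")
print([m.id for m in client.models.list().data])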

πŸ“„ License

See LICENSE file in the repository.


πŸ™ Acknowledgments


πŸ“§ Contact

For questions, suggestions, or contributions, please open an issue or pull request.


⭐ If you find this repository helpful, please consider giving it a star!
