This repository showcases multiple agent systems and LangGraph workflow examples built with modern AI tooling. All projects are fully documented in English with comprehensive examples and usage instructions.
System Diagram
Home & Configuration
URL Input & Processing
Q&A Interface and Results

Screen 1

Screen 2

Screen 3

Screen 4
The most advanced project: extracts transcripts from YouTube videos and enables smart question-answering with a modern UI.
Features:
- YouTube Processing: automatic transcript extraction
- Multi-LLM Support: LM Studio (local) + Google Gemini 2.5
- Key Ideas Extraction: 3-5 core takeaways
- Modern Streamlit UI: web interface with embedded player
- Vector Search: FAISS-based fast retrieval
- Full English documentation
Quickstart:
cd "Youtube Video - RAG - Agent"
streamlit run streamlit_app.py
Detailed Guide →
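Before FAISS can retrieve anything, the transcript has to be split into overlapping chunks for embedding. A minimal sketch of that step (the function name, chunk size, and overlap are illustrative assumptions, not the project's actual settings):

```python
# Illustrative chunker: split a transcript into overlapping character
# chunks so each piece can be embedded and indexed for vector search.
def chunk_transcript(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap  # advance less than chunk_size to overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

transcript = "word " * 300  # stand-in for a real YouTube transcript
chunks = chunk_transcript(transcript)
print(len(chunks), len(chunks[0]))
```

Overlap keeps a sentence that straddles a chunk boundary retrievable from both sides.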
A comprehensive multi-agent workflow for CSV data analysis using Gemini Code Execution. Executes real Python code for statistical analysis, visualization, and anomaly detection.
Features:
- Data Loading Agent: Reads and validates CSV files
- Analysis Agent: Structure analysis with Google Search integration
- Code Generation Agent: Generates and executes Python code with Gemini Code Execution
- Error Correction Agent: Automatically fixes and retries failed code
- Visualization Agent: Creates charts with Matplotlib/Seaborn
- Anomaly Detection Agent: Identifies outliers using Z-score and IQR
- Insight Agent: Extracts deep insights with Google Search
- Recommendation Agent: Generates actionable recommendations
- Final Report Agent: Creates a comprehensive executive summary
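The two outlier rules named above (Z-score and IQR) can be sketched in a few lines. The thresholds 3.0 and 1.5 are the conventional defaults; the project's actual values may differ:

```python
# Minimal sketch of Z-score and IQR outlier detection (stdlib only).
from statistics import mean, stdev, quantiles

def zscore_outliers(values: list[float], threshold: float = 3.0) -> list[float]:
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs((v - mu) / sigma) > threshold]

def iqr_outliers(values: list[float], k: float = 1.5) -> list[float]:
    q1, _, q3 = quantiles(values, n=4)  # quartiles of the sample
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

data = [10, 12, 11, 13, 12, 11, 300]  # 300 is an obvious outlier
print(iqr_outliers(data))  # → [300]
```

Note that a single extreme value inflates the standard deviation, so the Z-score rule can miss it at threshold 3 while the IQR rule still flags it; that is one reason to run both.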
Quickstart:
cd "Sequential Agent"
python langchain_seq.py
Configuration:
Set your Gemini API key in langchain_seq.py:
GEMINI_API_KEY = "your_api_key_here"
Workflow:
- Load CSV file
- Analyze data structure
- Generate and execute analysis code
- Fix errors (if any)
- Create visualizations
- Detect anomalies
- Extract insights
- Generate recommendations
- Create final report
Provides a simple multi-agent flow (MathAgent, WriterAgent) with an orchestrator, powered by LM Studio's OpenAI-compatible server.
Features:
- Math Agent: Performs mathematical calculations
- Writer Agent: Generates text content
- Orchestrator: Coordinates agent communication
- LM Studio integration for local LLM support
Quickstart:
cd A2A-Agent
# Run in separate terminals
python math_agent.py
python writer_agent.py
python orchestrator.py
A2A-Agent Docs →
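The orchestrator pattern can be sketched as follows: both agents expose the same call interface, and the orchestrator routes a task to one of them. Keyword routing and the stub agents here are assumptions for illustration; the real agents forward the task to LM Studio's OpenAI-compatible server:

```python
# Stub agents standing in for LLM-backed math and writer agents.
def math_agent(task: str) -> str:
    return f"math result for: {task}"

def writer_agent(task: str) -> str:
    return f"draft text for: {task}"

AGENTS = {"math": math_agent, "write": writer_agent}

def orchestrate(task: str) -> str:
    # Route by keyword; fall back to the writer for anything else.
    for keyword, agent in AGENTS.items():
        if keyword in task.lower():
            return agent(task)
    return writer_agent(task)

print(orchestrate("math: 2 + 2"))
```

In the real system the routing decision itself can also be delegated to an LLM, but a deterministic rule keeps the demo easy to follow.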
Examples built with the LangGraph library for building stateful, multi-actor applications.
Basic loop: user message → LLM → repeat (stops if the response contains "done")
flowchart LR
U[Message] --> LLM[llm_node]
LLM --> C{response contains "done"?}
C -->|No| LLM
C -->|Yes / MAX_TURN| E[End]
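The loop and stop condition in the diagram boil down to a few lines. A stub LLM stands in for the real model call (the project talks to an LM Studio endpoint); MAX_TURN caps the loop even if "done" never appears:

```python
MAX_TURN = 5  # hard cap, mirrors the MAX_TURN edge in the diagram

def run_loop(llm, user_message: str) -> list[str]:
    history = [user_message]
    for _ in range(MAX_TURN):
        reply = llm(history)
        history.append(reply)
        if "done" in reply.lower():  # the stop condition from the diagram
            break
    return history

replies = iter(["thinking...", "still working", "ok, done"])
history = run_loop(lambda h: next(replies), "summarize this")
print(history[-1])  # → ok, done
```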
Thread-based memory with InMemorySaver (thread_id isolates conversation history)
flowchart TB
subgraph T1[Thread 1]
Name[Step Will] --> G1[Graph]
G1 --> M1[(Memory)]
M1 --> A1[Answer 1]
A1 --> Recall[Do you remember the step?]
Recall --> G1
end
subgraph T2[Thread 2]
Recall2[Do you remember the step?] --> G2[Graph]
G2 --> M2[(Memory)]
M2 --> A2[Answer 2]
end
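The thread isolation shown above can be reduced to one idea: memory is keyed by thread_id, so the same graph answers Thread 2 without anything Thread 1 said. A minimal stand-in for LangGraph's InMemorySaver checkpointing (stub "LLM", invented names):

```python
from collections import defaultdict

# One history list per thread_id, as in InMemorySaver checkpoints.
memory: dict[str, list[str]] = defaultdict(list)

def invoke(message: str, thread_id: str) -> str:
    history = memory[thread_id]
    history.append(message)
    # Stub model: answers using only this thread's history.
    return f"[{thread_id}] seen {len(history)} message(s)"

print(invoke("My name is Will", "t1"))
print(invoke("Do you remember my name?", "t1"))  # same thread: has context
print(invoke("Do you remember my name?", "t2"))  # new thread: no context
```

In real LangGraph code the thread is selected via `config={"configurable": {"thread_id": "t1"}}` on each invoke.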
Run the same prompt across different personas, then compare results (diff modes)
flowchart LR
P[Prompt] --> F1[Warm persona]
P --> F2[Formal persona]
P --> F3[Instructor persona]
P --> F4[Skeptical persona]
F1 --> R1[Answer 1]
F2 --> R2[Answer 2]
F3 --> R3[Answer 3]
F4 --> R4[Answer 4]
R1 --> COL[Summary Table]
R2 --> COL
R3 --> COL
R4 --> COL
COL --> DIFF[Diff Analysis]
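The branch-and-compare flow above can be sketched with a stub model and stdlib difflib: the same prompt goes through several persona system-prompts, and the answers are diffed. Personas and the echo model are placeholders; the real script calls an LLM per branch:

```python
import difflib

PERSONAS = {
    "warm": "Answer warmly.",
    "formal": "Answer formally.",
}

def stub_llm(system: str, prompt: str) -> str:
    # Stand-in for a per-persona LLM call.
    return f"{system} -> {prompt}"

def branch_and_diff(prompt: str) -> str:
    answers = {name: stub_llm(sys, prompt) for name, sys in PERSONAS.items()}
    # Word-level unified diff between the two branches.
    a, b = answers["warm"].split(), answers["formal"].split()
    return " ".join(difflib.unified_diff(a, b, lineterm=""))

print(branch_and_diff("Write one sentence"))
```

The real script's `--diff-mode` options map onto difflib the same way: `unified` to `unified_diff`, `side` to an HtmlDiff/table view, `words` to token-level splitting as shown here.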
Classify prompt type and select temperature automatically; optional comparison vs fixed temp
flowchart LR
P2[Prompt] --> CLS[Heuristic Classification]
CLS --> DYN[LLM dynamic]
P2 --> FIX[LLM fixed]
DYN --> CMP[Comparison]
FIX --> CMP
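The "Heuristic Classification" node amounts to keyword rules mapped to sampling temperatures. The categories, keywords, and temperature values below are illustrative assumptions, not the script's actual table:

```python
# (label, trigger keywords, temperature) rules, checked in order.
RULES = [
    ("creative", ("poem", "story", "motivational"), 0.9),
    ("translation", ("translate", "translation"), 0.2),
    ("factual", ("what is", "define", "explain"), 0.3),
]
DEFAULT = ("general", 0.7)

def pick_temperature(prompt: str) -> tuple[str, float]:
    p = prompt.lower()
    for label, keywords, temp in RULES:
        if any(k in p for k in keywords):
            return label, temp
    return DEFAULT

print(pick_temperature("Translate to French: hello"))  # → ('translation', 0.2)
```

With `--compare`, the same prompt is then run once at the chosen temperature and once at the fixed value so the outputs can be contrasted.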
Quickstart:
cd Langraph
# Set environment variables
set LG_BASE_URL=http://127.0.0.1:1234/v1
set LG_API_KEY=lm-studio
set LG_MODEL=google/gemma-3n-e4b
# Run examples
python langraph_basic.py
python langraph_stream_memory.py
python langraph_branch_personas.py --prompt "Write a short motivational sentence"
python langraph_dynamic_temperature.py --prompt "Translate to French" --compare
Features:
- Configurable via env vars (model, base URL, API key)
- Retry for transient failures
- Proper role mapping (user / assistant / system / tool)
- Maximum turn limit (prevents infinite loops)
- Logging for observability
Educational project demonstrating tool calling with Google's Gemini AI. Shows both manual (educational) and production (recommended) approaches.
Features:
- Manual Approach: Shows how tool calling works under the hood (5-step process)
- Production Approach: Uses native Gemini API for robust tool calling
- Real Working Tools:
- google_search - Web search using DuckDuckGo
- scrape_url - Web scraping with BeautifulSoup
- get_current_weather - Weather data from Open-Meteo API
- calculate_math - Safe mathematical expression evaluation
- get_current_time - Real time for any timezone
- wikipedia_search - Wikipedia article summaries
- get_exchange_rate - Real-time currency exchange rates
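The "manual" approach described above reduces to: the model emits a structured tool call, you parse it, dispatch against a registry, and send the result back. A sketch with one tool in the spirit of calculate_math (the JSON shape and helper names are assumptions for illustration; the project's actual protocol may differ):

```python
import ast
import json
import operator

# Safe arithmetic: walk the AST instead of calling eval().
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate_math(expression: str) -> float:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval"))

TOOLS = {"calculate_math": calculate_math}

def dispatch(model_output: str) -> str:
    call = json.loads(model_output)               # parse the tool call
    result = TOOLS[call["tool"]](**call["args"])  # execute the named tool
    return str(result)                            # result goes back to the model

print(dispatch('{"tool": "calculate_math", "args": {"expression": "2 * (3 + 4)"}}'))  # → 14
```

The production approach delegates the parse/dispatch bookkeeping to the Gemini API's native function-calling support, which is why it is the recommended path.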
Quickstart:
cd "Tool Calling From Scratch"
# Set API key
set GEMINI_API_KEY=your_api_key_here
# Run application
python app.py
Menu Options:
- 1 - Manual Tool Calling Demo (Educational)
- 2 - Production Tool Calling Demo (Recommended)
- 3 - Interactive Mode (Chat with the AI)
- 4 - Run All Demos
Detailed Docs →
Advanced agent system using Groq API with rate limit management, ReAct agent pattern, and web search capabilities.
Features:
- Groq API integration with free tier optimization
- Rate limit management (TPM/RPM tracking)
- ReAct agent pattern (Reasoning + Acting)
- DuckDuckGo web search integration
- Rich console output for better readability
- Conversation memory management
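Rate-limit management with TPM/RPM tracking can be sketched as a sliding 60-second window of recent calls, checked before each request. The limits below are illustrative, not Groq's actual free-tier numbers, and a real client would sleep or queue rather than refuse:

```python
import time

class RateLimiter:
    def __init__(self, rpm: int, tpm: int, clock=time.monotonic):
        self.rpm, self.tpm, self.clock = rpm, tpm, clock
        self.events = []  # (timestamp, tokens) for calls in the last minute

    def _prune(self):
        cutoff = self.clock() - 60
        self.events = [(t, n) for t, n in self.events if t > cutoff]

    def allow(self, tokens: int) -> bool:
        self._prune()
        if len(self.events) + 1 > self.rpm:          # requests-per-minute check
            return False
        if sum(n for _, n in self.events) + tokens > self.tpm:  # tokens-per-minute
            return False
        self.events.append((self.clock(), tokens))
        return True

limiter = RateLimiter(rpm=3, tpm=1000)
print([limiter.allow(400) for _ in range(4)])  # → [True, True, False, False]
```

Injecting the clock makes the window testable without real waiting.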
Quickstart:
cd "Groq - Mixture of Agents"
# Set API key
set GROQ_API_KEY=your_api_key_here
# Run agent
python advanced_agents.py
Files:
- advanced_agents.py - Main agent implementation
- duckduckgo_agent.py - Web search agent
- app.ipynb - Jupyter notebook examples
Intelligent agent that lets you interact with MongoDB databases in natural language. Supports dynamic collection detection and automatic schema analysis.
Features:
- Natural Language Understanding: "find users whose name is Ahmet"
- Dynamic Collection Detection: Works with any collection name
- Smart Data Insertion: "add a new user"
- Automatic Schema Analysis: Detects existing fields
- Web Interface: User-friendly modern web UI
- LM Studio Integration: Local LLM support
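The automatic schema analysis mentioned above can be sketched as: sample a few documents from a collection and record which fields occur with which types, so the agent knows what "name" or "age" means before building a query. The sample documents here stand in for a real pymongo find() result:

```python
from collections import defaultdict

def infer_schema(docs: list[dict]) -> dict[str, set[str]]:
    """Map each field name to the set of value type names seen for it."""
    schema: dict[str, set[str]] = defaultdict(set)
    for doc in docs:
        for field, value in doc.items():
            schema[field].add(type(value).__name__)
    return dict(schema)

sample = [
    {"name": "Ahmet", "age": 30},
    {"name": "Mehmet", "surname": "Kaya"},
]
print(infer_schema(sample))
```

Feeding this schema summary into the LLM prompt is what lets a request like "find users whose name is Ahmet" be translated into a filter on the right field.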
Quickstart:
cd "Mongodb SQL Talk"
# Start MongoDB and LM Studio first
python mongodb-langchain-agent-clean.py
Open http://localhost:5000 in your browser.
Example Queries:
- "list collections"
- "show the first 5 records in the users table"
- "find users whose name is Ahmet"
- "add a user: name Mehmet, surname Kaya, age 30"
- "how many users are there?"
Detailed Docs →
Web search integration with Ollama local LLM for enhanced agent capabilities.
Features:
- Ollama local LLM integration
- Web search capabilities
- Simple agent implementation
Quickstart:
cd Ollama
# Start Ollama first
ollama serve
# Run agent
python web_search.py
Collection of advanced agent projects including RAG agents, SQLite storage, structured output, and Ollama integration.
Features:
- RAG (Retrieval-Augmented Generation) agent
- SQLite storage integration
- Structured output generation
- Ollama local LLM support
- CSV analysis capabilities
Quickstart:
cd Agno
# Install dependencies
pip install -r requirements_rag.txt # For RAG agent
pip install -r requirements_advanced.txt # For advanced features
# Run specific agent
python ollama-rag-agent.py
python csv_analysis.py
python Structured-output.py
Files:
- ollama-rag-agent.py - RAG agent with Ollama
- csv_analysis.py - CSV data analysis
- sqlite-storage.py - SQLite storage integration
- Structured-output.py - Structured output generation
- app.py - Main application
Python execution agent using Phidata framework for code execution and agent management.
Features:
- Python code execution
- Phidata framework integration
- Agent orchestration
Quickstart:
cd Phidata-Agent
python python-execute-agent.py
Agent framework example using AgentScope for multi-agent systems.
Features:
- AgentScope framework integration
- Multi-agent communication
- Agent orchestration
Quickstart:
cd AgentScope
python agentscope_example.py
FastAPI-based agent framework with a web interface for building agent applications.
Features:
- FastAPI backend
- Modern web interface
- Agent management UI
- RESTful API
Quickstart:
cd "BeeAI Framework"
# Run FastAPI app
python fastapi_app.py
# Or run Flask app
python app.py
Open http://localhost:8000 (FastAPI) or http://localhost:5000 (Flask) in your browser.
General AI agent system with customizable agent configurations.
Features:
- Configurable agent system
- Multiple agent types
- Extensible architecture
Quickstart:
cd General
pip install -r requirements.txt
python ai_agent_system.py
Agents-Notebooks/
├── Youtube Video - RAG - Agent/        # Main project (Streamlit UI)
│   ├── streamlit_app.py                # Web interface
│   ├── youtube_qa_agent.py             # Core agent logic
│   └── README_youtube_qa.md            # Detailed documentation
│
├── Sequential Agent/                   # CSV Analysis Multi-Agent
│   ├── langchain_seq.py                # Main workflow
│   └── monthly-car-sales.csv           # Example data
│
├── Langraph/                           # LangGraph examples
│   ├── langraph_basic.py               # Basic flow
│   ├── langraph_stream_memory.py       # Threaded memory
│   ├── langraph_branch_personas.py     # Persona branching
│   └── langraph_dynamic_temperature.py # Dynamic temperature
│
├── A2A-Agent/                          # Multi-agent demo (LM Studio)
│   ├── orchestrator.py                 # Simple orchestrator
│   ├── math_agent.py                   # Math agent
│   ├── writer_agent.py                 # Writing agent
│   ├── embedding_agent.py              # Embedding helpers
│   ├── ui_streamlit.py                 # Optional UI
│   └── common.py                       # Shared helpers
│
├── Tool Calling From Scratch/          # Tool calling examples
│   ├── app.py                          # Main application
│   ├── simple_tool_calling.py          # Simple implementation
│   └── README.md                       # Documentation
│
├── Groq - Mixture of Agents/           # Groq API agents
│   ├── advanced_agents.py              # Main agent
│   ├── duckduckgo_agent.py             # Web search agent
│   └── app.ipynb                       # Jupyter notebook
│
├── Mongodb SQL Talk/                   # MongoDB agent
│   ├── mongodb-langchain-agent-clean.py # Main application
│   ├── templates/                      # Web UI templates
│   ├── static/                         # Static files
│   └── README.md                       # Documentation
│
├── Ollama/                             # Ollama integration
│   ├── web_search.py                   # Web search agent
│   └── web-search.py                   # Alternative implementation
│
├── Agno/                               # Advanced agents
│   ├── ollama-rag-agent.py             # RAG agent
│   ├── csv_analysis.py                 # CSV analysis
│   ├── sqlite-storage.py               # SQLite storage
│   ├── Structured-output.py            # Structured output
│   └── app.py                          # Main app
│
├── Phidata-Agent/                      # Phidata framework
│   └── python-execute-agent.py         # Python execution agent
│
├── AgentScope/                         # AgentScope framework
│   └── agentscope_example.py           # Example implementation
│
├── BeeAI Framework/                    # FastAPI framework
│   ├── fastapi_app.py                  # FastAPI application
│   ├── app.py                          # Flask application
│   └── static/                         # Web interface
│
├── General/                            # General agent system
│   ├── ai_agent_system.py              # Main system
│   └── requirements.txt                # Dependencies
│
└── requirements.txt                    # Shared dependencies
- Python 3.8+
- Virtual environment (recommended)
- API keys (as needed for each project):
- Gemini API key (for YouTube QA, Sequential Agent, Tool Calling)
- Groq API key (for Groq agents)
- LM Studio (for local LLM projects)
- Clone the repository:
git clone <repository-url>
cd Agents-Notebooks
- Create virtual environment:
python -m venv venv
venv\Scripts\activate  # Windows
# or
source venv/bin/activate  # Linux/Mac
- Install dependencies:
pip install -r requirements.txt
Windows (cmd.exe):
set GEMINI_API_KEY=your_api_key_here
set GROQ_API_KEY=your_api_key_here
set LG_BASE_URL=http://127.0.0.1:1234/v1
set LG_API_KEY=lm-studio
set LG_MODEL=google/gemma-3n-e4b
PowerShell:
$env:GEMINI_API_KEY="your_api_key_here"
$env:GROQ_API_KEY="your_api_key_here"
Linux/Mac:
export GEMINI_API_KEY=your_api_key_here
export GROQ_API_KEY=your_api_key_here
- langraph_basic.py - Basic loop: user message → LLM → repeat (stops if the response contains "done")
- langraph_stream_memory.py - Thread-based memory with InMemorySaver (thread_id isolates conversation history)
- langraph_branch_personas.py - Run the same prompt across different personas, then compare results (diff modes)
- langraph_dynamic_temperature.py - Classify the prompt type and select a temperature automatically; optional comparison vs a fixed temperature
Diff Modes (--diff-mode):
- unified: Classic line-based
- side: Side-by-side
- words: Word-level
- all: All of the above
Other Flags:
- --no-diff: Skip diffs (only summary)
- --strict-turkish: Warn if non-English leaks into output
- --max-preview-chars N: Summary clipping length
Example:
python langraph_branch_personas.py --prompt "Write a short motivational sentence" --diff-mode side --strict-turkish
Flags:
- --show-rationale: Print classification rationale
- --compare: Compare dynamic vs fixed
- --fixed-temperature 0.7: Fixed value for comparison
Example:
python langraph_dynamic_temperature.py --prompt "Write a short motivational sentence" --show-rationale --compare
- Streamlit UI
- Key Ideas extraction
- Multi-LLM support
- A2A protocol integration
- Video timeline navigation
- Export features (PDF/Word)
- Multi-language support
- Multi-agent workflow
- Code execution with Gemini
- Error correction
- Visualization
- Anomaly detection
- Streamlit UI
- Export reports (PDF/Excel)
- Real-time analysis
- Persistent memory (SQLite / file)
- Vector memory & summarization
- JSON/CSV logging
- FastAPI interface
- Load personas from external YAML
- Manual tool calling
- Production tool calling
- More tool examples
- Tool composition examples
- Async tool calling
- Rate limit management
- ReAct agent pattern
- Advanced agent orchestration
- Agent communication protocols
- Multi-agent collaboration
- Fork and create a feature branch
- Commit your changes
- Open a Pull Request
- Open issues for feature ideas
- Bug fixes
- New features
- Documentation
- UI/UX improvements
- Testing
- Performance optimization
- Python 3.8+
- Use a virtual environment
- Code formatting: Black, isort
- Follow PEP 8 style guide
- Python 3.8+
- LangGraph - Stateful, multi-actor applications
- LangChain - LLM application framework
- FastAPI - Modern web framework
- Flask - Lightweight web framework
- Streamlit - Rapid web app development
- Google Gemini - Advanced AI models
- Groq - Fast inference API
- LM Studio - Local LLM support
- Ollama - Local LLM runner
- MongoDB - NoSQL database
- SQLite - Lightweight database
- FAISS - Vector similarity search
- Pandas - Data manipulation
- BeautifulSoup - Web scraping
- DuckDuckGo - Web search
- Rich - Rich text and beautiful formatting
- PyTube - YouTube video processing
- YouTube Transcript API - Transcript extraction
- Windows cmd.exe:
set VARIABLE=value
- PowerShell:
$env:VARIABLE="value"
- Linux/Mac:
export VARIABLE=value
- Get Gemini API key from: Google AI Studio
- Get Groq API key from: Groq Console
- LM Studio: Download from lmstudio.ai
- Download and install LM Studio
- Load a model (e.g., Gemma, Qwen)
- Start the server on port 1234
- Set environment variables accordingly
See LICENSE file in the repository.
- LangChain - Agent framework
- LangGraph - Stateful agent workflows
- LM Studio - Local LLM support
- Google Gemini - Advanced AI models
- Groq - Fast inference API
- Streamlit - Web app framework
- MongoDB - Database
- Ollama - Local LLM runner
For questions, suggestions, or contributions, please open an issue or pull request.
β If you find this repository helpful, please consider giving it a star!