
Strands Agents with Ollama Examples

A collection of example scripts demonstrating how to build intelligent agents using the Strands framework with local Ollama language models. These scripts showcase different agent capabilities, from basic calculations to real-time AWS documentation queries.

🚀 Overview

This repository contains three progressively advanced examples of Strands Agents:

  1. Calculator Agent - Basic mathematical computation capabilities
  2. Interactive Conversational Agent - General-purpose question answering
  3. AWS Documentation Agent - Real-time AWS documentation queries via MCP

All examples use local Ollama models for privacy and control, with comprehensive documentation and error handling.

📋 Prerequisites

System Requirements

  • Python 3.8+
  • Ollama installed and running locally
  • uvx (for AWS documentation script)

Ollama Setup

  1. Install Ollama from ollama.ai
  2. Pull a compatible model:
    ollama pull llama2
    # or
    ollama pull mistral
  3. Ensure Ollama is running:
    ollama serve

Python Dependencies

All scripts declare their dependencies inline (PEP 723 script metadata), so they are installed automatically when run with uv:

uv run script_name.py
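
For reference, the inline metadata that uv reads is a short comment header at the top of each script, along these lines (the exact dependency names in this repository may differ):

# /// script
# requires-python = ">=3.8"
# dependencies = [
#     "strands-agents",
#     "strands-agents-tools",
#     "python-dotenv",
# ]
# ///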

⚙️ Configuration

Create a .env file in the repository root:

STRANDS_OLLAMA_HOST=localhost
STRANDS_OLLAMA_MODEL=llama2

Environment Variables:

  • STRANDS_OLLAMA_HOST: Hostname where Ollama is running (default: localhost)
  • STRANDS_OLLAMA_MODEL: Name of the Ollama model to use
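
A minimal sketch of how a script can load and validate these settings (the error message and fallback handling here are illustrative, not taken from the repository):

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory
host = os.getenv("STRANDS_OLLAMA_HOST", "localhost")  # optional, defaults to localhost
model_id = os.getenv("STRANDS_OLLAMA_MODEL")          # required
if not model_id:
    raise SystemExit("STRANDS_OLLAMA_MODEL is not set - add it to your .env file")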

🧮 Script 1: Calculator Agent

File: calculator_agent.py

A Strands Agent equipped with calculator tools for mathematical computations.

Features

  • Integration with calculator tool
  • Mathematical problem solving
  • Step-by-step calculation explanations

Usage

uv run calculator_agent.py

Example Query

The script automatically asks: "What is the square root of 1764?"

Expected output: Detailed explanation showing the calculation process and result (42).

Key Components

  • Agent: Core Strands agent with calculator tool
  • OllamaModel: Local language model integration
  • calculator: Pre-built mathematical computation tool
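
Put together, the core of the script looks roughly like this. It is an illustrative sketch rather than the exact file, and assumes the strands-agents, strands-agents-tools, and python-dotenv packages, with the calculator tool imported from strands_tools:

import os
from dotenv import load_dotenv
from strands import Agent
from strands.models.ollama import OllamaModel
from strands_tools import calculator  # provided by the strands-agents-tools package

load_dotenv()
ollama_model = OllamaModel(
    host=f"http://{os.getenv('STRANDS_OLLAMA_HOST', 'localhost')}:11434",
    model_id=os.getenv('STRANDS_OLLAMA_MODEL'),
)

agent = Agent(tools=[calculator], model=ollama_model)
agent("What is the square root of 1764?")  # prints the worked answer (42)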

💬 Script 2: Interactive Conversational Agent

File: interactive_agent.py

A general-purpose conversational agent for open-ended discussions and questions.

Features

  • Interactive command-line interface
  • General knowledge question answering
  • No specialized tools - pure conversation
  • Default topic about Agentic AI

Usage

uv run interactive_agent.py

Example Interaction

Enter a topic to query the LLM about: What is machine learning?

Or press Enter for the default topic: "Tell me about Agentic AI"

Key Components

  • Agent: Basic conversational agent without tools
  • OllamaModel: Local language model for responses
  • Interactive user input with sensible defaults
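
The interaction loop reduces to a prompt with a fallback. This sketch assumes an agent created without tools, Agent(model=ollama_model), with ollama_model built as in the calculator sketch above:

topic = input("Enter a topic to query the LLM about: ").strip()
if not topic:
    topic = "Tell me about Agentic AI"  # default when the user just presses Enter
agent(topic)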

📚 Script 3: AWS Documentation Agent

File: aws_mcp_agent.py

An advanced agent that provides real-time access to official AWS documentation using Model Context Protocol (MCP).

Features

  • Real-time AWS documentation queries
  • Official AWS Labs MCP server integration
  • Markdown-formatted responses
  • Up-to-date service information

Prerequisites

  • uv installed (provides the uvx command): curl -LsSf https://astral.sh/uv/install.sh | sh
  • Internet connection for MCP server

Usage

uv run aws_mcp_agent.py

Example Queries

Ask a question about aws documentation: How do I configure S3 buckets for static websites?

Or press Enter for default: "Tell me about Amazon Bedrock and how to use it with Python, provide the output in Markdown format"

Key Components

  • Agent: Strands agent with AWS documentation tools
  • MCPClient: Model Context Protocol client
  • stdio_client: Communication with AWS documentation server
  • Real-time documentation access via awslabs.aws-documentation-mcp-server
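
A sketch of the MCP wiring, assuming MCPClient comes from strands.tools.mcp, stdio_client and StdioServerParameters come from the mcp package, and ollama_model is constructed as in the earlier examples; the real script may differ in detail:

from mcp import StdioServerParameters, stdio_client
from strands import Agent
from strands.tools.mcp import MCPClient

# Launch the AWS documentation MCP server via uvx and talk to it over stdio
aws_docs_client = MCPClient(lambda: stdio_client(
    StdioServerParameters(command="uvx", args=["awslabs.aws-documentation-mcp-server@latest"])
))

with aws_docs_client:
    tools = aws_docs_client.list_tools_sync()
    agent = Agent(tools=tools, model=ollama_model)
    agent("Tell me about Amazon Bedrock and how to use it with Python, provide the output in Markdown format")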

🔧 Architecture

Common Components

All scripts share these core architectural elements:

# Imports shared by the scripts
import os
from dotenv import load_dotenv
from strands import Agent
from strands.models.ollama import OllamaModel

# Environment setup
load_dotenv()
ollama_host = f"http://{os.getenv('STRANDS_OLLAMA_HOST')}:11434"

# Model initialization
ollama_model = OllamaModel(
    host=ollama_host,
    model_id=os.getenv('STRANDS_OLLAMA_MODEL')
)

# Agent creation
agent = Agent(tools=[...], model=ollama_model)

Progression of Complexity

  1. Calculator Agent: Basic tool integration
  2. Interactive Agent: User interaction patterns
  3. AWS MCP Agent: External service integration via MCP

🛠️ Development

Running Scripts

Each script can be executed directly:

# Using uv (recommended)
uv run script_name.py

# Or with traditional Python (after installing dependencies)
python script_name.py

Error Handling

All scripts include comprehensive error handling for:

  • Missing environment variables
  • Ollama connection issues
  • MCP server connectivity (AWS script)
  • Invalid user inputs

Code Structure

Each script follows a consistent pattern:

  • Environment setup and validation
  • Model/client initialization
  • User interaction (where applicable)
  • Query processing and response handling
  • Graceful error management
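
The last point, graceful error management, typically looks like a guard around the script's top-level flow. In this sketch, main() is only a placeholder standing in for the real flow of any of the three scripts:

import sys

def main() -> None:
    # Placeholder for a script's real flow: environment setup, model init,
    # user interaction, and the agent query.
    raise RuntimeError("Ollama connection failed")  # simulate a typical failure

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print("\nInterrupted by user.")
        sys.exit(130)
    except Exception as exc:  # graceful error management
        print(f"Error: {exc}")
        sys.exit(1)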

📖 Learning Path

Beginner: Calculator Agent

Start here to understand:

  • Basic Strands Agent setup
  • Tool integration concepts
  • Ollama model configuration

Intermediate: Interactive Agent

Builds upon basics with:

  • User interaction patterns
  • Input validation and defaults
  • Conversational AI without tools

Advanced: AWS MCP Agent

Demonstrates complex integrations:

  • Model Context Protocol usage
  • External service integration
  • Real-time data access
  • Context manager patterns

🔍 Troubleshooting

Common Issues

Ollama Connection Errors:

# Check if Ollama is running
curl http://localhost:11434/api/version

# Start Ollama if needed
ollama serve
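
The same check can be run from Python, which is convenient when adding diagnostics inside a script (assumes the default Ollama port 11434):

import json
import urllib.request

host = "localhost"  # or the value of STRANDS_OLLAMA_HOST
with urllib.request.urlopen(f"http://{host}:11434/api/version", timeout=5) as resp:
    print(json.load(resp))  # e.g. {"version": "..."} when Ollama is reachable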

Missing Model Errors:

# List available models
ollama list

# Pull required model
ollama pull llama2

AWS MCP Server Issues (AWS script only):

# Ensure uvx is installed
uvx --version

# Test MCP server availability
uvx awslabs.aws-documentation-mcp-server@latest --help

Environment Variable Issues:

  • Verify .env file exists and contains required variables
  • Check file permissions and syntax
  • Ensure no extra spaces or quotes around values

Debug Mode

If your version of the Strands SDK supports a debug flag on the agent call, you can pass it directly:

response = agent(query, debug=True)  # only if your SDK version supports this

Otherwise, enable verbose logging at the top of the script:

import logging
logging.basicConfig(level=logging.DEBUG)
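
If the Strands SDK follows the common convention of logging under its package name, you can narrow the output to the framework instead of every library; verify the logger name against the Strands documentation:

import logging

logging.basicConfig(format="%(levelname)s | %(name)s | %(message)s")
logging.getLogger("strands").setLevel(logging.DEBUG)  # assumes the SDK logs under a "strands" logger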

📝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Add comprehensive docstrings to new functions
  4. Include error handling and examples
  5. Test with multiple Ollama models
  6. Submit a pull request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Strands for the agent framework
  • Ollama for local language model hosting
  • AWS Labs for the MCP documentation server
  • Astral for uvx and uv tools
