feat: add model switching support to think, agent_graph, swarm and use_llm tools #117


Closed

Conversation

cagataycali
Member

Summary

This PR adds model provider switching capabilities to think, use_llm, agent_graph, and swarm tools while migrating them from TOOL_SPEC to @tool decorator pattern.

Changes Made

New Model Provider System:

  • Added 6 model provider modules: bedrock, anthropic, litellm, llamaapi, ollama, openai
  • Created utils/models.py for dynamic model creation and configuration
  • Environment-based model configuration with sensible defaults
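The environment-based configuration above can be sketched as a small resolution helper that layers explicit arguments over environment variables over built-in defaults. The variable names (`STRANDS_PROVIDER`, `STRANDS_MODEL_ID`) and default values below are illustrative assumptions, not the actual names used in `utils/models.py`:

```python
import os

# Hypothetical defaults; the real defaults live in utils/models.py.
_DEFAULT_PROVIDER = "bedrock"
_DEFAULT_MODEL = "default-model"


def resolve_model_config(model_provider=None, model_settings=None):
    """Merge explicit arguments over environment variables over defaults.

    Precedence: direct parameter > environment variable > built-in default.
    """
    provider = model_provider or os.environ.get("STRANDS_PROVIDER", _DEFAULT_PROVIDER)
    settings = {"model": os.environ.get("STRANDS_MODEL_ID", _DEFAULT_MODEL)}
    settings.update(model_settings or {})  # explicit settings win
    return provider, settings
```

With no arguments and no environment variables set, callers get the defaults; passing `model_provider` or `model_settings` overrides both layers.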

Tool Enhancements:

  • think: Added model_provider, model_settings, thinking_system_prompt parameters
  • use_llm: Migrated to @tool decorator, added model switching
  • agent_graph: Migrated to @tool decorator, added per-node model configuration
  • swarm: Migrated to @tool decorator, added swarm-wide model control

Security & Access Control:

  • Added tools parameter to all enhanced tools for security isolation
  • Granular control over which tools nested agents can access
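That allow-list behavior can be sketched as a small filter; the helper name and its error handling are assumptions for illustration, not the PR's actual implementation:

```python
def select_tools(available, requested=None):
    """Return the subset of tools a nested agent is allowed to access.

    With no explicit allow-list the nested agent inherits every tool;
    requesting an unknown tool fails fast instead of being silently dropped.
    """
    if requested is None:
        return dict(available)
    unknown = set(requested) - set(available)
    if unknown:
        raise ValueError(f"unknown tools requested: {sorted(unknown)}")
    return {name: available[name] for name in requested}
```

A swarm node configured with `tools=["calculator"]` would then only ever see that single entry, regardless of what the parent agent has loaded.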

Technical Details

All new parameters are optional with backward-compatible defaults:

# Existing calls continue to work
agent.tool.think("analyze this", 3, "You are an analyst")

# New model switching available  
agent.tool.think(
    "analyze this", 
    3, 
    "You are an analyst",
    model_provider="anthropic",
    model_settings={"model": "claude-3-sonnet", "temperature": 0.5}
)

Model providers are loaded dynamically based on the model_provider parameter. Configuration can be set via environment variables or passed directly in model_settings.
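The dynamic lookup can be illustrated with a registry-based factory. In the real code the lookup presumably resolves to the provider modules under `src/strands_tools/models/`, so the factory table here is a stand-in:

```python
# Stand-in registry; utils/models.py presumably resolves these names to the
# real provider modules (bedrock, anthropic, litellm, llamaapi, ollama, openai).
PROVIDER_FACTORIES = {
    "bedrock": lambda settings: {"provider": "bedrock", **settings},
    "ollama": lambda settings: {"provider": "ollama", **settings},
}


def create_model(model_provider, model_settings=None):
    """Build a model for the requested provider, or fail with the
    list of providers that are actually registered."""
    try:
        factory = PROVIDER_FACTORIES[model_provider]
    except KeyError:
        known = ", ".join(sorted(PROVIDER_FACTORIES))
        raise ValueError(
            f"unknown model_provider {model_provider!r}; expected one of: {known}"
        )
    return factory(model_settings or {})
```

Because unknown providers raise immediately with the valid options, a typo in `model_provider` surfaces at call time rather than deep inside a nested agent.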

Use Cases

  • Route different reasoning tasks to specialized models
  • Cost optimization by using appropriate models per task
  • Environment-specific model selection for deployments
  • Multi-model workflows within single agent sessions

Testing

  • All existing tests pass (560+ tests)
  • Added comprehensive test coverage for new functionality
  • Manual testing of all model provider integrations
  • Backward compatibility verified across all tools

Files Changed

  • src/strands_tools/models/ - New model provider modules (6 files)
  • src/strands_tools/utils/models.py - Model factory and utilities
  • src/strands_tools/think.py - Enhanced with model switching
  • src/strands_tools/use_llm.py - Migrated to @tool + model switching
  • src/strands_tools/agent_graph.py - Migrated to @tool + per-node models
  • src/strands_tools/swarm.py - Migrated to @tool + swarm models
  • Updated corresponding test files

Backward Compatibility

100% backward compatible. All existing tool calls work unchanged. New parameters are optional and use environment-based defaults.

Related

Addresses issues:
#74 - Multi-model support request
#20 - Unable to use custom LLM providers with tools like swarm, agent_graph, workflow

PS: The workflow tool is not included in this set of changes.

The previous PR (#110) was closed due to a messy git history.

- Add model provider modules for bedrock, anthropic, litellm, llamaapi, ollama, openai
- Enhance think tool with model_provider, model_settings, and thinking_system_prompt parameters
- Enhance use_llm tool with model switching capabilities
- Add model utilities for dynamic model creation and configuration
- Update tests to work with new @tool decorator approach
- Support environment-based model configuration
- Maintain backward compatibility with existing behavior
…del configuration

- Convert agent_graph.py from TOOL_SPEC to @tool decorator format
- Add per-node model provider and configuration support in agent_graph
- Add tools parameter for security isolation and role-based access control
- Convert swarm.py from TOOL_SPEC to @tool decorator format
- Add model_provider, model_settings, and tools parameters to swarm
- Update use_llm calls to new direct parameter format
- Enhance documentation with comprehensive examples and usage patterns
- Update all corresponding tests for new function signatures
- Maintain backward compatibility while adding new capabilities

Co-authored-by: AI Assistant
@cagataycali
Member Author

New PR is here: #143
