
Commit ce1d5f1

Added examples and documentation for LiteLLM model provider & exposed LiteLLMProvider
1 parent: 656bcdb

4 files changed: +339 −0

docs/models.md (+152)
@@ -71,3 +71,155 @@ spanish_agent = Agent(
    model_settings=ModelSettings(temperature=0.5),
)
```

## Using LiteLLM Provider

The SDK includes built-in support for [LiteLLM](https://docs.litellm.ai/), a unified interface for multiple LLM providers. LiteLLM provides a proxy server that exposes an OpenAI-compatible API for various LLM providers, including OpenAI, Anthropic, Azure, AWS Bedrock, Google, and more.
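
The examples below assume a LiteLLM proxy is reachable at `base_url`. A minimal local setup, taken from the examples README in this commit (it assumes Ollama is serving `llama2` locally):

```bash
pip install litellm
litellm --model ollama/llama2 --port 8000
```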

### Basic Usage

```python
import asyncio

from agents import Agent, Runner, LiteLLMProvider

# Create a LiteLLM provider
provider = LiteLLMProvider(
    api_key="your-litellm-api-key",  # or set LITELLM_API_KEY env var
    base_url="http://localhost:8000",  # or set LITELLM_API_BASE env var
)

# Create an agent using a specific model
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="claude-3",  # Will be routed to Anthropic
    model_provider=provider,
)


async def main():
    result = await Runner.run(agent, input="Hello!")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```

### Environment Variables

The LiteLLM provider supports configuration through environment variables:

```bash
# LiteLLM configuration
export LITELLM_API_KEY="your-litellm-api-key"
export LITELLM_API_BASE="http://localhost:8000"
export LITELLM_MODEL="gpt-4"  # Default model (optional)

# Provider-specific keys (examples)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export AZURE_API_KEY="..."
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
```
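
With these variables set, the provider can be constructed without arguments. A minimal sketch, assuming the constructor falls back to `LITELLM_API_KEY`, `LITELLM_API_BASE`, and `LITELLM_MODEL` as the comments in Basic Usage indicate:

```python
from agents import LiteLLMProvider

# Configuration is read from the environment (assumed fallback behavior)
provider = LiteLLMProvider()
```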

### Model Routing

The provider automatically routes model names to their appropriate providers:

```python
# Models are automatically routed based on their names
openai_agent = Agent(
    name="OpenAI Agent",
    instructions="Using GPT-4",
    model="gpt-4",  # Will be routed to OpenAI
    model_provider=provider,
)

anthropic_agent = Agent(
    name="Anthropic Agent",
    instructions="Using Claude",
    model="claude-3",  # Will be routed to Anthropic
    model_provider=provider,
)

azure_agent = Agent(
    name="Azure Agent",
    instructions="Using Azure OpenAI",
    model="azure/gpt-4",  # Explicitly using Azure
    model_provider=provider,
)
```

You can also explicitly specify providers using prefixes (see the sketch after this list):

- `openai/` - OpenAI models
- `anthropic/` - Anthropic models
- `azure/` - Azure OpenAI models
- `aws/` - AWS Bedrock models
- `cohere/` - Cohere models
- `replicate/` - Replicate models
- `huggingface/` - Hugging Face models
- `mistral/` - Mistral AI models
- `gemini/` - Google Gemini models
- `groq/` - Groq models
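
A prefix pins the backend even when the bare model name would route elsewhere. A minimal sketch reusing the `provider` from Basic Usage (the concrete model IDs are illustrative, not taken from this commit):

```python
# Explicit prefixes override name-based routing
mistral_agent = Agent(
    name="Mistral Agent",
    instructions="Using Mistral",
    model="mistral/mistral-large-latest",  # illustrative model ID
    model_provider=provider,
)

groq_agent = Agent(
    name="Groq Agent",
    instructions="Using Llama via Groq",
    model="groq/llama3-70b-8192",  # illustrative model ID
    model_provider=provider,
)
```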

### Advanced Configuration

The provider supports additional configuration options:

```python
provider = LiteLLMProvider(
    api_key="your-litellm-api-key",
    base_url="http://localhost:8000",
    model_name="gpt-4",  # Default model
    use_responses=True,  # Use OpenAI Responses API format
    extra_headers={  # Additional headers
        "x-custom-header": "value"
    },
    drop_params=True,  # Drop unsupported params for specific models
)
```
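
`drop_params` mirrors LiteLLM's option of the same name: request parameters a given backend does not support are dropped rather than raising an error, which helps when the same agent definition is run against several providers.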

### Using Multiple Providers

You can use different providers for different agents in your workflow:

```python
import asyncio

from agents import Agent, Runner, OpenAIProvider, LiteLLMProvider

# OpenAI provider for direct OpenAI API access
openai_provider = OpenAIProvider()

# LiteLLM provider for other models
litellm_provider = LiteLLMProvider(
    api_key="your-litellm-api-key",
    base_url="http://localhost:8000",
)

# Agent using Claude through LiteLLM
analysis_agent = Agent(
    name="Analysis",
    instructions="Perform detailed analysis",
    model="claude-3",
    model_provider=litellm_provider,
)

# Agent using OpenAI directly; handoffs are declared on the agent
triage_agent = Agent(
    name="Triage",
    instructions="Route requests to appropriate agents",
    model="gpt-3.5-turbo",
    model_provider=openai_provider,
    handoffs=[analysis_agent],
)


async def main():
    result = await Runner.run(triage_agent, input="Analyze this data")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```

The LiteLLM provider makes it easy to use multiple LLM providers while maintaining a consistent interface and the full feature set of the Agents SDK, including handoffs, tools, and tracing.

examples/litellm/README.md (+52)
@@ -0,0 +1,52 @@
# LiteLLM Provider Examples

This directory contains examples demonstrating how to use the LiteLLM provider with the Agents SDK.

## Prerequisites

1. Install and run the LiteLLM proxy server:

   ```bash
   pip install litellm
   litellm --model ollama/llama2 --port 8000
   ```

2. Set up environment variables:

   ```bash
   # LiteLLM configuration
   export LITELLM_API_KEY="your-litellm-api-key"  # If required by your proxy
   export LITELLM_API_BASE="http://localhost:8000"

   # Provider API keys (as needed)
   export OPENAI_API_KEY="sk-..."
   export ANTHROPIC_API_KEY="sk-ant-..."
   export GEMINI_API_KEY="..."
   ```

## Examples

### Multi-Provider Workflow (`multi_provider_workflow.py`)

This example demonstrates using multiple LLM providers in a workflow:

1. A triage agent (using OpenAI directly) determines the task type
2. Based on the task type, it routes to specialized agents:
   - Summarization tasks → Claude (via LiteLLM)
   - Coding tasks → GPT-4 (via LiteLLM)
   - Creative tasks → Gemini (via LiteLLM)

To run:

```bash
python examples/litellm/multi_provider_workflow.py
```

The example will process three different types of requests to demonstrate the routing:

1. A summarization request about the French Revolution
2. A coding request to implement the Fibonacci sequence
3. A creative writing request about a time-traveling coffee cup

## Notes

- The LiteLLM provider automatically routes model names to their appropriate providers (e.g., `claude-3` → Anthropic, `gpt-4` → OpenAI)
- You can explicitly specify providers using prefixes (e.g., `anthropic/claude-3`, `openai/gpt-4`)
- The provider handles passing API keys and configuration through headers
- All Agents SDK features (handoffs, tools, tracing) work with the LiteLLM provider
examples/litellm/multi_provider_workflow.py (+133)
@@ -0,0 +1,133 @@
1+
"""
2+
This example demonstrates using multiple LLM providers in a workflow using LiteLLM.
3+
It creates a workflow where:
4+
1. A triage agent (using OpenAI directly) determines the task type
5+
2. Based on the task type, it routes to:
6+
- A summarization agent using Claude via LiteLLM
7+
- A coding agent using GPT-4 via LiteLLM
8+
- A creative agent using Gemini via LiteLLM
9+
"""
10+
11+
import asyncio
12+
import os
13+
from typing import Literal
14+
15+
from agents import Agent, Runner, OpenAIProvider, LiteLLMProvider
16+
from agents.agent_output import AgentOutputSchema
17+
from pydantic import BaseModel
18+
19+
20+
class TaskType(BaseModel):
21+
"""The type of task to be performed."""
22+
task: Literal["summarize", "code", "creative"]
23+
explanation: str
24+
25+
26+
class TaskOutput(BaseModel):
27+
"""The output of the task."""
28+
result: str
29+
provider_used: str
30+
31+
32+
# Set up providers
33+
openai_provider = OpenAIProvider(
34+
api_key=os.getenv("OPENAI_API_KEY")
35+
)
36+
37+
litellm_provider = LiteLLMProvider(
38+
api_key=os.getenv("LITELLM_API_KEY"),
39+
base_url=os.getenv("LITELLM_API_BASE", "http://localhost:8000")
40+
)
41+
42+
# Create specialized agents for different tasks
43+
triage_agent = Agent(
44+
name="Triage Agent",
45+
instructions="""
46+
You are a triage agent that determines the type of task needed.
47+
- For text analysis, summarization, or understanding tasks, choose 'summarize'
48+
- For programming, coding, or technical tasks, choose 'code'
49+
- For creative writing, storytelling, or artistic tasks, choose 'creative'
50+
""",
51+
model="gpt-3.5-turbo",
52+
model_provider=openai_provider,
53+
output_schema=AgentOutputSchema(TaskType),
54+
)
55+
56+
summarize_agent = Agent(
57+
name="Summarization Agent",
58+
instructions="""
59+
You are a summarization expert using Claude's advanced comprehension capabilities.
60+
Provide clear, concise summaries while preserving key information.
61+
""",
62+
model="claude-3", # Will be routed to Anthropic
63+
model_provider=litellm_provider,
64+
output_schema=AgentOutputSchema(TaskOutput),
65+
)
66+
67+
code_agent = Agent(
68+
name="Coding Agent",
69+
instructions="""
70+
You are a coding expert using GPT-4's technical capabilities.
71+
Provide clean, well-documented code solutions.
72+
""",
73+
model="gpt-4", # Will be routed to OpenAI
74+
model_provider=litellm_provider,
75+
output_schema=AgentOutputSchema(TaskOutput),
76+
)
77+
78+
creative_agent = Agent(
79+
name="Creative Agent",
80+
instructions="""
81+
You are a creative writer using Gemini's creative capabilities.
82+
Create engaging, imaginative content.
83+
""",
84+
model="gemini-pro", # Will be routed to Google
85+
model_provider=litellm_provider,
86+
output_schema=AgentOutputSchema(TaskOutput),
87+
)
88+
89+
90+
async def process_request(user_input: str) -> str:
91+
"""Process a user request using the appropriate agent."""
92+
93+
# First, triage the request
94+
triage_result = await Runner.run(
95+
triage_agent,
96+
input=f"What type of task is this request? {user_input}"
97+
)
98+
task_type = triage_result.output
99+
100+
# Route to the appropriate agent
101+
target_agent = {
102+
"summarize": summarize_agent,
103+
"code": code_agent,
104+
"creative": creative_agent,
105+
}[task_type.task]
106+
107+
# Process with the specialized agent
108+
result = await Runner.run(target_agent, input=user_input)
109+
return f"""
110+
Task Type: {task_type.task}
111+
Reason: {task_type.explanation}
112+
Result: {result.output.result}
113+
Provider Used: {result.output.provider_used}
114+
"""
115+
116+
117+
async def main():
118+
"""Run example requests through the workflow."""
119+
requests = [
120+
"Can you summarize the key points of the French Revolution?",
121+
"Write a Python function to calculate the Fibonacci sequence.",
122+
"Write a short story about a time-traveling coffee cup.",
123+
]
124+
125+
for request in requests:
126+
print(f"\nProcessing request: {request}")
127+
print("-" * 80)
128+
result = await process_request(request)
129+
print(result)
130+
131+
132+
if __name__ == "__main__":
133+
asyncio.run(main())

src/agents/__init__.py (+2)
@@ -44,6 +44,7 @@
 from .models.openai_chatcompletions import OpenAIChatCompletionsModel
 from .models.openai_provider import OpenAIProvider
 from .models.openai_responses import OpenAIResponsesModel
+from .models.litellm_provider import LiteLLMProvider
 from .result import RunResult, RunResultStreaming
 from .run import RunConfig, Runner
 from .run_context import RunContextWrapper, TContext
@@ -139,6 +140,7 @@ def enable_verbose_stdout_logging():
     "OpenAIChatCompletionsModel",
     "OpenAIProvider",
     "OpenAIResponsesModel",
+    "LiteLLMProvider",
     "AgentOutputSchema",
     "Computer",
     "AsyncComputer",
