Description
When a custom model provider is passed via the `RunConfig` in `Runner.run`, the provider is ignored for agents that are used as tools within a triage agent. The reason is that `Agent.as_tool` creates its own call to `Runner.run`, which does not receive the current run config. A fix would be to add an optional `run_config` argument to `Agent.as_tool`. Alternatively, if only the model provider matters, `Agent.as_tool` could take a `model_provider` parameter instead of the proposed `run_config` argument.

The current implementation simply ignores the model provider passed to the initial `Runner.run` and falls back to the default `OpenAIResponsesModel`.
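For illustration, here is a minimal sketch of the proposed fix, assuming `as_tool` wraps the sub-agent in a `function_tool` whose body calls `Runner.run` internally. The wrapper below is simplified and hypothetical, not the SDK's actual implementation:

```python
from agents import ItemHelpers, RunConfig, Runner, function_tool

# Hypothetical sketch of Agent.as_tool with the proposed run_config argument.
def as_tool(self, tool_name: str, tool_description: str,
            run_config: RunConfig | None = None):
    @function_tool(name_override=tool_name, description_override=tool_description)
    async def run_agent(input: str) -> str:
        # Forwarding run_config would preserve the caller's custom model provider
        # in the nested run instead of falling back to the default provider.
        result = await Runner.run(self, input=input, run_config=run_config)
        return ItemHelpers.text_message_outputs(result.new_items)

    return run_agent
```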
Debug information
- Agents SDK version: 0.0.13
- Python version: 3.11.11
Repro steps
Use the following script together with an OpenRouter API key to reproduce (or swap in your own Ollama-based provider so that you don't need an OpenRouter API key):
```python
import asyncio
import os

from openai import AsyncOpenAI

from agents import (
    Agent,
    ItemHelpers,
    MessageOutputItem,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner,
    set_tracing_disabled,
)

client = AsyncOpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPENROUTER_API_KEY"),
)
set_tracing_disabled(disabled=True)


class OpenRouterModelProvider(ModelProvider):
    """Always resolves to an OpenRouter-hosted chat completions model."""

    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(
            "meta-llama/llama-3.3-70b-instruct", openai_client=client
        )


spanish_agent = Agent(
    name="spanish_agent",
    instructions="You translate the user's message to Spanish",
    handoff_description="An english to spanish translator",
)

orchestrator_agent = Agent(
    name="orchestrator_agent",
    instructions=(
        "You are a translation agent. You use the tools given to you to translate. "
        "If asked for multiple translations, you call the relevant tools in order. "
        "You never translate on your own, you always use the provided tools."
    ),
    tools=[
        # The nested Runner.run inside as_tool does not see the run config below.
        spanish_agent.as_tool(
            tool_name="translate_to_spanish",
            tool_description="Translate the user's message to Spanish",
        ),
    ],
)

synthesizer_agent = Agent(
    name="synthesizer_agent",
    instructions="You inspect translations, correct them if needed, and produce a final concatenated response.",
)


async def main():
    msg = input("Hi! What would you like translated, and to which languages? ")

    # The custom model provider is passed here, but ignored by the tool call.
    orchestrator_result = await Runner.run(
        orchestrator_agent, msg, run_config=RunConfig(model_provider=OpenRouterModelProvider())
    )

    for item in orchestrator_result.new_items:
        if isinstance(item, MessageOutputItem):
            text = ItemHelpers.text_message_output(item)
            if text:
                print(f"  - Translation step: {text}")

    synthesizer_result = await Runner.run(synthesizer_agent, orchestrator_result.to_input_list())
    print(f"\n\nFinal response:\n{synthesizer_result.final_output}")


if __name__ == "__main__":
    asyncio.run(main())
```
This results in the following exception if no OpenAI API key is set:

```
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
```
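As a workaround, setting the model instance directly on the sub-agent seems to sidestep the issue, assuming an explicit `Model` instance on the agent takes precedence over provider resolution in the nested run:

```python
# Possible workaround (assumes an explicit Model instance on the agent
# takes precedence over provider resolution in the nested Runner.run):
spanish_agent = Agent(
    name="spanish_agent",
    instructions="You translate the user's message to Spanish",
    handoff_description="An english to spanish translator",
    model=OpenAIChatCompletionsModel(
        "meta-llama/llama-3.3-70b-instruct", openai_client=client
    ),
)
```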
Expected behavior
I would have expected `Agent.as_tool` to reuse the run config of the run it is invoked from. Failing that, I would have expected to be able to pass a run config to `as_tool` so that the custom model provider is still used.
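For example, with the proposed (hypothetical) `run_config` parameter, the call site could look like this:

```python
# Hypothetical call site if as_tool accepted a run_config argument:
spanish_agent.as_tool(
    tool_name="translate_to_spanish",
    tool_description="Translate the user's message to Spanish",
    run_config=RunConfig(model_provider=OpenRouterModelProvider()),
)
```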