Description
Please read this first
- Have you read the docs? Agents SDK docs -> Yes
- Have you searched for related issues? Others may have faced similar issues. -> Yes
Describe the bug
When I configure set_default_openai_api("chat_completions") in the OpenAI Agents SDK, the traces rendered in the UI switch from a well-formatted Markdown view to raw JSON. This does not happen when using the default Responses API.
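For clarity, the switch that triggers the behavior is the single call below (a minimal sketch; "responses" is the SDK default, and the only difference between the two runs is the value passed):

```python
from agents import set_default_openai_api

# Default ("responses") -> traces render as formatted Markdown in the UI
# set_default_openai_api("responses")

# Switching to Chat Completions -> traces render as raw JSON in the UI
set_default_openai_api("chat_completions")
```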
Debug information
- Agents SDK version: v0.0.9
- Python version (e.g. Python 3.13)
Repro steps
```python
...

def get_azure_openai_client():
    """
    Creates and returns an Azure OpenAI client instance.

    Returns:
        AsyncAzureOpenAI: Configured Azure OpenAI client
    """
    load_dotenv()
    return AsyncAzureOpenAI(
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    )


openai_client = get_azure_openai_client()
set_default_openai_client(openai_client, use_for_tracing=False)
set_default_openai_api("chat_completions")  # 👈 Switching to this causes the trace format change

...

rag_agent = Agent(
    name="rag_agent",
    instructions=PROMPT,
    tools=[get_stocks_information, get_current_datetime],
    model=os.getenv("AZURE_OPENAI_GPT_4O_MODEL"),
    model_settings=ModelSettings(
        temperature=0,
        parallel_tool_calls=True,
        max_tokens=4096,
    ),
    input_guardrails=[smartsource_input_guardrail],
)
```
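The traces shown below come from a normal agent run, roughly along these lines (the exact invocation is not included in the snippet above; Runner.run is the SDK's standard entry point, and the query string here is only illustrative):

```python
import asyncio

from agents import Runner


async def main():
    # Illustrative query; the actual input is not part of this report.
    # Uses the rag_agent defined in the repro snippet above.
    result = await Runner.run(rag_agent, "What is the latest stock information?")
    print(result.final_output)


asyncio.run(main())
```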
✅ Responses API (Expected Trace View)

❌ Chat Completions API (Raw JSON Trace)

Expected behavior
The trace output should remain consistently formatted and readable regardless of whether I use chat_completions or responses. Ideally, both should render structured, human-readable output (e.g., Markdown or a rich display), especially since the underlying content is similar in both cases.
Notes
- The model used in both cases is GPT-4o from Azure.
- This may be a regression or an unhandled case in the tracing renderer.
- If this is the expected behavior, are there plans to provide consistent trace formatting across APIs?
@rm-openai Please help! :)