Hi!

We want to use guardrail detection actions to track user behavior in our Strands Agents-based chat app.

We set the guardrail on our Bedrock account to detection mode only. However, we are unable to get the guardrail trace when using Strands Agents with LiteLLM (`LiteLLMModel`).

Note that we are self-hosting the LiteLLM proxy. In our case, we define the `guardrailConfig` on the request itself (example below) rather than in the `config.yaml` of the LiteLLM proxy.

When we access Bedrock directly, we were able to confirm that the guardrail is applied (e.g., by switching from detection mode to block and verifying that the input was blocked), but we still could not get the trace in the response.

Here is a working example using LiteLLM (https://docs.litellm.ai/docs/providers/bedrock#usage---bedrock-guardrails):
```python
import litellm

response = litellm.completion(
    model="hosted_vllm/bedrock/claude-sonnet-4",
    messages=[{"role": "user", "content": "Pretend you are a financial advisor"}],
    api_key="<api-key>",
    api_base="<url-base>",
    guardrailConfig={
        "guardrailIdentifier": "<guardrail-id>",  # The identifier (ID) for the guardrail.
        "guardrailVersion": "1",  # The version of the guardrail.
        "trace": "enabled",  # Can be "disabled" or "enabled".
    },
)
```
Is it possible to apply guardrail detection, and in particular to retrieve the guardrail trace, when using Strands Agents together with LiteLLM?

Any guidance or examples would be greatly appreciated. Many thanks in advance!