docs: Explain tool execution, validation, and retries #1578

Merged · 1 commit · Apr 28, 2025

Conversation

@angelol (Contributor) commented Apr 24, 2025

This PR adds a new section to the docs/tools.md documentation explaining the process of tool execution, validation, and retries within PydanticAI.

Justification: Currently, information about how tool execution errors (like ValidationError) and explicit retries (ModelRetry) are handled is hard to find via documentation search. I was looking for this information in the tools section of the documentation and couldn't find it even after several tries. Adding this dedicated section aims to improve discoverability and give users a clearer understanding of how tools handle errors and retries during execution.

Key points covered:

  • How tool arguments provided by the LLM are validated against the function signature using Pydantic.
  • The automatic generation of RetryPromptPart when a ValidationError occurs, allowing the LLM to correct parameters.
  • Introduction of the ModelRetry exception for tools to explicitly request a retry from their internal logic (e.g., for transient errors or invalid inputs not caught by type validation).
  • Clarification that both ValidationError and ModelRetry respect the configured retries setting.
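The flow in the bullet points above can be sketched with plain Pydantic. This is a toy stand-in, not PydanticAI's actual implementation: the `ModelRetry` class and `run_tool` loop below are illustrative (in real PydanticAI code, `ModelRetry` is imported from `pydantic_ai` and the agent itself drives the retry loop), and the tool and its arguments are made up for the example.

```python
from pydantic import ValidationError, validate_call


class ModelRetry(Exception):
    """Stand-in for pydantic_ai.ModelRetry: signals that the LLM should retry
    the tool call with corrected input."""


@validate_call  # Pydantic validates the LLM-supplied arguments against the signature
def get_weather(city: str, units: str = "metric") -> str:
    if units not in ("metric", "imperial"):
        # Invalid input that type validation alone can't catch ->
        # explicitly request a retry from the tool's own logic
        raise ModelRetry(f"Unknown units {units!r}; use 'metric' or 'imperial'.")
    return f"Sunny in {city} ({units})"


def run_tool(tool, args: dict, retries: int = 1):
    """Toy loop mirroring how both error kinds count against the same
    `retries` setting."""
    for attempt in range(retries + 1):
        try:
            return tool(**args)
        except (ValidationError, ModelRetry) as exc:
            if attempt == retries:
                raise  # retries exhausted -> the error propagates
            # In PydanticAI, the error message becomes a RetryPromptPart sent
            # back to the model so it can correct the parameters; here we only
            # report it, since there is no LLM to produce corrected arguments.
            print(f"retrying after: {exc}")
```

Calling `get_weather(city=123)` raises a `ValidationError` (wrong argument type), while `get_weather(city="Paris", units="kelvin")` raises `ModelRetry` from the tool body; both paths are subject to the same retry budget.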

Add section detailing how tool validation errors (ValidationError) and explicit retries (ModelRetry) are handled during execution to improve discoverability.
@hyperlint-ai bot commented Apr 24, 2025

PR Change Summary

Enhanced documentation on tool execution, validation, and retries in PydanticAI.

  • Added a new section explaining tool execution and validation processes.
  • Clarified handling of validation errors and retries with examples.
  • Introduced the concept of ModelRetry for explicit retry requests during tool execution.

Modified Files

  • docs/tools.md

How can I customize these reviews?

Check out the Hyperlint AI Reviewer docs for more information on how to customize the review.

If you just want to ignore it on this PR, you can add the hyperlint-ignore label to the PR. Future changes won't trigger a Hyperlint review.

Note on link checks: we only check the first 30 links in a file, and results are cached for several hours (so a page you just added may be reported as missing). If this affects you, add the hyperlint-ignore label to skip the link check for this PR.

@Kludex Kludex merged commit 433e1bc into pydantic:main Apr 28, 2025
19 checks passed
@Kludex (Member) commented Apr 28, 2025

Thanks! :)
