Retry 429 errors with exponential backoff #36

Open
jwadolowski opened this issue Sep 30, 2024 · 0 comments
As mentioned in #35, it's quite easy to lock yourself out and end up with a permanently broken terraform plan / terraform apply. After a bit of thinking I came to the conclusion that it'd be great to introduce retries when the request/minute threshold is hit. Even a slow-ish plan/apply is much better than the current behaviour.

Current state

  1. Terraform immediately fails as soon as the req/minute threshold is hit

Expected state

  1. Terraform should respect the API limit and keep retrying in the background with exponential backoff

Noteworthy considerations

  • it'd be great if the API informed the client about the remaining request budget (i.e. in the form of x-ratelimit-* HTTP headers; here's how the GitHub and OpenAI APIs implement that). As far as I can see, there's no such thing at the time of writing
  • does it make sense to track "in-flight" requests on the client (provider) side? Is this even feasible? Would it be reliable?
  • a feature flag (part of the provider block?) that enables/disables the retry logic (opt-out? It's hard for me to imagine that'd be the case, but I bet some users may prefer the current behaviour)
  • customizable timeouts and deadline exceeded errors (docs)