
Conversation

@DilumAluthge (Contributor) commented Sep 13, 2025

There are two places in the script where we hit a "search" endpoint (/search/issues). Search endpoints have that annoying 30 requests/minute rate limit (GitHub docs).

Before this PR:

  1. In one place, we have rate limit handling.
  2. In the other place, we don't have rate limit handling.

After this PR:

  1. We handle the rate limits in both places.
  2. We consolidate the rate limit logic into a single place.
  3. Instead of pessimistically always sleeping for 60 seconds, we ask GitHub how much longer it is until our rate limit resets, and then we sleep only for the minimum necessary amount of time (a rough sketch of the idea follows below).
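For reference, a minimal sketch of what the consolidated handling could look like (the helper name github_get_with_rate_limit and the retry loop are illustrative, not the script's actual code; it assumes HTTP.jl and GitHub's standard x-ratelimit-* response headers):

using HTTP, Dates

# Illustrative helper, not the script's actual code: perform a GET against the
# GitHub API and, if rate limited, sleep only until GitHub says the limit
# resets (the `x-ratelimit-reset` header, a Unix timestamp) rather than
# pessimistically sleeping a fixed 60 seconds.
function github_get_with_rate_limit(url; headers, max_attempts = 5)
    for attempt in 1:max_attempts
        resp = HTTP.request("GET", url;
                            headers = headers,
                            connect_timeout = 30, read_timeout = 60,
                            status_exception = false)
        remaining = something(tryparse(Int, HTTP.header(resp, "x-ratelimit-remaining", "")), 1)
        if resp.status in (403, 429) && remaining == 0
            reset_epoch = something(tryparse(Int, HTTP.header(resp, "x-ratelimit-reset", "")), 0)
            # Sleep only for the minimum necessary time (plus a one-second buffer).
            wait_seconds = max(reset_epoch - floor(Int, datetime2unix(now(UTC))), 0) + 1
            @info "Hit the rate limit; sleeping for $(wait_seconds) seconds (attempt $attempt of $max_attempts)"
            sleep(wait_seconds)
            continue
        end
        return resp
    end
    error("still rate limited after $max_attempts attempts")
end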

Also, as an unrelated change, we now rethrow() in some places where we previously caught and @warned. I think catching and warning can hide a more serious problem with the script, so rethrow()-ing is better.
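As a schematic illustration (process_pr is a hypothetical placeholder, not a function in the script), the change is essentially:

try
    process_pr(pr)
catch err
    # Before: log and continue, which can silently hide a real problem with the script.
    # @warn "Failed to process PR" exception = (err, catch_backtrace())
    # After: let the error propagate to the caller.
    rethrow()
end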

@DilumAluthge changed the title from "Handle another place where we hit rate limits" to "Handle another place where we hit rate limits, and consolidate rate limit logic in a single place" on Sep 13, 2025
@KristofferC (Owner)

It doesn't really make sense to me to try/catch, print a warning with a backtrace, and then rethrow the error unconditionally (which prints the backtrace again). What's the point of the try/catch at all? Also, I have never hit any rate limit when running authenticated. Did you get rate limited in some case?

@DilumAluthge (Contributor, Author) commented Sep 16, 2025

Did you get rate limited in some case?

Yeah, I got rate-limited (when authenticated with PAT) on the 1.10.11 backports branch, around this line:

Backporter/backporter.jl, lines 398 to 399 at commit 52691be:

req = HTTP.request("GET", "https://api.github.com/search/issues?q=$hash+type:pr+repo:$(config.repo)";
                   headers = headers, connect_timeout=30, read_timeout=60)

Let me drop all non-rate-limit changes from this PR, and clean it up.

@KristofferC (Owner)

Okay, I don't think this script has to do any automatic retrying at all in case of rate limit. Doesn't seem worth complicating the code with it.
