API rate limit exceeded for user ID #613
Comments
@supergeoff Do you see this warning for multiple users, or for the same one over and over again?
The same profile/user ID over and over again.
@supergeoff We are looking into this and trying to determine if this was caused by something on the plugin side.
The errors stopped on 26/11/2022 at 01:30:55. But since the error first appeared, and still as of today, the view on top of topics no longer works; it displays 0 for all counters even when I have PRs or unread messages.
From the last link above:
For the scenario of the "spammy" feature described below, the two points that relate to this plugin are 2 and 4. At the moment, the webapp portion of the plugin sends 4 separate HTTP requests to fetch the counts on the left-hand side of the application. If the user is using the Mattermost desktop app, this is the same as having 3 browser tabs open, so there are actually 12 requests happening semi-concurrently. These requests are sent during page load, on websocket reconnect, and on a 5-minute timer while connected. In order to satisfy point 2 in the numbered items above, we can add some logic to avoid concurrent requests; one possible shape of that logic is sketched below.
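The numbered list of options isn't preserved in this copy, but one common shape for "avoid concurrent requests" is request coalescing on the server: if several tabs ask for the same user's counts at once, only one upstream GitHub call is made and every caller shares the result. A minimal sketch in Go using golang.org/x/sync/singleflight; the SidebarCounts type and the fetch callback are hypothetical, not the plugin's actual code:

```go
package counts

import (
	"golang.org/x/sync/singleflight"
)

// SidebarCounts is a hypothetical container for the four LHS counters.
type SidebarCounts struct {
	PRs            int
	ReviewRequests int
	Assignments    int
	Unreads        int
}

var flight singleflight.Group

// getCountsCoalesced collapses concurrent requests for the same user into a
// single upstream fetch; all callers waiting on the same key receive the
// shared result instead of each triggering their own GitHub API calls.
func getCountsCoalesced(userID string, fetch func() (SidebarCounts, error)) (SidebarCounts, error) {
	v, err, _ := flight.Do(userID, func() (interface{}, error) {
		return fetch()
	})
	if err != nil {
		return SidebarCounts{}, err
	}
	return v.(SidebarCounts), nil
}
```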
For point 4: Given the improvements detailed above, I'm not sure if we need to do anything else, but we can certainly try to adhere to GitHub's documented guidance, which notes that these rate limits are specific to a given user and that a client receiving a rate-limit error should wait before making further requests. I'm thinking this is overkill, as I don't think we will be hitting their API in quick succession if the proposals for point 2 are implemented.
@mickmister A doubt regarding point 2 of the above post: do I need to implement points 1, 3, and 4 as well, or are these just the possible solutions and we need to implement only one of them?
So, according to your point above about the WebSocket event part: we update the cache and send these values directly in the WebSocket event, and if we click the refresh button we refresh the counts and update the cache again. In which case are we actually going to use the cache? If we are sending the counts in the WebSocket event itself, then the problem of multiple clients is already fixed, since there is no longer any need for different clients to call these APIs, and we are also updating the cache in 2 out of 3 cases. Can you please elaborate on that? Should I send the whole data in the WebSocket event, i.e. from all 4 APIs, or just the counts? I am not able to understand the need for a cache here, so please explain a bit about that. Thanks for the answer above, it was very helpful in understanding the issue; please feel free to correct me and ask any questions.
@Kshitij-Katiyar I was thinking we would change the frontend to fetch all 4 of the counts in one request, and the backend would serially fetch the data for each category within that request; see the sketch below.
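A rough sketch of that shape in Go; the handler name, header usage, and per-category fetchers are hypothetical stand-ins for the plugin's real server code:

```go
package counts

import (
	"encoding/json"
	"net/http"
)

// Plugin and the fetch* methods are hypothetical stand-ins.
type Plugin struct{}

func (p *Plugin) fetchOpenPRCount(userID string) (int, error)        { return 0, nil }
func (p *Plugin) fetchReviewRequestCount(userID string) (int, error) { return 0, nil }
func (p *Plugin) fetchAssignmentCount(userID string) (int, error)    { return 0, nil }
func (p *Plugin) fetchUnreadCount(userID string) (int, error)        { return 0, nil }

// handleSidebarCounts serves all four counters in a single response. The
// categories are fetched one after another, so one page load never produces
// a parallel burst of requests against the GitHub API.
func (p *Plugin) handleSidebarCounts(w http.ResponseWriter, r *http.Request) {
	userID := r.Header.Get("Mattermost-User-ID")

	fetchers := []struct {
		name  string
		fetch func(string) (int, error)
	}{
		{"prs", p.fetchOpenPRCount},
		{"review_requests", p.fetchReviewRequestCount},
		{"assignments", p.fetchAssignmentCount},
		{"unreads", p.fetchUnreadCount},
	}

	counts := map[string]int{}
	for _, f := range fetchers {
		n, err := f.fetch(userID) // each call completes before the next starts
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		counts[f.name] = n
	}
	json.NewEncoder(w).Encode(counts)
}
```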
This is hopefully avoided by the "jitter" aspect of what I'm suggesting. At runtime, we choose a random number between 0 and 10 and use that as the delay for that specific tab, as sketched below.
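In the plugin the jitter would live in the webapp's JavaScript, but the idea is small enough to sketch in Go, assuming the delay is measured in seconds (the comment doesn't state the unit):

```go
package counts

import (
	"math/rand"
	"time"
)

// scheduleJitteredFetch delays the initial counts fetch by a random 0-10
// second offset, so multiple tabs reconnecting at the same moment spread
// their requests out instead of hitting the API simultaneously.
func scheduleJitteredFetch(fetch func()) {
	delay := time.Duration(rand.Intn(11)) * time.Second // 0..10 inclusive
	time.AfterFunc(delay, fetch)
}
```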
Any event pertaining to the user could trigger an update to the notifications count, so we would want to keep that as updated as possible. I'm not sure if there's a performance concern with refreshing all counts for a given webhook event. I'd like to get @hanzei's opinion on this. This is definitely increasing the scope of what we're trying to fix here.
I thought this was every 5 minutes, though I can't seem to find the code that has this constant at the moment. Do you have a link to the code that waits on a timer to call the reconnect handler?
The cache is meant to defend against users requesting with multiple tabs in quick succession. Now that you point this out, I'm not sure if a cache is really helpful here. It depends on how often the reconnect handler is called, which you said is once an hour, so the load is negligible.
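For reference, the kind of cache being discussed could be a small per-user TTL map; a minimal sketch, with all names hypothetical:

```go
package counts

import (
	"sync"
	"time"
)

type cacheEntry struct {
	counts    map[string]int
	fetchedAt time.Time
}

// countsCache is a per-user cache with a short TTL, meant to absorb bursts
// from several tabs requesting counts in quick succession.
type countsCache struct {
	mu      sync.Mutex
	entries map[string]cacheEntry
	ttl     time.Duration
}

func newCountsCache(ttl time.Duration) *countsCache {
	return &countsCache{entries: map[string]cacheEntry{}, ttl: ttl}
}

// get returns cached counts while they are fresh; otherwise it calls fetch,
// stores the result, and returns it. Holding the lock across fetch also
// serializes concurrent misses, which is acceptable for a sketch.
func (c *countsCache) get(userID string, fetch func() (map[string]int, error)) (map[string]int, error) {
	c.mu.Lock()
	defer c.mu.Unlock()

	if e, ok := c.entries[userID]; ok && time.Since(e.fetchedAt) < c.ttl {
		return e.counts, nil
	}
	counts, err := fetch()
	if err != nil {
		return nil, err
	}
	c.entries[userID] = cacheEntry{counts: counts, fetchedAt: time.Now()}
	return counts, nil
}
```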
Might as well send all the data since it's already available.
@mickmister Link to the code containing the constant for the reconnect timer:
@mickmister This is the whole approach I have thought of to tackle this issue. Please let me know your thoughts about the following:
If we are sending the updated counts from the server inside the WebSocket event itself, then I think there is no need to create the jitter mentioned in point 2, since the API call will only happen when the user's internet connection comes back or when the user has been inactive for more than 1 hour, which I think is a very rare edge case. Please let me know your thoughts on this; a sketch of the WebSocket approach appears below.
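A sketch of the server side of that proposal, using the Mattermost plugin API's PublishWebSocketEvent. The event name, payload keys, and fetchAllCounts helper are assumptions for illustration, not the plugin's actual identifiers:

```go
package counts

import (
	"github.com/mattermost/mattermost-server/v6/model"
	"github.com/mattermost/mattermost-server/v6/plugin"
)

type Plugin struct {
	plugin.MattermostPlugin
}

// fetchAllCounts is a hypothetical helper that gathers the four counters.
func (p *Plugin) fetchAllCounts(userID string) (map[string]interface{}, error) {
	return map[string]interface{}{}, nil
}

// publishCountsEvent recomputes a user's sidebar counts (e.g. after an
// incoming GitHub webhook) and pushes them to all of that user's connected
// clients, so no client has to poll the plugin's HTTP endpoints.
func (p *Plugin) publishCountsEvent(userID string) {
	counts, err := p.fetchAllCounts(userID)
	if err != nil {
		p.API.LogWarn("Failed to refresh sidebar counts", "user_id", userID, "error", err.Error())
		return
	}

	p.API.PublishWebSocketEvent(
		"sidebar_counts_updated", // assumed event name
		counts,
		&model.WebsocketBroadcast{UserId: userID}, // only this user's clients
	)
}
```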
@Kshitij-Katiyar Note that a websocket reconnect can happen for any number of reasons, including something like a hiccup in middleware between the frontend and MM server. I'm still thinking we should add the jitter for this case of multiple tabs reconnecting.
I made this suggestion with the assumption that we would be using cached values for the counts. I'm not sure how much value the webhook-driven update brings by itself with regards to rate limiting. We can add this functionality, but if we're not adding the server-side cache, then I think the "auto-update based on incoming webhook event" is unrelated to the rate limiting issue and can be done in a separate PR. Also, about the "5 minute" thing I referenced: the webapp used to unconditionally call these reconnect handlers every 15 minutes, but that has since been removed: mattermost/mattermost-webapp#10386
* [MI-2481]:Fixed issue mattermost#681 on github
* [MI-2481]:Fixed review fixes
* [MI-2481]:Fixed review fixes
* [MI-2481]:Fixed review fixes
* [MI-2481]:Fixed issue #613 on github (#12)
* [MI-2481]:Fixed issue #681 on github
* [MI-2481]:Fixed review fixes
* [MI-2481]:Fixed review fixes
* [MI-2481]:Fixed review fixes
* [MI-2547]:Fixed review fixes for Github issue 613 and PR 626 (#14)
* [MI-2547]:Fixed review fixes for Github issue 613 and PR 626
* [MI-2547]:Fixed review fixes
* [MI-2547]:Fixed review fixes
* [MI-2582]:Fixed review comments for github issue #613 (#15)
* [MM-613]:Fixed review comments
* [MI-3072]:Converted the LHS APIs into GraphQL (#32)
* [MI-3012]: Converted user PRs API into GraphQL
* [MI-3012]: Fixed review comments
* [MI-3012]:fixed log message
* [MI-3035]: Converted the get assignment API into GraphQL
* [MI-3035]:Fixed self review comments
* [MI-3035]: Fixed review comments
* [MI-3072]:Converted review requested API to graphQL
* [MI-3072]:Combined all the graphQL queries
* [MI-3072]:Fixed CI errors
* [MI-3072]:Fixed review comments
* [MI-3072]:Fixed CI errors
* [MI-3072]:Fixed review comments
* [MI-3072]:Fixed review comments
* [MM-613]:Changed namings
* [MM-613]:Fixed review comments
* [MM-613]:Fixed review comments
* [MM-613]:Fixed panic error
* [MM-613]:Fixed review comments
* [MM-613]:Fixed lint errors
* [MM-613]:Fixed review comment
* [MM-613]:Fixed review comments
* [MM-613]:Fixed review comments
Hello,
My Mattermost logs are full of "API rate limit exceeded" warning messages, and only for my user (I'm the Mattermost admin):
{"timestamp":"2022-11-22 18:05:34.007 Z","level":"warn","msg":"Failed to search for review","caller":"app/plugin_api.go:977","plugin_id":"github","userid":"XXX","github username":"username","error":"GET https://api.github.com/search/issues?q=is%3Apr+is%3Aopen+review-requested%3Ausername+archived%3Afalse+org%3Aorganization: 403 API rate limit exceeded for user ID XXX. [rate reset in 24s]","query":"is:pr is:open review-requested:username archived:false org:organization"}
Any idea on whether this is a common limitation or if it is a bad setup on my end?
I followed the documentation for the setup.
Regards,