
Update best-practices.md - parallel requests #176


Open · wants to merge 1 commit into `main`

Conversation

guiguilechat (Contributor)

an optional extended description
@guiguilechat guiguilechat requested review from a team as code owners June 6, 2025 13:18

However, doing so can result in a lot of errors (timeouts) if ESI is having a hard time, potentially ending in a temporary ban (420) and therefore preventing you from accessing other resources for 30s on average.
To avoid this issue, you can query an endpoint that has no effect on the monolith, typically [the status](https://esi.evetech.net/ui/#/Status/get_status), not only to be sure the server is up before hammering it with requests, but also to check your `X-ESI-Error-Limit-Remain` response header to adapt the query pool size.
For example, if ESI is in "vip" mode, does not respond, or you have 0 remaining errors, then there is no point in sending the next batch of requests. On the other hand, if it responds with 100 remaining errors and the server is not in "vip" mode, the static endpoints (solar systems, types, etc.) can be sent 1000 queries at a time.
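The check described above could be sketched like this (a minimal sketch using Python's standard library; the `next_batch_size` scaling policy and the `User-Agent` value are illustrative assumptions, not part of ESI):

```python
import json
import urllib.request

# Public status endpoint; its response body may contain a "vip" flag.
STATUS_URL = "https://esi.evetech.net/latest/status/"

def probe_esi(url: str = STATUS_URL):
    """Send one cheap GET and return (server_up, vip_mode, errors_remaining)."""
    req = urllib.request.Request(url, headers={"User-Agent": "batch-sizer-example"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            body = json.load(resp)
            remaining = int(resp.headers.get("X-ESI-Error-Limit-Remain", "0"))
            return True, bool(body.get("vip", False)), remaining
    except Exception:
        # No response at all: treat as down, with no error budget.
        return False, False, 0

def next_batch_size(server_up: bool, vip: bool, errors_remaining: int,
                    max_batch: int = 1000) -> int:
    """Adapt the query pool size: skip the batch entirely when the server
    is down, in VIP mode, or the error budget is exhausted; otherwise scale
    the batch with the remaining error budget (a hypothetical policy)."""
    if not server_up or vip or errors_remaining == 0:
        return 0
    return min(max_batch, errors_remaining * 10)
```

With 100 remaining errors and no VIP mode this yields the full batch of 1000; any failure condition yields 0, i.e. back off instead of sending the next batch.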
Member

The status endpoint shows if the game server is in VIP mode, not if ESI is in VIP mode.

Contributor Author

@steven-noorbergen copied from the swagger:

> If the server is in VIP mode

So yeah reword it as you like.

When requesting the same resource for different parameters, for example when [requesting the public information](https://esi.evetech.net/ui/#/Character/get_characters_character_id) of several characters, or [fetching the items](https://esi.evetech.net/ui/#/Contracts/get_contracts_public_items_contract_id) of several contracts [for a given region](https://esi.evetech.net/ui/#/Contracts/get_contracts_public_region_id), it's a good idea to send the requests in parallel to avoid having your application/client wait for too long.

However, doing so can result in a lot of errors (timeouts) if ESI is having a hard time, potentially ending in a temporary ban (420) and therefore preventing you from accessing other resources for 30s on average.
To avoid this issue, you can query an endpoint that has no effect on the monolith, typically [the status](https://esi.evetech.net/ui/#/Status/get_status), not only to be sure the server is up before hammering it with requests, but also to check your `X-ESI-Error-Limit-Remain` response header to adapt the query pool size.
Member

It might be worth noting that you shouldn't query the status endpoint before every request.

Contributor Author

It's implied by "before hammering it with requests". Adding it would make the sentence too verbose, I think.


## Parallel requests

When requesting the same resource for different parameters, for example when [requesting the public information](https://esi.evetech.net/ui/#/Character/get_characters_character_id) of several characters, or [fetching the items](https://esi.evetech.net/ui/#/Contracts/get_contracts_public_items_contract_id) of several contracts [for a given region](https://esi.evetech.net/ui/#/Contracts/get_contracts_public_region_id), it's a good idea to send the requests in parallel to avoid having your application/client wait for too long.
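A bounded fan-out like the one this paragraph describes might look like the following sketch (the `get_character` helper and `User-Agent` value are illustrative; the route is the public-information endpoint linked above):

```python
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ESI = "https://esi.evetech.net/latest"

def get_character(character_id: int) -> dict:
    """GET the public information of one character."""
    url = f"{ESI}/characters/{character_id}/"
    req = urllib.request.Request(url, headers={"User-Agent": "parallel-requests-example"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def fetch_all(fetch, ids, max_workers: int = 20):
    """Fan the same request out over many ids with a bounded worker pool.

    `fetch` is any one-argument callable, e.g. `get_character`;
    `max_workers` caps concurrency so a struggling ESI is not flooded.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(ids, pool.map(fetch, ids)))

# results = fetch_all(get_character, list_of_character_ids)
```

Because `fetch` is injected, the same fan-out works for any of the endpoints above, and the pool size can be fed from the error-limit check described earlier.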
Member

This would also need a note on using a sane number of threads when parallelizing. Trying to spam ESI with thousands of parallel requests at the same time is not in the spirit of what ESI is for. ESI is a shared resource, so you should only go as fast as you realistically need to.

Contributor Author
@guiguilechat guiguilechat Jun 6, 2025

Sending 10000 requests at once is what I realistically need for those static endpoints.
I'm not gonna wait 2 hours to get the modifications to systems, types, etc. just because someone picked arbitrary numbers that say otherwise.

When rate limits exist, then yes, those rate limits must be followed. Until then, the sky is the limit, and trying to access thousands of static items in parallel is actually what ESI is designed for.
