Discourage developers from requesting large sets of data #271
Comments
Let's talk potential solutions. Remove the iteration API and
The iterate methods ‒ by default ‒ produce 500 values per minute. This hopefully discourages developers from using this API to request an unreasonable amount of values (payments/customers/…). See: #271
@Pimm I know this is a closed thread, however I am genuinely interested in how we (developers) are supposed NOT to fetch large sets of data, as we have no other choice but to do it like this when we have nothing like search, filters, or query parameters to work with. Did I miss something?
Background
Since version 2.0.0 of this library, calling a paginated endpoint returns an array with added `nextPageCursor` and `previousPageCursor` properties. These properties are great for implementing a paginated view.

The array also has added `nextPage` and `previousPage` methods (similar to `next` and `previous` in the PHP client). The only purpose of these methods is to request large sets of data ‒ sets which span across multiple pages.

Why would a developer request large sets of data? They might need filtered data (all Italian customers); or they might be looking for a specific value but don't know the ID (getting a payment by its foreign ID in the metadata).
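As a sketch of how those cursor properties support a paginated view: the `listPayments` function below is a hypothetical mock standing in for a real client call, and it uses plain offsets as cursors to stay small (real cursors are value IDs); only the shape of the returned array mirrors what the paragraph above describes.

```javascript
// Hypothetical mock of a paginated endpoint call. It resolves to an
// array with nextPageCursor and previousPageCursor attached, mirroring
// the shape described above. Offsets stand in for real cursors.
function listPayments({ from = 0, limit = 3 } = {}) {
  const all = ['tr_a', 'tr_b', 'tr_c', 'tr_d', 'tr_e'];
  const page = all.slice(from, from + limit);
  page.nextPageCursor = from + limit < all.length ? from + limit : undefined;
  page.previousPageCursor = from > 0 ? Math.max(0, from - limit) : undefined;
  return Promise.resolve(page);
}

// A paginated view only has to remember the cursor; pressing "next"
// simply re-requests the endpoint with it.
async function showPages() {
  const first = await listPayments();
  const second = await listPayments({ from: first.nextPageCursor });
  return { first: [...first], second: [...second] };
}
```

The point of the design is that the view never accumulates pages; each page is requested on demand from its cursor.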
Ideally, the Mollie API would cover these cases. The clients would hand off the filtering/searching work to the Mollie API, which would send back the result. However, the Mollie API currently doesn't do any filtering or searching (not even trivial examples), and besides that it might never cover all use cases.
In version 3.6.0 of this library, we introduced the iteration API (the `iterate` method), which is essentially the JavaScript-native alternative for `nextPage` (and `previousPage`).

Usage:
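A minimal sketch of how the iteration API is consumed ‒ assuming `iterate` returns an async iterable, per the 3.6.0 release. The generator below is a mock standing in for the real client, so the snippet is self-contained:

```javascript
// Mock of the iterate method: yields values one by one, pulling in a
// new "page" behind the scenes whenever the current one runs out.
async function* iterate() {
  const pages = [['tr_1', 'tr_2'], ['tr_3', 'tr_4']]; // two mock pages
  for (const page of pages) {
    yield* page;
  }
}

// Values are consumed with for await…of; the page boundaries (and thus
// the underlying requests) are invisible to the caller.
async function collect() {
  const ids = [];
  for await (const id of iterate()) {
    ids.push(id);
  }
  return ids;
}

collect().then(console.log); // → [ 'tr_1', 'tr_2', 'tr_3', 'tr_4' ]
```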
The problem
Mollie has experienced incidents where developers were making so many requests in rapid succession that the API was having performance and reliability issues.
Developers have been able to write this horrible code since 2018:
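The pattern in question ‒ requesting page after page in rapid succession until a match turns up ‒ could be sketched like this. The `fetchPage` function is a hypothetical mock standing in for the real requests, and the metadata lookup mirrors the foreign-ID example from the Background section:

```javascript
// Hypothetical mock: each call represents one HTTP request for a page
// of 250 payments (1,000 mock payments in total).
let requestCount = 0;
function fetchPage(offset) {
  requestCount += 1;
  const page = [];
  for (let i = offset; i < Math.min(offset + 250, 1000); i++) {
    page.push({ id: `tr_${i}`, metadata: { orderId: i } });
  }
  return Promise.resolve(page);
}

// The 2018-era pattern: explicitly request the next page over and over
// until the sought payment is found.
async function findPaymentByOrderId(orderId) {
  let offset = 0;
  while (true) {
    const page = await fetchPage(offset);
    if (page.length === 0) {
      return undefined;
    }
    const match = page.find((payment) => payment.metadata.orderId === orderId);
    if (match !== undefined) {
      return match;
    }
    offset += page.length;
  }
}
```

Finding order 700 this way costs three requests against the mock; against a real account with a long payment history, it could be hundreds.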
With the iteration API, they can write:
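A sketch of the equivalent with the iteration API, using the same hypothetical mock data; the generator stands in for the real `iterate` method:

```javascript
// Mock of the iterate method: each chunk of 250 yielded values
// represents one request to the API (1,000 mock payments in total).
let requestCount = 0;
async function* iterate() {
  for (let offset = 0; offset < 1000; offset += 250) {
    requestCount += 1;
    for (let i = offset; i < offset + 250; i++) {
      yield { id: `tr_${i}`, metadata: { orderId: i } };
    }
  }
}

// Terser than the 2018 version, with the requests hidden inside the
// iterator. Because the loop never breaks, it drains every remaining
// page even after the sought payment has been found.
async function findPaymentByOrderId(orderId) {
  let match;
  for await (const payment of iterate()) {
    if (payment.metadata.orderId === orderId) {
      match = payment;
    }
  }
  return match;
}
```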
(Note that there is no `break` keyword in the for-loop.)

The version with the iteration API is two lines shorter, and is less explicit about how many requests it makes (there is no explicit `nextPage` call).

People within Mollie have expressed concern that the problem they're already having ‒ developers making many requests in rapid succession ‒ may worsen now that we have the iteration API.
Caveats
Developers are using this client voluntarily. They can opt out of using it at any time, and just use curl to communicate with the Mollie API instead. Nothing we can do in the client prevents a developer from performing a DDoS attack on the Mollie API.

What we can do is make developers aware of the stress they put on the Mollie API. And perhaps we can make it easier to get the job done within a reasonable number of requests.