@uppy/aws-s3-multipart: retry signature request #4691
Conversation
Can we add a test to the e2e suite to ensure there are no regressions?
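A minimal sketch of what such a test could look like, assuming the Cypress-based e2e suite; the page URL, signing endpoint pattern, fixture path, and DOM selectors below are placeholders that would need to match the existing test app.

```js
// Sketch under assumptions: Cypress-based e2e suite, a hypothetical page at
// '/dashboard-aws-multipart', and a signing endpoint matched by '**/sign*'.
describe('aws-s3-multipart signature retry', () => {
  it('completes the upload even if a signature request fails once', () => {
    let signCalls = 0
    cy.intercept('GET', '**/sign*', (req) => {
      signCalls += 1
      if (signCalls === 1) {
        // Simulate a transient failure on the first signing request.
        req.reply({ statusCode: 500, body: { error: 'simulated failure' } })
      } else {
        req.continue()
      }
    }).as('sign')

    cy.visit('/dashboard-aws-multipart')
    cy.get('input[type=file]').selectFile('cypress/fixtures/images/cat.jpg', { force: true })
    cy.get('.uppy-StatusBar-actionBtn--upload').click()

    // The retried signature request should let the upload finish.
    cy.get('.uppy-StatusBar-statusPrimary', { timeout: 30000 }).should('contain', 'Complete')
  })
})
```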
Co-authored-by: Antoine du Hamel <antoine@transloadit.com>
lgtm except 1 nit
```js
throwIfAborted(signal)

const signatureRetryIterator = this.#retryDelays.values()
const chunkRetryIterator = this.#retryDelays.values()
```
This change makes it so the retry iterator will be reset for each chunk, i.e. if there's a failure on one chunk, it won't affect the other chunks. I don't think it's a good change.
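A simplified illustration of the difference; the delay values here are only an example, not necessarily the plugin's defaults.

```js
// Example values only; the real delays come from the plugin's `retryDelays` option.
const retryDelays = [0, 1000, 3000, 5000]

// Shared iterator: every failing chunk pulls from the same sequence,
// so the whole upload gets at most four retries in total.
const sharedRetryIterator = retryDelays.values()

// Per-chunk iterator (what this change effectively does): each chunk starts
// with its own fresh sequence, so a failure on one chunk does not shrink
// the retry budget of the others.
function uploadChunk(chunk) {
  const chunkRetryIterator = retryDelays.values()
  // ...retry logic for this chunk would pull its delays from `chunkRetryIterator`...
}
```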
I thought this was what we wanted 🤔
Let's say we upload 5K chunks; then only 5 chunks in total are allowed to fail? Why shouldn't any individual chunk be allowed to fail and retry?
IMO it makes sense to increase the time before retrying if any chunk has failed; otherwise we increase the chances of running into rate-limiting issues. We could also just go with it and see if anyone complains.
In the future, if/when we implement a centralised retrying mechanism, that mechanism could store shared state whenever some request fails due to throttling, keyed per domain (hostname). Then it could do something like exponentially slowing down all retries to that particular domain name only.
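A rough sketch of what that shared per-hostname state could look like; this is only an idea, not an existing Uppy API.

```js
// Idea sketch only: a shared registry of backoff delays keyed by hostname.
const backoffPerHost = new Map()

// Called whenever a request fails with a throttling response (e.g. HTTP 429).
function reportThrottled(url) {
  const { hostname } = new URL(url)
  const current = backoffPerHost.get(hostname) ?? 1_000
  // Exponentially slow down retries to this hostname only, capped at 60s.
  backoffPerHost.set(hostname, Math.min(current * 2, 60_000))
}

// Called on a successful request to reset the state for that hostname.
function reportSuccess(url) {
  backoffPerHost.delete(new URL(url).hostname)
}

// Awaited before retrying any request to the given URL.
async function delayBeforeRetry(url) {
  const delay = backoffPerHost.get(new URL(url).hostname)
  if (delay) await new Promise((resolve) => setTimeout(resolve, delay))
}
```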
Co-authored-by: Mikael Finstad <finstaden@gmail.com>
Screencast.from.2023-09-27.23-19-41.webm

For me it doesn't really retry locally, just fails until I press “retry”. Am I doing something wrong?
It looks like you might be having a CORS error; maybe you need to fix that first.
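For reference, in case the problem really is CORS: browser-based multipart uploads generally require the bucket's CORS rules to allow PUT/POST from the app's origin and to expose the `ETag` header. The snippet below is a generic sketch using the AWS SDK v3; the bucket name, region, and origin are placeholders, not values from this PR.

```js
// Generic example only: bucket name, region, and origin are placeholders.
import { S3Client, PutBucketCorsCommand } from '@aws-sdk/client-s3'

const client = new S3Client({ region: 'us-east-1' })

await client.send(
  new PutBucketCorsCommand({
    Bucket: 'my-uppy-bucket',
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedOrigins: ['https://my-app.example.com'],
          AllowedMethods: ['GET', 'PUT', 'POST', 'DELETE'],
          AllowedHeaders: ['*'],
          // Exposing ETag lets the client collect part ETags when
          // completing a multipart upload.
          ExposeHeaders: ['ETag'],
          MaxAgeSeconds: 3000,
        },
      ],
    },
  }),
)
```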
This error was happening in
| Package                | Version | Package                   | Version |
| ---------------------- | ------- | ------------------------- | ------- |
| @uppy/audio            | 1.1.3   | @uppy/store-default       | 3.0.4   |
| @uppy/aws-s3           | 3.3.1   | @uppy/store-redux         | 3.0.4   |
| @uppy/aws-s3-multipart | 3.7.0   | @uppy/svelte              | 3.1.0   |
| @uppy/companion        | 4.9.1   | @uppy/thumbnail-generator | 3.0.5   |
| @uppy/companion-client | 3.4.1   | @uppy/transloadit         | 3.3.1   |
| @uppy/compressor       | 1.0.4   | @uppy/tus                 | 3.3.1   |
| @uppy/core             | 3.5.1   | @uppy/utils               | 5.5.1   |
| @uppy/dashboard        | 3.5.4   | @uppy/webcam              | 3.3.3   |
| @uppy/image-editor     | 2.2.1   | @uppy/xhr-upload          | 3.4.1   |
| @uppy/remote-sources   | 1.0.4   | uppy                      | 3.17.0  |

- meta: add Prettier (Antoine du Hamel / #4707)
- @uppy/aws-s3-multipart: retry signature request (Merlijn Vos / #4691)
- meta: update linter config to cover more files (Mikael Finstad / #4706)
- @uppy/image-editor: ImageEditor.jsx - remove 1px black lines (Evgenia Karunus / #4678)
- meta: delete `.yarn/releases/yarn-3.4.1.cjs` (Antoine du Hamel)
- meta: fix linter errors (Antoine du Hamel / #4704)
- @uppy/utils: test: migrate to Vitest for Uppy core and Uppy plugins (Antoine du Hamel / #4700)
- meta: run corepack yarn (Mikael Finstad)
- @uppy/companion: upgrade TS target (Mikael Finstad / #4670)
- @uppy/companion: use deferred length for tus streams (Mikael Finstad / #4697)
- @uppy/companion-client: fix a refresh token race condition (Mikael Finstad / #4695)
- meta: add companion hotfix doc (Mikael Finstad / #4683)
- meta: run type checks also for companion and add files to docker (Mikael Finstad / #4688)
- @uppy/svelte: revert breaking change (Antoine du Hamel / #4694)
- meta: Update yarn.lock (Artur Paikin)
- @uppy/companion: fix instagram/facebook auth error regression (Mikael Finstad / #4692)
- @uppy/aws-s3-multipart: aws-s3-multipart - call `#setCompanionHeaders` in `setOptions` (jur-ng / #4687)
- @uppy/svelte: Upgrade Svelte to 4 (frederikhors / #4652)
- @uppy/companion: add test endpoint for dynamic oauth creds (Mikael Finstad / #4667)
- meta: fix VITE_COMPANION_ALLOWED_HOSTS (Mikael Finstad / #4690)
- @uppy/companion: fix edge case for pagination on root (Mikael Finstad / #4689)
- @uppy/companion: fix onedrive pagination (Mikael Finstad / #4686)
* main:
  - Bucket fn also remote files (#4693)
  - fixup! Added Companion OAuth Key type (#4668)
  - Added Companion OAuth Key type (#4668)
  - meta: check for formatting in CI (#4714)
  - meta: bump get-func-name from 2.0.0 to 2.0.2 (#4709)
  - meta: run Prettier on existing files (#4713)
  - Release: uppy@3.17.0 (#4716)
  - meta: add Prettier (#4707)
  - @uppy/aws-s3-multipart: retry signature request (#4691)
This is a tricky one. Using `RateLimitedQueue` is complex and unintuitive, and we also duplicate this logic between tus and S3. We cannot reuse the existing `#shouldRetry` as it's very tightly coupled and full of side effects. It's also odd that there is a global retry iterator, which takes the default or user-provided `retryDelays` and turns it into an iterator, as this seems to unnecessarily share the retry interval between all requests. There is an open issue about all of this: #4173.

For now this fix is fine.
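For illustration, a stripped-down version of the pattern being criticised, where one iterator built from `retryDelays` is shared by every request; this is not the actual plugin internals.

```js
// Stripped-down illustration, not the real plugin code.
const retryDelays = [0, 1000, 3000, 5000]
const retryDelayIterator = retryDelays.values()

async function withGlobalRetry(doRequest) {
  for (;;) {
    try {
      return await doRequest()
    } catch (err) {
      // Every failing request, whichever chunk or signature it belongs to,
      // advances the same iterator, so unrelated requests share one budget.
      const { value: delay, done } = retryDelayIterator.next()
      if (done) throw err
      await new Promise((resolve) => setTimeout(resolve, delay))
    }
  }
}
```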