@uppy/aws-s3-multipart: retry signature request #4691

Merged · 8 commits · Sep 28, 2023
e2e/cypress/integration/dashboard-aws-multipart.spec.ts (22 changes: 15 additions, 7 deletions)
@@ -72,12 +72,20 @@ describe('Dashboard with @uppy/aws-s3-multipart', () => {
cy.get('.uppy-StatusBar-statusPrimary').should('contain', 'Upload failed')

cy.intercept('POST', '/s3/multipart', { statusCode: 200, times: 1, body: JSON.stringify({ key:'mocked-key-attempt3', uploadId:'mocked-uploadId-attempt3' }) }).as('createMultipartUpload-attempt3')
- cy.intercept('GET', '/s3/multipart/mocked-uploadId-attempt3/1?key=mocked-key-attempt3', {
-   statusCode: 200,
-   headers: {
-     ETag: 'ETag-attempt3',
-   },
-   body: JSON.stringify({ url:'/put-success-attempt3', expires:8 }),
+ let intercepted = 0
+ cy.intercept('GET', '/s3/multipart/mocked-uploadId-attempt3/1?key=mocked-key-attempt3', (req) => {
+   if (intercepted++ < 2) {
+     // Ensure that Uppy can recover from at least 2 network errors at this stage.
+     req.destroy()
+     return
+   }
+   req.reply({
+     statusCode: 200,
+     headers: {
+       ETag: 'ETag-attempt3',
+     },
+     body: JSON.stringify({ url:'/put-success-attempt3', expires:8 }),
+   })
}).as('signPart-attempt3')
cy.intercept('PUT', '/put-success-attempt3', {
statusCode: 200,
@@ -92,7 +100,7 @@ describe('Dashboard with @uppy/aws-s3-multipart', () => {
}),
}).as('completeMultipartUpload-attempt3')
cy.get('.uppy-StatusBar-actions > .uppy-c-btn').click()
- cy.wait(['@createMultipartUpload-attempt3', '@signPart-attempt3', '@put-attempt3', '@completeMultipartUpload-attempt3'])
+ cy.wait(['@createMultipartUpload-attempt3', ...Array(3).fill('@signPart-attempt3'), '@put-attempt3', '@completeMultipartUpload-attempt3'])
cy.get('.uppy-StatusBar-statusPrimary').should('contain', 'Complete')
})

packages/@uppy/aws-s3-multipart/src/index.js (44 changes: 33 additions, 11 deletions)
@@ -76,7 +76,7 @@ class HTTPCommunicationQueue {

#requests

- #retryDelayIterator
+ #retryDelays

#sendCompletionRequest

@@ -112,7 +112,7 @@ class HTTPCommunicationQueue {
this.#sendCompletionRequest = requests.wrapPromiseFunction(options.completeMultipartUpload, { priority:1 })
}
if ('retryDelays' in options) {
- this.#retryDelayIterator = options.retryDelays?.values()
+ this.#retryDelays = options.retryDelays ?? []
}
if ('uploadPartBytes' in options) {
this.#uploadPartBytes = requests.wrapPromiseFunction(options.uploadPartBytes, { priority:Infinity })
@@ -122,7 +122,7 @@
}
}

- async #shouldRetry (err) {
+ async #shouldRetry (err, retryDelayIterator) {
const requests = this.#requests
const status = err?.source?.status

@@ -137,7 +137,7 @@
// more than one request in parallel, to give slower connection a chance
// to catch up with the expiry set in Companion.
if (requests.limit === 1 || this.#previousRetryDelay == null) {
- const next = this.#retryDelayIterator?.next()
+ const next = retryDelayIterator.next()
if (next == null || next.done) {
return false
}
@@ -156,7 +156,7 @@
} else if (status === 429) {
// HTTP 429 Too Many Requests => to avoid the whole download to fail, pause all requests.
if (!requests.isPaused) {
- const next = this.#retryDelayIterator?.next()
+ const next = retryDelayIterator.next()
if (next == null || next.done) {
return false
}
@@ -175,7 +175,7 @@
}
} else {
// Other error code means the request can be retried later.
- const next = this.#retryDelayIterator?.next()
+ const next = retryDelayIterator.next()
if (next == null || next.done) {
return false
}
@@ -348,14 +348,36 @@ class HTTPCommunicationQueue {
async uploadChunk (file, partNumber, chunk, signal) {
throwIfAborted(signal)
const { uploadId, key } = await this.getUploadId(file, signal)
throwIfAborted(signal)

+ const signatureRetryIterator = this.#retryDelays.values()
+ const chunkRetryIterator = this.#retryDelays.values()
Review thread on the line above:

Contributor: This change makes it so the retry iterator is reset for each chunk, i.e. if there's a failure on one chunk, it won't affect the other chunks. I don't think it's a good change.

Member (author): I thought this is what we wanted 🤔 Let's say we upload 5K chunks; then only 5 chunks in total are allowed to fail? Why shouldn't any chunk be allowed to fail and retry?

Contributor: IMO it makes sense to increase the time before retrying if any chunk has failed, otherwise we increase the chances of running into rate-limiting issues. We could also just go with it and see if anyone complains.

Contributor: In the future, if/when we implement a centralised retrying mechanism, it could save shared state whenever some request fails due to throttling, stored per domain (hostname). It could then, for example, exponentially slow down all retries to that particular domain only.

+ const shouldRetrySignature = () => {
+   const next = signatureRetryIterator.next()
+   if (next == null || next.done) {
+     return null
+   }
+   return next.value
+ }

for (;;) {
throwIfAborted(signal)
const chunkData = chunk.getData()
const { onProgress, onComplete } = chunk
+ let signature

- const signature = await this.#fetchSignature(this.#getFile(file), {
-   uploadId, key, partNumber, body: chunkData, signal,
- }).abortOn(signal)
+ try {
+   signature = await this.#fetchSignature(this.#getFile(file), {
+     uploadId, key, partNumber, body: chunkData, signal,
+   }).abortOn(signal)
+ } catch (err) {
+   const timeout = shouldRetrySignature()
+   if (timeout == null || signal.aborted) {
+     throw err
+   }
+   await new Promise(resolve => setTimeout(resolve, timeout))
+   // eslint-disable-next-line no-continue
+   continue
+ }

throwIfAborted(signal)
try {
@@ -366,7 +388,7 @@
}).abortOn(signal),
}
} catch (err) {
- if (!await this.#shouldRetry(err)) throw err
+ if (!await this.#shouldRetry(err, chunkRetryIterator)) throw err
}
}
}
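To make the trade-off in the review thread above concrete, here is a minimal sketch, not Uppy's actual implementation and with made-up helper names, of the two retry policies being compared: one iterator shared by every chunk versus a fresh iterator per chunk, both driven by the same retryDelays schedule.

```js
// Illustrative sketch only; `retryWith` and `signPart` are hypothetical names.
const retryDelays = [0, 1000, 3000, 5000]

// Shared policy: one iterator for the whole upload, so at most
// retryDelays.length retries are available across all chunks combined.
const sharedIterator = retryDelays.values()

// Per-chunk policy (what this PR does for signing and chunk uploads):
// each chunk gets a fresh iterator and its own full retry schedule.
const makePerChunkIterator = () => retryDelays.values()

// Either way, each failed attempt consumes the next delay from the
// iterator; a `done` iterator means "give up and rethrow".
async function retryWith (iterator, attempt) {
  for (;;) {
    try {
      return await attempt()
    } catch (err) {
      const { value: delay, done } = iterator.next()
      if (done) throw err
      await new Promise((resolve) => setTimeout(resolve, delay))
    }
  }
}

// Usage sketch: per-chunk budget vs. a single shared budget.
// await retryWith(makePerChunkIterator(), () => signPart(chunk))
// await retryWith(sharedIterator, () => signPart(chunk))
```

With the shared iterator, failures anywhere in the upload consume a single budget and the back-off keeps growing, which is gentler on a rate-limited endpoint; with per-chunk iterators, as in this diff, each chunk and each signature request can exhaust the full schedule on its own.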
packages/@uppy/aws-s3-multipart/src/index.test.js (2 changes: 1 addition, 1 deletion)
@@ -390,7 +390,7 @@ describe('AwsS3Multipart', () => {

await expect(core.upload()).rejects.toEqual({ source: { status: 500 } })

- expect(awsS3Multipart.opts.uploadPartBytes.mock.calls.length).toEqual(2)
+ expect(awsS3Multipart.opts.uploadPartBytes.mock.calls.length).toEqual(3)
expect(mock.mock.calls.length).toEqual(1)
})
})
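For completeness, a sketch (not part of this PR; the companionUrl value is a placeholder) of how an application supplies the retryDelays schedule that the iterators above consume.

```js
import Uppy from '@uppy/core'
import AwsS3Multipart from '@uppy/aws-s3-multipart'

const uppy = new Uppy()

uppy.use(AwsS3Multipart, {
  // Placeholder endpoint; any Companion-compatible signing backend works.
  companionUrl: 'https://companion.example.com',
  // Delays (in ms) between retry attempts. With the change in this diff,
  // the part-signing request and the part upload each draw from this schedule.
  retryDelays: [0, 1000, 3000, 5000],
})
```

Per the diff above, passing an empty array yields iterators that are immediately done, i.e. neither the signing request nor the part upload is retried.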