The transfer manager should offer some API to help users submit the right number of concurrent objects.
If a user has millions of objects to upload, they can't start work on all of them at once. There needs to be some mechanism to throttle the amount of concurrent work. Otherwise, users will just use a semaphore with some magic number (100? 1000?), and that number might end up as the bottleneck.
aws-c-s3 lacks this feature, and it's been an issue. Different transfer managers that use aws-c-s3 have picked different magic numbers, like 128. But in a workload like "download 10,000 256KiB files", 128 is a major bottleneck. Each object will only require 1 HTTP request, so higher concurrency allows for much higher throughput.
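For illustration, here's a minimal sketch (plain C with a POSIX semaphore) of the workaround described above. `submit_object_transfer()` and `transfer_done()` are hypothetical stand-ins, not aws-c-s3 APIs, and the hard-coded `MAX_IN_FLIGHT` is exactly the magic number that ends up capping throughput for small-object workloads:

```c
/*
 * Sketch of the caller-side throttle: a counting semaphore plus a magic number.
 * In real code, transfer_done() would be invoked from the transfer's
 * asynchronous completion callback; here it completes immediately so the
 * sketch runs standalone.
 */
#include <semaphore.h>
#include <stddef.h>

#define MAX_IN_FLIGHT 100 /* the magic number that may become the bottleneck */

static sem_t s_in_flight_slots;

static void transfer_done(void) {
    sem_post(&s_in_flight_slots); /* free a slot so the next object can start */
}

/* Placeholder: kick off one object transfer (upload or download). */
static void submit_object_transfer(size_t object_index) {
    (void)object_index;
    transfer_done();
}

int main(void) {
    const size_t num_objects = 10000;
    sem_init(&s_in_flight_slots, 0, MAX_IN_FLIGHT);

    for (size_t i = 0; i < num_objects; ++i) {
        /* block until fewer than MAX_IN_FLIGHT transfers are in flight */
        sem_wait(&s_in_flight_slots);
        submit_object_transfer(i);
    }

    /* Drain: reacquire every slot so we know all transfers have finished. */
    for (int i = 0; i < MAX_IN_FLIGHT; ++i) {
        sem_wait(&s_in_flight_slots);
    }
    sem_destroy(&s_in_flight_slots);
    return 0;
}
```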