Description
Why is this feature valuable to you? Does it solve a problem you're having?
The requests library allows both streaming and chunked uploads (see https://requests.readthedocs.io/en/latest/user/advanced/#streaming-uploads and https://requests.readthedocs.io/en/latest/user/advanced/#chunk-encoded-requests). This has two benefits:
- It is sufficient to only load small parts of a file into memory before upload.
- It is possible to limit bandwidth usage by using a generator that provides chunks at a limited rate.
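To illustrate the second point, here is a minimal sketch (not part of the SDK) of a rate-limited chunk generator of the kind requests accepts as a request body; `throttled_chunks` and its parameters are hypothetical names:

```python
import io
import time

def throttled_chunks(fileobj, chunk_size=2048, max_bytes_per_sec=1_000_000):
    """Yield chunks from fileobj, sleeping so the average upload rate
    stays at or below max_bytes_per_sec."""
    start = time.monotonic()
    sent = 0
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        sent += len(chunk)
        # Sleep until the elapsed time matches the time budget for bytes sent.
        target = sent / max_bytes_per_sec
        elapsed = time.monotonic() - start
        if target > elapsed:
            time.sleep(target - elapsed)
        yield chunk

# With requests, such a generator could be passed directly as the body:
#   requests.post(url, data=throttled_chunks(open("big.bin", "rb")))
data = b"x" * 10_000
chunks = list(throttled_chunks(io.BytesIO(data), chunk_size=2048))
print(len(chunks))                 # -> 5
print(b"".join(chunks) == data)    # -> True
```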
The Dropbox API of course already requires upload sessions (files/upload_session_start, files/upload_session_append, and files/upload_session_finish) to upload files > 150 MB. However, this approach by itself does not replace chunked or streaming uploads because:
- The request body should be ideally >= 4 MB to reduce the total number of API calls (both for efficiency and to not exhaust data transport API call limits).
- Bandwidth control will be very coarse when performed on 4 MB chunks compared to, for example, 2 kB chunks.
- Memory usage will still be larger compared to 1 kB or 2 kB chunks, especially for parallel uploads.
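The two granularities are not in conflict: small chunks can still be accumulated into large request bodies. As a sketch (the `rebatch` helper is hypothetical, not an SDK function), one could re-batch a fine-grained chunk stream into >= 4 MB payloads for files/upload_session_append:

```python
def rebatch(chunks, batch_size=4 * 1024 * 1024):
    """Accumulate arbitrarily small chunks into batches of batch_size
    bytes (the last batch may be smaller), e.g. one batch per
    files/upload_session_append call."""
    buf = bytearray()
    for chunk in chunks:
        buf.extend(chunk)
        while len(buf) >= batch_size:
            yield bytes(buf[:batch_size])
            del buf[:batch_size]
    if buf:
        yield bytes(buf)

# Ten 1 kB chunks re-batched into 4 kB API payloads (small sizes for demo):
small = (b"a" * 1024 for _ in range(10))
batches = list(rebatch(small, batch_size=4 * 1024))
print([len(b) for b in batches])   # -> [4096, 4096, 2048]
```

This keeps the API call count low while the chunk generator feeding it can still throttle bandwidth at 2 kB granularity.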
Describe the solution you'd like
Requests supports streaming uploads by passing a file-like object as the request body, and chunked uploads by passing a generator as the request body. However, the Python SDK explicitly prevents both by requiring the request body to be of type bytes (dropbox-sdk-python/dropbox/dropbox_client.py, lines 533 to 539 at 9895d70).
It would be good to either drop this limitation entirely, with appropriate warnings in the docstring, or at least allow chunked uploads (where requests handles the retry/rewind logic) even if streaming uploads remain disallowed.
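As a sketch of what a relaxed check could look like (this is an assumption about a possible change, not the SDK's actual code; `validate_body` is a hypothetical name), the SDK could accept file-like objects and byte iterables in addition to bytes, since requests supports all three as request bodies:

```python
import io
from collections.abc import Iterable

def validate_body(body):
    """Hypothetical relaxed check: besides bytes, accept file-like
    objects (streaming uploads) and iterables of bytes (chunked
    uploads), both of which requests supports as request bodies."""
    if isinstance(body, (bytes, bytearray)):
        return body
    if hasattr(body, "read"):        # file-like object -> streaming upload
        return body
    if isinstance(body, Iterable):   # generator/iterable -> chunked upload
        return body
    raise TypeError(
        "expected bytes, a file-like object, or an iterable of bytes")

validate_body(b"raw")                          # bytes: accepted as before
validate_body(io.BytesIO(b"stream"))           # file-like: streaming
validate_body(chunk for chunk in [b"a", b"b"])  # generator: chunked
```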
Describe alternatives you've considered
None at present.