performance: consider relaxing chunks should be in order for patched uploads #546
Comments
Do you have a proposed change to the wording? Even better if there's an example implementation. I'm worried about how this would impact a streaming push where the source is unable to seek to recover previously sent data. If the client doesn't know a chunk wasn't received correctly, it may discard a chunk from the source after the send attempt and only discover the failure after sending the rest of the blob.
IMO, nothing can be guaranteed on the registry side until the final PUT (?digest=). In that case the language could change from:

"Chunks MUST be uploaded in order, with the first byte of a chunk being the last chunk's plus one. If a chunk is uploaded out of order, the registry MUST respond with a 416 Requested Range Not Satisfiable code. A GET request may be used to retrieve the current valid offset and upload location."

to removing this language entirely, or to:

"Chunks MAY be uploaded in any order and the registry MUST respond with a 202 code. A GET request may be used to retrieve the current valid offset and upload location."
This still feels like it's going to break a lot of things to me. I'd like to see some implementations showing both the feasibility and a measured performance improvement.
It seems like "If a chunk is uploaded out of order, the registry MUST respond with a 416 Requested Range Not Satisfiable code." was intended to allow this to be supported at some point (or at least to make it really clear that it isn't supported). So any new wording would have to include something like "registries MAY respond with 416 if out-of-order uploads are not supported" — anything else can't really work with existing implementations, especially since the spec currently mandates that they explicitly deny this use case. I would also imagine hashing the end result is a lot more annoying if the chunks arrive out of order, since with them in order the server can calculate the hash as they're received (and that's likely the real reason for the current very strong language).
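To make the "MAY respond with 416" behavior concrete, here is a minimal client-side sketch of the recovery path the spec already describes (GET the upload location to learn the current valid offset, then re-send from there). `send_chunk` and `get_offset` are hypothetical stand-ins for the HTTP PATCH and GET calls; they are not part of any real client library:

```python
def send_with_416_recovery(send_chunk, get_offset, start, data):
    """Hypothetical client recovery sketch: on a 416, ask the registry for
    the current valid offset and re-send only the unaccepted remainder.

    send_chunk(start, data) -> HTTP status code (stand-in for PATCH)
    get_offset() -> int offset (stand-in for GET on the upload location)
    """
    status = send_chunk(start, data)
    if status == 416:
        offset = get_offset()
        # Only recoverable if the registry's offset falls inside this chunk;
        # otherwise the client would need to seek back in the source.
        if start <= offset < start + len(data):
            status = send_chunk(offset, data[offset - start:])
    return status
```

This also illustrates the streaming-push concern above: if `offset` points before `start`, the bytes are gone unless the source can seek.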
The challenges depend on the scenarios:
So the language can be relaxed, but I believe there needs to be communication between the parties to avoid breaking one or more scenarios.
In a DC setting, from benchmarks I have seen, even for large buffers on fairly modern Intel Xeons, per-core SHA-256 throughput is well below 1 GiB/s (≈ 10 Gbps), while SSD and network speeds easily exceed that. So, since SHA-256 is serial, hashing can become a bottleneck. This may not qualify as the "average" case, but for the large-artifact use case it does make sense to consider. Also note that if we want streamed hashing and want to take advantage of pipelined chunk uploading, hashing needs to be faster still.
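The single-core throughput claim is easy to check locally. A rough micro-benchmark sketch (results vary widely by CPU and by how hashlib/OpenSSL were built, so treat the printed number as indicative only):

```python
import hashlib
import time

# Hash one large buffer and report approximate single-core SHA-256 throughput.
buf = b"\x00" * (64 * 1024 * 1024)  # 64 MiB of data

h = hashlib.sha256()
start = time.perf_counter()
h.update(buf)
elapsed = time.perf_counter() - start

gib_per_s = len(buf) / (1024 ** 3) / elapsed
print(f"single-core sha256: {gib_per_s:.2f} GiB/s")
```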
For some chunk C_k, k in [1, N]:

Today (serial): client pushes a single C_k -> registry hashes C_k -> client waits for the response for C_k before pushing the next chunk, while the registry also waits -> lather, rinse, repeat -> client finalizes and registry verifies.

Pipeline parallelism: client pushes many/all C_k in parallel -> registry hashes whichever C_k are next in sequence, and may stall -> client waits on responses for many C_k while pushing many others, and the registry is kept busy hashing -> lather, rinse, repeat -> client finalizes and registry verifies.

If extremely lucky, all C_k arrive at the registry in exact sequence. In the pathological case they arrive in reverse order, so hashing stalls until the very end, which is worse than the serial case. The registry also has to choose whether to hold out-of-order chunks in buffers or in storage. The client computes the hash for the finalize step too, but that is an easier problem since it has more control.
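The registry-side choice described above (buffer out-of-order chunks, hash only the contiguous prefix) can be sketched as follows. This is a hypothetical illustration, not how any real registry is implemented, and it buffers in memory where a real one would likely spill to storage:

```python
import hashlib

class ChunkedUploadSession:
    """Sketch of a registry-side session that accepts chunks in any order
    but still hashes the blob in order, by buffering each chunk until the
    next contiguous byte has arrived (the stall described above)."""

    def __init__(self):
        self.hasher = hashlib.sha256()
        self.next_offset = 0   # first byte not yet hashed
        self.pending = {}      # start offset -> out-of-order chunk bytes

    def receive(self, start: int, data: bytes) -> None:
        self.pending[start] = data
        # Drain every chunk that is now contiguous with the hashed prefix.
        while self.next_offset in self.pending:
            chunk = self.pending.pop(self.next_offset)
            self.hasher.update(chunk)
            self.next_offset += len(chunk)

    def finalize(self) -> str:
        assert not self.pending, "gaps remain: upload incomplete"
        return "sha256:" + self.hasher.hexdigest()
```

In the reverse-order pathological case, every chunk sits in `pending` and all hashing happens inside the final `receive`, matching the "stall till the very end" scenario.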
From the above statements, the bottleneck is hashing speed. If we had a hash algorithm whose throughput exceeds both the SSD and the network speed, sequential uploading (the current method) and parallel uploading (the proposed out-of-order chunk upload) should have the same performance.
^ but that is assuming the registry wants to hash. IIRC, some registries just accept whatever is sent their way, so that is a secondary concern. So the question now is can parallel range uploads be faster somehow? |
The claim is that upload performance is improved when blob chunks are uploaded in parallel.
https://github.com/opencontainers/distribution-spec/blob/main/spec.md#pushing-a-blob-in-chunks
Therefore, the following language must be amended/relaxed.
"Chunks MUST be uploaded in order, with the first byte of a chunk being the last chunk's plus one. If a chunk is uploaded out of order, the registry MUST respond with a 416 Requested Range Not Satisfiable code. A GET request may be used to retrieve the current valid offset and upload location."
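For reference, the parallel push the issue proposes could look like the following client-side sketch. `patch_chunk` is a hypothetical stand-in for an HTTP PATCH carrying a Content-Range header, and the whole thing assumes a registry relaxed to accept out-of-order chunks — which the quoted language currently forbids:

```python
from concurrent.futures import ThreadPoolExecutor

def push_blob_parallel(blob: bytes, chunk_size: int, patch_chunk) -> None:
    """Sketch of the proposed out-of-order chunked push.

    patch_chunk(start, data) stands in for PATCH with
    Content-Range: start-(start + len(data) - 1); completion order
    is deliberately not guaranteed.
    """
    ranges = [(i, blob[i:i + chunk_size])
              for i in range(0, len(blob), chunk_size)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Issue all chunk uploads concurrently; a final PUT ?digest=
        # would still be needed afterwards to close the upload.
        list(pool.map(lambda r: patch_chunk(*r), ranges))
```

Whether this is actually faster end-to-end is exactly the open question above, since the registry may still serialize on hashing.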