Metadata flushing works by serializing a set of "change operations" and applying them on the server. If the serialized payload of a single flush exceeds the server's request-body limit (currently hard-coded at 1 MB), the server rejects the request with a 413 "Request Entity Too Large" error.
For example, suppose you increment a value once for each file processed:
```js
// Do this in a loop with 10k files
metadata.increment("filesProcessed", 1)
```
This results in a single flush request to the server containing 10,000 operations.
We should do two things:
- Investigate allowing larger request bodies for older clients
- On the client, collapse operations before flushing, e.g. instead of sending 10k "increment by 1" operations, we collapse that into a single "increment by 10k" operation.
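The second item could look something like the following. This is a minimal sketch, not the real client code: the operation shape (`{ type, key, value }`) and the function name `coalesceOps` are hypothetical, and it assumes increments to the same key can always be merged (which only holds if no other operation type touches that key in between).

```javascript
// Hypothetical sketch: fold repeated "increment" ops on the same key
// into a single op, preserving the order of all other operations.
function coalesceOps(ops) {
  const merged = new Map(); // "increment:<key>" -> queued op
  const out = [];
  for (const op of ops) {
    if (op.type === "increment") {
      const mapKey = `increment:${op.key}`;
      const existing = merged.get(mapKey);
      if (existing) {
        // Fold this increment into the one already queued.
        existing.value += op.value;
        continue;
      }
      merged.set(mapKey, op);
    }
    out.push(op);
  }
  return out;
}
```

With this, 10k "increment by 1" operations on `filesProcessed` collapse into a single "increment by 10000" operation before the flush is serialized, keeping the request body well under the limit.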