Metadata flushing can fail if there are too many operations #2104

Closed
@ericallam

Description

Metadata flushing works by serializing a set of "change operations" on the client and applying them on the server. If a single flush contains too many change operations, the request body exceeds the server's size limit and is rejected with a 413 "Request Entity Too Large" error. That limit is currently hard-coded at 1MB.

So for example, let's say you are incrementing a value for each file processed:

// Do this in a loop with 10k files
metadata.increment("filesProcessed", 1)

This would result in a single flush request to the server with 10k operations.
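
For rough scale, here is a purely hypothetical illustration of that flush payload (the real wire format is not shown in this issue); the point is that 10k small operation objects already add up to several hundred kilobytes of JSON, which is close to the 1MB cap:

// Purely illustrative: a made-up payload shape, not the SDK's actual wire format.
// Each loop iteration contributes its own operation object, so 10,000 iterations
// means 10,000 entries in a single flush request.
const flushBody = {
  operations: Array.from({ length: 10_000 }, () => ({
    type: "increment",
    key: "filesProcessed",
    value: 1,
  })),
};

// Roughly 500 KB of JSON for one flush, uncomfortably close to the 1MB limit.
console.log(JSON.stringify(flushBody).length);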

We should do two things:

  • Investigate allowing larger request bodies for older clients
  • On the client, collapse operations before flushing, e.g. instead of sending 10k "increment by 1" operations, we collapse that into a single "increment by 10k" operation.
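
A minimal sketch of the second idea (client-side collapsing), assuming a hypothetical queue of operations shaped like the `MetadataOperation` type below; the real SDK's internal types may differ:

// Hypothetical shape of queued operations; only "increment" and "set" are modeled here.
type MetadataOperation =
  | { type: "increment"; key: string; value: number }
  | { type: "set"; key: string; value: unknown };

function collapseOperations(ops: MetadataOperation[]): MetadataOperation[] {
  const collapsed: MetadataOperation[] = [];
  const incrementTotals = new Map<string, number>();

  for (const op of ops) {
    if (op.type === "increment") {
      // Fold every increment to the same key into one running total.
      incrementTotals.set(op.key, (incrementTotals.get(op.key) ?? 0) + op.value);
    } else {
      collapsed.push(op);
    }
  }

  // Emit one combined increment per key.
  for (const [key, value] of incrementTotals) {
    collapsed.push({ type: "increment", key, value });
  }

  return collapsed;
}

// 10,000 "increment by 1" operations become one "increment by 10,000".
const queued: MetadataOperation[] = Array.from({ length: 10_000 }, () => ({
  type: "increment" as const,
  key: "filesProcessed",
  value: 1,
}));
console.log(collapseOperations(queued));
// => [ { type: "increment", key: "filesProcessed", value: 10000 } ]

Note that this simple version moves the collapsed increments to the end of the queue, so a real implementation would also need to preserve ordering relative to "set" operations on the same key (a set followed by increments must not be reordered).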
