Metadata flushing can fail if there are too many operations #2104

Closed
ericallam opened this issue May 24, 2025 · 0 comments
@ericallam (Member)

Metadata flushing works by serializing a set of "change operations" and applying them on the server. If a single flush contains too many change operations, the request body exceeds the server's size limit (currently hard-coded at 1MB) and the server rejects the request with a 413 "Request Entity Too Large" error.

For example, say you increment a counter once for each file processed:

```ts
// Called once per file, in a loop over 10k files
metadata.increment("filesProcessed", 1);
```

This would result in a single flush request to the server with 10k operations.
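
A minimal runnable sketch of that pattern, assuming the v3 SDK's `metadata` API (`processFile` is a hypothetical stand-in for the per-file work):

```ts
import { metadata, task } from "@trigger.dev/sdk/v3";

// Hypothetical per-file work; stands in for whatever the task actually does.
async function processFile(file: string): Promise<void> {
  // ...
}

export const processFiles = task({
  id: "process-files",
  run: async (payload: { files: string[] }) => {
    for (const file of payload.files) {
      await processFile(file);
      // Each call queues one "increment by 1" change operation,
      // so 10k files queue 10k operations for the next flush.
      metadata.increment("filesProcessed", 1);
    }
  },
});
```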

We should do two things:

  • Investigate allowing larger request bodies for older clients
  • On the client, collapse operations before flushing: instead of sending 10k "increment by 1" operations, collapse them into a single "increment by 10k" operation (see the sketch below).
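
A minimal sketch of the client-side collapsing, assuming a simplified operation shape; `MetadataOperation` and `collapseOperations` are illustrative names, not the SDK's actual internals:

```ts
type MetadataOperation =
  | { type: "increment"; key: string; value: number }
  | { type: "set"; key: string; value: unknown };

function collapseOperations(ops: MetadataOperation[]): MetadataOperation[] {
  const collapsed: MetadataOperation[] = [];

  for (const op of ops) {
    const last = collapsed[collapsed.length - 1];

    // Merge consecutive increments on the same key by summing their values.
    if (
      op.type === "increment" &&
      last?.type === "increment" &&
      last.key === op.key
    ) {
      last.value += op.value;
    } else {
      // Copy so the caller's queued operations are never mutated.
      collapsed.push({ ...op });
    }
  }

  return collapsed;
}

// 10k "increment by 1" operations collapse into a single "increment by 10k".
const ops: MetadataOperation[] = Array.from({ length: 10_000 }, () => ({
  type: "increment" as const,
  key: "filesProcessed",
  value: 1,
}));

console.log(collapseOperations(ops));
// => [ { type: "increment", key: "filesProcessed", value: 10000 } ]
```

Only adjacent operations of the same kind on the same key are merged, which keeps ordering semantics intact when increments are interleaved with sets on the same key.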