Metadata flushing works by serializing a set of "change operations" on the client and applying them on the server. If a single flush contains too many change operations, the server rejects the request with a 413 "Request Entity Too Large" error. That limit is currently hard-coded at 1MB.
So for example, let's say you are incrementing a value for each file processed:
```js
// Do this in a loop with 10k files
metadata.increment("filesProcessed", 1)
```
This would result in a single flush request to the server with 10k operations.
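To see why a batch like that approaches the limit, here is a rough back-of-the-envelope sketch. The serialized shape below is hypothetical (the issue does not specify the wire format), but the arithmetic is the point: many small operations add up toward the 1MB cap.

```ts
// Hypothetical serialized shape of a change operation; the real wire
// format likely carries extra per-op metadata (IDs, timestamps), which
// only makes the payload larger.
const ops = Array.from({ length: 10_000 }, () => ({
  kind: "increment",
  key: "filesProcessed",
  amount: 1,
}));

const body = JSON.stringify(ops);
console.log(`${body.length} bytes`); // ~0.5MB of JSON for 10k ops
```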
We should do two things:
1. Investigate allowing larger request bodies for older clients.
2. On the client, collapse operations before flushing: instead of sending 10k "increment by 1" operations, collapse them into a single "increment by 10k" operation (see the sketch after this list).
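Here is a minimal sketch of that collapsing step, assuming a simple tagged-union shape for operations; `ChangeOp` and `collapseOps` are hypothetical names, not the actual client API:

```ts
type ChangeOp =
  | { kind: "increment"; key: string; amount: number }
  | { kind: "set"; key: string; value: unknown };

function collapseOps(ops: ChangeOp[]): ChangeOp[] {
  const collapsed: ChangeOp[] = [];
  // For each key, the increment op (if any) that later increments can merge into.
  const mergeable = new Map<string, { kind: "increment"; key: string; amount: number }>();

  for (const op of ops) {
    if (op.kind === "increment") {
      const target = mergeable.get(op.key);
      if (target) {
        // Fold this "increment by N" into the earlier operation.
        target.amount += op.amount;
        continue;
      }
      const copy = { ...op };
      mergeable.set(op.key, copy);
      collapsed.push(copy);
    } else {
      // A "set" supersedes pending increments on the same key, so any
      // increment after it must start a fresh operation.
      mergeable.delete(op.key);
      collapsed.push({ ...op });
    }
  }
  return collapsed;
}

// 10k "increment by 1" operations collapse into one "increment by 10k".
const ops: ChangeOp[] = Array.from({ length: 10_000 }, () => ({
  kind: "increment" as const,
  key: "filesProcessed",
  amount: 1,
}));
console.log(collapseOps(ops));
// -> [ { kind: "increment", key: "filesProcessed", amount: 10000 } ]
```

The sketch preserves operation order and deliberately stops merging across a "set" on the same key, since reordering an increment past a set would change the final value.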