Details
- Type: Technical task
- Status: Open
- Priority: Major
- Resolution: Unresolved
Description
While looking at the log output for OAK-5387, it became clear that in at least that particular test we attempt updates that we could know in advance will fail.
This is because in this test all nodes receive the same batched appending updates, so their DATA column exceeds the maximum length for all nodes at the same time.
The per-document retry logic (which works correctly) nevertheless first retries the append, which consequently fails again, and only then attempts a full rewrite.
We could avoid this by leveraging DSIZE from the DB, which is currently fetched only for debugging purposes: we would always fetch it, store it in RDBRow, expose it in Document, and then use it in the decision logic.
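A minimal sketch of what that decision logic could look like once the document size is exposed, under the stated assumptions; documentSize(), appendedLength(), DATA_LIMIT, appendUpdate() and rewriteDocument() are hypothetical placeholders for illustration, not existing Oak APIs, and the limit value is made up:

{code:java}
// Hedged sketch (not the actual Oak implementation): shows how an exposed
// document size (DSIZE) could short-circuit the append-then-rewrite retry.
public class AppendOrRewriteSketch {

    // assumed maximum length of the DATA column; the real limit is DB specific
    private static final int DATA_LIMIT = 16384;

    interface SizedDocument {
        // size reported by the DB (DSIZE), fetched together with the row
        int documentSize();
    }

    interface AppendingUpdate {
        // estimated serialized length this update would add to DATA
        int appendedLength();
    }

    boolean update(SizedDocument doc, AppendingUpdate op) {
        if (doc.documentSize() + op.appendedLength() > DATA_LIMIT) {
            // the append is known to overflow DATA, so skip it and rewrite directly
            return rewriteDocument(doc, op);
        }
        // otherwise try the cheap append first, falling back to a full rewrite
        return appendUpdate(doc, op) || rewriteDocument(doc, op);
    }

    // placeholder: append the serialized update to the DATA column
    boolean appendUpdate(SizedDocument doc, AppendingUpdate op) {
        return false;
    }

    // placeholder: rewrite the whole row with the consolidated document
    boolean rewriteDocument(SizedDocument doc, AppendingUpdate op) {
        return true;
    }
}
{code}

With the size known up front, the retry logic would never issue an append that is guaranteed to fail, which is the redundant round trip observed in the OAK-5387 log output.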
Attachments
Issue Links
- relates to: OAK-5387 Test failure: ConcurrentQueryAndUpdateIT.cacheUpdate[RDBFixture: RDB-H2(file)] (Resolved)