Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version: 1.7.6
Description
As already explained in OAK-6659, there are cases in which deleting the previous spool file fails (e.g. on Windows) and new (duplicate) content is silently appended to the old file. As a result, the persisted blob matches the original sent by the server in neither content nor id.
A first improvement is to stop the decoding from continuing when the old spool file cannot be deleted. To achieve this, the call to File#delete needs to be replaced with java.nio.file.Files#delete, which throws an exception when the deletion fails. A minimal sketch of the change is shown below.
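The following sketch illustrates the difference between the two delete calls; the method and class names are illustrative, not the actual ones in the Oak codebase.

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

class SpoolFileCleanup {

    // Before: a failed delete (e.g. on Windows, while the file is still
    // open) returns false, which is easy to ignore, so decoding continues
    // and appends duplicate content to the stale file.
    void deleteQuietly(File spoolFile) {
        spoolFile.delete(); // boolean result silently discarded
    }

    // After: Files.delete throws an IOException on failure, so decoding
    // aborts instead of reusing the old spool file.
    void deleteOrFail(File spoolFile) throws IOException {
        Files.delete(spoolFile.toPath());
    }
}
{code}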
The problem is then solved by ensuring that the spool file has the same size as the original blob. This check is sufficient, since every received chunk is individually verified by hash before being appended to the spool file. Moreover, the single-threaded nature of the client rules out races in which another thread starts appending new content right after the length check has passed.
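A minimal sketch of that size check, assuming the chunks have already been hash-verified and appended; the names (spoolFile, expectedLength) are hypothetical, not the actual Oak identifiers.

{code:java}
import java.io.File;
import java.io.IOException;

class SpoolFileCheck {

    // Called once all chunks have been appended. Because the client is
    // single threaded, no other thread can append content between this
    // check and the subsequent use of the file.
    static void verifySpoolFile(File spoolFile, long expectedLength) throws IOException {
        long actual = spoolFile.length();
        if (actual != expectedLength) {
            throw new IOException("Spool file length " + actual
                    + " does not match blob length " + expectedLength);
        }
    }
}
{code}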
Issue Links
- is related to OAK-6659: Cold standby should fail loudly when a big blob can't be timely transferred (Closed)