Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
If you write a large buffer of data containing several chunks of data like this:
byte[] inputData = new byte[dataLength];
RAND.nextBytes(inputData);
for (byte b : inputData) {
  key.write(b);
}
Then the current EC key logic will write the first chunk twice, to block 1 and block 2, and then probably (I have not verified) drop the last chunk completely.
This is due to a bug in ECKeyOutputStream.write(…):
int currentChunkBufferRemainingLength =
    ecChunkBufferCache.dataBuffers[blockOutputStreamEntryPool.getCurrIdx()]
        .remaining();
int currentChunkBufferLen =
    ecChunkBufferCache.dataBuffers[blockOutputStreamEntryPool.getCurrIdx()]
        .position();
int maxLenToCurrChunkBuffer = (int) Math.min(len, ecChunkSize);
int currentWriterChunkLenToWrite =
    Math.min(currentChunkBufferRemainingLength, maxLenToCurrChunkBuffer);
int pos = handleDataWrite(blockOutputStreamEntryPool.getCurrIdx(), b, off,
    currentWriterChunkLenToWrite,
    currentChunkBufferLen + currentWriterChunkLenToWrite == ecChunkSize);
checkAndWriteParityCells(pos);
int remLen = len - currentWriterChunkLenToWrite;
int iters = remLen / ecChunkSize;
int lastCellSize = remLen % ecChunkSize;
while (iters > 0) {
  pos = handleDataWrite(blockOutputStreamEntryPool.getCurrIdx(), b, off,
      ecChunkSize, true);
  off += ecChunkSize;
  iters--;
  checkAndWriteParityCells(pos);
}
Here we write the first chunk before entering the "iters" loop, but we forget to increment "off", so the loop re-reads the same region of the input buffer and writes the same data twice.
The fix is to add "currentWriterChunkLenToWrite" to "off" before entering the loop.
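The effect of the missing offset increment can be reproduced with a minimal, self-contained sketch. The class below is a hypothetical stand-in, not the real Ozone code: CHUNK_SIZE plays the role of ecChunkSize, and write() mirrors the shape of the write path above (partial first chunk, then full chunks in a loop), with the proposed fix behind a flag.

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class OffsetBugDemo {
  // Stand-in for ecChunkSize; kept tiny so the effect is easy to see.
  static final int CHUNK_SIZE = 4;

  // Mirrors the structure of the buggy write path: the first (possibly
  // partial) chunk is written, then remaining full chunks in a loop.
  static byte[] write(byte[] b, boolean applyFix) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    int off = 0;
    int len = b.length;
    int firstChunkLen = Math.min(len, CHUNK_SIZE);
    out.write(b, off, firstChunkLen);   // first chunk written here
    if (applyFix) {
      off += firstChunkLen;             // the missing increment
    }
    int remLen = len - firstChunkLen;
    int iters = remLen / CHUNK_SIZE;
    while (iters > 0) {
      out.write(b, off, CHUNK_SIZE);    // without the fix, this re-writes
      off += CHUNK_SIZE;                // the first chunk's bytes again
      iters--;
    }
    return out.toByteArray();
  }

  public static void main(String[] args) {
    byte[] data = {1, 2, 3, 4, 5, 6, 7, 8};
    // Buggy path duplicates chunk 1 and never emits the last chunk's bytes.
    System.out.println(Arrays.toString(write(data, false))); // [1, 2, 3, 4, 1, 2, 3, 4]
    System.out.println(Arrays.toString(write(data, true)));  // [1, 2, 3, 4, 5, 6, 7, 8]
  }
}
```

With two chunks of input, the unfixed path emits the first chunk twice and the second chunk's data never reaches the output, matching the duplicated-and-dropped behavior described above.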
We should add a test to reproduce this issue and then add the fix.