- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: HDFS-7285
- Component/s: io
- Labels: None
- Hadoop Flags: Reviewed
- Release Note: This enhances the ByteBuffer version of the encode/decode API of the raw erasure coder, allowing it to process variable-length input data.
While investigating a test failure in TestRecoverStripedFile, it was found that the raw erasure coder may be used in ways that break the assumptions of its ByteBuffer version of the encode/decode API. Originally it assumes that the input/output ByteBuffers available for reading or writing always start at position zero and span the whole buffer (limit equals capacity). Below is a code sample that relies on that assumption:
protected static byte[][] toArrays(ByteBuffer[] buffers) {
  byte[][] bytesArr = new byte[buffers.length][];
  ByteBuffer buffer;
  for (int i = 0; i < buffers.length; i++) {
    buffer = buffers[i];
    if (buffer == null) {
      bytesArr[i] = null;
      continue;
    }
    if (buffer.hasArray()) {
      // Returns the whole backing array, ignoring the buffer's
      // position, limit, and arrayOffset.
      bytesArr[i] = buffer.array();
    } else {
      throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
          "expecting heap buffer");
    }
  }
  return bytesArr;
}
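As a sketch of how such a helper could honor each buffer's position and limit instead of assuming they span the whole backing array, the variant below copies only the readable region of each buffer. The method name toArraysRespectingSlice is hypothetical and not part of the Hadoop code; copying trades a small allocation for correctness with sliced or partially filled buffers.

```java
import java.nio.ByteBuffer;

public class ByteBufferSlices {

  // Hypothetical variant of toArrays(): copies only the readable
  // region [position, limit) of each buffer, so callers need not
  // assume position == 0 and limit == capacity.
  static byte[][] toArraysRespectingSlice(ByteBuffer[] buffers) {
    byte[][] bytesArr = new byte[buffers.length][];
    for (int i = 0; i < buffers.length; i++) {
      ByteBuffer buffer = buffers[i];
      if (buffer == null) {
        bytesArr[i] = null;
        continue;
      }
      byte[] bytes = new byte[buffer.remaining()];
      // duplicate() shares content but has its own position/limit,
      // so reading does not disturb the caller's buffer state.
      buffer.duplicate().get(bytes);
      bytesArr[i] = bytes;
    }
    return bytesArr;
  }

  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.wrap(new byte[]{1, 2, 3, 4, 5});
    buf.position(1);
    buf.limit(4); // readable region is {2, 3, 4}
    byte[][] out = toArraysRespectingSlice(new ByteBuffer[]{buf, null});
    System.out.println(out[0].length);  // 3
    System.out.println(out[0][0]);      // 2
    System.out.println(out[1] == null); // true
  }
}
```

This works for both heap and direct buffers, since it never calls array().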
- is duplicated by: HDFS-8370 Erasure Coding: TestRecoverStripedFile#testRecoverOneParityBlock is failing (Resolved)