In org.apache.avro.generic.GenericData.deepCopy, the code for copying a ByteBuffer is:
ByteBuffer byteBufferValue = (ByteBuffer) value;
byte[] bytesCopy = new byte[byteBufferValue.capacity()];
byteBufferValue.get(bytesCopy);
I think this is problematic because the get(bytesCopy) call will throw a BufferUnderflowException whenever the ByteBuffer's limit is less than its capacity: fewer than bytesCopy.length bytes remain in the buffer.
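For illustration, here is a minimal, self-contained sketch of the failure mode (plain JDK, no Avro needed; the class name is mine):

import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

public class DeepCopyUnderflowDemo {
    public static void main(String[] args) {
        // A buffer with a large backing array that is only partially filled.
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.put(new byte[] {1, 2, 3});
        buf.flip(); // position = 0, limit = 3, capacity = 64

        // What the deepCopy code effectively does today:
        byte[] bytesCopy = new byte[buf.capacity()]; // asks for 64 bytes
        try {
            buf.get(bytesCopy); // only 3 bytes remain -> throws
        } catch (BufferUnderflowException e) {
            System.out.println("BufferUnderflowException: limit ("
                + buf.limit() + ") < capacity (" + buf.capacity() + ")");
        }
    }
}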
My use case is as follows. I have ByteBuffers backed by large arrays so that I can avoid resizing the array every time I write data, so limit < capacity. When the data is written or copied, I think Avro should respect this. When data is serialized, Avro should automatically use the minimum number of bytes, i.e. write only up to the limit. When an object is copied, I think it makes sense to preserve the capacity of the underlying buffer as opposed to compacting it.
So I think the code could be fixed by replacing the get call with:
byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
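For concreteness, here is a sketch of how the surrounding copy logic might look with that fix applied (my reading of the proposal, not the committed patch); it copies only the bytes up to the limit while preserving both the capacity and the limit on the copy:

ByteBuffer byteBufferValue = (ByteBuffer) value;
int limit = byteBufferValue.limit();
byte[] bytesCopy = new byte[byteBufferValue.capacity()]; // keep the original capacity
byteBufferValue.rewind(); // get(dst, 0, limit) needs limit bytes remaining from position 0
byteBufferValue.get(bytesCopy, 0, limit); // copy only the valid bytes; no underflow
byteBufferValue.rewind(); // put the source buffer's position back at 0
ByteBuffer copy = ByteBuffer.wrap(bytesCopy); // wrapping gives position = 0, limit = capacity
copy.limit(limit); // restore the limit so the copy matches the original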
Assignee: Doug Cutting
Fix Version/s: 1.6.3, 1.7.0
Resolution: Fixed
Status: Closed