
AVRO-1045: deepCopy of BYTES underflow exception

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.6.2
    • Fix Version/s: 1.7.0
    • Component/s: java
    • Labels:
      None

      Description

In org.apache.avro.generic.GenericData.deepCopy, the code for copying a ByteBuffer is:
      ByteBuffer byteBufferValue = (ByteBuffer) value;
      byte[] bytesCopy = new byte[byteBufferValue.capacity()];
      byteBufferValue.rewind();
      byteBufferValue.get(bytesCopy);
      byteBufferValue.rewind();
      return ByteBuffer.wrap(bytesCopy);

I think this is problematic: it will cause a java.nio.BufferUnderflowException to be thrown whenever the ByteBuffer's limit is less than its capacity, because get(bytesCopy) tries to read capacity() bytes while only limit() bytes remain after the rewind.
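
For example, the following self-contained snippet (names are mine, for illustration) reproduces the failure with a buffer whose limit is smaller than its capacity:

import java.nio.ByteBuffer;

public class UnderflowDemo {
  public static void main(String[] args) {
    // A buffer backed by a large array: capacity 16, but only 4 bytes written.
    ByteBuffer buf = ByteBuffer.allocate(16);
    buf.put(new byte[] {1, 2, 3, 4});
    buf.flip(); // position = 0, limit = 4, capacity = 16

    // Mirrors the deepCopy code above: allocate capacity() bytes, then try
    // to read that many from a buffer with only limit() bytes remaining.
    byte[] bytesCopy = new byte[buf.capacity()];
    buf.rewind();
    buf.get(bytesCopy); // throws java.nio.BufferUnderflowException
  }
}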

My use case is as follows: I have ByteBuffers backed by large arrays so that I can avoid resizing the array every time I write data, which means limit < capacity. I think Avro should respect this when data is written or copied. When data is serialized, Avro should automatically use the minimum number of bytes. When an object is copied, I think it makes sense to preserve the capacity of the underlying buffer rather than compacting it.

So I think the code could be fixed by replacing the get call with
byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
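
For illustration, here is a minimal sketch of the copy routine with that change applied. It also restores the original limit on the copy so that the capacity is preserved without the spare bytes becoming visible; this reflects the suggestion above, not necessarily the patch as committed:

ByteBuffer byteBufferValue = (ByteBuffer) value;
byte[] bytesCopy = new byte[byteBufferValue.capacity()];
byteBufferValue.rewind();
// Read only limit() bytes; reading capacity() bytes underflows
// when limit < capacity.
byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
byteBufferValue.rewind();
ByteBuffer copy = ByteBuffer.wrap(bytesCopy);
copy.limit(byteBufferValue.limit()); // preserve the original limit
return copy;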

        Attachments

        1. AVRO-1045.patch
          2 kB
          Doug Cutting
        2. AVRO-1045.patch
          2 kB
          Jeremy Lewi


People

• Assignee: Doug Cutting (cutting)
• Reporter: Jeremy Lewi (jeremy@lewi.us)
• Votes: 0
• Watchers: 0
