Apache Avro / AVRO-1045

deepCopy of BYTES underflow exception


Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.6.2
    • Fix Version/s: 1.7.0
    • Component/s: java
    • Labels: None

    Description

      In org.apache.avro.generic.GenericData.deepCopy, the code for copying a ByteBuffer is:
      ByteBuffer byteBufferValue = (ByteBuffer) value;
      byte[] bytesCopy = new byte[byteBufferValue.capacity()];
      byteBufferValue.rewind();
      byteBufferValue.get(bytesCopy);
      byteBufferValue.rewind();
      return ByteBuffer.wrap(bytesCopy);

      I think this is problematic because it will cause a BufferUnderflowException to be thrown whenever the ByteBuffer's limit is less than its capacity: after the rewind, only limit bytes remain in the buffer, but get(bytesCopy) tries to read capacity bytes.
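
      As a minimal illustration (a standalone sketch, not code from Avro itself), the same failure can be reproduced with a plain ByteBuffer:

      import java.nio.ByteBuffer;

      public class UnderflowDemo {
        public static void main(String[] args) {
          // Allocate a large backing buffer but only fill part of it, so limit < capacity.
          ByteBuffer buf = ByteBuffer.allocate(1024);
          buf.put(new byte[] {1, 2, 3});
          buf.flip(); // position = 0, limit = 3, capacity = 1024

          byte[] copy = new byte[buf.capacity()]; // asks for 1024 bytes
          buf.rewind();
          buf.get(copy); // throws java.nio.BufferUnderflowException: only 3 bytes remain
        }
      }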

      My use case is as follows. I have ByteBuffers backed by large arrays so that I can avoid resizing the array every time I write data, which means limit < capacity. I think Avro should respect this when data is written or copied. When data is serialized, Avro should automatically use the minimum number of bytes. When an object is copied, I think it makes sense to preserve the capacity of the underlying buffer as opposed to compacting it.

      So I think the code could be fixed by replacing the get call with:
      byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
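
      For context, a sketch of the whole copy snippet with that replacement applied (a reading of the suggestion above, not the committed patch; the last two lines additionally restore the source's limit on the copy, which the one-line change alone would leave equal to capacity):

      ByteBuffer byteBufferValue = (ByteBuffer) value;
      byte[] bytesCopy = new byte[byteBufferValue.capacity()]; // keep the over-allocated capacity
      byteBufferValue.rewind();
      // Read only the bytes up to limit; requesting capacity bytes is what underflows.
      byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
      byteBufferValue.rewind();
      ByteBuffer copy = ByteBuffer.wrap(bytesCopy); // position 0, limit == capacity
      copy.limit(byteBufferValue.limit());          // mirror the source's limit as well
      return copy;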

      Attachments

        1. AVRO-1045.patch (2 kB, Jeremy Lewi)
        2. AVRO-1045.patch (2 kB, Doug Cutting)


          People

            Assignee: Doug Cutting (cutting)
            Reporter: Jeremy Lewi (jeremy@lewi.us)
