Apache Avro / AVRO-986

Avro files generated from avro-c don't work with the Java mapred implementation.



    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.6.2
    • Component/s: c, java
    • Environment: avro-c 1.6.2-SNAPSHOT
      avro-java 1.6.2-SNAPSHOT
      hadoop 0.20.2
    • Labels: mapreduce hadoop avro sync


      When a file generated by the Avro-C implementation is fed into Hadoop, the job fails with "Block size invalid or too large for this implementation: -49".

      This is caused by the extra copy of the sync marker that Avro-C puts into the file header metadata.

      The org.apache.avro.mapred.AvroRecordReader uses a FileSplit object to work out where it should read from. That class is not particularly smart: it simply divides the file into equal-size chunks, the first starting at position 0.
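The equal-size chunking can be sketched roughly like this (a simplified illustration with hypothetical names, not Hadoop's actual FileSplit code):

```java
// Simplified sketch of dividing a file into equal-size input splits,
// mirroring the behavior described above. Not Hadoop's real split logic.
public class NaiveSplitter {
    /** Start offset of split i when fileLen bytes are divided into n chunks. */
    public static long splitStart(long fileLen, int n, int i) {
        long chunk = fileLen / n;  // equal-size chunks, ignoring the remainder
        return chunk * i;          // the first split always starts at position 0
    }

    public static void main(String[] args) {
        // A 1000-byte file divided into 4 splits starts at 0, 250, 500, 750.
        for (int i = 0; i < 4; i++) {
            System.out.println("split " + i + " starts at " + splitStart(1000, 4, i));
        }
    }
}
```

The splits know nothing about Avro block boundaries, which is why the reader must sync forward to the next marker.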

      So org.apache.avro.mapred.AvroRecordReader gets 0 as the start of its chunk and calls

      reader.sync(split.getStart());   // sync to start

      Then org.apache.avro.file.DataFileReader::seek() goes to position 0 and searches for a sync marker.
      It encounters one at position 32: the copy stored in the header metadata map under the key "avro.sync".
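The failure mode can be reproduced in miniature: a scan for the next occurrence of the 16-byte sync marker stops at the copy embedded in the header metadata rather than at the first real block boundary. A self-contained sketch over plain byte arrays (illustrative only, not the real DataFileReader; the offsets are made up):

```java
import java.util.Arrays;

// Minimal illustration of a sync scan over raw bytes. If the 16-byte sync
// marker also appears inside the header metadata, a scan from position 0
// finds that copy first, and the reader then misparses metadata bytes as a
// block count/size (hence errors like "Block size invalid ...: -49").
public class SyncScan {
    /** Offset just past the first occurrence of marker at or after pos, or -1. */
    public static int sync(byte[] file, int pos, byte[] marker) {
        for (int i = pos; i + marker.length <= file.length; i++) {
            if (Arrays.equals(Arrays.copyOfRange(file, i, i + marker.length), marker)) {
                return i + marker.length;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        byte[] marker = new byte[16];
        Arrays.fill(marker, (byte) 0x5A);

        byte[] file = new byte[200];
        // A copy of the marker embedded in the header metadata (here at offset 32)...
        System.arraycopy(marker, 0, file, 32, 16);
        // ...and the real marker terminating the header (here at offset 100).
        System.arraycopy(marker, 0, file, 100, 16);

        // A scan from 0 lands inside the header, not after the real marker.
        System.out.println(sync(file, 0, marker)); // prints 48, not 116
    }
}
```

The scan itself has no way to tell the two occurrences apart, which is why the spurious copy in the metadata is the root problem.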

      No other implementation adds the sync marker to the metadata map, and none reads it from there, not even the C implementation itself.

      I suggest we remove this entry from the header metadata as the simplest solution.
      Another solution would be to create an AvroFileSplit class in mapred that knows where the blocks are and provides the correct positions in the first place.
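A reader could also defend itself by never scanning header bytes for a marker: if the requested sync position falls inside the header, clamp it to the first byte after the header before scanning. A rough sketch of that guard (hypothetical names, not any actual patch attached to this issue):

```java
// Guard against finding a sync-marker copy inside the file header:
// never start the marker scan before the end of the header.
public class HeaderAwareSync {
    /**
     * Clamp the scan start so header bytes (which may contain a copy of the
     * sync marker in the metadata map) are never searched. headerEnd is the
     * offset of the first byte after the file header.
     */
    public static long scanStart(long requested, long headerEnd) {
        return Math.max(requested, headerEnd);
    }

    public static void main(String[] args) {
        // A split starting at 0 would scan from the end of the header instead.
        System.out.println(scanStart(0, 116));   // prints 116
        // Splits beyond the header are unaffected.
        System.out.println(scanStart(500, 116)); // prints 500
    }
}
```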


        1. 0001-avromod-utility.patch
          7 kB
          Douglas Creager
        2. 0001-Remove-sync-marker-from-metadata-in-header.patch
          1 kB
          Michael Cooper
        3. AVRO-986-java.patch
          2 kB
          Doug Cutting
        4. AVRO-986-java.patch
          0.7 kB
          Doug Cutting
        5. quickstop.db
          22 kB
          Douglas Creager



            Assignee: Unassigned
            Reporter: Michael Cooper (mic159)
            Votes: 1
            Watchers: 0