When a file generated from the Avro-C implementation is fed into Hadoop, it will fail with "Block size invalid or too large for this implementation: -49".
This is caused by the sync marker that Avro-C writes into the header metadata map, under the key "avro.sync".
The org.apache.avro.mapred.AvroRecordReader uses a FileSplit object to work out where it should read from, but this class is not particularly smart: it simply divides the file into equal-sized chunks, the first starting at position 0.
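To make the chunking concrete, here is a minimal sketch of that equal-chunk behaviour (a simplified model, not the actual Hadoop FileSplit code; the class and method names are hypothetical):

```java
// Sketch of FileSplit-style chunking: the file is cut into equal-sized
// chunks with no knowledge of Avro block boundaries, and the first chunk
// always starts at byte 0. (Simplified model, not Hadoop's implementation.)
public class SplitSketch {
    // Returns the start offset of each of numSplits equal chunks.
    static long[] splitStarts(long fileLength, int numSplits) {
        long chunk = fileLength / numSplits;
        long[] starts = new long[numSplits];
        for (int i = 0; i < numSplits; i++) {
            starts[i] = i * chunk;
        }
        return starts;
    }

    public static void main(String[] args) {
        // A 4000-byte file cut into 4 splits: the first split starts at 0,
        // which is why the record reader begins its search at the header.
        for (long s : splitStarts(4000, 4)) {
            System.out.println(s);   // 0, 1000, 2000, 3000
        }
    }
}
```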
So org.apache.avro.mapred.AvroRecordReader gets 0 as the start of its chunk and calls org.apache.avro.file.DataFileReader::sync(0), which seeks to position 0 and then scans forward for the next sync marker...
It encounters one at position 32: the copy stored in the header metadata map under "avro.sync". The bytes that follow are then misread as the start of a data block, which is where the bogus block size of -49 comes from.
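The scan has no way to tell the metadata copy apart from a real block boundary, because the stored value is byte-identical to the marker itself. A minimal simulation of that forward scan (assumed simplified layout, not real Avro header encoding; offsets 32 and 96 are illustrative):

```java
import java.util.Arrays;

// Simulates a DataFileReader-style forward scan for the 16-byte sync
// marker. The marker value embedded in the metadata map is byte-identical
// to the real end-of-header marker, so a naive scan finds it first.
public class SyncScanSketch {
    // Returns the first index at or after 'from' where 'needle' occurs.
    static int indexOf(byte[] haystack, byte[] needle, int from) {
        outer:
        for (int i = from; i <= haystack.length - needle.length; i++) {
            for (int j = 0; j < needle.length; j++) {
                if (haystack[i + j] != needle[j]) continue outer;
            }
            return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        byte[] sync = new byte[16];
        Arrays.fill(sync, (byte) 0x5A);        // the file's sync marker

        byte[] file = new byte[128];
        // Pretend the "avro.sync" metadata value sits at offset 32 ...
        System.arraycopy(sync, 0, file, 32, 16);
        // ... and the real end-of-header marker sits at offset 96.
        System.arraycopy(sync, 0, file, 96, 16);

        // Scanning from position 0 hits the metadata copy, not the real
        // marker, so whatever follows it gets decoded as a block header.
        System.out.println(indexOf(file, sync, 0));   // 32, not 96
    }
}
```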
No other implementation adds the sync marker to the metadata map, and none reads it from there, not even the C implementation itself.
I suggest we remove the "avro.sync" entry from the header metadata as the simplest solution.
Another solution would be to create an AvroFileSplit class in mapred that knows where the blocks are, and provides the correct locations in the first place.
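A rough sketch of that second idea (AvroFileSplit is hypothetical and does not exist in Avro or Hadoop; this self-contained model just shows splits aligned to known block offsets instead of equal-sized chunks):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical AvroFileSplit idea: given the byte offsets where real Avro
// data blocks begin (e.g. recorded at write time or found by one up-front
// pass), emit splits that start exactly on block boundaries, so the record
// reader never needs to scan for a sync marker at all.
public class AvroFileSplitSketch {
    // Returns {start, length} pairs, one split per known block.
    static List<long[]> alignedSplits(long[] blockOffsets, long fileLength) {
        List<long[]> splits = new ArrayList<>();
        for (int i = 0; i < blockOffsets.length; i++) {
            long start = blockOffsets[i];
            long end = (i + 1 < blockOffsets.length) ? blockOffsets[i + 1]
                                                     : fileLength;
            splits.add(new long[] { start, end - start });
        }
        return splits;
    }

    public static void main(String[] args) {
        // Blocks known to start at 64, 5000 and 9000 in a 12000-byte file.
        for (long[] s : alignedSplits(new long[] { 64, 5000, 9000 }, 12000)) {
            System.out.println(s[0] + " +" + s[1]);
        }
    }
}
```

The trade-off is that something has to know the block offsets up front, whereas removing the metadata entry fixes the scan without any new classes.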