Description
From jdcryans:
java.lang.NegativeArraySizeException
	at org.apache.hadoop.hbase.KeyValue.readFields(KeyValue.java:2259)
	at org.apache.hadoop.hbase.KeyValue.readFields(KeyValue.java:2266)
	at org.apache.hadoop.hbase.codec.KeyValueCodec$KeyValueDecoder.parseCell(KeyValueCodec.java:64)
	at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:46)
	at org.apache.hadoop.hbase.regionserver.wal.WALEdit.readFields(WALEdit.java:222)
	at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2114)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2242)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:245)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:214)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:799)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.parseHLog(HLogSplitter.java:727)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:307)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:217)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:180)
	at org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit.testMiddleGarbageCorruptionSkipErrorsReadsHalfOfFile(TestHLogSplit.java:363)
	...
It seems to me that we're reading a negative length, which we then use to size the byte array; since NegativeArraySizeException is not an IOE, we don't treat the log as corrupted. I'm surprised that not a single build has failed like this in the past 3 years.
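The direction of a fix would be to validate the deserialized length before allocating, and to surface a bad value as an IOException so the splitter's existing corruption handling kicks in. Below is a minimal sketch of that guard; the class and method names are illustrative only and are not the actual KeyValue.readFields code.

	import java.io.DataInput;
	import java.io.IOException;

	public class LengthCheckSketch {
	  /**
	   * Hypothetical helper: read the serialized length, reject a negative value
	   * with an IOException, and only then allocate the backing array. Because the
	   * log splitter already treats IOExceptions as corruption, a truncated or
	   * garbled WAL entry would be skipped instead of throwing
	   * NegativeArraySizeException.
	   */
	  static byte[] readKeyValueBytes(DataInput in) throws IOException {
	    int length = in.readInt();
	    if (length < 0) {
	      // A negative length can only come from a corrupted or truncated entry.
	      throw new IOException("Invalid KeyValue length: " + length);
	    }
	    byte[] bytes = new byte[length];
	    in.readFully(bytes);
	    return bytes;
	  }
	}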