Hadoop Common / HADOOP-19339

ArrayIndexOutOfBoundsException due to an assumption about buffer size in BlockCompressorStream


    Description

      What Happened:

      A java.lang.ArrayIndexOutOfBoundsException is thrown when io.compression.codec.snappy.buffersize is set to 7. BlockCompressorStream assumes that the buffer size is always greater than the compression overhead, and consequently that MAX_INPUT_SIZE is always non-negative; neither assumption holds for small buffer sizes.

      Buggy Code: 

      When io.compression.codec.snappy.buffersize is set to 7, compressionOverhead is 33 (SnappyCodec computes it as bufferSize / 6 + 32) and MAX_INPUT_SIZE is therefore 7 - 33 = -26.

      public BlockCompressorStream(OutputStream out, Compressor compressor,
                                   int bufferSize, int compressionOverhead) {
        super(out, compressor, bufferSize);
        // Assumes bufferSize > compressionOverhead, i.e. that MAX_INPUT_SIZE
        // is non-negative; a small bufferSize makes it negative.
        MAX_INPUT_SIZE = bufferSize - compressionOverhead;
      }
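
      One way to harden this (a sketch only, not the committed fix) is to validate the arguments so that a too-small buffer fails fast with a descriptive error instead of silently producing a negative MAX_INPUT_SIZE:

      public BlockCompressorStream(OutputStream out, Compressor compressor,
                                   int bufferSize, int compressionOverhead) {
        super(out, compressor, bufferSize);
        if (bufferSize <= compressionOverhead) {
          // Fail fast at construction time instead of deep inside write().
          throw new IllegalArgumentException("bufferSize (" + bufferSize
              + ") must be greater than compressionOverhead ("
              + compressionOverhead + ")");
        }
        MAX_INPUT_SIZE = bufferSize - compressionOverhead;
      }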

      Stack Trace: 

      java.lang.ArrayIndexOutOfBoundsException
              at org.apache.hadoop.io.compress.snappy.SnappyCompressor.setInput(SnappyCompressor.java:86)
              at org.apache.hadoop.io.compress.BlockCompressorStream.write(BlockCompressorStream.java:112) 
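
      The negative limit propagates because BlockCompressorStream.write chunks large writes with Math.min(len, MAX_INPUT_SIZE) (paraphrasing the chunking logic in write()), so a negative MAX_INPUT_SIZE becomes a negative chunk length, which SnappyCompressor.setInput rejects. A minimal standalone demonstration of the arithmetic (the class name is hypothetical; the overhead formula mirrors SnappyCodec):

      public class MaxInputSizeDemo {
        public static void main(String[] args) {
          int bufferSize = 7;                                     // io.compression.codec.snappy.buffersize
          int compressionOverhead = (bufferSize / 6) + 32;        // 33
          int MAX_INPUT_SIZE = bufferSize - compressionOverhead;  // -26
          int len = 1024;                                         // caller's write length
          int bufLen = Math.min(len, MAX_INPUT_SIZE);             // -26 is what reaches setInput
          System.out.println("MAX_INPUT_SIZE = " + MAX_INPUT_SIZE
              + ", chunk length = " + bufLen);
        }
      }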

      How to Reproduce: 

      (1) Set io.compression.codec.snappy.buffersize to 7

      (2) Run test: org.apache.hadoop.io.compress.TestCodec#testSnappyMapFile
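
      Equivalently, the failure can be triggered programmatically with a few lines (a sketch only; the class name is hypothetical, and it assumes snappy support is available on the classpath):

      import java.io.ByteArrayOutputStream;
      import java.io.OutputStream;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.io.compress.SnappyCodec;
      import org.apache.hadoop.util.ReflectionUtils;

      public class SnappyBufferSizeRepro {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // The pathological buffer size from the report.
          conf.setInt("io.compression.codec.snappy.buffersize", 7);
          SnappyCodec codec = ReflectionUtils.newInstance(SnappyCodec.class, conf);
          try (OutputStream out = codec.createOutputStream(new ByteArrayOutputStream())) {
            out.write(new byte[1024]); // throws ArrayIndexOutOfBoundsException
          }
        }
      }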

       


People

  Assignee: FuzzingTeam ConfX
  Reporter: FuzzingTeam ConfX
