Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 3.4.1
- Hadoop Flags: Reviewed
Description
What Happened:
An ArrayIndexOutOfBoundsException is thrown when io.compression.codec.snappy.buffersize is set to 7. BlockCompressorStream assumes that the buffer size is always greater than the compression overhead, and consequently that MAX_INPUT_SIZE is always greater than or equal to 0.
Buggy Code:
When io.compression.codec.snappy.buffersize is set to 7, compressionOverhead is 33, so MAX_INPUT_SIZE = 7 - 33 = -26.
public BlockCompressorStream(OutputStream out, Compressor compressor,
    int bufferSize, int compressionOverhead) {
  super(out, compressor, bufferSize);
  // Assumes bufferSize is always greater than compressionOverhead,
  // so MAX_INPUT_SIZE is assumed to be non-negative.
  MAX_INPUT_SIZE = bufferSize - compressionOverhead;
}
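One way to make the failure mode explicit is to validate the arguments up front. The snippet below is only a sketch of such a guard on the constructor shown above, not necessarily the fix that was actually committed:

public BlockCompressorStream(OutputStream out, Compressor compressor,
    int bufferSize, int compressionOverhead) {
  super(out, compressor, bufferSize);
  // Hypothetical guard: fail fast when the configured buffer cannot even
  // hold the compression overhead, instead of letting MAX_INPUT_SIZE go
  // negative and surface later as an ArrayIndexOutOfBoundsException.
  if (bufferSize < compressionOverhead) {
    throw new IllegalArgumentException("bufferSize (" + bufferSize
        + ") must not be smaller than compressionOverhead ("
        + compressionOverhead + ")");
  }
  MAX_INPUT_SIZE = bufferSize - compressionOverhead;
}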
Stack Trace:
java.lang.ArrayIndexOutOfBoundsException
    at org.apache.hadoop.io.compress.snappy.SnappyCompressor.setInput(SnappyCompressor.java:86)
    at org.apache.hadoop.io.compress.BlockCompressorStream.write(BlockCompressorStream.java:112)
How to Reproduce:
(1) Set io.compression.codec.snappy.buffersize to 7
(2) Run test: org.apache.hadoop.io.compress.TestCodec#testSnappyMapFile
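The same failure can also be triggered directly through the codec API. The snippet below is a minimal sketch (the class name is hypothetical, and it is not a copy of the referenced test), assuming the standard Configuration and SnappyCodec classes:

import java.io.ByteArrayOutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.SnappyCodec;

public class SnappyBufferSizeRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Undersized buffer: smaller than Snappy's compression overhead.
    conf.setInt("io.compression.codec.snappy.buffersize", 7);

    SnappyCodec codec = new SnappyCodec();
    codec.setConf(conf);

    // MAX_INPUT_SIZE inside BlockCompressorStream becomes negative here,
    // so the write below is expected to fail with
    // ArrayIndexOutOfBoundsException in SnappyCompressor.setInput.
    try (CompressionOutputStream out =
        codec.createOutputStream(new ByteArrayOutputStream())) {
      out.write(new byte[100], 0, 100);
    }
  }
}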