Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Duplicate
- Affects Version/s: 2.1.0-beta
- Fix Version/s: None
- Component/s: None
Description
I created a file with the checksum-disabled option and am seeing an ArrayIndexOutOfBoundsException.

out = fs.create(fileName, FsPermission.getDefault(), flags,
    fs.getConf().getInt("io.file.buffer.size", 4096), replFactor,
    fs.getDefaultBlockSize(fileName), null, ChecksumOpt.createDisabled());
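For context, here is a self-contained sketch that assembles the call above into a runnable reproduction. The MiniDFSCluster setup used by TestReplication is omitted, and the path, create flags, and replication factor are illustrative assumptions:

import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options.ChecksumOpt;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class ChecksumDisabledRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Illustrative path and replication factor, not taken from the test.
    Path fileName = new Path("/tmp/checksum-disabled-test");
    short replFactor = 1;
    // Create the file with checksums disabled; the first write then
    // reaches FSOutputSummer with a zero-length checksum buffer.
    FSDataOutputStream out = fs.create(fileName, FsPermission.getDefault(),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
        fs.getConf().getInt("io.file.buffer.size", 4096), replFactor,
        fs.getDefaultBlockSize(fileName), null, ChecksumOpt.createDisabled());
    out.write(new byte[] {1, 2, 3});
    out.close();
  }
}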
See the trace here:
java.lang.ArrayIndexOutOfBoundsException: 0
    at org.apache.hadoop.fs.FSOutputSummer.int2byte(FSOutputSummer.java:178)
    at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:162)
    at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:106)
    at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:92)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
    at java.io.DataOutputStream.write(DataOutputStream.java:90)
    at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:261)
    at org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer(TestReplication.java:174)
FSOutputSummer#int2byte does not check the length of the bytes array. Should we check the length first and only call it when there is room, so it is skipped in the CRC NULL case, where there are no checksum bytes? (See the sketch after the snippet below.)

static byte[] int2byte(int integer, byte[] bytes) {
  bytes[0] = (byte)((integer >>> 24) & 0xFF);
  bytes[1] = (byte)((integer >>> 16) & 0xFF);
  bytes[2] = (byte)((integer >>> 8) & 0xFF);
  bytes[3] = (byte)((integer >>> 0) & 0xFF);
  return bytes;
}
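A minimal sketch of the length check suggested above. This is hypothetical, not the committed fix (the issue was resolved as a duplicate, so the actual change landed under HADOOP-9114):

// Hypothetical guard: skip the conversion when the checksum buffer
// cannot hold a 4-byte CRC, which is the case when the checksum type
// is NULL and the buffer has zero length.
static byte[] int2byteGuarded(int integer, byte[] bytes) {
  if (bytes.length < 4) {
    return bytes; // no checksum bytes to fill in the CRC NULL case
  }
  return int2byte(integer, bytes);
}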
Attachments
Issue Links
- is duplicated by
  - HADOOP-9114 After defined the dfs.checksum.type as the NULL, write file and hflush will through java.lang.ArrayIndexOutOfBoundsException (Closed)
- is related to
  - HADOOP-8240 Allow users to specify a checksum type on create() (Closed)