Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
Description
I observed one of the reducers failing with a NegativeArraySizeException when using the new API.
The exception trace:
java.lang.NegativeArraySizeException
at org.apache.hadoop.io.BytesWritable.setCapacity(BytesWritable.java:119)
at org.apache.hadoop.io.BytesWritable.setSize(BytesWritable.java:98)
at org.apache.hadoop.io.BytesWritable.readFields(BytesWritable.java:153)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
at org.apache.hadoop.mapreduce.ReduceContext.nextKeyValue(ReduceContext.java:142)
at org.apache.hadoop.mapreduce.ReduceContext.nextKey(ReduceContext.java:121)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:189)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:542)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:409)
at org.apache.hadoop.mapred.Child.main(Child.java:159)
The corresponding line in ReduceContext is:
line #142: key = keyDeserializer.deserialize(key);
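For context, the linked HADOOP-11901 attributes this class of failure to integer overflow in BytesWritable's buffer-size arithmetic. The snippet below is a minimal, hypothetical Java sketch (not the actual Hadoop source), assuming a 3/2 buffer-growth factor in setSize(), showing how the computed capacity can wrap negative so that the subsequent array allocation throws NegativeArraySizeException.

// Hypothetical sketch, not the Hadoop source: illustrates how an int overflow
// in a 3/2 buffer-growth calculation produces a NegativeArraySizeException.
public class CapacityOverflowSketch {
    private byte[] bytes = new byte[0];

    // Mirrors the shape of a setCapacity() method: allocates a new backing array.
    void setCapacity(int newCap) {
        // A negative newCap makes "new byte[newCap]" throw NegativeArraySizeException.
        byte[] newBytes = new byte[newCap];
        System.arraycopy(bytes, 0, newBytes, 0, Math.min(bytes.length, newCap));
        bytes = newBytes;
    }

    // Assumed 3/2 growth factor, computed in 32-bit int arithmetic.
    void setSize(int size) {
        if (size > bytes.length) {
            setCapacity(size * 3 / 2);   // size * 3 can overflow int, yielding a negative capacity
        }
    }

    public static void main(String[] args) {
        CapacityOverflowSketch w = new CapacityOverflowSketch();
        int size = 800_000_000;              // a large record length, as if read from the stream
        System.out.println(size * 3 / 2);    // prints -947483648: the wrap-around
        try {
            w.setSize(size);
        } catch (NegativeArraySizeException e) {
            System.out.println("Reproduced: " + e);
        }
    }
}

In the reducer's case, the size passed in comes from BytesWritable.readFields() (line 153 in the trace), which calls setSize() with the record length read from the serialized stream, so an oversized record is enough to trigger the overflow path shown above.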
Attachments
Issue Links
- is duplicated by
  - HADOOP-11901 BytesWritable fails to support 2G chunks due to integer overflow (Resolved)
- is related to
  - MAPREDUCE-15 SequenceFile RecordReader should skip bad records (Reopened)