Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Cannot Reproduce
- Affects Version/s: 0.12.0
- Fix Version/s: None
- Component/s: None
Description
I am getting the following error in the tasktracker log on 2 task attempts:
2007-03-05 14:59:50,320 WARN mapred.TaskRunner - task_0001_r_000005_0 Intermediate Merge of the inmemory files threw an exception: org.apache.hadoop.fs.ChecksumException: Checksum error: /trank/nutch-0.9-dev/filesystem/mapred/local/task_0001_r_000005_0/map_2.out at 16776192
at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.verifySum(ChecksumFileSystem.java:250)
at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.readBuffer(ChecksumFileSystem.java:207)
at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.read(ChecksumFileSystem.java:163)
at org.apache.hadoop.fs.FSDataInputStream$PositionCache.read(FSDataInputStream.java:41)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at java.io.DataInputStream.readFully(DataInputStream.java:178)
at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:57)
at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:91)
at org.apache.hadoop.io.SequenceFile$Reader.readBuffer(SequenceFile.java:1300)
at org.apache.hadoop.io.SequenceFile$Reader.seekToCurrentValue(SequenceFile.java:1363)
at org.apache.hadoop.io.SequenceFile$Reader.nextRawValue(SequenceFile.java:1656)
at org.apache.hadoop.io.SequenceFile$Sorter$SegmentDescriptor.nextRawValue(SequenceFile.java:2579)
at org.apache.hadoop.io.SequenceFile$Sorter$MergeQueue.next(SequenceFile.java:2351)
at org.apache.hadoop.io.SequenceFile$Sorter.writeFile(SequenceFile.java:2226)
at org.apache.hadoop.mapred.ReduceTaskRunner$InMemFSMergeThread.run(ReduceTaskRunner.java:820)
When I changed fs.inmemory.size.mb to 0 (it was 75, the default), the reduce completed successfully.
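For reference, the workaround above can be expressed as a configuration override; a minimal sketch, assuming the property is set in hadoop-site.xml (the property name is taken from this report, the file location is the usual Hadoop convention):

```xml
<!-- hadoop-site.xml: setting the in-memory filesystem size to 0 MB
     (default 75) disables the in-memory merge during the reduce,
     which avoided the ChecksumException here -->
<property>
  <name>fs.inmemory.size.mb</name>
  <value>0</value>
</property>
```

Note that this only sidesteps the failing in-memory merge path; it does not explain the underlying checksum error.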
Could it be related to HADOOP-1027 or HADOOP-1014?
- Espen