Hadoop Common / HADOOP-2486

Dropping records at reducer. InMemoryFileSystem NPE.


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.14.3
    • Fix Version/s: 0.15.2
    • Component/s: None
    • Labels: None

      Description

      Note: I'm really not sure if this is a bug in my code or in mapred.

      With my MapReduce job (no combiner), I sometimes see that the total number of map output records != the total number of reduce input records. What's weird to me is that when I rerun the job with the exact same input, I usually get the expected #map output recs == #reduce input recs.

      Both jobs finish successfully. No failed tasks. No speculative execution.

      I ran separate line-count MapReduce jobs on both the input and the output to check whether the counters were reporting the correct numbers.

      When I looked at the counters for all 513 reducers, I found a single reducer with different counts between the two runs.
      The only error that stood out in that reducer's userlog is:

      2007-12-22 00:19:07,640 INFO org.apache.hadoop.mapred.ReduceTask: task_200712220008_0003_r_000024_0 done copying task_200712220008_0003_m_000288_0 output from qqq856.ppp.com.
      2007-12-22 00:19:07,640 INFO org.apache.hadoop.mapred.ReduceTask: task_200712220008_0003_r_000024_0 Copying task_200712220008_0003_m_000327_0 output from qqq887.ppp.com.
      2007-12-22 00:19:07,640 ERROR org.apache.hadoop.mapred.ReduceTask: Map output copy failure: java.lang.NullPointerException
      	at org.apache.hadoop.fs.InMemoryFileSystem$RawInMemoryFileSystem$FileAttributes.access$300(InMemoryFileSystem.java:366)
      	at org.apache.hadoop.fs.InMemoryFileSystem$RawInMemoryFileSystem$InMemoryFileStatus.<init>(InMemoryFileSystem.java:380)
      	at org.apache.hadoop.fs.InMemoryFileSystem$RawInMemoryFileSystem.getFileStatus(InMemoryFileSystem.java:283)
      	at org.apache.hadoop.fs.FileSystem.isDirectory(FileSystem.java:423)
      	at org.apache.hadoop.fs.ChecksumFileSystem.rename(ChecksumFileSystem.java:386)
      	at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:716)
      	at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:637)
      
      2007-12-22 00:19:07,641 INFO org.apache.hadoop.mapred.ReduceTask: task_200712220008_0003_r_000024_0 done copying task_200712220008_0003_m_000228_0 output from qqq801.ppp.com.
      2007-12-22 00:19:07,641 INFO org.apache.hadoop.mapred.ReduceTask: task_200712220008_0003_r_000024_0 Copying task_200712220008_0003_m_000337_0 output from qqq841.ppp.com.
      

      Could this error somehow be related to the differing record counts?
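
      The stack trace suggests a pattern worth noting: getFileStatus constructs an InMemoryFileStatus from a FileAttributes entry looked up in a shared structure, and if a concurrent copier thread removes that entry between the lookup and the dereference, the constructor hits a null. A minimal sketch of that race pattern (class names, fields, and paths here are illustrative only, not Hadoop's actual code):

      ```java
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Illustrative sketch of the suspected failure mode: a shared
      // path -> attributes map queried without a null check before use.
      public class InMemoryNpeSketch {
          static class FileAttributes {
              final int size;
              FileAttributes(int size) { this.size = size; }
          }

          static class FileStatus {
              final int size;
              // Dereferences attr unconditionally, so a null lookup
              // result surfaces as an NPE inside the constructor.
              FileStatus(FileAttributes attr) { this.size = attr.size; }
          }

          static final Map<String, FileAttributes> files = new ConcurrentHashMap<>();

          // Unsafe under concurrency: another thread may remove the entry
          // before this lookup, so get() can return null.
          static FileStatus getFileStatus(String path) {
              return new FileStatus(files.get(path));
          }

          public static void main(String[] args) {
              files.put("/inmem/map_0.out", new FileAttributes(42));
              System.out.println(getFileStatus("/inmem/map_0.out").size);

              // Simulates a concurrent delete/rename by another copier thread.
              files.remove("/inmem/map_0.out");
              try {
                  getFileStatus("/inmem/map_0.out");
              } catch (NullPointerException e) {
                  System.out.println("NullPointerException, as in the reducer log");
              }
          }
      }
      ```

      If this is the cause, a null check after the lookup (or synchronizing lookup and construction) would avoid the copy failure.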

        Attachments

        1. 2486.patch
          0.9 kB
          Devaraj Das


            People

            • Assignee: Devaraj Das (ddas)
            • Reporter: Koji Noguchi (knoguchi)
            • Votes: 0
            • Watchers: 0
