Hadoop Map/Reduce / MAPREDUCE-2378

Reduce fails when running on 1 small file.


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Cannot Reproduce
    • Affects Version/s: 0.21.0
    • Fix Version/s: None
    • Component/s: None
    • Environment:
      java version "1.6.0_07"
      Diablo Java(TM) SE Runtime Environment (build 1.6.0_07-b02)
      Diablo Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)

    Description

      If I run the wordcount example on 1 small file (less than 2MB), I get the following error:

      log4j:ERROR Failed to flush writer,
      java.io.InterruptedIOException
      at java.io.FileOutputStream.writeBytes(Native Method)
      at java.io.FileOutputStream.write(FileOutputStream.java:260)
      at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
      at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
      at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
      at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
      at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
      at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
      at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
      at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
      at org.apache.hadoop.mapred.TaskLogAppender.append(TaskLogAppender.java:58)
      at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
      at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
      at org.apache.log4j.Category.callAppenders(Category.java:206)
      at org.apache.log4j.Category.forcedLog(Category.java:391)
      at org.apache.log4j.Category.log(Category.java:856)
      at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:199)
      at org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler.freeHost(ShuffleScheduler.java:345)
      at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:152)
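
      For reference, this is roughly the command I use (the examples jar name is my assumption for a 0.21.0 build; the input and output paths are just examples):

        bin/hadoop jar hadoop-mapred-examples-0.21.0.jar wordcount /input/small.txt /output/wordcount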

      If I run the wordcount test with 2 files, it works fine.

      I have reproduced this with my own code as well. I am working on a job that requires me to map/reduce a small file, and I had to work around the problem by splitting the file into two 1MB pieces before the job would run (see the sketch below).
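
      The workaround, as a minimal sketch (a plain byte-wise split into two halves; the file names and the roughly 1MB sizes are just what I happened to use, nothing special):

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        // Split one small input file into two pieces so the job sees more
        // than one input file. The split is byte-wise and may fall mid-line,
        // which is acceptable for wordcount-style input.
        public class SplitInTwo {
            public static void main(String[] args) throws IOException {
                String in = "/data/input/small.txt";    // original single input file
                long half = new File(in).length() / 2;  // ~1MB for a ~2MB file
                InputStream src = new FileInputStream(in);
                copyUpTo(src, "/data/input/part-1", half);           // first half
                copyUpTo(src, "/data/input/part-2", Long.MAX_VALUE); // remainder
                src.close();
            }

            // Copy at most 'limit' bytes from src into a newly created file.
            private static void copyUpTo(InputStream src, String dst, long limit)
                    throws IOException {
                OutputStream out = new FileOutputStream(dst);
                byte[] buf = new byte[8192];
                long copied = 0;
                int n;
                while (copied < limit
                        && (n = src.read(buf, 0, (int) Math.min(buf.length, limit - copied))) > 0) {
                    out.write(buf, 0, n);
                    copied += n;
                }
                out.close();
            }
        }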

      All our jobs that run on a single larger file (over 1GB) work flawlessly. I am not sure of the exact threshold: from the testing I have done, it seems to affect any file smaller than the default HDFS block size (64MB). In the 5-64MB range the failure appears intermittent, but it is 100% reproducible for files of 5MB and smaller.
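
      To compare the threshold against the configured block size, a small sketch (assuming the 0.21-era key dfs.block.size; later releases rename it to dfs.blocksize):

        import org.apache.hadoop.conf.Configuration;

        // Print the HDFS block size the client configuration resolves to.
        // 67108864 (64MB) is the shipped default, used here as the fallback.
        public class PrintBlockSize {
            public static void main(String[] args) {
                Configuration conf = new Configuration(); // reads *-site.xml from the classpath
                long blockSize = conf.getLong("dfs.block.size", 67108864L);
                System.out.println("dfs.block.size = " + blockSize);
            }
        }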

      Attachments

        1. failed reduce task log.html
          14 kB
          Aaron Baff


          People

            Assignee: Unassigned
            Reporter: Simon Dircks (simonbsd)
            Votes: 1
            Watchers: 3
