Hadoop Map/Reduce / MAPREDUCE-6923

Optimize MapReduce Shuffle I/O for small partitions


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.9.0, 3.0.0-beta1
    • Component/s: None
    • Labels: None
    • Environment: Observed in Hadoop 2.7.3 and above (judging from the source code of later versions), and Ubuntu 16.04.

    Description

      When a job configuration results in small partitions being read by each reducer from each mapper (e.g. 65 kilobytes, as in my setup: a 256-gigabyte TeraSort using 2048 mappers and 2048 reducers), and

      <property>
        <name>mapreduce.shuffle.transferTo.allowed</name>
        <value>false</value>
      </property>
      

      is set, then the default setting of

      <property>
        <name>mapreduce.shuffle.transfer.buffer.size</name>
        <value>131072</value>
      </property>
      

      results in almost 100% overhead in reads during shuffle in YARN, because for each 65K needed, 128K are read.
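
      For context: when transferTo is disallowed, the shuffle handler copies each partition through an intermediate ByteBuffer in FadvisedFileRegion.customShuffleTransfer. The sketch below paraphrases that copy loop (simplified, not the exact Hadoop source) to show where the buffer size matters: the buffer is allocated at the full configured size, so a read can pull 128K from the map output file even though only ~65K belong to the requested partition.

      import java.io.IOException;
      import java.nio.ByteBuffer;
      import java.nio.channels.FileChannel;
      import java.nio.channels.WritableByteChannel;

      // Simplified paraphrase of the copy loop used when
      // mapreduce.shuffle.transferTo.allowed is false; names follow
      // FadvisedFileRegion.java, but this is not the exact source.
      class ShuffleCopyLoopSketch {
        long customShuffleTransfer(FileChannel fileChannel, WritableByteChannel target,
            long position, long count, int shuffleBufferSize) throws IOException {
          long trans = count; // bytes of this partition still to send, e.g. ~65K
          // The buffer is allocated at the full configured size (128K by
          // default), no matter how few bytes remain in the partition.
          ByteBuffer byteBuffer = ByteBuffer.allocate(shuffleBufferSize);
          while (trans > 0) {
            // Reads up to 128K from the map output file, possibly well past
            // the partition boundary; the excess is discarded below.
            int readSize = fileChannel.read(byteBuffer, position);
            if (readSize <= 0) {
              break;
            }
            byteBuffer.flip();
            if (byteBuffer.remaining() > trans) {
              byteBuffer.limit((int) trans); // drop bytes past the partition
            }
            int toWrite = byteBuffer.remaining();
            position += toWrite;
            trans -= toWrite;
            while (byteBuffer.hasRemaining()) {
              target.write(byteBuffer);
            }
            byteBuffer.clear();
          }
          return count - trans;
        }
      }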

      I propose a fix in FadvisedFileRegion.java as follows:

      ByteBuffer byteBuffer = ByteBuffer.allocate(Math.min(this.shuffleBufferSize,
          trans > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) trans));
      

      This sets the shuffle buffer size to the minimum of the buffer size specified in the configuration (128K by default) and the actual partition size (65K on average in my setup). In my benchmarks this reduced the read overhead in YARN from about 100% (255 additional gigabytes, as described above) down to about 18% (an additional 45 gigabytes). The runtime of the job remained the same in my setup.
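
      As a back-of-the-envelope check on these figures (my own arithmetic, not taken from the patch): 2048 × 2048 transfers of 64K each make up the 256-gigabyte payload, while a full 128K buffer read per transfer doubles the bytes read. A small standalone snippet reproducing the numbers:

      // Standalone sanity check of the overhead figures quoted above
      // (illustrative arithmetic only; not part of the patch).
      public class ShuffleOverheadEstimate {
        public static void main(String[] args) {
          long transfers = 2048L * 2048L;   // one partition per mapper/reducer pair
          long partition = 64L * 1024L;     // 256 GiB / transfers, i.e. the "65K" above
          long buffer    = 128L * 1024L;    // default mapreduce.shuffle.transfer.buffer.size
          long payload   = transfers * partition; // bytes actually needed
          long read      = transfers * buffer;    // bytes read with a full-size buffer
          System.out.printf("payload %d GiB, read %d GiB, overhead %d%%%n",
              payload >> 30, read >> 30, 100 * (read - payload) / payload);
          // prints: payload 256 GiB, read 512 GiB, overhead 100%
        }
      }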

      Attachments

        1. MAPREDUCE-6923.00.patch
          2 kB
          Robert Schmidtke
        2. MAPREDUCE-6923.01.patch
          3 kB
          Robert Schmidtke


            People

              Assignee: rosch Robert Schmidtke
              Reporter: rosch Robert Schmidtke
              Votes: 0
              Watchers: 6
