Hadoop Common
  HADOOP-2560

Processing multiple input splits per mapper task

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

      Description

      Currently, an input split contains a contiguous chunk of an input file which, by default, corresponds to a single DFS block.
      This may lead to a large number of mapper tasks if the input data is large, which causes the following problems:

      1. Shuffling cost: since the framework has to move M * R map output segments to the nodes running the reducers,
      a larger M means a higher shuffling cost.

      2. High JVM initialization overhead: each map task starts in its own JVM, so more mappers means more startup cost.

      3. Disk fragmentation: a larger number of map output files means lower read throughput when accessing them.

      Ideally, you want to keep the number of mappers to no more than 16 times the number of nodes in the cluster.
      To achieve that, we can increase the input split size. However, if a split spans more than one DFS block,
      you lose the data-locality scheduling benefits.
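      As a rough, purely illustrative calculation (the input size, block size, reducer count, and cluster size below are assumptions, not numbers from this issue): 1 TB of input with a 128 MB block size yields about 8,192 single-block splits. On a 100-node cluster the 16x guideline caps the job at roughly 1,600 mappers, so each split would need to cover about 5 to 6 blocks; with R = 100 reducers the shuffle would also shrink from roughly 819,200 to about 160,000 map output segments.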

      One way to address this problem is to combine multiple input blocks from the same rack into one split.
      If, on average, we combine B blocks into one split, we reduce the number of mappers by a factor of B.
      Since all the blocks for one mapper share a rack, we can still benefit from rack-aware scheduling.
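
      A minimal sketch of the grouping idea in plain Java (no Hadoop APIs; the Block descriptor, rack names, and the maxBlocksPerSplit parameter are all hypothetical, purely to illustrate packing same-rack blocks into combined splits):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Hypothetical descriptor for one DFS block and the rack it lives on.
        class Block {
            final String path;
            final long offset;
            final long length;
            final String rack;
            Block(String path, long offset, long length, String rack) {
                this.path = path; this.offset = offset; this.length = length; this.rack = rack;
            }
        }

        class RackAwareSplitter {
            // Group blocks by rack, then pack up to maxBlocksPerSplit
            // same-rack blocks into each combined split.
            static List<List<Block>> combine(List<Block> blocks, int maxBlocksPerSplit) {
                Map<String, List<Block>> byRack = new HashMap<>();
                for (Block b : blocks) {
                    byRack.computeIfAbsent(b.rack, r -> new ArrayList<>()).add(b);
                }
                List<List<Block>> splits = new ArrayList<>();
                for (List<Block> rackBlocks : byRack.values()) {
                    for (int i = 0; i < rackBlocks.size(); i += maxBlocksPerSplit) {
                        int end = Math.min(i + maxBlocksPerSplit, rackBlocks.size());
                        splits.add(new ArrayList<>(rackBlocks.subList(i, end)));
                    }
                }
                return splits;
            }
        }

      Each resulting split covers up to maxBlocksPerSplit blocks that share a rack, so the map count drops by roughly that factor while the task can still be scheduled near its data. A similar mechanism later appeared in Hadoop as CombineFileInputFormat.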

      Thoughts?

        Issue Links

          Activity

          Allen Wittenauer made changes -
          Status Open [ 1 ] Resolved [ 5 ]
          Resolution Duplicate [ 3 ]
          Allen Wittenauer made changes -
          Link This issue is duplicated by HADOOP-4565 [ HADOOP-4565 ]
          dhruba borthakur made changes -
          Assignee dhruba borthakur [ dhruba ]
          dhruba borthakur made changes -
          Attachment multipleSplitsPerMapper.patch [ 12393082 ]
          Runping Qi made changes -
          Link This issue is related to HADOOP-3293 [ HADOOP-3293 ]
          Runping Qi made changes -
          Link This issue is blocked by HADOOP-249 [ HADOOP-249 ]
          Runping Qi made changes -
          Link This issue is related to HADOOP-249 [ HADOOP-249 ]
          Runping Qi made changes -
          Summary: Combining multiple input blocks into one mapper -> Processing multiple input splits per mapper task
          Description

          eric baldeschwieler made changes -
          Link This issue is related to HADOOP-2014 [ HADOOP-2014 ]
          eric baldeschwieler made changes -
          Link This issue blocks HADOOP-2014 [ HADOOP-2014 ]
          eric baldeschwieler made changes -
          Link This issue blocks HADOOP-2014 [ HADOOP-2014 ]
          Runping Qi created issue -

            People

            • Assignee: dhruba borthakur
            • Reporter: Runping Qi
            • Votes: 0
            • Watchers: 25
