Hadoop Common / HADOOP-38

default splitter should incorporate fs block size


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.1.0
    • Component/s: None
    • Labels: None

    Description

      By default, the file splitting code should operate as follows.

      inputs are <file>*, numMapTasks, minSplitSize, fsBlockSize
      output is <file,start,length>*

      totalSize = sum of all file sizes;

      desiredSplitSize = totalSize / numMapTasks;
      if (desiredSplitSize > fsBlockSize) /* new */
          desiredSplitSize = fsBlockSize;
      if (desiredSplitSize < minSplitSize)
          desiredSplitSize = minSplitSize;

      chop input files into desiredSplitSize chunks & return them
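
      For concreteness, here is a minimal Java sketch of this computation, including the clamp marked 'new'. The class, method, and parameter names are illustrative only, not the actual Hadoop API:

      // Illustrative sketch of the proposed split-size rule; the names
      // are hypothetical and do not match any real Hadoop class.
      class SplitSizer {
        static long desiredSplitSize(long totalSize, int numMapTasks,
                                     long minSplitSize, long fsBlockSize) {
          long desired = totalSize / numMapTasks;
          if (desired > fsBlockSize)    // 'new': never exceed an fs block
            desired = fsBlockSize;
          if (desired < minSplitSize)   // but never shrink below the minimum
            desired = minSplitSize;
          return desired;
        }
      }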

      In other words, numMapTasks is a desired minimum. We'll try to chop the input into at least numMapTasks chunks, each ideally a single fs block.

      If there's not enough input data to create numMapTasks tasks, each with an entire block, then we'll permit tasks whose input is smaller than a filesystem block, down to a minimum split size.

      This handles cases where:

      • each input record takes a long time to process. In this case we want to keep the whole cluster busy, so it is important to permit splits smaller than the fs block size.
      • input I/O dominates. In this case we want to place tasks on hosts where their data is local, which is only possible if splits are fs block size or smaller (see the sketch after this list).
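
      The 'chop' step might look like the following sketch. The Split record and chop method are hypothetical illustrations, not Hadoop's actual classes; real split generation would also consult the filesystem for file sizes and block locations.

      import java.util.ArrayList;
      import java.util.List;

      // Hypothetical <file,start,length> record for illustration only.
      class Split {
        final String file;
        final long start;
        final long length;
        Split(String file, long start, long length) {
          this.file = file; this.start = start; this.length = length;
        }
      }

      class Chopper {
        // Cut one file into chunks of at most desiredSplitSize bytes
        // (assumed > 0); the last chunk carries whatever remains at the
        // end of the file.
        static List<Split> chop(String file, long fileSize,
                                long desiredSplitSize) {
          List<Split> splits = new ArrayList<>();
          for (long start = 0; start < fileSize; start += desiredSplitSize) {
            splits.add(new Split(file, start,
                                 Math.min(desiredSplitSize, fileSize - start)));
          }
          return splits;
        }
      }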

      Are there other common cases that this algorithm does not handle well?

      The part marked 'new' above is not currently implemented, but I'd like to add it.

      Does this sound reasonable?

          People

            Assignee: Unassigned
            Reporter: Doug Cutting (cutting)
