Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: 0.23.0
    • Component/s: client, jobtracker
    • Labels: None
    • Release Note:
      An efficient implementation of small jobs that runs all tasks in the same JVM, thereby achieving lower latency.

Description

Currently, very small map-reduce jobs suffer from latency issues due to overheads in Hadoop Map-Reduce such as task scheduling and JVM startup. We've periodically tried to optimize all parts of the framework to achieve lower latencies.

I'd like to turn the problem around a little bit. I propose we allow very small jobs to run as a single-task job with multiple maps and reduces, i.e. similar to our current implementation of the LocalJobRunner. Thus, under certain conditions (maybe a user-set configuration, or if the input data is small, i.e. less than a DFS block size) we could launch a special task which runs all maps in a serial manner, followed by the reduces. This would help small jobs achieve significantly lower latencies, thanks to reduced scheduling overhead, no extra JVM startup, no shuffle over the network, etc.
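The proposed flow can be sketched in plain Java: run every map task serially in the current JVM, group the intermediate records in memory (standing in for the shuffle), then run the reduces, also serially. This is a minimal illustrative sketch with a toy word count as the job; the class and method names are hypothetical, not from Hadoop:

```java
import java.util.*;
import java.util.function.BiConsumer;

// Hypothetical sketch, not the actual Hadoop implementation: all map and
// reduce work for a small job happens serially inside one JVM.
public class SingleJvmJobSketch {

    // A "map task": here, a toy word-count mapper over one input split.
    static void mapTask(String split, BiConsumer<String, Integer> collect) {
        for (String word : split.split("\\s+")) {
            if (!word.isEmpty()) collect.accept(word, 1);
        }
    }

    // A "reduce task": sum the values collected for one key.
    static int reduceTask(String key, List<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    // Run all maps, then all reduces, in this JVM: no per-task scheduling,
    // no extra JVM startups, no shuffle over the network.
    public static Map<String, Integer> runLocally(List<String> splits) {
        // In-memory, sorted stand-in for the map output / shuffle stage.
        SortedMap<String, List<Integer>> intermediate = new TreeMap<>();
        for (String split : splits) {                          // serial map phase
            mapTask(split, (k, v) ->
                intermediate.computeIfAbsent(k, x -> new ArrayList<>()).add(v));
        }
        Map<String, Integer> output = new LinkedHashMap<>();
        for (Map.Entry<String, List<Integer>> e : intermediate.entrySet()) {
            output.put(e.getKey(),                             // serial reduce phase
                       reduceTask(e.getKey(), e.getValue()));
        }
        return output;
    }

    public static void main(String[] args) {
        // Two "splits" of input, processed one after the other.
        System.out.println(runLocally(List.of("hello world", "hello hadoop")));
        // → {hadoop=1, hello=2, world=1}
    }
}
```

The point of the sketch is the control flow, not the word count: a single task sequentially drives what would otherwise be several scheduled tasks, which is where the latency savings come from.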

This would be a huge benefit to small Hive/Pig queries, especially on large clusters.

      Thoughts?


People

    • Assignee:
      Greg Roelofs
    • Reporter:
      Arun C Murthy
    • Votes:
      3
    • Watchers:
      35
