Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.21.0
    • Fix Version/s: 0.21.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Vision:

      We want to build a simulator to simulate large-scale Hadoop clusters, applications and workloads. This would be invaluable in furthering Hadoop by providing a tool for researchers and developers to prototype features (e.g. pluggable block placement for HDFS, Map-Reduce schedulers, etc.) and predict their behaviour and performance with a reasonable amount of confidence, thereby aiding rapid innovation.


      First Cut: Simulator for the Map-Reduce Scheduler

      The Map-Reduce scheduler is a fertile area of interest, with at least four schedulers currently in existence, each with its own set of features: the Default Scheduler, Capacity Scheduler, Fairshare Scheduler and Priority Scheduler.

      Each scheduler's scheduling decisions are driven by many factors, such as fairness, capacity guarantees, resource availability, data locality, etc.

      Given that, it is non-trivial to choose the right scheduler (or set of scheduler features) for a given workload. Hence a simulator that can predict how well a particular scheduler works for a specific workload, by quickly iterating over schedulers and/or scheduler features, would be quite useful.

      So, the first cut is to implement a simulator for the Map-Reduce scheduler which takes as input a job trace derived from a production workload and a cluster definition, and simulates the execution of the jobs defined in the trace on this virtual cluster. As output, the detailed job execution trace (recorded in relation to virtual simulated time) could then be analyzed to understand various traits of individual schedulers (individual job turnaround times, throughput, fairness, capacity guarantees, etc.). To support this, we would need a simulator which could accurately model the conditions of the actual system that affect a scheduler's decisions. These include very large-scale clusters (thousands of nodes), the detailed characteristics of the workload thrown at the cluster, job or task failures, data locality, and cluster hardware (CPU, memory, disk I/O, network I/O, network topology), etc.
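      The core loop described above (a job trace and cluster definition in, per-job metrics in virtual time out) can be illustrated with a toy discrete-event model. Everything below is a hypothetical sketch, not Mumak's actual code: the `simulate` function, the `(submit_time, runtime)` trace format, and the slot-based cluster model are all simplifications, and the scheduling policy is hard-wired to FIFO where a real simulator would plug in the scheduler under study.

      ```python
      import heapq

      def simulate(trace, num_slots):
          """Toy discrete-event simulation of a job trace on a virtual cluster.

          trace     : list of (submit_time, runtime) pairs, one per job
                      (a stand-in for a real production-derived trace).
          num_slots : cluster capacity, modelled as identical execution slots.

          Returns each job's turnaround time (finish - submit) in virtual
          time, in submission order. The policy here is plain FIFO; a real
          simulator would delegate this choice to the scheduler under test.
          """
          # Min-heap of the virtual times at which each slot becomes free.
          free = [0] * num_slots
          heapq.heapify(free)
          turnaround = []
          for submit, runtime in sorted(trace):  # process jobs in submit order
              start = max(submit, heapq.heappop(free))  # wait if cluster is busy
              finish = start + runtime
              heapq.heappush(free, finish)
              turnaround.append(finish - submit)
          return turnaround
      ```

      For instance, with a single slot, `simulate([(0, 10), (0, 10), (5, 1)], 1)` yields turnaround times `[10, 20, 16]`: the third job arrives at virtual time 5 but cannot start until time 20. Doubling the capacity to two slots drops this to `[10, 10, 6]`, which is the kind of what-if comparison the proposed simulator would enable at cluster scale.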

        Attachments

        1. 19-jobs.topology.json.gz
          5 kB
          Hong Tang
        2. 19-jobs.trace.json.gz
          594 kB
          Hong Tang
        3. mapreduce-728-20090917.patch
          157 kB
          Hong Tang
        4. mapreduce-728-20090917-3.patch
          840 kB
          Hong Tang
        5. mapreduce-728-20090917-4.patch
          842 kB
          Hong Tang
        6. mapreduce-728-20090918.patch
          842 kB
          Hong Tang
        7. mapreduce-728-20090918-2.patch
          842 kB
          Hong Tang
        8. mapreduce-728-20090918-3.patch
          844 kB
          Hong Tang
        9. mapreduce-728-20090918-5.patch
          844 kB
          Hong Tang
        10. mapreduce-728-20090918-6.patch
          844 kB
          Hong Tang
        11. mumak.png
          44 kB
          Arun C Murthy

          People

          • Assignee: hong.tang (Hong Tang)
          • Reporter: acmurthy (Arun C Murthy)
          • Votes: 0
          • Watchers: 32
