Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.23.0
    • Component/s: mrv2
    • Labels:
      None
    • Release Note:
      MapReduce has undergone a complete overhaul in hadoop-0.23, and we now have what we call MapReduce 2.0 (MRv2).

      The fundamental idea of MRv2 is to split up the two major functionalities of the JobTracker, resource management and job scheduling/monitoring, into separate daemons. The idea is to have a global ResourceManager (RM) and a per-application ApplicationMaster (AM). An application is either a single job in the classical sense of Map-Reduce jobs or a DAG of jobs. The ResourceManager and the per-node slave, the NodeManager (NM), form the data-computation framework. The ResourceManager is the ultimate authority that arbitrates resources among all the applications in the system. The per-application ApplicationMaster is, in effect, a framework-specific library and is tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks.
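
      The RM/AM split above can be sketched as a toy model. All class and method names below are invented for illustration; they are not the actual YARN API:

```python
from dataclasses import dataclass

@dataclass
class Container:
    """A grant of resources on a specific node (toy model)."""
    node: str
    memory_mb: int

class ResourceManager:
    """Global arbiter: tracks free memory per node and grants containers."""
    def __init__(self, nodes):
        self.free = dict(nodes)  # node name -> free memory in MB

    def allocate(self, memory_mb):
        for node, free_mb in self.free.items():
            if free_mb >= memory_mb:
                self.free[node] -= memory_mb
                return Container(node, memory_mb)
        return None  # no node can satisfy the request

class ApplicationMaster:
    """Per-application: negotiates containers from the RM for its tasks."""
    def __init__(self, rm, tasks):
        self.rm, self.tasks = rm, tasks

    def run(self):
        placements = []
        for task, mem in self.tasks:
            c = self.rm.allocate(mem)
            if c:
                placements.append((task, c.node))
        return placements

rm = ResourceManager({"node1": 2048, "node2": 1024})
am = ApplicationMaster(rm, [("map_0", 1024), ("map_1", 1024), ("reduce_0", 1024)])
print(am.run())  # each task placed on a node with enough free memory
```

      Note that, as in the real design, the ResourceManager only arbitrates resources; the ApplicationMaster decides what to run in each granted container.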

      The ResourceManager has two main components:
      * Scheduler (S)
      * ApplicationsManager (ASM)

      The Scheduler is responsible for allocating resources to the various running applications, subject to familiar constraints of capacities, queues, etc. The Scheduler is a pure scheduler in the sense that it performs no monitoring or tracking of status for the application. It also offers no guarantees about restarting failed tasks, whether due to application failure or hardware failures. The Scheduler performs its scheduling function based on the resource requirements of the applications; it does so based on the abstract notion of a Resource Container, which incorporates elements such as memory, CPU, disk, network, etc.
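
      The Resource Container notion can be illustrated as a multi-dimensional resource vector; a request fits only if every dimension fits. This is a minimal sketch (field names are illustrative, not the actual YARN API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    """Toy multi-dimensional resource vector."""
    memory_mb: int
    vcores: int

    def fits_within(self, capacity):
        # A request fits only if every dimension fits.
        return (self.memory_mb <= capacity.memory_mb
                and self.vcores <= capacity.vcores)

    def subtract(self, other):
        return Resource(self.memory_mb - other.memory_mb,
                        self.vcores - other.vcores)

node_capacity = Resource(memory_mb=8192, vcores=8)
request = Resource(memory_mb=2048, vcores=4)
print(request.fits_within(node_capacity))          # True
remaining = node_capacity.subtract(request)
print(Resource(4096, 6).fits_within(remaining))    # False: memory fits, vcores do not
```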

      The Scheduler has a pluggable policy plug-in, which is responsible for partitioning the cluster resources among the various queues, applications, etc. The current Map-Reduce schedulers, such as the CapacityScheduler and the FairScheduler, are examples of such plug-ins.
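
      The pluggable-policy idea amounts to an interface the Scheduler calls into. The sketch below is a hypothetical illustration of that shape, not the actual plug-in API; the two policies here are simplified stand-ins, not the real CapacityScheduler or FairScheduler:

```python
from abc import ABC, abstractmethod

class SchedulingPolicy(ABC):
    """Pluggable policy: decide which pending request to serve next (toy interface)."""
    @abstractmethod
    def pick(self, pending):  # pending: list of (app_id, memory_mb)
        ...

class FifoPolicy(SchedulingPolicy):
    def pick(self, pending):
        return pending[0]  # oldest request first

class SmallestFirstPolicy(SchedulingPolicy):
    def pick(self, pending):
        return min(pending, key=lambda r: r[1])  # favor small requests

pending = [("app_1", 4096), ("app_2", 512), ("app_3", 1024)]
print(FifoPolicy().pick(pending))           # ('app_1', 4096)
print(SmallestFirstPolicy().pick(pending))  # ('app_2', 512)
```

      Swapping the policy object changes how the cluster is partitioned without touching the rest of the Scheduler.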

      The CapacityScheduler supports hierarchical queues to allow for more predictable sharing of cluster resources.
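
      With hierarchical queues, a leaf queue's share of the cluster is the product of the capacity fractions along its path. A toy calculation, using an invented example hierarchy (the queue names and numbers are hypothetical):

```python
def absolute_capacity(queue_path, capacities):
    """Multiply per-level capacity fractions down a hierarchical queue path.

    `capacities` maps each queue path to its share of its parent (toy model).
    """
    parts = queue_path.split(".")
    frac = 1.0
    for i in range(1, len(parts) + 1):
        frac *= capacities[".".join(parts[:i])]
    return frac

# Hypothetical hierarchy: root splits 60/40 between eng and sales,
# and eng splits 50/50 between batch and adhoc.
capacities = {
    "root": 1.0,
    "root.eng": 0.6,
    "root.sales": 0.4,
    "root.eng.batch": 0.5,
    "root.eng.adhoc": 0.5,
}
print(absolute_capacity("root.eng.batch", capacities))  # 0.3 of the cluster
```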
      The ApplicationsManager is responsible for accepting job submissions, negotiating the first container for executing the application-specific ApplicationMaster, and providing the service for restarting the ApplicationMaster container on failure.

      The NodeManager is the per-machine framework agent that is responsible for launching the applications' containers, monitoring their resource usage (CPU, memory, disk, network), and reporting the same to the Scheduler.
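
      The NodeManager's usage reporting can be pictured as aggregating per-container measurements into a single heartbeat payload. A toy sketch, with invented field names (not the actual NM protocol):

```python
from dataclasses import dataclass

@dataclass
class ContainerUsage:
    """One container's measured resource usage (toy model)."""
    container_id: str
    memory_mb: int
    cpu_pct: float

def node_report(node_id, usages):
    """Aggregate per-container usage into a single heartbeat payload."""
    return {
        "node": node_id,
        "containers": len(usages),
        "memory_mb_used": sum(u.memory_mb for u in usages),
        "peak_cpu_pct": max((u.cpu_pct for u in usages), default=0.0),
    }

report = node_report("node7", [
    ContainerUsage("c_01", 1024, 35.0),
    ContainerUsage("c_02", 2048, 80.0),
])
print(report)
```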

      The per-application ApplicationMaster has the responsibility of negotiating appropriate resource containers from the Scheduler, tracking their status, and monitoring progress.
    • Tags:
      mr2,mapreduce-2.0

      Description

      Re-factor MapReduce into a generic resource scheduler and a per-job, user-defined component that manages the application execution.

        Attachments

        1. yarn-state-machine.task-attempt.png
          25 kB
          Greg Roelofs
        2. yarn-state-machine.task-attempt.dot
          3 kB
          Greg Roelofs
        3. yarn-state-machine.task.png
          18 kB
          Greg Roelofs
        4. yarn-state-machine.task.dot
          2 kB
          Greg Roelofs
        5. yarn-state-machine.job.png
          23 kB
          Greg Roelofs
        6. yarn-state-machine.job.dot
          2 kB
          Greg Roelofs
        7. ResourceManager.png
          290 kB
          Binglin Chang
        8. ResourceManager.gv
          6 kB
          Binglin Chang
        9. post-move-patch-final.txt
          131 kB
          Mahadev konar
        10. post-move-patch-20110817.2.txt
          126 kB
          Vinod Kumar Vavilapalli
        11. post-move.patch
          83 kB
          Mahadev konar
        12. post-move.patch
          85 kB
          Arun C Murthy
        13. post-move.patch
          99 kB
          Arun C Murthy
        14. NodeManager.png
          228 kB
          Binglin Chang
        15. NodeManager.gv
          6 kB
          Binglin Chang
        16. multi-column-stable-sort-default-theme.png
          299 kB
          Luke Lu
        17. MR-279-script-final.sh
          3 kB
          Arun C Murthy
        18. MR-279-script-20110817.sh
          3 kB
          Vinod Kumar Vavilapalli
        19. MR-279-script.sh
          2 kB
          Mahadev konar
        20. MR-279-script.sh
          3 kB
          Arun C Murthy
        21. MR-279.sh
          0.8 kB
          Arun C Murthy
        22. MR-279.patch
          3.66 MB
          Arun C Murthy
        23. MR-279.patch
          3.94 MB
          Arun C Murthy
        24. MR-279_MR_files_to_move-20110817.txt
          23 kB
          Vinod Kumar Vavilapalli
        25. MR-279_MR_files_to_move.txt
          23 kB
          Arun C Murthy
        26. MR-279_MR_files_to_move.txt
          23 kB
          Mahadev konar
        27. MapReduce_NextGen_Architecture.pdf
          554 kB
          Arun C Murthy
        28. hadoop_contributors_meet_07_01_2011.pdf
          531 kB
          Sharad Agarwal
        29. capacity-scheduler-dark-theme.png
          192 kB
          Luke Lu

          Issue Links

            Activity

              People

              • Assignee:
                Unassigned
              • Reporter:
                acmurthy Arun C Murthy
              • Votes:
                6
              • Watchers:
                112

                Dates

                • Created:
                  Updated:
                  Resolved: