Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.23.0
    • Component/s: mrv2
    • Labels:
      None
    • Release Note:
      MapReduce has undergone a complete overhaul in hadoop-0.23, and we now have what we call MapReduce 2.0 (MRv2).

      The fundamental idea of MRv2 is to split the two major functions of the JobTracker, resource management and job scheduling/monitoring, into separate daemons: a global ResourceManager (RM) and a per-application ApplicationMaster (AM). An application is either a single job in the classical Map-Reduce sense or a DAG of jobs. The ResourceManager and the per-node slave, the NodeManager (NM), form the data-computation framework. The ResourceManager is the ultimate authority that arbitrates resources among all the applications in the system. The per-application ApplicationMaster is, in effect, a framework-specific library tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks.

      The ResourceManager has two main components:
      * Scheduler (S)
      * ApplicationsManager (ASM)

      The Scheduler is responsible for allocating resources to the various running applications, subject to familiar constraints such as capacities and queues. The Scheduler is a pure scheduler in the sense that it performs no monitoring or tracking of application status, and it offers no guarantees about restarting tasks that fail, whether due to application failure or hardware failure. The Scheduler performs its scheduling function based on the resource requirements of the applications; it does so using the abstract notion of a Resource Container, which incorporates elements such as memory, CPU, disk, and network.
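      To illustrate the Resource Container idea, here is a minimal sketch in plain Java. The class and method names are invented for this example and do not match the real YARN APIs; the point is only that a scheduling decision reduces to checking a multi-dimensional resource request against a node's remaining capacity.

```java
// Toy model of container-based scheduling; names are hypothetical.
import java.util.ArrayList;
import java.util.List;

class Resource {
    final int memoryMb;
    final int vcores;
    Resource(int memoryMb, int vcores) { this.memoryMb = memoryMb; this.vcores = vcores; }
    // A request "fits" only if every dimension fits.
    boolean fits(Resource available) {
        return memoryMb <= available.memoryMb && vcores <= available.vcores;
    }
    Resource subtract(Resource r) {
        return new Resource(memoryMb - r.memoryMb, vcores - r.vcores);
    }
}

class Node {
    Resource available;
    final List<Resource> containers = new ArrayList<>();
    Node(Resource capacity) { this.available = capacity; }
    // A pure scheduling decision: grant a container or refuse.
    // No task monitoring or restart logic lives here.
    boolean allocate(Resource request) {
        if (!request.fits(available)) return false;
        available = available.subtract(request);
        containers.add(request);
        return true;
    }
}
```

      For example, a node with 8192 MB and 8 cores can grant four 2048 MB / 1-core containers before the memory dimension is exhausted, even though cores remain free.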

      The Scheduler has a pluggable policy plug-in that is responsible for partitioning the cluster resources among the various queues, applications, etc. The current Map-Reduce schedulers, such as the CapacityScheduler and the FairScheduler, are examples of such plug-ins.

      The CapacityScheduler supports hierarchical queues to allow for more predictable sharing of cluster resources.
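      For illustration, a hierarchical queue setup in the CapacityScheduler is configured along these lines in capacity-scheduler.xml. The queue names here are made up, and the property keys are those of later CapacityScheduler releases; check the documentation shipped with your release for the exact keys.

```xml
<!-- Sketch: a two-level hierarchy under the predefined "root" queue. -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>prod,dev</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.prod.capacity</name>
  <value>70</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.capacity</name>
  <value>30</value>
</property>
<!-- "dev" is itself split into two child queues; sibling capacities sum to 100. -->
<property>
  <name>yarn.scheduler.capacity.root.dev.queues</name>
  <value>eng,science</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.eng.capacity</name>
  <value>50</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.science.capacity</name>
  <value>50</value>
</property>
```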
      The ApplicationsManager is responsible for accepting job submissions, negotiating the first container for executing the application-specific ApplicationMaster, and providing the service for restarting the ApplicationMaster container on failure.

      The NodeManager is the per-machine framework agent that is responsible for launching the applications' containers, monitoring their resource usage (CPU, memory, disk, network), and reporting the same to the Scheduler.

      The per-application ApplicationMaster has the responsibility of negotiating appropriate resource containers from the Scheduler, tracking their status, and monitoring their progress.
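      The AM's negotiate-and-track cycle amounts to a loop like the following sketch. The interface and class names here are invented; the real client protocol is different and asynchronous.

```java
// Invented interfaces sketching the ApplicationMaster's responsibilities;
// not the real YARN protocol.
interface SchedulerClient {
    /** Ask the Scheduler for up to n containers; may return fewer. */
    java.util.List<String> allocate(int n);
}

class ToyApplicationMaster {
    private final SchedulerClient scheduler;
    private int completed = 0;
    ToyApplicationMaster(SchedulerClient scheduler) { this.scheduler = scheduler; }

    /** Negotiate containers until `tasks` of them have run to completion. */
    int run(int tasks) {
        while (completed < tasks) {
            // 1. Negotiate resource containers from the Scheduler.
            for (String containerId : scheduler.allocate(tasks - completed)) {
                // 2. Work with the NodeManager to launch a task in the
                //    container (elided), then
                // 3. track its status; in this toy, every launch "succeeds".
                completed++;
            }
        }
        return completed;
    }
}
```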
    • Tags:
      mr2,mapreduce-2.0

      Description

      Re-factor MapReduce into a generic resource scheduler and a per-job, user-defined component that manages the application execution.

      1. capacity-scheduler-dark-theme.png
        192 kB
        Luke Lu
      2. hadoop_contributors_meet_07_01_2011.pdf
        531 kB
        Sharad Agarwal
      3. MapReduce_NextGen_Architecture.pdf
        554 kB
        Arun C Murthy
      4. MR-279_MR_files_to_move.txt
        23 kB
        Mahadev konar
      5. MR-279_MR_files_to_move.txt
        23 kB
        Arun C Murthy
      6. MR-279_MR_files_to_move-20110817.txt
        23 kB
        Vinod Kumar Vavilapalli
      7. MR-279.patch
        3.94 MB
        Arun C Murthy
      8. MR-279.patch
        3.66 MB
        Arun C Murthy
      9. MR-279.sh
        0.8 kB
        Arun C Murthy
      10. MR-279-script.sh
        3 kB
        Arun C Murthy
      11. MR-279-script.sh
        2 kB
        Mahadev konar
      12. MR-279-script-20110817.sh
        3 kB
        Vinod Kumar Vavilapalli
      13. MR-279-script-final.sh
        3 kB
        Arun C Murthy
      14. multi-column-stable-sort-default-theme.png
        299 kB
        Luke Lu
      15. NodeManager.gv
        6 kB
        Binglin Chang
      16. NodeManager.png
        228 kB
        Binglin Chang
      17. post-move.patch
        99 kB
        Arun C Murthy
      18. post-move.patch
        85 kB
        Arun C Murthy
      19. post-move.patch
        83 kB
        Mahadev konar
      20. post-move-patch-20110817.2.txt
        126 kB
        Vinod Kumar Vavilapalli
      21. post-move-patch-final.txt
        131 kB
        Mahadev konar
      22. ResourceManager.gv
        6 kB
        Binglin Chang
      23. ResourceManager.png
        290 kB
        Binglin Chang
      24. yarn-state-machine.job.dot
        2 kB
        Greg Roelofs
      25. yarn-state-machine.job.png
        23 kB
        Greg Roelofs
      26. yarn-state-machine.task.dot
        2 kB
        Greg Roelofs
      27. yarn-state-machine.task.png
        18 kB
        Greg Roelofs
      28. yarn-state-machine.task-attempt.dot
        3 kB
        Greg Roelofs
      29. yarn-state-machine.task-attempt.png
        25 kB
        Greg Roelofs

        Issue Links

          Activity

          Michael Bieniosek added a comment -

          A couple points:

          1) the job client currently submits a job, then exits. This means that the machine where the job client runs does not need to be reliable (it could be my laptop, for example). I think this is a valuable feature. The JobManager you suggest cannot be run on an unreliable machine – I think you mention this in brownie point #3.

          2) One of our problems is that we have substantial amounts of per-job software that is installed via rpm. Our current solution is to create a job-private mapreduce cluster (not using HoD), install a bunch of software, then start the job. This won't work if a machine might be running tasks from multiple jobs simultaneously. This proposal doesn't seem to affect our ability to run private mapreduce clusters. But it does make it less useful for us. You suggest xen, which would let us configure per-task; that might work but it will increase the task overhead. Another possibility is allocating a machine-at-a-time to jobs, so we only have to configure the machine once per job.

          I'm not totally sure what the point is here – it seems like you mainly want to separate the jobtracker's scheduling and monitoring functions. Is there a scaling problem with the jobtracker currently? You discuss the jobtracker being a single point of failure, but the namenode is already a more serious point of failure, since it is much more work to rebuild a namenode if it dies. Are you also trying to replace HoD?

          Arun C Murthy added a comment -

          2) One of our problems [...]

          Right, this will not affect your special case at all... you can continue to run multiple clusters on the same machines with different configs, ports etc.

          I'm not totally sure [...]

          Yep. The point is to get people to think about ways of making Map-Reduce scalable/reliable so that we can maintain a single static MR cluster and do away with the notion of job-private clusters (i.e. HoD), as expounded in the Motivation section.

          The stretch is to see if we can enhance it to support other, non-MR paradigms too.

          You discuss the jobtracker being a single point of failure, but the namenode is already a more serious point of failure, since it is much more work to rebuild a namenode if it dies.

          Sure, that is at least as important; however I believe it's unrelated to this discussion.

          Jeff Hammerbacher added a comment -

          Thank goodness this ticket has been opened, and thanks for hitting on most of the major abstractions necessary Arun! I've got a few thoughts, organized via quotes from the "Beautiful Code" MapReduce article:

          1) "The MapReduce library first splits the input files into M pieces ... then starts up many copies of the program on a cluster of machines, by making a request to the cluster scheduling system. ... One of the copies is special and is called the MapReduce master."

          so "cluster scheduling system" == JobScheduler and "MapReduce master" == JobManager, except rather than being tethered to the JobClient it's an autonomous process instantiated by the JobScheduler. The sequence of execution described implies locality awareness in the JobScheduler when assigning worker machines, and could get rid of one concern, as the private cluster can be assigned in a locality-aware manner. I think the JobClient/JobManager, as designed, has bundled the "MapReduce master" and the client process originating the MapReduce job too tightly.

          Actually, upon further review, I think you've hit upon this idea in your "points to ponder": "We could have the notion of a JobManager being the proxy process running inside the cluster for the JobClient (the job-submitting program which is running outside the colo e.g. user's dev box) ... in fact we can think of the JobManager being another kind of task which needs to be scheduled to run at a TaskTracker."

          2) "The master periodically sends a ping remote procedure call to each worker."

          ...as opposed to a regular heartbeat from the TaskTracker. Probes can be done intelligently depending on the state of the overall Job and could significantly reduce network RPC traffic. Does this matter in practice on large clusters?

          3) "The master logs all updates of its scheduling state to a persistent logfile. If the master dies (a rare occurrence, since there is only one master), it is restarted by the cluster scheduling system."

          in other words, HA for JobManager is handled by the JobScheduler. JobScheduler HA is presumably handled via Chubby, but it is clearly a requirement.

          4) "We use backup tasks to solve the problem of stragglers. When there are only a few map tasks left, the master schedules (on idle workers) one backup execution for each of the remaining in-progress tasks."

          currently, SPECULATIVE_GAP and SPECULATIVE_LAG control speculative execution at the task level. As with heartbeats versus probes, wouldn't this be better handled at the JobManager/MapReduce master level? Either way, this should be a JobConf param. We set this to off for our cluster because it has caused severe instability when running many jobs simultaneously.

          5) "Each worker process installs a signal handler that catches segmentation violations and bus errors. Before invoking a user Map or Reduce operation, the MapReduce library stores the sequence number of the record in a global variable. If the user code generates a signal, the signal handler sends a "last gasp" UDP packet that contains the sequence number to the MapReduce master. When the master has seen more than one failure on a particular record, it indicates that the record should be skipped ..."

          this might be handy, not sure if anything like this exists in Hadoop.

          Finally, I really like the idea of hierarchical (rack-level) reporting as suggested in "Points to Ponder": "Discuss the notion of a rack-level aggregator of TaskTracker statuses i.e. rather than have every TaskTracker update the JobScheduler, a rack-level aggregator could achieve the same?" Of course, I'd like rack-level anything...

          Arun C Murthy added a comment -

          [...] as opposed to a regular heartbeat from the TaskTracker. Probes can be done intelligently depending on the state of the overall Job and could significantly reduce network RPC traffic. Does this matter in practice on large clusters?

          Yes. To clarify, the idea is that the JobManager pings the TaskTrackers (today the TaskTracker pings the JobTracker) for status-updates for its tasks. Clearly it only pings the TaskTrackers which are currently running its tasks.

          currently, SPECULATIVE_GAP and SPECULATIVE_LAG control speculative execution at the task level. As with heartbeats versus probes, wouldn't this be better handled at the JobManager/MapReduce master level? Either way, this should be a JobConf param.

          Yes. Again, the idea is that the JobManager decides to schedule speculative tasks via SPECULATIVE_{LAG|GAP} etc., same as the normal tasks. It then asks the JobScheduler for free TaskTrackers.

          Thus, which task needs to run (normal/failed/speculative) is decided by the JobManager, whereas where the task should be run (i.e. which TaskTracker) is decided by the JobScheduler, which doesn't care about the nature of the task (though it does care about the job's priorities, etc.).
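          As a rough sketch of that division of labor (the names are hypothetical, and the constants merely stand in for the real SPECULATIVE_LAG/SPECULATIVE_GAP semantics), the JobManager-side decision might look like:

```java
// Toy illustration of the "which task" side of the split; the "where"
// question is handed off to the JobScheduler, which only sees a request
// for a free TaskTracker. Names and thresholds are hypothetical.
class SpeculationPolicy {
    static final double SPECULATIVE_GAP = 0.2;  // min progress gap vs. the average
    static final long SPECULATIVE_LAG = 60_000; // min runtime before speculating (ms)

    // JobManager side: decide WHICH task deserves a speculative attempt.
    static boolean shouldSpeculate(double taskProgress, double avgProgress,
                                   long runtimeMs) {
        return runtimeMs >= SPECULATIVE_LAG
            && (avgProgress - taskProgress) >= SPECULATIVE_GAP;
    }
}
```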

          We set this to off for our cluster because it has caused severe instability when running many jobs simultaneously.

          Which version of Hadoop are you running? Things have improved a fair bit recently; further improvements are underway (HADOOP-2141).

          Jeff Hammerbacher added a comment -

          2) yes, you are correct. i am arguing for this mechanism because the jobmanager can have some intelligence in sending out probes based on overall job status.

          4) yes, i'd like to see the jobmanager handle speculative execution, but i guess i'm arguing that the logic of using SPECULATIVE_{LAG|GAP} might not be optimal and a different criterion for choosing when to speculate should be considered. we're running a patched 0.14.4 build.

          also, i'd like to add:

          3) clearly the logging of scheduling state by the jobmanager/mapreduce master would be aided by HADOOP-1700.

          finally: could an external sort for text records similar to the one described here: http://portal.acm.org/citation.cfm?id=1229055.1229062 be considered as an improvement over the general external merge sort?

          Doug Cutting added a comment -

          The stated goals of this design are to improve things when running mapreduce on a subset of the nodes of a cluster, when HDFS is run on all nodes. The current approach is to run new mapreduce daemons (jobtracker and tasktrackers) for the subset. The problems are that this does not utilize nodes as fully as they could be (e.g., during the tail of a job) and it inhibits data locality optimizations.

          The proposed solution is to split the jobtracker daemon in two: one shared, long-running daemon, and a per-job daemon. My concern with this approach is that adding a new kind of daemon considerably complicates things. New classes of daemons exponentially increase the number of failure modes that must be tested and debugged. This could be warranted if it permitted greater sharing of functionality between systems, reducing the amount of functionality that we must maintain. For example, we could add a general node allocation system and build map-reduce on top of it. But for that to be a convincingly independent layer, we'd need to demonstrate that we can build other, non-mapreduce systems on it, e.g., perhaps hdfs, but this proposal doesn't seem to offer that.

          I propose that the stated problems can be more simply and directly solved without adding a new daemon, but with the existing integrated system. We can add a job parameter naming the maximum number of nodes that will be used simultaneously. Then a single jobtracker for the entire cluster can schedule tasks for multiple jobs at a time, each running on different subsets of nodes. A cluster of 1000 nodes might be configured to limit jobs to 200 nodes each. As jobs are winding down and no longer use all 200 nodes, the next job can use those nodes, improving utilization, the first stated goal of this issue. The entire cluster is available to the jobtracker for scheduling, so that it can arrange to place tasks on nodes where their data is local, addressing the second stated goal of this issue.
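          The per-job node cap being proposed here can be sketched as a simple admission check inside the scheduler. This is purely illustrative; the class and method names are invented.

```java
// Illustrative per-job node cap, as in the single-jobtracker proposal:
// a job may run tasks on at most maxNodesPerJob distinct nodes at once.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class NodeCapScheduler {
    private final Map<String, Set<String>> nodesByJob = new HashMap<>();
    private final int maxNodesPerJob;
    NodeCapScheduler(int maxNodesPerJob) { this.maxNodesPerJob = maxNodesPerJob; }

    // Allow a task of `job` on `node` if the job already occupies that node,
    // or is still below its node cap.
    boolean mayRunOn(String job, String node) {
        Set<String> nodes = nodesByJob.computeIfAbsent(job, j -> new HashSet<>());
        if (nodes.contains(node)) return true;
        if (nodes.size() >= maxNodesPerJob) return false;
        nodes.add(node);
        return true;
    }

    // As a job winds down and vacates a node, the node becomes available
    // to the next job, which is the utilization win described above.
    void release(String job, String node) {
        Set<String> nodes = nodesByJob.get(job);
        if (nodes != null) nodes.remove(node);
    }
}
```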

          Splitting the jobtracker sounds like it would simplify things, since it would result in two simpler services, but distributed systems are more impacted by the number of kinds of services than by the complexity of a single service. Thus perhaps the jobtracker could be better structured internally, to separate concerns within its implementation, but I do not yet see an argument for moving them to separate services. That seems like it will only make things less reliable: the same logic running in two daemons that could run equivalently in a single daemon.

          Edward J. Yoon added a comment -

          Wow, nice!

          Doug Cutting added a comment -

          I added HADOOP-2573 for the approach I propose above.

          Pete Wyckoff added a comment -

          Excellent Arun, I think this is a big step in the right direction. I would, however, argue that the "JobScheduler" should not be part of MapReduce itself and rather a separate component. This has several benefits:

          1. clear separation of functionality - MapReduce, DFS and Scheduling on a Grid.
          2. re-usability: just like torque, other completely non MR jobs can run on the cluster and the Scheduler is the vehicle for this. I'm not sure that MapReduce clusters running non MapReduce jobs is quite right ??
          3. scalability - since more than just MR can be run on the cluster, the clusters can be bigger.
          4. a scheduler that takes locality of DFS files into consideration might be good for non MR jobs too.

          This goes back to Eric's concern that the JobTracker should be part of the user program and separates things out nicely.

          My comment is mainly about labeling and semantics, but it is a very important distinction IMHO. Thus hadoop would be comprised of MR, DFS and the Scheduler. Not MR that comprises the TaskScheduler + JobTracker and DFS.

          Arun C Murthy added a comment -

          I would, however, argue that the "JobScheduler" should not be part of MapReduce itself and rather a separate component.

          Sure, that is precisely the idea. I guess we are on the same page now. JobScheduler is the big-daddy of the cluster.

          As Eric alludes, the gravy is that by moving MR into the client-code (JobManager) we can support multiple parallel-computation paradigms, in addition to MR itself. Clearly, we are a long way ...

          Jeff Hammerbacher added a comment - edited

          The separation of functionality outlined above (DFS, MR, Cluster Scheduler) would be fantastic. I certainly respect Doug's experience with large distributed systems but it seems the logic required to run multiple MapReduce jobs is different enough from running a single MapReduce job that separate daemons would provide a much cleaner implementation.

          Pete Wyckoff added a comment -

          > Sure, that is precisely the idea. I guess we are on the same page now. JobScheduler is the big-daddy of the cluster.

          What I meant was more of a SW organization point of view. The JobScheduler should not be part of the MapReduce sub-project.

          Doug Cutting added a comment -

          > the logic required to run multiple MapReduce jobs is different enough from running a single
          > MapReduce job that separate daemons would provide a much cleaner implementation.

          If it would improve the implementation, then we should better layer the logic. I have no problem with that. But layering the logic within a single address space will yield a more reliable system than distributing it across multiple hosts. It may be less scalable to keep all the logic in a single service, but I have yet to be convinced that the jobtracker is a scalability bottleneck. So, sure, let's clean up the jobtracker with modular decomposition, but I have yet to see how running different modules of the jobtracker on different hosts will improve things.

          Arun C Murthy added a comment -

          What I meant was more of a SW organization point of view. The JobScheduler should not be part of the MapReduce sub-project.

          Ah, point taken. I misunderstood your previous comment...

          Doug Cutting added a comment -

          > The JobScheduler should not be part of the MapReduce sub-project.

          If we can build MapReduce on top of some shared infrastructure, e.g., a JobScheduler, that is independently maintained and used by a larger community than just the mapreduce community, then that might be a good thing. So I'd love to see a proposal that defines a generally useful primitive layer, with examples of multiple, useful systems that can be layered on top of it, including mapreduce. Also, when this is implemented, I would argue that at least one of these other higher-level systems should be implemented too, in addition to mapreduce, to prove the generality of the lower-level system. Things intended to be reusable that are not in fact reused tend not to actually be reusable.

          Whether this more primitive layer should be a library that we use to build mapreduce daemons, or a service is an interesting question. The latter would better permit a cluster to be shared by mapreduce and non-mapreduce tasks.

          Sanjay Radia added a comment -

          Arun's analysis that a job should not get a private job-cluster but merely the ability to run tasks on nodes that have free capacity makes sense. The reservation of a job-cluster is one of the key causes of the low utilization in the current system. Doug's proposal is a workaround for the utilization problems that avoids dramatically redesigning the system.

          BTW, as a few others have noted the scheduling function belongs in a separate layer rather than being part of MR. I have a longer comment in Hadoop-2491 where I argue for a more general scheduling and resource allocation layer.

          While I see the simplicity of having one scheduler, I think we may not quite get away with that. I believe we will need two schedulers. The job scheduler's role is to move a submitted job into the run-queue of the grid when the grid has sufficient resources to be able to complete the job satisfactorily. Once in the run queue, the job generates tasks which are scheduled by the task scheduler.

          Without a job scheduler, too many jobs may fight for running tasks and all of them progress too slowly.

          BTW I suspect that for map-reduce jobs, we may be able to get away with a very simplistic job-scheduler that uses priorities and takes advantage of the fact that all the mappers are created initially and the reducers follow the mappers. Hence if the task queues are priority-based and FCFS, and furthermore reduce tasks are given a higher priority, then things may work with a simple job scheduler. But more complex (non-MR) jobs may need a sophisticated job scheduler. My main point is that the abstraction of a job scheduler is needed.

          See Hadoop-2491 for more details.
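The two-level scheme described above can be sketched in a few lines. This is a hypothetical illustration (the class and method names are invented, not any proposed API): jobs wait in a priority queue and are admitted into the run-queue only while the grid has spare capacity; scheduling the admitted jobs' tasks would happen in a separate layer.

```python
import heapq

class JobAdmissionScheduler:
    """Illustrative 'job scheduler' layer: admits a submitted job into the
    run-queue only when the grid has enough free capacity. Strict priority,
    FCFS within a priority; no backfilling behind a too-big job."""

    def __init__(self, total_slots):
        self.free_slots = total_slots
        self.waiting = []    # min-heap of (priority, job, slots_needed)
        self.running = []

    def submit(self, priority, job, slots_needed):
        heapq.heappush(self.waiting, (priority, job, slots_needed))
        self._admit()

    def complete(self, job, slots_freed):
        # A finished job returns its slots, possibly admitting waiters.
        self.running.remove(job)
        self.free_slots += slots_freed
        self._admit()

    def _admit(self):
        # Admit the highest-priority (lowest number) waiting jobs while they fit.
        while self.waiting and self.waiting[0][2] <= self.free_slots:
            _prio, job, need = heapq.heappop(self.waiting)
            self.free_slots -= need
            self.running.append(job)
```

A 10-slot grid admits a 6-slot job immediately; a second 6-slot job waits until the first completes, which is exactly the "too many jobs fighting for tasks" failure mode this layer prevents.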

          Pete Wyckoff added a comment -

          Hadoop scheduling woes:

          IMHO: a big part of the problem is that the Map/Reduce framework and speculative execution were never intended to run long tasks. With short tasks, speculative execution works stunningly well (even without heartbeats, since who cares if a task fails - the original designers of speculative execution never used heartbeats or cared if a machine failed), AND any task can be pre-empted, so no task blocks other tasks or holds up lots of disk space on the task tracker (i.e. a long-running reduce holding map slots).

          I assume I'm preaching to the choir, but here are some notes as to how we can make reduce tasks small:

          1. sorting and shuffling are cheaper with "bigger" reduces - yes, but the tradeoff is not worth it (in most? cases), especially to run reduces taking an hour or more. Does anyone know the actual cost, in a real cluster, of reduce size vs. sort/shuffle time?

          2. 1 or 2 reduces get the majority of the keys - (a) make the # of reduces a prime, as this is a hash thing; (b) improve the hashcode implementation, as it now uses String.hashCode, which is weak.

          3. I want one file as my output? First, I don't understand why (nor have I met anyone who did), but even so, there are a million ways around this. E.g., a cat command that looks at each part and cats them in order. Or, if these are big files, augment HDFS to handle different-sized blocks in one file and then create a primitive to fold N files into one (in sorted order).

          4. My algorithm requires all the keys to go to one reduce. Excellent. So, your reduce is too long to use speculative execution, and during the entire time you're holding the map slots and the data on the task trackers. I would propose a better model for this: have 0 reduces (and replication 1 for the output file) and then run the reduce (being rack aware) using something like Torque - and since it's long-running, run 2 copies, as that's really the only way to mask a slow machine/failure for such a task.

          (a) If tasks are (relatively) short, speculative execution works! and (b) best of all for this JIRA, any task can be pre-empted. So, one can be very aggressive about scheduling in a multi-user environment as there's near 0 cost to pre-empting any task to run a more high priority job.

          Trying to fit a scheduling algorithm that handles long-running tasks is really, really tough, and why bother when these types of tasks don't really fit the framework?

          my 2 cents.
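Point 2 above is easy to demonstrate. Hadoop's default HashPartitioner assigns a key to reduce (key.hashCode() & Integer.MAX_VALUE) % numReduces, and Java's String.hashCode is the h = 31*h + ch polynomial. A small Python model of both (illustrative only, not the actual Hadoop code):

```python
def java_string_hashcode(s):
    # Java's String.hashCode: h = 31*h + ch, with 32-bit signed overflow.
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - (1 << 32) if h >= (1 << 31) else h

def partition(key, num_reduces):
    # Model of Hadoop's default HashPartitioner: clear the sign bit (so the
    # result is non-negative even for Integer.MIN_VALUE), then take modulo.
    # A composite reduce count lets regularities in a weak hash code pile
    # keys onto a few partitions; a prime count breaks those cycles.
    return (java_string_hashcode(key) & 0x7FFFFFFF) % num_reduces
```

The & 0x7FFFFFFF mask matters: a plain abs() or naive modulo would misbehave on the minimum 32-bit value, which real strings can produce.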

          Pete Wyckoff added a comment -

          oops - i should have mentioned:

          At least on 0.15.3 there's a bug in the task tracker garbage collection that causes it to take a really, really long time to get map outputs. I don't know the JIRA for this, but this is a reason reduces often take a long time.

          Arun C Murthy added a comment -

          The original description is too long... I've preserved this here:


          We, at Yahoo!, have been using Hadoop-On-Demand as the resource provisioning/scheduling mechanism.

          With HoD the user uses a self-service system to ask for a set of nodes. HoD allocates these from a global pool and also provisions a private Map-Reduce cluster for the user. She then runs her jobs and shuts the cluster down via HoD when done. All user-private clusters use the same humongous, static HDFS (e.g. a 2k-node HDFS).

          More details about HoD are available here: HADOOP-1301.


          Motivation

          The current deployment (Hadoop + HoD) has a couple of implications:

          • Non-optimal Cluster Utilization

          1. Job-private Map-Reduce clusters imply that the user-cluster could potentially be idle for at least a while before being detected and shut down.

          2. Elastic Jobs: Map-Reduce jobs typically have lots of maps with a much smaller number of reduces, with maps being light and quick and reduces being I/O-heavy and longer-running. Users typically allocate clusters depending on the number of maps (i.e. input size), which leads to the scenario where all the maps are done (idle nodes in the cluster) and the few reduces are chugging along. Right now, we do not have the ability to shrink the HoD'ed Map-Reduce clusters, which would alleviate this issue.

          • Impact on data-locality

          With the current setup of a static, large HDFS and much smaller (5/10/20/50 node) clusters, there is a good chance of losing one of Map-Reduce's primary features: the ability to execute tasks on the datanodes where the input splits are located. In fact, we have seen data-local tasks go down to 20-25 percent in the GridMix benchmarks, from the 95-98 percent we see on the randomwriter+sort runs run as part of the hadoopqa benchmarks (admittedly a synthetic benchmark, but still). Admittedly, HADOOP-1985 (rack-aware Map-Reduce) helps significantly here.


          Primarily, the notion of job-level scheduling leading to private clusters, as opposed to task-level scheduling, is a good peg on which to hang the majority of the blame.

          Keeping the above factors in mind, here are some thoughts on how to re-structure Hadoop Map-Reduce to solve some of these issues.


          State of the Art

          As it exists today, a large, static Hadoop Map-Reduce cluster (forget HoD for a bit) does provide task-level scheduling; however, its scalability to tens of thousands of user-jobs per week is in question.

          Let's review its current architecture and main components:

          • JobTracker: It does both task-scheduling and task-monitoring (tasktrackers send task statuses via periodic heartbeats), which implies it is fairly loaded. It is also a single point of failure in the Map-Reduce framework, i.e. its failure implies that all the jobs in the system fail. This means a static, large Map-Reduce cluster is fairly susceptible and a definite suspect. Clearly HoD solves this by having per-job clusters, albeit with the above drawbacks.
          • TaskTracker: The slave in the system which executes one task at-a-time under directions from the JobTracker.
          • JobClient: The per-job client which just submits the job and polls the JobTracker for status.

          Proposal - Map-Reduce 2.0

          The primary idea is to move to task-level scheduling and static Map-Reduce clusters (so as to maintain the same storage cluster and compute cluster paradigm) as a way to directly tackle the two main issues illustrated above. Clearly, we will have to get around the existing problems, especially w.r.t. scalability and reliability.

          The proposal is to re-work Hadoop Map-Reduce to make it suitable for a large, static cluster.

          Here is an overview of what its main components would look like:

          • JobTracker: Turn the JobTracker into a pure task-scheduler, a global one. Let's call this the JobScheduler henceforth. Clearly, (data-locality aware) Maui/Moab are candidates for being the scheduler, in which case the JobScheduler is just a thin wrapper around them.
          • TaskTracker: These stay as before, with some minor changes as illustrated later in the piece.
          • JobClient: Fatten up the JobClient by putting a lot more intelligence into it. Enhance it to talk to the JobTracker to ask for available TaskTrackers and then contact them to schedule and monitor the tasks. So we'll have lots of per-job clients talking to the JobScheduler and the relevant TaskTrackers for their respective jobs, a big change from today. Let's call this the JobManager henceforth.

          A broad sketch of how things would work:

          Deployment

          There is a single, static, large Map-Reduce cluster, and no per-job clusters.

          Essentially there is one global JobScheduler with thousands of independent TaskTrackers, each running on one node.

          As mentioned previously, the JobScheduler is a pure task-scheduler. When contacted by per-job JobManagers querying for TaskTrackers to run their tasks on, the JobScheduler takes into account the job priority, data placements (HDFS blocks) and the current load/capacity of the TaskTrackers, and gives the JobManager a free slot for the task(s) in question, if available.

          Each TaskTracker periodically updates the master JobScheduler with information about the currently running tasks and available free slots. It waits for the per-job JobManager to contact it for free slots (which abide by the JobScheduler's directives) and status for currently-running tasks (of course, the JobManager knows exactly which TaskTrackers it needs to talk to).

          The fact that the JobScheduler is no longer doing the heavy-lifting of monitoring tasks (like the current JobTracker), and hence the jobs, is the key differentiator, which is why it should be very light-weight. (Thus, it is even conceivable to imagine a hot-backup of the JobScheduler, topic for another discussion.)
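As a thought experiment, the heartbeat-plus-query protocol above can be modeled in a few lines (all names are invented for illustration, not a proposed API): TaskTrackers report only free-slot counts, and the scheduler hands out slots preferring data-local trackers, which is why it stays light-weight.

```python
class JobScheduler:
    """Illustrative 'pure task-scheduler': it tracks only per-tracker free
    slots fed by heartbeats; it never monitors individual tasks or jobs."""

    def __init__(self):
        self.free = {}    # tracker name -> free slot count

    def heartbeat(self, tracker, free_slots):
        # A TaskTracker pushes only its capacity, not per-task statuses.
        self.free[tracker] = free_slots

    def get_slots(self, n, preferred=()):
        """Grant up to n slots to a JobManager, data-local trackers first.
        'preferred' would be the trackers holding the task's input blocks."""
        granted = []
        # Sort so trackers in 'preferred' come first (False sorts before True).
        ordered = sorted(self.free, key=lambda t: t not in preferred)
        for t in ordered:
            while self.free[t] > 0 and len(granted) < n:
                self.free[t] -= 1
                granted.append(t)
        return granted
```

Because the scheduler holds only a counter per tracker, a hot backup would just need the latest round of heartbeats to rebuild its state, which is the HA point made above.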

          Job Execution

          Here is what the job-execution work-flow looks like:

          • User submits a job,
          • The JobClient, as today, validates inputs, computes the input splits etc.
          • Rather than submit the job to the JobTracker which then runs it, the JobClient now dons the role of the JobManager as described above (of course, they could be two independent processes working in conjunction with each other... ). The JobManager pro-actively works with the JobScheduler and the TaskTrackers to execute the job. While there are more tasks to run for the still-running job, it contacts the JobScheduler to get 'n' free slots and schedules m tasks (m <= n) on the given TaskTrackers (slots). The JobManager also monitors the tasks by contacting the relevant TaskTrackers (it knows which of the TaskTrackers are running its tasks).
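A minimal sketch of that work-flow, with stand-in scheduler and tracker classes (all names are invented for illustration): the JobManager repeatedly asks for free slots, launches tasks directly on the granted TaskTrackers (the reversed RPC direction), and polls those trackers for status.

```python
class FakeTracker:
    """Stand-in TaskTracker that reports any launched task done after one poll."""
    def __init__(self):
        self.polls = {}

    def launch(self, task):          # reversed RPC: JobManager -> TaskTracker
        self.polls[task] = 0

    def status(self, task):          # new RPC: JobManager queries task status
        self.polls[task] += 1
        return "done" if self.polls[task] >= 1 else "running"

class FakeScheduler:
    """Stand-in JobScheduler that grants a fixed number of slots per round."""
    def __init__(self, slots_per_round):
        self.n = slots_per_round

    def get_slots(self, wanted):
        return ["t0"] * min(self.n, wanted)

def run_job(tasks, scheduler, trackers):
    """The JobManager loop: get n slots, schedule m <= n tasks, poll statuses."""
    pending = list(tasks)
    running = {}    # task -> tracker slot it runs on
    while pending or running:
        for slot in scheduler.get_slots(len(pending)):
            task = pending.pop(0)
            trackers[slot].launch(task)
            running[task] = slot
        for task, slot in list(running.items()):
            if trackers[slot].status(task) == "done":
                del running[task]
    return True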

          Brownie Points

          • With Map-Reduce v2.0, we get reliability/scalability of the current (Map-Reduce + HoD) architecture.
          • We get elastic jobs for free since there is no concept of private clusters and clearly JobManagers do not need to hold on to the map-nodes when they are done.
          • We do get data-locality across all jobs, big or small, since there are no off-limit DataNodes (i.e. DataNodes outside the private cluster) for a Map-Reduce cluster, as today.
          • From an architectural standpoint, each component in the system (sans the global scheduler) is nicely independent of, and impervious to, the others:
            • A JobManager is responsible for one and only one job, loss of a JobManager affects only one job.
            • A TaskTracker manages only one node; its loss affects only one node in the cluster.
            • No user-code runs in the JobScheduler since it's a pure scheduler.
          • We can run all of the user-code (input/output formats, split calculation, task-output promotion etc.) from the JobManager since it is, by definition, the user-client.

          Points to Ponder

          • Given that the JobScheduler is very light-weight, could we have a hot-backup for HA?
          • Discuss the notion of a rack-level aggregator of TaskTracker statuses i.e. rather than have every TaskTracker update the JobScheduler, a rack-level aggregator could achieve the same?
          • We could have the notion of a JobManager being the proxy process running inside the cluster for the JobClient (the job-submitting program which is running outside the colo e.g. user's dev box) ... in fact we can think of the JobManager being another kind of task which needs to be scheduled to run at a TaskTracker.
          • Task isolation via separate VMs (VMware/Xen) rather than just separate JVMs?

          How do we get to Map-Reduce 2.0?

          At the risk of sounding hopelessly optimistic, we probably do not have to work too much to get here.

          • Clearly the main changes come in the JobTracker/JobClient where we move the pieces which monitor the job's tasks' progress into the JobScheduler/JobManager.
          • We also need to enhance the JobClient (as the JobManager) to get it to talk to the JobTracker (JobScheduler) to query for the empty slots, which might not be available!
          • Then we need to add RPCs to get the JobClient (JobManager) to talk to the given TaskTrackers to get them to run the tasks, thus reversing the direction of the current RPCs needed to start a task (today the TaskTracker asks the JobTracker for tasks to run); we also need new RPCs for the JobClient (JobManager) to talk to the TaskTracker to query its tasks' statuses.
          • We leave the current heartbeat mechanism from the TaskTracker to the JobTracker (JobScheduler) as-is, sans the task-statuses.

          Glossary

          • JobScheduler - The global, task-scheduler which is today's JobTracker minus the code for tracking/monitoring jobs and their tasks. A pure scheduler.
          • JobManager - The per-job manager which is wholly responsible for working with the JobScheduler and TaskTrackers to schedule its tasks and track their progress till job-completion (success/failure). Simplistically, it is the current JobClient plus the enhancements to enable it to talk to the JobScheduler and TaskTrackers for running/monitoring the tasks.

          Tickets for the Gravy-Train ride

          Eric has started a discussion about generalizing Hadoop to support non-MR tasks, a discussion which has surfaced a few times on our lists, at HADOOP-2491.

          He notes:

          Our primary goal in going this way would be to get better utilization out of map-reduce clusters and support a richer scheduling model. The ability to support alternative job frameworks would just be gravy!

          Putting this in as a place holder. Hope to get folks talking about this to post some more detail.

          This is the start of the path to the promised gravy-land. :)

          We believe Map-Reduce 2.0 is a good start in moving most (if not all) of the Map-Reduce specific code into the user-clients (i.e. JobManager) and taking a shot at generalizing the JobTracker (as the JobScheduler) and the TaskTracker to handle more generic tasks via different (smarter/dumber) user-clients.


          Thoughts?

          Show
          Arun C Murthy added a comment - The original description is too long... I've preserved this here: We, at Yahoo!, have been using Hadoop-On-Demand as the resource provisioning/scheduling mechanism. With HoD the user uses a self-service system to ask-for a set of nodes. HoD allocates these from a global pool and also provisions a private Map-Reduce cluster for the user. She then runs her jobs and shuts the cluster down via HoD when done. All user-private clusters use the same humongous, static HDFS (e.g. 2k node HDFS). More details about HoD are available here: HADOOP-1301 . Motivation The current deployment (Hadoop + HoD) has a couple of implications: Non-optimal Cluster Utilization 1. Job-private Map-Reduce clusters imply that the user-cluster potentially could be idle for atleast a while before being detected and shut-down. 2. Elastic Jobs: Map-Reduce jobs, typically, have lots of maps with much-smaller no. of reduces; with maps being light and quick and reduces being i/o heavy and longer-running. Users typically allocate clusters depending on the no. of maps (i.e. input size) which leads to the scenario where all the maps are done (idle nodes in the cluster) and the few reduces are chugging along. Right now, we do not have the ability to shrink the HoD'ed Map-Reduce clusters which would alleviate this issue. Impact on data-locality With the current setup of a static, large HDFS and much smaller (5/10/20/50 node) clusters there is a good chance of losing one of Map-Reduce's primary features: ability to execute tasks on the datanodes where the input splits are located. In fact, we have seen the data-local tasks go down to 20-25 percent in the GridMix benchmarks, from the 95-98 percent we see on the randomwriter+sort runs run as part of the hadoopqa benchmarks (admittedly a synthetic benchmark, but yet). Admittedly, HADOOP-1985 (rack-aware Map-Reduce) helps significantly here. 
Primarily, the notion of job-level scheduling leading to private clusers, as opposed to task-level scheduling , is a good peg to hang-on the majority of the blame. Keeping the above factors in mind, here are some thoughts on how to re-structure Hadoop Map-Reduce to solve some of these issues. State of the Art As it exists today, a large, static, Hadoop Map-Reduce cluster (forget HoD for a bit) does provide task-level scheduling; however as it exists today, it's scalability to tens-of-thousands of user-jobs, per-week, is in question. Lets review it's current architecture and main components: JobTracker: It does both task-scheduling and task-monitoring (tasktrackers send task-statuses via periodic heartbeats), which implies it is fairly loaded. It is also a single-point of failure in the Map-Reduce framework i.e. its failure implies that all the jobs in the system fail. This means a static, large Map-Reduce cluster is fairly susceptible and a definite suspect. Clearly HoD solves this by having per-job clusters, albeit with the above drawbacks. TaskTracker: The slave in the system which executes one task at-a-time under directions from the JobTracker. JobClient: The per-job client which just submits the job and polls the JobTracker for status. Proposal - Map-Reduce 2.0 The primary idea is to move to task-level scheduling and static Map-Reduce clusters (so as to maintain the same storage cluster and compute cluster paradigm) as a way to directly tackle the two main issues illustrated above. Clearly, we will have to get around the existing problems, especially w.r.t. scalability and reliability. The proposal is to re-work Hadoop Map-Reduce to make it suitable for a large, static cluster. Here is an overview of how its main components would look like: JobTracker: Turn the JobTracker into a pure task-scheduler, a global one. Lets call this the JobScheduler henceforth. 
Clearly (data-locality aware) Maui/Moab are candidates for being the scheduler, in which case, the JobScheduler is just a thin wrapper around them. TaskTracker: These stay as before, without some minor changes as illustrated later in the piece. JobClient: Fatten up the JobClient my putting a lot more intelligence into it. Enhance it to talk to the JobTracker to ask for available TaskTrackers and then contact them to schedule and monitor the tasks. So we'll have lots of per-job clients talking to the JobScheduler and the relevant TaskTrackers for their respective jobs, a big change from today. Lets call this the JobManager henceforth. A broad sketch of how things would work: Deployment There is a single, static, large Map-Reduce cluster, and no per-job clusters. Essentially there is one global JobScheduler with thousands of independent TaskTrackers, each running on one node. As mentioned previously, the JobScheduler is a pure task-scheduler. When contacted by per-job JobManagers querying for TaskTrackers to run their tasks on, the JobTracker takes into the account the job priority, data-placements (HDFS blocks), current-load/capacity of the TaskTrackers and gives the JobManager a free slot for the task(s) in question, if available. Each TaskTracker periodically updates the master JobScheduler with information about the currently running tasks and available free-slots. It waits for the per-job JobManager to contact it for free-slots (which abide the JobScheduler's directives) and status for currently-running tasks (of course, the JobManager knows exactly which TaskTrackers it needs to talk to). The fact that the JobScheduler is no longer doing the heavy-lifting of monitoring tasks (like the current JobTracker), and hence the jobs, is the key differentiator, which is why it should be very light-weight. (Thus, it is even conceivable to imagine a hot-backup of the JobScheduler, topic for another discussion.) 
Job Execution

Here is what the job-execution work-flow looks like: the user submits a job. The JobClient, as today, validates inputs, computes the input splits etc. Rather than submit the job to the JobTracker which then runs it, the JobClient now dons the role of the JobManager as described above (of course, they could be two independent processes working in conjunction with each other...). The JobManager pro-actively works with the JobScheduler and the TaskTrackers to execute the job. While there are more tasks to run for the still-running job, it contacts the JobScheduler to get n free slots and schedules m tasks (m <= n) on the given TaskTrackers (slots). The JobManager also monitors the tasks by contacting the relevant TaskTrackers (it knows which of the TaskTrackers are running its tasks).

Brownie Points

With Map-Reduce v2.0, we get the reliability/scalability of the current (Map-Reduce + HoD) architecture. We get elastic jobs for free since there is no concept of private clusters, and clearly JobManagers do not need to hold on to the map-nodes when they are done. We do get data-locality across all jobs, big or small, since there are no off-limits DataNodes (i.e. DataNodes outside the private cluster) for a Map-Reduce cluster, as today.

From an architectural standpoint, each component in the system (sans the global scheduler) is nicely independent of the others: a JobManager is responsible for one and only one job, so loss of a JobManager affects only one job; a TaskTracker manages only one node, so its loss affects only one node in the cluster. No user-code runs in the JobScheduler since it's a pure scheduler. We can run all of the user-code (input/output formats, split calculation, task-output promotion etc.) from the JobManager since it is, by definition, the user-client.

Points to Ponder

Given that the JobScheduler is very light-weight, could we have a hot-backup for HA? Discuss the notion of a rack-level aggregator of TaskTracker statuses, i.e.
rather than have every TaskTracker update the JobScheduler, could a rack-level aggregator achieve the same? We could have the notion of the JobManager being the proxy process running inside the cluster for the JobClient (the job-submitting program which is running outside the colo, e.g. a user's dev box)... in fact, we can think of the JobManager as being another kind of task which needs to be scheduled to run at a TaskTracker. Task isolation via separate VMs (VMware/Xen) rather than just separate JVMs?

How do we get to Map-Reduce 2.0?

At the risk of sounding hopelessly optimistic, we probably do not have to work too much to get here. Clearly the main changes come in the JobTracker/JobClient, where we move the pieces which monitor the job's tasks' progress into the JobScheduler/JobManager. We also need to enhance the JobClient (as the JobManager) to get it to talk to the JobTracker (JobScheduler) to query for empty slots, which might not be available! Then we need to add RPCs to get the JobClient (JobManager) to talk to the given TaskTrackers to get them to run the tasks, thus reversing the direction of the current RPCs needed to start a task (today the TaskTracker asks the JobTracker for tasks to run); we also need new RPCs for the JobClient (JobManager) to talk to the TaskTracker to query its tasks' statuses. We leave the current heartbeat mechanism from the TaskTracker to the JobTracker (JobScheduler) as-is, sans the task-statuses.

Glossary

JobScheduler - The global task-scheduler, which is today's JobTracker minus the code for tracking/monitoring jobs and their tasks. A pure scheduler.

JobManager - The per-job manager which is wholly responsible for working with the JobScheduler and TaskTrackers to schedule its tasks and track their progress till job-completion (success/failure). Simplistically, it is the current JobClient plus the enhancements to enable it to talk to the JobScheduler and TaskTrackers for running/monitoring the tasks.
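The reversed RPC direction described above (the JobManager calling into TaskTrackers, instead of TaskTrackers polling the JobTracker for work) might look roughly like the following sketch; the interface and class names are hypothetical, not existing Hadoop RPCs.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical RPC surface a TaskTracker would expose to per-job JobManagers.
interface TaskTrackerProtocol {
    boolean launchTask(String jobId, int taskId);   // JobManager -> TaskTracker: run a task
    String getTaskStatus(String jobId, int taskId); // JobManager polls for status
}

// A toy in-memory TaskTracker showing the two calls in action.
class ToyTaskTracker implements TaskTrackerProtocol {
    private final Map<String, String> status = new HashMap<>();

    public boolean launchTask(String jobId, int taskId) {
        status.put(jobId + "/" + taskId, "RUNNING");
        return true;
    }

    public String getTaskStatus(String jobId, int taskId) {
        return status.getOrDefault(jobId + "/" + taskId, "UNKNOWN");
    }
}
```

The key point is the direction of the arrows: the JobManager initiates both calls, so the heavy per-task traffic never touches the JobScheduler.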
Tickets for the Gravy-Train Ride

Eric has started a discussion about generalizing Hadoop to support non-MR tasks, a discussion which has surfaced a few times on our lists, at HADOOP-2491. He notes:

"Our primary goal in going this way would be to get better utilization out of map-reduce clusters and support a richer scheduling model. The ability to support alternative job frameworks would just be gravy! Putting this in as a place holder. Hope to get folks talking about this to post some more detail."

This is the start of the path to the promised gravy-land. We believe Map-Reduce 2.0 is a good start in moving most (if not all) of the Map-Reduce-specific code into the user-clients (i.e. the JobManager) and taking a shot at generalizing the JobTracker (as the JobScheduler) and the TaskTracker to handle more generic tasks via different (smarter/dumber) user-clients.

Thoughts?
          Arun C Murthy added a comment -
          Proposal

          The fundamental idea of the re-factor is to divide the two major functions of the JobTracker, resource management and job scheduling/monitoring, into separate components: a generic resource scheduler and a per-job, user-defined component that manages the application execution.

          The new ResourceManager manages the global assignment of compute resources to applications, and the per-application ApplicationMaster manages the application's scheduling and coordination. An application is either a single job in the classical sense of MapReduce jobs or a DAG of such jobs. The ResourceManager and the per-machine NodeManager server, which manages the user processes on that machine, form the computation fabric. The per-application ApplicationMaster is, in effect, a framework-specific library and is tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks.

          The ResourceManager is a pure scheduler in the sense that it performs no monitoring or tracking of status for the application. Also, it offers no guarantees on restarting failed tasks either due to application failure or hardware failures.
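Because the scheduler offers no restart guarantees, retry-on-failure logic lives entirely in the per-application ApplicationMaster. A minimal sketch (hypothetical names, not the actual ApplicationMaster code):

```java
// Hypothetical sketch: the ApplicationMaster, not the scheduler, decides
// whether to re-run a failed task in a freshly negotiated container.
class TaskRetrier {
    interface Task {
        boolean run(); // true on success, false on failure
    }

    // Re-attempt a task up to maxAttempts times; on each failure the AM
    // would go back to the ResourceManager for a new container.
    static boolean runWithRetries(Task t, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (t.run()) {
                return true; // AM records the task as complete
            }
        }
        return false; // AM marks the task failed; the scheduler never intervened
    }
}
```

This division of labor is exactly what keeps the ResourceManager a pure scheduler: application-level failure semantics stay in framework code.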

          The ResourceManager performs its scheduling function based on the resource requirements of the applications; each application has multiple resource-request types that represent the resources required for containers. The resource requests include memory, CPU, disk, network etc. Note that this is a significant change from the current model of fixed-type slots in Hadoop MapReduce, which has a significant negative impact on cluster utilization. The ResourceManager has a scheduler policy plug-in, which is responsible for partitioning the cluster resources among various queues, applications etc. Scheduler plug-ins can be based, e.g., on the current CapacityScheduler and FairScheduler.
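To illustrate the shift away from fixed map/reduce slots, a container request can be modeled as a multi-dimensional resource checked against a node's capacity. This is a sketch with hypothetical names; the real request would also carry disk, network, locality and so on.

```java
// Hypothetical multi-dimensional resource request, replacing fixed-type slots.
class Resource {
    final int memoryMB;
    final int vcores;

    Resource(int memoryMB, int vcores) {
        this.memoryMB = memoryMB;
        this.vcores = vcores;
    }

    // A request fits on a node only if every dimension fits.
    boolean fitsIn(Resource capacity) {
        return memoryMB <= capacity.memoryMB && vcores <= capacity.vcores;
    }
}
```

With fixed-type slots, a small task occupies a whole slot sized for the worst case; here it simply asks for exactly what it needs, which is where the utilization win comes from.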

          The NodeManager is the per-machine framework agent that is responsible for launching the applications' containers, monitoring their resource usage (CPU, memory, disk, network) and reporting the same to the Scheduler.

          The per-application ApplicationMaster has the responsibility of negotiating appropriate resource containers from the Scheduler, launching tasks, tracking their status & monitoring for progress, handling task failures and recovering from saved state on a ResourceManager fail-over.

          Since downtime is more expensive at scale, high-availability is built in from the beginning via Apache ZooKeeper for the ResourceManager and HDFS checkpointing for the MapReduce ApplicationMaster. Security and multi-tenancy support are critical to support many users on larger clusters. The new architecture will also increase innovation and agility by allowing for user-defined versions of the MapReduce runtime. Support for generic resource requests will increase cluster utilization by removing artificial bottlenecks such as hard-partitioning of resources into map and reduce slots.


          We have a prototype we'd like to commit to a branch soon, on which we look forward to feedback. From there, we would love to collaborate to get it committed to trunk.

          Jeff Hammerbacher added a comment -

          Hey Arun,

          Wow, thanks for reviving one of my favorite old issues! One question: how much code does the prototype share with Hadoop MapReduce? My understanding is that it's mostly new code. If that's the case, have you considered creating a separate Apache Incubator project for the new two-level scheduler? Mesos, for example, has similar aims and is going the Apache Incubator route. I am aware of at least one other effort on this front, and having the projects gestate in the Incubator rather than as a branch of Hadoop would allow them to release more regularly while young and would be more in line with the dreams of the project split (having HDFS and MapReduce developed as separate projects). This seems like a great opportunity to continue the trend of keeping individual ASF projects small and focused so that releases require less work and can happen more regularly. What do you think?

          Later,
          Jeff

          Matei Zaharia added a comment -

          +1 on decoupling Hadoop MapReduce from the resource management system in a way that allows Hadoop to run on top of other cluster scheduling systems as well. Apart from simplifying experimentation with these types of two-level schedulers, I think this would be a good thing for the MapReduce project in general as a way to make the project runnable in the maximum variety of environments. For example, there have already been efforts to get Hadoop running on HPC schedulers (e.g. Grid Engine) or Condor, and that would be quite a bit easier with the refactoring that Arun is doing. I imagine that there will be a lot of other work in cluster scheduling in future years, especially as people start running more non-MapReduce applications, so it would be nice to be able to run the Hadoop software stack in these environments.

          Arun C Murthy added a comment -

          Jeff, the prototype uses a significant amount of Hadoop MapReduce code, especially the MapReduce ApplicationMaster for running MR jobs. There is a new ResourceManager/NodeManager, but we still need to co-evolve and stabilize the entire codebase for serving our primary aim: running Hadoop MapReduce applications. After all, this is a re-factor of Hadoop MapReduce.

          Moving to different projects is premature... it will be very reminiscent of the issues we have with Common and HDFS i.e. every change might be spread over multiple projects, which is a logistical nightmare for developers. Eventually, once we have a few releases under our belt and successful deployments etc., we might be in a better place to revisit this proposal. Make sense?

          Scott Carey added a comment -

          Good stuff!

          Does the NodeManager communicate with the ResourceManager similarly to now (ping -> response RPC)? I ask because some of the bottlenecks and complexities now are due to this style of RPC. I've changed a couple of systems from ping -> response to register -> callback in the past, and these became more efficient and the code became simpler. With ZooKeeper in there, I wonder how much of the communication now uses ZooKeeper watches for efficiency and low latency.

          When a Job starts up in the ApplicationMaster, does it have to wait for pings to get resources from the scheduler? Or is the data all there in ZK, so that ramp-up times for jobs are much faster and resource reassignment for jobs with short-lived tasks isn't completely throttled by the rate of pings?

          In any case, the new architecture is decoupled and it should be much easier to make enhancements with this separation.

          Jeff Hammerbacher added a comment -

          Hey Arun,

          As long as that evolution is happening in a branch, that seems totally reasonable to me. When it comes time to migrate the code into trunk, I hope for the same end state as Matei: I think the resource management system should be a separate project from MapReduce so that each system can evolve and release separately. When we have more clients than just MapReduce for the resource manager, we'll want those new clients to evolve as separate projects rather than all living under the Apache Hadoop umbrella. Now seems like an excellent time to facilitate that end state.

          More specifically, in an ideal world, we'd have four separate projects here: Common (probably folded into Guava or Apache Commons), HDFS, Yahoo! Cluster Manager (Resource Manager + Node Manager), and MapReduce (the ApplicationMaster for MapReduce, I guess). Then, if someone wanted to write Pregel to run against the Cluster Manager, they could implement their own ApplicationMaster in a separate project. Similarly, if someone wanted to run MapReduce against a different cluster manager, that would be simple. More practically, we have the opportunity to get the Cluster Manager project started up as a separate ASF project once it has gestated in a branch here for a bit. Are there any technical barriers to making that happen?

          I'm a huge fan of this work, and having watched a number of ASF projects evolve over the past several years, I suspect that a small, focused project dedicated to cluster resource management will have the best chance of moving quickly.

          Thanks,
          Jeff

          Arun C Murthy added a comment -

          With ZooKeeper in there, I wonder how much of the communication now uses ZooKeeper watches for efficiency and low latency.

          Scott - We seriously considered this, but had to continue to use Hadoop RPC for a couple of reasons:
          a) Mahadev, our resident ZK (and the new ResourceManager) expert, was very wary of using ZK watches for scalability reasons. Consider a 10k-node cluster with 25-30 containers per node and 10k running jobs - we'd need at least 10k * 10k (~100 million) watches, which is a lot for ZK
          b) Security on ZK is still largely unknown; eventually ZK will get there, but we'd have a lot of work to do for delegation tokens etc. since we can't do Kerberos everywhere.

          Having said that...

          In any case, the new architecture is decoupled and it should be much easier to make enhancements with this separation.

          Exactly. This is something we should definitely re-visit in a subsequent release. Hopefully that makes sense, thanks!

          eric baldeschwieler added a comment -

          Hi Jeff,

          A couple of thoughts:

          1) Discussions on how to reorganize the hadoop universe probably should be moved from this bug to their own thread. Can we restrict this thread to discussions about the design and implementation of this work? Feel free to start this discussion on general or in JIRA.

          2) I agree with you that it is important that we structure hadoop so that it is easy to plugin and use other technologies and I would welcome your contribution of code to help make that a reality in this case.

          3) My experience with the project split has been very negative. It is becoming much harder, not easier, to evolve the Hadoop code base. Hence Nigel's suggestions (which I support) to actually move the projects closer together. Since map-reduce is the core of Hadoop, I think it is important that Hadoop remain able to deliver the world's best MR solution within the project.

          4) We consider this work a natural evolution of the MR project. Please don't refer to it as the Yahoo! cluster manager. That will just confuse the discussion. The intent is to complete this work in Apache, and others are more than welcome to help us with it.

          Thanks,

          E14

          Scott Carey added a comment -

          Consider a 10k node cluster with 25-30 containers per node and 10k running jobs - we'd need at least 10k * 10k watches which is a lot for ZK

          Thanks for the info Arun. There would be a lot to work out to mix in ZK and not run into a scalability wall.
          If you assume that each node has to watch every job, it's not going to scale. If each node is only watching one thing when in need of work ("Is there work for me?"), you can get rid of a large chunk of the RPC that causes delayed task starts. I'm mainly thinking of the "is there work for me now? what about now? And now?" RPC that goes on in Hadoop today. That could be inverted into "flag three nodes with local data simultaneously that there is work for them; the first to grab the item wins". How valuable is replacing just part of the RPC? I'm not sure. It would help my clusters, but they aren't that big.
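The "first to grab the item wins" hand-off is essentially a compare-and-set on the work item. A sketch of the idea (hypothetical names, not a proposed API):

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical work item flagged to several data-local nodes at once;
// compareAndSet ensures exactly one node wins the race to claim it.
class WorkItem {
    private final AtomicReference<String> owner = new AtomicReference<>(null);

    // Returns true only for the first node that grabs the item.
    boolean grab(String nodeId) {
        return owner.compareAndSet(null, nodeId);
    }

    String owner() {
        return owner.get();
    }
}
```

In a distributed setting the same claim-once semantics could be had from, say, a ZK ephemeral znode creation rather than an in-process atomic, but the race-resolution shape is the same.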
          The other part of the scheduling problem you allude to that requires scanning all available jobs and assigning resources would need some clever work to do in ZK without scalability problems.

          On a related item, I am glad that job submission includes a DAG of tasks. There is a lot of opportunity to reduce latency in job flows there and consolidate work from a half-dozen projects duplicating effort.

          It is becoming much harder, not easier to evolve the hadoop code base.

          The choice to have all three projects be in their own trunk/tags/branches was a mistake IMO. I've done the same elsewhere and learned the hard way: don't put projects under different version trees unless you intend to actually completely decouple them and release them separately.

          Hadoop needs more modularity and pluggability, but making Cluster Management and Application Management pluggable does not depend on separate projects; it's the other way around. Hadoop needs to become more modular internally, its build more sophisticated, and its build outputs more flexible. After a user can swap out foo-resource-manager.jar with hadoop-resource-manager.jar behind resource-manager-api.jar and expect it to work, a separate project for the hadoop-resource-manager could make sense.

          That said, I agree with Eric's #1 – future modularity and this work are separate discussions/items. IMO any greater project restructuring related to cluster management depends on this, and not the other way around. A project split should not be the enforcer of modularity; actual proven modularity should be the justification for a split. If one is afraid that without a project split things are bound to be intertwined, other solutions should be found. Releasing separate jars for the components is one way to move forward that does not need a project split – though it might require Maven to make it easy to manage and to make a split much easier.

          Show
          Scott Carey added a comment - Consider a 10k node cluster with 25-30 containers per node and 10k running jobs - we'd need at least 10k * 10k watches which is a lot for ZK Thanks for the info Arun. There would be a lot to work out to mix in ZK and not run into a scalability wall. If you assume that each node has to watch every job, its not going to scale. If each node is only watching one thing when in need of work ("Is there work for me?") you can get a large chunk of the RPC that causes delayed task starts gone. I'm mainly thinking of the "is there work for me now? what about now? And now?" RPC that goes on in hadoop today. That could be inverted into "flag three nodes with local data simultaneously that there is work for them, the first to grab the item wins". How valuable is replacing just part of the RPC? I'm not sure. It would help my clusters, but they aren't that big. The other part of the scheduling problem you allude to that requires scanning all available jobs and assigning resources would need some clever work to do in ZK without scalability problems. On a related item, I am glad that job submission includes a DAG of tasks. There is a lot of opportunity to reduce latency in job flows there and consolidate work from a half-dozen projects duplicating effort. It is becoming much harder, not easier to evolve the hadoop code base. The choice to have all three projects be in their own trunk/tags/branches was a mistake IMO. I've done the same elsewhere and learned the hard way: don't put projects under different version trees unless you intend to actually completely decouple them and release them separately. Hadoop needs more modularity and plugability, but making Cluster Management and Application Management plug-able does not depend on separate projects, its the other way around. Hadoop needs to become more modular internally, its build more sophisticated, and the build outputs more flexible. 
After a user can swap out foo-resource-manager.jar with hadoop-resource-manager.jar behind resource-manager-api.jar and expect it to work, a separate project for the hadoop-resource-manager could make sense. That said, I agree with Eric's #1 – future modularity and this work are separate discussions / items. IMO any greater project restructuring related to cluster management depends on this, and not the other way around. A project split should not be the enforcer of for modularity, actual proven modularity should be justification for a split. If one is afraid that without a project split, things are bound to be intertwined, other solutions should be found. Releasing separate jars for the components is one way to move forward that does not need a project split – though it might require Maven to make it easy to manage and make a split much easier.
          Joydeep Sen Sarma added a comment -

          I have been working on maintaining/enhancing MR for FB's use case for the last 6 months or so. Here are a few priority areas for us that are relevant to this discussion:

          1. Latency.

          This is, imho, #1 priority wrt scheduling. As Scott has already remarked, the ping-response model is broken. So is preemption as an after-thought. We need to get small/medium jobs scheduled instantly. Period.

          2. Scalability.

          We have made a number of vital fixes to keep the JT working at our scale - but we have merely bought some time.

          3. (wrt. ResourceManager) Open API

          By which i mean something like Thrift/PB/Avro. We are, of course, most comfortable with Thrift and it would be nice (but not critical) if it were possible to build a Thrift wrapper (even if one was not baked in from scratch).

          One thing i have found is that writing Thrift services is a breeze because of inbuilt service framework. Everything else on the serialization side being equal - this has been a big win for me personally as a developer (and something to be considered as other distributed execution frameworks try to use the RM).

          4. Ability to back-plug into older Hadoop versions

          This is related to #3. Unlike many other organizations - we cannot make big jumps in hadoop revisions anymore. We have too many custom changes and we don't have a QA department. Unlike in the past, where we could have depended on Yahoo's QA'ed releases - we don't have that luxury anymore (because we are now both running software at similar versions - and we can't wait until Yahoo has deployed/QA'ed new versions before deploying newer upgrades).

          If the RM api is open (and satisfactory from design perspective) - we can do the work in-house to our older version of Hadoop to use it. This is critical for us (although i am not sure it applies to other users).

          I cannot emphasize enough the urgency around #1. Whether we continue to use Hadoop or not is predicated on big improvements in latency and efficiency (the latter is a different topic).

          I hope #3 and #4 contribute to the discussion around component architecture. At our scale - i don't think we can build services using large software that is tightly integrated. We need too much customization and we can't afford the long upgrade cycles of such tightly integrated software. Of course, this is specific to our deployment and the requirements of most other deployments is likely to be quite different.


          As a developer - i have found the current JobTracker code totally unmaintainable - I hope the new version (broken across RM/App-Master) is better. There are several design points that have struck me as particularly evil:

          1. synchronous RPC based architecture: limits concurrency and forces bad implementation choices (see #2)
          2. crazy locking: this is just bad implementation for the most part - but i hope the new design/implementation clearly articulates some principles around the fundamental data structures and how transactional changes to these data structures are meant to be accomplished.
          3. poor data structure maintenance: 99% of the data structures in the JT follow a pattern of:
            a. a primary collection (eg: list of all jobs in the system)
            b. several secondary indices/views (list of all runnable jobs from above, list of all completed jobs etc)

          Instead of modeling updates to such collections and related views through a common entry point - updates to primary and secondary data structures are at disjoint places throughout the code and make maintenance of code a nightmare.
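A hedged sketch of the remedy implied above - all names hypothetical, not actual JobTracker classes - is to keep the primary collection and its secondary views consistent by funneling every mutation through one entry point:

```java
import java.util.*;

// Hypothetical sketch: a primary collection plus secondary indices/views,
// where the single setState() entry point is the only place either is
// mutated, so the views cannot drift out of sync with the primary.
public class JobRegistry {
    public enum State { RUNNABLE, COMPLETED }

    private final Map<String, State> allJobs = new HashMap<>();  // primary collection
    private final Set<String> runnable = new HashSet<>();        // secondary view
    private final Set<String> completed = new HashSet<>();       // secondary view

    // The single entry point: primary and views are updated together.
    public synchronized void setState(String jobId, State s) {
        allJobs.put(jobId, s);
        runnable.remove(jobId);
        completed.remove(jobId);
        (s == State.RUNNABLE ? runnable : completed).add(jobId);
    }

    public synchronized Set<String> runnableJobs() {
        return new HashSet<>(runnable);  // defensive copy
    }
}
```

With updates scattered across the code instead, any one call site can forget a view; here forgetting is impossible by construction.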

          i can only hope that a big rewrite like this will try to address some of these issues (others - like the hard-wiring to specific (M/R) task types - are, i presume, already addressed in the new RM).

          my 2c.

          Tom White added a comment -

          Hadoop needs to become more modular internally

          +1 An easy way to achieve this here would be to put the resource manager code and new MapReduce ApplicationMaster code into separate source trees under mapreduce. This will help enforce dependencies from the beginning: MR2 depends on MR1 and RM, but RM doesn't depend on anything else (except common for RPC?). Going further, the work to separate out the API and libraries from the implementation should help this effort too, since it will involve removing hard dependencies on the jobtracker from the API classes (see MAPREDUCE-1478, MAPREDUCE-1638).

          Arun C Murthy added a comment -

          +1 An easy way to achieve this here would be to put the resource manager code and new MapReduce ApplicationMaster code into separate source trees under mapreduce.

          Agreed! We have done exactly that in the prototype and plan to continue improving modularity.

          Going further, the work to separate out the API and libraries from the implementation should help this effort too, since it will involve removing hard dependencies on the jobtracker from the API classes

          +1

          Min Zhou added a comment -

          @Arun

          How does the ApplicationMaster know its resource requirements before it launches tasks? IMHO, the biggest problem of resource allocation is that we can't determine the CPU/memory/disk/network requirements until the task is actually running. User-defined requirements from configuration files are often inaccurate.
          From your words, the architecture allows end-users to implement any application-specific framework by implementing a custom ApplicationMaster. Can ordinary users deploy their ApplicationMaster on a cluster where they have no special permissions? Can you illustrate how to achieve that?

          Arun C Murthy added a comment -

          @Min

          How does the ApplicationMaster know its resource requirements before it launches tasks?

          The assumption is that the AM has a basic idea about resource requirements for its application, which is feasible for our primary use case: Map-Reduce. OTOH, an AM for other applications has the ability to launch a few tasks, watch their resource consumption/utilization, and update future resource requests.
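The observe-then-adjust strategy described here could look roughly like the following (a sketch only; nextRequestMb, the 20% headroom, and the MB units are illustrative assumptions, not part of any actual AM API):

```java
// Hypothetical sketch of an AM refining its resource requests: launch a few
// tasks with an initial guess, record their peak memory use, then request
// the observed peak plus some headroom for future tasks.
public class AdaptiveRequest {
    static int nextRequestMb(int initialGuessMb, int[] observedPeaksMb) {
        int maxPeak = 0;
        for (int p : observedPeaksMb) {
            maxPeak = Math.max(maxPeak, p);
        }
        if (maxPeak == 0) {
            return initialGuessMb;        // nothing observed yet: keep the guess
        }
        return maxPeak + maxPeak / 5;     // observed peak + 20% headroom (illustrative)
    }
}
```

For Map-Reduce the initial guess is usually good enough; for other frameworks this feedback loop is what makes a fixed up-front declaration unnecessary.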

          Can ordinary users deploy their ApplicationMaster on a cluster where they have no special permissions?

          From the framework (i.e. RM/NM) perspective, everything in the cluster including AMs is 'user-land'. Thus, as long as a user implements the protocols for AMs, they can deploy any application... they do not need any special permission to deploy.

          I'm working very hard to get the codebase committed to a branch; once it's there, we would love your feedback on the protocols etc. Hopefully that should help you understand how to implement a custom AM if you so choose... appreciate your patience while I work the system!

          Mahadev konar added a comment -

          @Scott,
          With respect to your comments on ResourceManager/ZooKeeper/RPC:
          We intend to take it slow with ZooKeeper. Initially the intention is to store just the allocations (what each job/application has been allocated) in ZooKeeper; this is mainly for ResourceManager and ApplicationMaster restart. I am not really in favor of using ZK notifications for getting rid of RPCs. At the scale we are talking about, the "first to get the work will take it" approach will cause a herd effect and will definitely be a cause for concern. I think ZK can be used for much more than what we have proposed, but it'll be a gradual process to see what we can offload to ZK.

          I am pretty hesitant to put RPC load onto ZK and use it as a workload queue for something like this.
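A minimal sketch of what "just the allocations in ZooKeeper" might look like; the znode path scheme below is purely an assumption for illustration, since the thread does not specify one:

```java
// Hypothetical znode layout for recorded allocations, so a restarted RM or
// AM can read back what each application held. The path scheme is an
// illustrative assumption, not the actual design.
public class AllocationPath {
    static String znodeFor(String appId, String containerId) {
        return "/rm/allocations/" + appId + "/" + containerId;
    }
}
```

Recording only these small, slowly-changing records keeps ZK out of the high-rate RPC path, which is the distinction being drawn above.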

          Leitao Guo added a comment -

          @Arun
          I also question the assumption that the AM has a basic idea about resource requirements. For example, in a Hive scenario, how does the AM know the resource requirements when facing all kinds of query requests?

          At the same time, if the AM finds the resources it requested are not enough for the application, will it re-request more resources or just fail?

          Tsuyoshi OZAWA added a comment -

          > Hadoop needs to become more modular internally
          +1

          There are a lot of domain-specific programming models built by extending MapReduce (e.g haloop, twiter, and so on), so this evolution is a good way to support that trend.

          @Arun
          I'm very interested in this project. How can I join?
          Or, is there some repository to access your prototype code?

          Tsuyoshi OZAWA added a comment -

          > (e.g haloop, twiter, and so on)
          s/twiter/Twister/g

          MengWang added a comment -

          @All

          How shuffle works in MapReduce 2.0 ?

          Our study shows that shuffle is a performance bottleneck of MapReduce computing. There are some problems with shuffle:
          (1) Shuffle and reduce are tightly coupled. Usually the shuffle phase doesn't consume much memory or CPU, so theoretically a reduce task's slot could be used for other computing tasks while it is copying data from maps. This would improve cluster utilization. Furthermore, should shuffle be separated from reduce entirely? Then shuffle would not use reduce slots, and we wouldn't need to distinguish between map slots and reduce slots at all.
          (2) For large jobs, shuffle uses too many network connections, and the data transmitted over each connection is very small, which is inefficient. From 0.21.0 one connection can transfer several map outputs, but i think this is not enough. Maybe we could use a per-node shuffle client process (like the TaskTracker) to shuffle data for all reduce tasks on that node; then we could shuffle more data through one connection.
          (3) Too many concurrent connections cause the shuffle server to do massive random IO, which is inefficient. Maybe we could aggregate HTTP requests (like the delay scheduler does), so random IO becomes sequential.
          (4) How do we manage the memory used by shuffle efficiently? We use buddy memory allocation, which wastes a considerable amount of memory.
          (5) If shuffle is separated from reduce, how do we preserve reduce locality?
          (6) Can we store map outputs in a storage system (like HDFS)?
          (7) Can shuffle become a general data transfer service, not only for the map/reduce paradigm?
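Point (2) above - one connection carrying several map outputs from the same node - amounts to grouping the outputs a reduce needs by host before fetching. A sketch with hypothetical names:

```java
import java.util.*;

// Hypothetical sketch of planning shuffle fetches: given which host holds
// each needed map output, group the outputs by host so a single connection
// per host can carry all of them, instead of one connection per output.
public class FetchPlanner {
    // outputToHost: map-output id -> host holding it.
    // Returns: host -> list of outputs to fetch over one connection.
    static Map<String, List<String>> groupByHost(Map<String, String> outputToHost) {
        Map<String, List<String>> plan = new TreeMap<>();
        for (Map.Entry<String, String> e : outputToHost.entrySet()) {
            plan.computeIfAbsent(e.getValue(), h -> new ArrayList<>()).add(e.getKey());
        }
        return plan;
    }
}
```

The connection count then scales with the number of distinct hosts rather than the number of map outputs, which is the efficiency gain (2) is after.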

          Arun C Murthy added a comment -

          I'm very interested in this project. How can I join?

          Thanks Ozawa! We'd be very glad to have you.

          An update: we have pretty much closed all the loops internally to get the code out into a branch; we'd love to start having everyone involved...

          Arun C Murthy added a comment -

          How shuffle works in MapReduce 2.0 ?

          Meng - pretty much the same as currently: the map outputs are served over HTTP.

          We have discussed improvements to shuffle along the lines you have suggested for a long while now (I just don't have the jiras handy) and I agree, they are excellent ideas.

          Our hope is that with MRv2 we open up Map-Reduce to significant innovation so that folks can try various ideas like the ones you suggested... make sense?

          Lianhui Wang added a comment -

          i think MR 2.0 should resolve this: the JT should not have to monitor the status of every job and task. Today many TTs must RPC to the single JT every few seconds, and many clients also poll the JT over RPC for job status every few seconds, so most nodes of the cluster are connecting to the JT constantly, which degrades the JT's performance - especially as the cluster grows, e.g. to 10K nodes.
          Like HDFS's Federation branch, the MR project should create a new branch for 2.0.

          Scott Carey added a comment -

          Re: Shuffle.

          See https://issues.apache.org/jira/browse/MAPREDUCE-318

          Those changes are in 0.21+ (and perhaps Y!'s distro but not Cloudera's), I believe. This doesn't do everything mentioned but is a significant improvement.

          Arun C Murthy added a comment -

          Folks, we are happy to put out a first cut of MRv2.

          A brief overview:

          A global ResourceManager (RM) tracks machine availability and scheduling invariants while a per-application ApplicationMaster (AM) runs inside the cluster and tracks the program semantics for a given job. An application can be a single MapReduce job as the JobTracker supports today, a directed acyclic graph (DAG) of MapReduce jobs, or an entirely new framework. Each machine in the cluster runs a per-node daemon, the NodeManager (NM), responsible for enforcing and reporting the resource allocations made by the RM and monitoring the lifecycle of processes spawned on behalf of an application. Each process started by the NM is conceptually a container, or a bundle of resources allocated by the RM.

          We call the new framework (RM/NM) YARN (Yet Another Resource Negotiator)...
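The container-as-resource-bundle idea above can be sketched as a simple capacity invariant (illustrative only: real containers bundle memory, cpu, disk and network, and this is not the actual RM/NM code):

```java
// Hypothetical sketch: a node grants a container only while it has capacity
// left. This is the invariant the RM's allocations and the NM's enforcement
// jointly maintain. Memory-only, for brevity.
public class NodeCapacity {
    private final int totalMb;
    private int allocatedMb = 0;

    NodeCapacity(int totalMb) {
        this.totalMb = totalMb;
    }

    // Try to carve a container of the given size out of this node.
    synchronized boolean tryAllocate(int containerMb) {
        if (allocatedMb + containerMb > totalMb) {
            return false;               // would overcommit the node
        }
        allocatedMb += containerMb;
        return true;
    }

    // A finished container returns its resources to the node.
    synchronized void release(int containerMb) {
        allocatedMb -= containerMb;
    }
}
```

Because containers are just resource bundles rather than fixed map/reduce slots, any framework's processes can occupy them.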

          Source layout:

          1. A new yarn source folder contains the RM and NM.
          2. A new mr-client folder contains all of the MapReduce runtime. This includes the MapReduce ApplicationMaster and all of the classes for running MapReduce applications. Please note that the MR runtime has not changed at all, including the user apis - we continue to support both the old 'mapred' api and the new 'mapreduce' api (context-objects). We are moving some classes from src/java/mapred/* to mr-client to achieve the same.
          3. We have continued to keep the old JobTracker/TaskTracker based MapReduce framework in src/java.

          Build:

          1. We decided to embrace maven for MRv2, hence yarn and mr-client are built via maven.
          2. For now the old JT/TT based MR framework continues to use ant/ivy. Hopefully we can change this soon - I know Giri is working on this for common, hdfs and mapreduce at one go.

          There is an INSTALL file which describes how to build and deploy MRv2, and also how to run MR applications.


          I'm planning on committing this patch to a development branch (named MAPREDUCE-279) soon so that we can continue all our work via Apache in the open. We really look forward to feedback and working with the community henceforth. We have many many miles to go and promises to keep!

          PS: I have attached a script (MR-279.sh) showing the files being moved to mr-client for the MR runtime, a list of files being moved, and the actual patch to apply afterward. Also, please note that the patch is significantly bigger than it should be since it includes binary images (via git diff --text).

          Arun C Murthy added a comment -

          Updated patch, adding missing license headers for some files.

          Arun C Murthy added a comment -

          I'm going to commit this to a dev branch (MR-279?) if no one objects. Thanks.

          Todd Lipcon added a comment -

          sure, +1 for putting this on a dev (non-release) branch

          Arun C Murthy added a comment -

          Thanks Todd. I've committed to a dev-branch: MR-279.

          Tom White added a comment -

          There's a lot to digest here, but here are a couple of quick initial high-level comments from a packaging and staging perspective.

          I wonder if it would be easier not to move the src/java/org/apache/hadoop/mapred(uce) trees at this stage. MR 2 could just depend on the MapReduce JAR produced by the ant file, just like it does for Common. This would make the introduction of the codebase easier. There are some changes required in the existing classes, but by the look of things they are fairly minor and by introducing them in situ (in separate JIRAs) we can be sure they won't break existing users, and the changes would be easier to track.

          Alternatively this work could depend on full mavenization (at least of MapReduce), but that's probably some way off.

          MAPREDUCE-1638 is highly relevant for this work, since it aims to split out the MR API from the implementation. I've got an in-progress patch for this, which I'll post soon for discussion.

          Arun C Murthy added a comment -

          Thanks for your feedback, Tom.

          I wonder if it would be easier not to move the src/java/org/apache/hadoop/mapred(uce) trees at this stage.

          The main issue is the dependency chain - currently the mr-client depends purely on APIs in the yarn package. In the alternate proposal (which we considered), mr-client would need to depend on yarn and src/java for the runtime.

          The current scheme is more modular and enforces discipline by ensuring that the MapReduce runtime (map, sort, shuffle, merge, reduce) cannot, even accidentally, start relying on classes in the server package, i.e. JT/TT etc. This also seems like the right end-state for the project.

          Also, as you pointed out, the changes to classes in src/java/org/apache/hadoop/mapred(uce) are very minor and the 'svn mv' is both well documented (MR-279_MR_files_to_move.txt, MR-279.sh) and straight-forward.


          MAPREDUCE-1638 is highly relevant for this work

          Thanks! MAPREDUCE-1638 is very relevant. MAPREDUCE-279 already has some of the changes you proposed there i.e. keeping server classes in a separate source structure from the implementation classes - we should collaborate both on trunk and on the MR-279 branch to ensure consistency. I'm happy to merge if necessary.

          Todd Lipcon added a comment -

          Hi Arun. I spent the train ride this morning looking over yarn/src/main/avro in the branch. Here are a few comments, sorry for the somewhat stream-of-consciousness format.

          • Is the correct suffix still .genavro? Thought we'd changed the name to .avroidl or something?
          • Apache licenses needed on these files
          • Does AvroIDL convert javadoc-style comments on records/protocols into JavaDoc on generated code? If so we should do more of that.
          • AMRMProtocol:
            • the "release" parameter to allocate is strange: (a) it seems the function is misnamed if you can also release things as you call it, and (b) why isn't it an array<ContainerId>?
            • if you want to cancel previous resource requests, do you submit a new one with a negative numContainers?
          • ApplicationSubmissionContext:
            • would be good to have some kind of scheduler-specific parameters here? eg maybe a scheduler has something beyond just "priority" (eg. perhaps a deadline)
            • using just URL type directly for resources - seems not quite flexible enough? eg one useful construct would be a URL + checksum
            • what's resources_todo going to be?
            • passing "user" - agreed, this should be more flexible than simple string.
            • Why not contain a ContainerLaunchContext to specify the container in which to run the AM? Seems like lots of duplicated fields.
          • ContainerManager:
            • not following YarnContainerTags - these are opaque enums, how do they get interpolated in a string?
            • how does one access stderr/stdout contents? both while they're being written and after a container has terminated? (maybe I just haven't gotten to that bit yet somewhere else)
          • yarn-types.avro:
            • For the typesafe ID classes, do we need to specify explicit comparison orderings? I don't know Avro behavior here.
            • Did you consider making the ids all strings instead of ints? The pro would be that there could be canonical formats, like "AM-<hex id>" for app masters vs "C-<hex id>" for containers. AWS does a good job of this.
            • Resource: field names should include units, like "int memoryMB"
            • what are ContainerTokens? could use some extra doc at the protocol layer here. (I assume this is for security?)
            • The "Container" type doesn't appear
            • the URL record is missing user/password used for http basic auth or s3n auth
            • there are some hard tabs in this file
            • ApplicationMaster:
              • httpPort seems like it would be better described as something like "httpStatusURL"?
            • LocalResourceVisibility:
              • just to clarify, APPLICATION visibility means "only to this application submitted by this user". ie if joe and bob both submit MapReduce 2.x.y jobs with identical jars, it still won't share, even if sha1s match?
              • if bob submits the same application (ie MR 2.x.y) twice, do APPLICATION visibility files get shared?
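A minimal sketch of the canonical string-id idea above - a type prefix plus a hex id - might look like this (the class and method names are hypothetical, not from the branch):

```java
// Sketch of the canonical string-id scheme: a type prefix ("AM-" for app
// masters, "C-" for containers) plus a hex id. All names here are hypothetical.
public class CanonicalId {
    public static String appMasterId(int id) { return "AM-" + Integer.toHexString(id); }
    public static String containerId(int id) { return "C-" + Integer.toHexString(id); }

    // The flip side: recovering the numeric id means parsing strings,
    // which is the trade-off raised in the discussion.
    public static int numericId(String canonical) {
        return Integer.parseInt(canonical.substring(canonical.indexOf('-') + 1), 16);
    }

    public static void main(String[] args) {
        System.out.println(appMasterId(255));   // AM-ff
        System.out.println(numericId("C-1a"));  // 26
    }
}
```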
          Chris Douglas added a comment -

          Why not contain a ContainerLaunchContext to specify the container in which to run the AM? Seems like lots of duplicated fields.

          Agreed. Fixing this also addresses the URL as insufficient for resources. The _todo form was introduced to effect this, and remains in-progress.

          how does one access stderr/stdout contents? both while they're being written and after a container has terminated? (maybe I just haven't gotten to that bit yet somewhere else)

          This is still a TODO (working on it now). In the short term, something similar to what the TT does is probably sufficient, I hope.

          Did you consider making the ids all strings instead of ints? The pro would be that there could be canonical formats, like "AM-<hex id>" for app masters vs "C-<hex id>" for containers.

          Some of the implementation ended up relying on a consistent mapping of int ids to strings, so going all the way could make sense. On the other hand, parsing strings to determine relationships between containers and applications is regrettable.

          the URL record is missing user/password used for http basic auth or s3n auth

          Agreed, full URIs should be supported, though pushing that all the way through FileContext and FileSystem could be painful.

          just to clarify, APPLICATION visibility means "only to this application submitted by this user". ie if joe and bob both submit MapReduce 2.x.y jobs with identical jars, it still won't share, even if sha1s match?

          Right. The target layout for the NodeManager looks roughly like this:

          for x in localdir:
          $x/filecache # public cache
          $x/usercache
          $x/usercache/$user
          $x/usercache/filecache # private cache
          $x/usercache/$user/appcache
          $x/usercache/$user/appcache/$appid
          $x/usercache/$user/appcache/$appid/filecache # application cache
          $x/usercache/$user/appcache/$appid/$containerid
          $x/usercache/$user/appcache/$appid/output # output retained after container exits, i.e. intermediate data
          

          So at the end of a container or application, the NodeManager can simply delete the corresponding subdirs. Matching a job jar between invocations would require one to register that resource as PUBLIC/PRIVATE. The APPLICATION scope is more for job.xml and the like.
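Concretely, the per-application teardown implied by this layout reduces to deleting one subtree. A tiny sketch of the path construction (class and method names are illustrative, not from the branch):

```java
// Illustrative helper for the layout above: the subtree a NodeManager would
// delete when an application ends ($x/usercache/$user/appcache/$appid covers
// the application filecache, per-container dirs, and retained output).
// Class and method names are hypothetical.
import java.nio.file.Path;
import java.nio.file.Paths;

public class AppCachePaths {
    static Path appCacheDir(String localDir, String user, String appId) {
        return Paths.get(localDir, "usercache", user, "appcache", appId);
    }

    public static void main(String[] args) {
        System.out.println(appCacheDir("/disk1", "bob", "app_42"));
        // /disk1/usercache/bob/appcache/app_42 (on Unix)
    }
}
```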

          Chris Douglas added a comment -

          Sorry, the location of the private cache is $x/usercache/$user/filecache, not $x/usercache/filecache.

          Sharad Agarwal added a comment -

          Is the correct suffix still .genavro?

          Had to live with .genavro as the maven plugin (https://github.com/phunt/avro-maven-plugin) has not been updated yet to work with the new extension.

          Does AvroIDL convert javadoc-style comments on records/protocols into JavaDoc on generated code?

          No. I don't see the comments in the generated code.

          Todd Lipcon added a comment -

          Looking through the code a bit more I came across Hamlet. It seems you've written your own MVC framework and Java implementation of Haml as part of Yarn.

          Can you shed some light on why existing web frameworks were found to be insufficient? Do we really want a custom HTML generation framework as part of a resource scheduler?

          I don't have much experience with web programming in Java, but I can't imagine we have any use cases so unique that they couldn't be satisfied using a popular framework like Spring MVC. I also have strong doubts that a bunch of systems hackers like we have in our community can do a better job of designing and implementing a web framework than people who do web programming all day long (witness the completely incorrect job we do of input parameter escaping in the current webapps).

          eric baldeschwieler added a comment -

          I'll let luke comment on the details. I'd support patches to convert the UI to something more standard, if we can agree on the right thing. Having a good UI is a plus.

          Michael Lee added a comment -

          cannot build:
          failed when building hadoop-mapred-279 (following the instructions in http://svn.apache.org/repos/asf/hadoop/mapreduce/branches/MR-279/INSTALL)

          when building hadoop-mapred-279:
          [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.3.2:testCompile (default-testCompile) on project yarn-common: Compilation failure
          [ERROR] /home/michael/work/hadoop-mapred-279/yarn/yarn-common/src/test/java/org/apache/hadoop/yarn/TestRPC.java:[80,37] incompatible types
          [ERROR] found : java.util.ArrayList<java.lang.CharSequence>
          [ERROR] required: org.apache.avro.generic.GenericArray<java.lang.CharSequence>
          [ERROR] -> [Help 1]
          [ERROR]
          ------------------------------------
          My ENV:
          Maven:
          Apache Maven 3.0.3 (r1075438; 2011-03-01 01:31:09+0800)
          Maven home: /home/michael/local/apache-maven-3.0.3
          Java version: 1.6.0_07, vendor: Sun Microsystems Inc.
          Java home: /home/michael/local/java6/jre
          Default locale: en_US, platform encoding: ANSI_X3.4-1968
          OS name: "linux", version: "2.6.9_5-4-0-3", arch: "amd64", family: "unix"

          JDK:
          java version "1.6.0_07"
          Java(TM) SE Runtime Environment (build 1.6.0_07-b06)
          Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)

          Ant:
          Apache Ant version 1.7.0 compiled on December 13 2006

          avro-maven-plugin:
          using snapshot from: https://github.com/phunt/avro-maven-plugin/, 1.4.0 branch

          Arun C Murthy added a comment -

          Looking through the code a bit more I came across Hamlet.

          Luke can provide more details, but I believe he took this route due to the lack of a better 'embeddable' alternative.

          Having said that, echoing eric14: please feel free to open a jira with an alternate proposal and we can consider moving over to something more standard that satisfies our constraints. Alternately, in the long run, we could move Hamlet out to a separate (incubator?) project and attempt to build a community around it.

          Arun C Murthy added a comment -

          I've opened MAPREDUCE-2399 to discuss Hamlet. Please use that jira so that we can keep MAPREDUCE-279 focussed on the next-gen MR framework. Thanks.

          Doug Cutting added a comment -

          Sharad> Had to live with .genavro as the maven plugin (https://github.com/phunt/avro-maven-plugin) not been updated yet to work with the new extension.

          FYI, a Maven plugin is included in Avro 1.5.0 that uses the .avdl file suffix.

          Todd> Does AvroIDL convert javadoc-style comments on records/protocols into JavaDoc on generated code?

          Not yet (AVRO-296).

          Luke Lu added a comment -

          Looking through the code a bit more I came across Hamlet. It seems you've written your own MVC framework and Java implementation of Haml as part of Yarn.

          Hamlet is a novel but simple, IDE-friendly view component of the yarn web framework. It was partly inspired by Haml but is hardly a Java implementation of Haml.

          Before I wrote the framework, which is very lightweight (leveraging much power from Guice), I investigated a fairly full spectrum of JVM web frameworks courtesy of Matt Raible.

          Though I'm better known as a system hacker, I have more than a decade of work experience with UI (including desktop) and web frameworks. I am happy to discuss details at MAPREDUCE-2399.

          Luke Lu added a comment -

          Arun suggested that I attach some screenshots of the new mapreduce web UI here.

          • multi-column-stable-sort: demonstrates the resource manager apps UI (in the default theme) with a multi-column sort by user name (ascending) and progress (descending).
          • capacity-scheduler: demonstrates the capacity scheduler UI (in a dark theme) selecting a sub-queue.
          Konstantin Boudnik added a comment -

          Not to start a religious war or anything, but I am kinda wondering why not use a standard Java webapp framework such as Grails? There's a huge community working on it and a lot of people with expertise, which would help ease the development of user applications on top of MR 2.0.

          Arun C Murthy added a comment -

          Cos, again, can you please use MAPREDUCE-2399 to discuss the specifics of the UI? Thanks.

          Tom White added a comment -

          > Also, as you pointed out, the changes to classes in src/java/org/apache/hadoop/mapred(uce) are very minor

          Yes, but we still need to be sure that they don't break compatibility, which is hard to see in the current patch. However, I agree that collaborating on this part by way of working on MAPREDUCE-1638 and changes in trunk will make the separation cleaner and clarify the changes required for MR2.

          Greg Roelofs added a comment -

          Attached are dot(1) files for the Job, Task, and TaskAttempt state machines in MRv2, at least as of late March. I found the graphs very useful while learning and modifying the MRv2 code for MAPREDUCE-2405.

          These can be converted to PostScript or PNG or whatnot with dot, which is part of the Graphviz distribution (graphviz.org, I think). Here's a sample command for PostScript:

          dot -Tps yarn-state-machine.task-attempt.dot > yarn-state-machine.task-attempt.ps

          Ultimately a version of these should be produced natively in some StateMachine method (toDot()?), and I think Chris Douglas may take that up eventually. However, some of the desirable info (e.g., which states send events to or receive them from other state machines) can't really be discovered automatically, so there will continue to be a place for hand-rolled graphs.
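As a rough illustration of what a generated toDot() might emit, assuming a simple transition table (the class, state, and event names here are stand-ins, not the MRv2 code):

```java
// Rough sketch of the toDot() idea: walk a transition table and emit Graphviz
// dot. The transition triples and all names are stand-ins, not MRv2 classes.
public class StateMachineDot {
    // Each row: { fromState, eventLabel, toState }
    static String toDot(String name, String[][] transitions) {
        StringBuilder sb = new StringBuilder("digraph " + name + " {\n");
        for (String[] t : transitions) {
            sb.append("  ").append(t[0]).append(" -> ").append(t[2])
              .append(" [label=\"").append(t[1]).append("\"];\n");
        }
        return sb.append("}\n").toString();
    }

    public static void main(String[] args) {
        String[][] ts = {
            { "NEW", "TA_SCHEDULE", "ASSIGNED" },
            { "ASSIGNED", "TA_LAUNCHED", "RUNNING" },
        };
        System.out.print(toDot("TaskAttempt", ts));
    }
}
```

The output can be fed to dot exactly as with the hand-rolled files, e.g. `dot -Tps` as in the sample command above.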

          Tom White added a comment -

          I'm wondering what the maven modules might look like for this when integrated into trunk. Something like:

          • api - containing the user-facing public API of MapReduce (from org.apache.hadoop.mapred(uce)). When MAPREDUCE-1638 is done it will be possible to split the API into a self-contained tree (no dependencies on other parts of MapReduce).
          • lib - containing the user-facing public MapReduce libraries (from org.apache.hadoop.mapred and org.apache.hadoop.mapred(uce).lib). There's a patch in MAPREDUCE-1478 to perform this separation.
          • classic-impl - containing the implementation classes for MapReduce. This is what's left over after doing MAPREDUCE-1638 and MAPREDUCE-1478.
          • nextgen-impl - this is mr-client in the MR-279 branch, which I think should be renamed, since it's not immediately clear what it's a client of in the context of the whole MapReduce project. It has submodules app, common, hs, jobclient, shuffle.
          • yarn - the yarn framework from the MR-279 branch. Yarn is broken into submodules too.

          Given the progress on mavenizing common (HADOOP-6671), is it worth integrating MAPREDUCE-279 at the same time as doing the full Mavenization of MapReduce? That would seem ideal, but perhaps there's an alternative I haven't considered.

          Arun C Murthy added a comment -

          Tom, I don't understand the distinction between classic-impl and nextgen-impl...

          Both 'classic' and 'nextgen' MR use the same 'runtime', i.e. MapTask, ReduceTask, shuffle etc., which is what is currently under 'mr-client'. I'm happy to brainstorm on a better name for it... how about 'mr-runtime'?

          Arun C Murthy added a comment -

          Yep, it would be nice to completely mavenize and I strongly believe it should be our goal.

          Maybe we can do it in stages, have a hybrid one on day one when we merge MR-279 into trunk and then do the whole nine yards? That way each can proceed independently. Currently it's becoming painful to manage a large branch and hence my suggestion to get it into trunk and do mavenization independently. Thoughts?

          Nigel Daley added a comment -

          Given these build issues (and just good engineering practice), I'd like to see a Jenkins CI build on this branch so we know that, when merged to trunk, the builds won't be (more) broken.

          Tom White added a comment -

          Both 'classic' and 'nextgen' MR use the same 'runtime', i.e. MapTask, ReduceTask, shuffle, etc., which is what is currently under 'mr-client'. I'm happy to brainstorm on a better name for it... how about 'mr-runtime'?

          I think 'mr-runtime', or just 'runtime' given that the context is MapReduce, would be fine. I guess that there would be a 'classic' submodule to contain JobTracker, TaskTracker, etc., which currently aren't in any MR-279 Maven modules.

          Yep, it would be nice to completely mavenize and I strongly believe it should be our goal.

          +1

          Eli Collins added a comment -

          Is there an MR2 design doc? A couple of people have asked me about this; it would be very useful to share.

          Arun C Murthy added a comment -

          Sigh, I keep missing this.

          I have a slightly old version I'll spruce up and post. Thanks for the reminder.

          Haoyuan Li added a comment -

          This page doesn't work anymore: http://svn.apache.org/repos/asf/hadoop/mapreduce/branches/MR-279/INSTALL

          Is there any new page to replace this?

          Thank you.

          Mahadev konar added a comment -

          Haoyuan,
          Because of the svn unsplit, things have moved. The new link is:

          http://svn.apache.org/repos/asf/hadoop/common/branches/MR-279/mapreduce/INSTALL

          Haoyuan Li added a comment -

          Thank you Mahadev.

          Nigel Daley added a comment -

          Arun, are you planning to get a Jenkins build running on this branch before merge?

          Giridharan Kesavan added a comment -

          Nigel/Arun, I can help setup a build on MR-279

          Bill Lee added a comment -

          In this page: http://svn.apache.org/repos/asf/hadoop/common/branches/MR-279/mapreduce/INSTALL

          After running the last command: $HADOOP_COMMON_HOME/bin/hadoop jar $HADOOP_MAPRED_HOME/build/hadoop-mapred-examples-0.22.0-SNAPSHOT.jar randomwriter -Dmapreduce.job.user.name=$USER -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912 -Ddfs.block.size=536870912 -libjars $HADOOP_YARN_INSTALL/hadoop-mapreduce-1.0-SNAPSHOT/modules/hadoop-mapreduce-client-jobclient-1.0-SNAPSHOT.jar output

          What kind of results or terminal output should I expect?

          Thank you.

          eric baldeschwieler added a comment -

          I have joined Hortonworks and am no longer at Yahoo!. Please re-send your message to my non-Yahoo! email address.

          Sharad Agarwal added a comment -

          Slides from the Hadoop Contributors meet held on 07/01, with some design details on the RM and AM, and the high-level APIs for writing new AMs.

          Vinod Kumar Vavilapalli added a comment -

          What kind of results or terminal output should I expect?

          Bill, the terminal output should 'almost' be similar to what you see with Hadoop 0.20.

          Please create separate tickets or use mapreduce-dev@hadoop.apache.org mailing list. This one is an umbrella ticket that so many are watching.

          Thanks.

          Giridharan Kesavan added a comment -

          Build setup on MR-279 branch: https://builds.apache.org/job/Hadoop-MR-279-Build/
          Arun C Murthy added a comment -

          MRv2 architecture document.

          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #763 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/763/)
          MAPREDUCE-2837. Ported bug fixes from y-merge to prepare for MAPREDUCE-279 merge.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1157249
          Files :

          • /hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/security/TestTokenCache.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/ACLsManager.java
          • /hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/security/TestBinaryTokenFile.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/Job.java
          • /hadoop/common/trunk/mapreduce/src/test/mapred-site.xml
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/Shuffle.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/security/TokenCache.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/MergeManager.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/MapOutputFile.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/ReduceTask.java
          • /hadoop/common/trunk/mapreduce/src/webapps/job/jobdetailshistory.jsp
          • /hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/security/TestMapredGroupMappingServiceRefresh.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/TaskTracker.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskFinishedEvent.java
          • /hadoop/common/trunk/mapreduce/CHANGES.txt
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/JobACLsManager.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/TaskMemoryManagerThread.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/MROutputFiles.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/Task.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/MRConfig.java
          • /hadoop/common/trunk/mapreduce/src/examples/org/apache/hadoop/examples/terasort/TeraInputFormat.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/LocalJobRunner.java
          • /hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestMapRed.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #754 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/754/)
          MAPREDUCE-2837. Ported bug fixes from y-merge to prepare for MAPREDUCE-279 merge.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1157249
          Files :

          • /hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/security/TestTokenCache.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/ACLsManager.java
          • /hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/security/TestBinaryTokenFile.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/Job.java
          • /hadoop/common/trunk/mapreduce/src/test/mapred-site.xml
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/Shuffle.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/security/TokenCache.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/MergeManager.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/MapOutputFile.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/ReduceTask.java
          • /hadoop/common/trunk/mapreduce/src/webapps/job/jobdetailshistory.jsp
          • /hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/security/TestMapredGroupMappingServiceRefresh.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/TaskTracker.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskFinishedEvent.java
          • /hadoop/common/trunk/mapreduce/CHANGES.txt
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/JobACLsManager.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/TaskMemoryManagerThread.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/MROutputFiles.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/Task.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/MRConfig.java
          • /hadoop/common/trunk/mapreduce/src/examples/org/apache/hadoop/examples/terasort/TeraInputFormat.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/LocalJobRunner.java
          • /hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestMapRed.java
          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #742 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/742/)
          MAPREDUCE-2837. Ported bug fixes from y-merge to prepare for MAPREDUCE-279 merge.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1157249
          Files :

          • /hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/security/TestTokenCache.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/ACLsManager.java
          • /hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/security/TestBinaryTokenFile.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/Job.java
          • /hadoop/common/trunk/mapreduce/src/test/mapred-site.xml
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/Shuffle.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/security/TokenCache.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/MergeManager.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/MapOutputFile.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/ReduceTask.java
          • /hadoop/common/trunk/mapreduce/src/webapps/job/jobdetailshistory.jsp
          • /hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/security/TestMapredGroupMappingServiceRefresh.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/TaskTracker.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskFinishedEvent.java
          • /hadoop/common/trunk/mapreduce/CHANGES.txt
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/JobACLsManager.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/TaskMemoryManagerThread.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/MROutputFiles.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/Task.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/MRConfig.java
          • /hadoop/common/trunk/mapreduce/src/examples/org/apache/hadoop/examples/terasort/TeraInputFormat.java
          • /hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/LocalJobRunner.java
          • /hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestMapRed.java
          Mahadev konar added a comment -

          Thanks to Vinod, attached is a script (MR-279-script.sh) and an input file (MR-279_MR_files_to_move.txt).

          The script needs to be changed to point to the MR-279 branch you have checked out and to trunk. The script will move MapReduce runtime files around in trunk and copy the new framework from the MR-279 branch to trunk.

          After running the script you will have to apply the patch (post-move.patch) on trunk. These small changes are needed for the new framework to work with trunk.

          You will have to run mvn install -DskipTests before you run any ant targets.

          Also, the script just does local mv/cp. When we actually merge the changes, it will be svn mv/copy.
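          The script itself is attached to the JIRA rather than quoted here, but the move/copy pattern it describes can be sketched as a toy shell script. All directory and file names below are placeholders for illustration, not the real MR-279 file list:

```shell
#!/bin/sh
# Toy sketch of the merge script's pattern: move existing runtime files
# around inside a trunk checkout, then copy the new framework in from
# the branch checkout. Paths here are illustrative stand-ins only.
set -e

TRUNK=$(mktemp -d)            # stand-in for your trunk checkout
BRANCH=$(mktemp -d)           # stand-in for your MR-279 checkout

mkdir -p "$TRUNK/mapreduce/src" "$BRANCH/mapreduce/yarn"
echo runtime   > "$TRUNK/mapreduce/src/MapTask.java"
echo framework > "$BRANCH/mapreduce/yarn/README"

# 1. Rearrange the existing MapReduce runtime inside trunk.
#    (Plain 'mv' here; the actual merge would use 'svn mv' to keep history.)
mkdir -p "$TRUNK/hadoop-mapreduce/runtime"
mv "$TRUNK/mapreduce/src/MapTask.java" "$TRUNK/hadoop-mapreduce/runtime/"

# 2. Copy the new framework from the branch into trunk ('svn copy' for real).
cp -r "$BRANCH/mapreduce/yarn" "$TRUNK/hadoop-mapreduce/"

ls "$TRUNK/hadoop-mapreduce"
```

After a move like this, the post-move patch would be applied on trunk and mvn install -DskipTests run before any ant targets, per the steps above.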

          Philip Zeyliger added a comment -

          I will return on the 24th. For urgent matters, please contact my
          teammates or Amr.

          Thanks,

          – Philip

          Alejandro Abdelnur added a comment -

          I've just applied the patch following the instructions and it compiles fine.

          Yesterday I opened a JIRA with things to improve, MAPREDUCE-2842.

          IMO, most of those can be done incrementally after this patch goes in.

          What I think should be done as part of this patch (MAPREDUCE-279) is fixing the artifact/maven-module-dir names.

          All artifact names should be prefixed with 'hadoop-' (the JARs get the artifact names, making it easier to troubleshoot and identify them).

          In addition, the maven-module-dir should be the same to make it easier for developers to find their way around.

          The reason for proposing doing this as part of this patch is to avoid doing 2 huge moves of files in SVN.

          (HDFS-2096 is aligned to this naming already)

          Thanks
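          The 'hadoop-' prefix rule proposed above can be sketched as a small shell loop over module directories. The module names below are made up for illustration (not the actual MR-279 module list), and a real merge would use svn mv rather than mv:

```shell
#!/bin/sh
# Hypothetical sketch: rename Maven module directories so each carries
# the 'hadoop-' prefix, matching the artifact names (and thus the JARs).
# Module names here are illustrative, not the actual MR-279 modules.
set -e

ROOT=$(mktemp -d)
mkdir -p "$ROOT/yarn-api" "$ROOT/yarn-common" "$ROOT/hadoop-mapreduce-client-core"

for dir in "$ROOT"/*/; do
  name=$(basename "$dir")
  case "$name" in
    hadoop-*) ;;                          # already prefixed, leave it alone
    *) mv "$dir" "$ROOT/hadoop-$name" ;;  # real merge would use 'svn mv'
  esac
done

ls "$ROOT"
```

Keeping the directory name equal to the artifactId is what makes a built JAR traceable back to its module at a glance.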

          Alejandro Abdelnur added a comment -

          I've just updated MAPREDUCE-2842 with a proposed naming for artifacts/module-dirs.

          Arun C Murthy added a comment -

          Thanks Alejandro, I do agree that we should avoid 2 huge svn moves if we can - let me try to fix up the scripts to be in line with your proposals.

          Arun C Murthy added a comment -

          Updated the script to change the layout as suggested by Alejandro in MAPREDUCE-2842, and updated post-move.patch to apply to the new layout. This is WIP; I still need to change the artifact names and dependencies as suggested by Alejandro.

          Arun C Murthy added a comment -

          Nearly there with the artifact/deps changes... not done yet.

          Vinod Kumar Vavilapalli added a comment -

          Updated the script, the to-be-moved-files list and the post-move patch to reflect the directory structure suggested at MAPREDUCE-2842 (all modules with a hadoop- prefix).

          This is close now: mvn install, ant jar, jar-test, binary, etc. pass with this. Making sure 'ant test' passes is the pending item.

          Thomas Graves added a comment -

          I think the move of mapreduce to hadoop-mapreduce got lost in the latest MR-279-script-20110817.sh.

          Mahadev konar added a comment -

          An updated patch on top of Vinod's latest scripts. This fixes ant test.

          Arun C Murthy added a comment -

          Per the vote on mapreduce-dev@, I've merged MR-279 to a preview branch (MR-279-merge) and am doing the final set of tests.

          I'm attaching the shell script I used for the merge.

          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #751 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/751/)
          MAPREDUCE-279. MapReduce 2.0. Merging MR-279 branch into trunk. Contributed by Arun C Murthy, Christopher Douglas, Devaraj Das, Greg Roelofs, Jeffrey Naisbitt, Josh Wills, Jonathan Eagles, Krishna Ramachandran, Luke Lu, Mahadev Konar, Robert Evans, Sharad Agarwal, Siddharth Seth, Thomas Graves, and Vinod Kumar Vavilapalli.

          vinodkv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1159166
          Files :

          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BasicTypeSorterBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/Parser.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/filecache/DistributedCache.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-api
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/reduce/LongSumReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/OuterJoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/OverrideRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/ResourceBundles.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/ReduceAttemptFinishedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobInfoChangeEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/RecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleSequenceFileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/DelegatingMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/OracleDataDrivenDBInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/KeyValueTextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/AbstractCounters.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskTracker.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/LineRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/ArrayListBackedIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleScheduler.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/map/TokenCounterMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/IFileOutputStream.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileAsTextRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/LinuxResourceCalculatorPlugin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-common
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueRefresher.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/AbstractCounterGroup.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorBaseDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/Parser.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/MultithreadedMapRunner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/HashPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesMapRunner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/LimitExceededException.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileAsTextRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DBConfiguration.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskCompletionEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/filecache/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/TaggedInputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskAttemptContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskInputOutputContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptStartedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/MultipleInputs.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/TextOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/ProcfsBasedProcessTree.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestResourceUsageEmulators.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/CounterGroup.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ReduceTaskStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/QueueAclsInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueAclsInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RawKeyValueIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskFailedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskType.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/FloatSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/MySQLDataDrivenDBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/GenericCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskReport.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskUpdatedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskReport.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/fieldsel/FieldSelectionReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db/DBOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/EventWriter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/RecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobID.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/InputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ReduceTask.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/InputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/CounterGroupFactory.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MROutputFiles.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MROutputFiles.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/InvalidFileTypeException.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Reducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/ReduceContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/split/JobSplitWriter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Reporter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormatCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-app
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileAsTextRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Merger.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Counters.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestJvmManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/CompositeRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/Events.avpr
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorBaseDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/OracleDBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/OutputLogFilter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/fairscheduler/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JvmContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-api/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/filecache/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/util/TestLinuxResourceCalculatorPlugin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/delegation/DelegationTokenSecretManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorJobBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapTaskStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/LineRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTaskStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/LongSumReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/LinuxResourceCalculatorPlugin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Queue.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/split/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/ClusterMetrics.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleInputs.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/Chain.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/EventReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/DoubleValueSum.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/chain/ChainReduceContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/AvroArrayUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SkipBadRecords.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleClientMetrics.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskType.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/YARNRunner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorJob.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/JoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/KeyFieldBasedComparator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeInputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskLog.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/UniqValueCount.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeInputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskStartedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/NLineInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/DelegatingInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HistoryViewer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueHistogram.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/FileSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/StringValueMin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/jobcontrol/JobControl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBWritable.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SkipBadRecords.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-shuffle/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ProcessTree.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/OverrideRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskUmbilicalProtocol.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/OracleDateSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTaskCompletionEventsUpdate.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SpillRecord.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/ReduceAttemptFinishedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/OutputLogFilter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/LongValueSum.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/PipesReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/DelegatingMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/ReduceTask.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DataDrivenDBInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/DelegatingInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/CumulativePeriodicStats.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/BooleanSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/ReduceContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/UserDefinedValueAggregatorDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/PeriodicStatsAccumulator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/BinaryPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/StatisticsCollector.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/CountersStrings.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/ID.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/BasicTypeSorterBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobPriority.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/JobTokenSelector.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/StringValueMax.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/filecache/DistributedCache.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobConf.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainMapContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/InnerJoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Counter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/ComposableRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/OutputHandler.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JvmContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/OracleDataDrivenDBInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MultiFileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/KeyFieldBasedPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/NullOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/LazyOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MapFileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskAttemptContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SequenceFileAsBinaryInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Utils.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormatCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesNonJavaInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Mapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-api/src
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobFinishedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/InvalidJobConfException.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/capacity-scheduler/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobPriority.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/InputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/README
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/BigDecimalSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/KeyFieldBasedComparator.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/HistoryEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/ComposableInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/HashPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DateSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/ResetableIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SpillRecord.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/protocol
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/MergeManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskCompletionEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/DelegatingMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/CompositeInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-jobclient/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/avro
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/LongValueSum.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/KeyFieldBasedPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/StreamBackedIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapReduceBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapReduceBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Reducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/ClusterMetrics.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DBSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map/RegexMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/protocol/ClientProtocolProvider.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapFileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/RawKeyValueIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/NullOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/security/TestTokenCache.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/InputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileAsTextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/CombineFileRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/LongValueMin.java
          • /hadoop/common/trunk/hadoop-mapreduce/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Counters.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/MySQLDataDrivenDBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Job.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/vaidya/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/StatePeriodicStats.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/InnerJoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/CombineFileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/FileSystemCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskAttemptContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/InMemoryReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/server/tasktracker/TTConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/map/WrappedMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Reducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskID.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/BinaryPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Mapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/FieldSelectionMapReduce.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileAsBinaryInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/DeprecatedQueueConfigurationParser.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/KeyFieldBasedComparator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/TotalOrderPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Counter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RamManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/ComposableInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapRunnable.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/MultipleSequenceFileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/StatusReporter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/jobcontrol/Job.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/TaskInputOutputContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/LongValueMax.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/InputSampler.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/MultiFilterRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/QueueConfigurationParser.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleHeader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorJob.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/ReduceContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileAsBinaryInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/ComposableRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskLogAppender.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/OuterJoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/IdentityReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/MapContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/BackupStore.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskLog.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/LongValueMin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/RegexMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/db/DBOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/fieldsel/FieldSelectionReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/server/jobtracker/JTConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/jobcontrol/Job.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/ArrayListBackedIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/KeyFieldBasedPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/MySQLDBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SequenceFileAsBinaryOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/delegation/DelegationTokenSelector.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Reducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RecordWriter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Master.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobSubmittedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/AbstractCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/fieldsel/FieldSelectionHelper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/reduce/WrappedReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskID.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/ProcessTree.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskAttemptID.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/CombineFileRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/CompositeInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/InMemoryWriter.java
          • /hadoop/common/trunk/hadoop-mapreduce/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/HashPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/JoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/HostUtil.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapRunnable.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/reduce/IntSumReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorCombiner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/TaskAttemptContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskFinishedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/split/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/DoubleValueSum.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/LongValueMax.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/RecordWriter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobInitedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/TextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/OutputCommitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskLogAppender.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/MergeThread.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/FileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/UniqValueCount.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobStatusChangedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/TotalOrderPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/MapOutput.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-jobclient
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/OutputHandler.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/RegexMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BackupStore.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/KeyFieldHelper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-common/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/SequenceFileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/vertica/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/RamManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/StreamBackedIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueHistogram.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/map/InverseMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/StringValueMin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConfigurable.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/CombineFileSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/EventFetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/TaggedInputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/protocol/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/TupleWritable.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/split/JobSplitWriter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Mapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskUmbilicalProtocol.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/OutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/UpwardProtocol.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/ResetableIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/ID.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/SecureShuffleUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/LazyOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/JobContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/split/SplitMetaInfoReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/ChainReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/StatePeriodicStats.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/OverrideRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IndexCache.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobInfoChangeEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/StatusReporter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MultiFileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/FrameworkCounterGroup.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapFileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MultiFileSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/tools/CLI.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/OutputCommitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Mapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleTextOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/OutputCommitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/StringValueMax.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskAttemptID.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/QueueState.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/CompositeRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ProgressSplitsBlock.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainReduceContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/QueueState.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/WrappedRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleClientMetrics.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileInputFilter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorJob.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/OutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/FileAlreadyExistsException.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/AbstractCounterGroup.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultithreadedMapRunner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/TextSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/Shuffle.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/protocol/ClientProtocol.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JVMId.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobProfile.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobConfigurable.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ResourceCalculatorPlugin.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorCombiner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/server
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MultiFileSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/CounterGroupBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/UserDefinedValueAggregatorDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/ClusterStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ClusterStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/map/MultithreadedMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleScheduler.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/util/TestLinuxResourceCalculatorPlugin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskID.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/KeyValueLineRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/FilterOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SequenceFileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueACL.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/IntegerSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/JobTokenSecretManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test/java/org/apache
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/GenericCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/reduce/LongSumReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SequenceFileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptUnsuccessfulCompletionEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Partitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskScheduler.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/delegation/DelegationTokenSecretManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/FileSystemCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ExceptionReporter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobSubmitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/NLineInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/data_join/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/InvalidInputException.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/jobcontrol/ControlledJob.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/MultipleInputs.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/EventReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/TokenCountMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskID.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobUnsuccessfulCompletionEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobPriorityChangeEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskReport.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/dynamic-scheduler/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/IdentityReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/TotalOrderPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DateSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/ChainReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/MultipleOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/InputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/delegation/DelegationTokenIdentifier.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Partitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/IndexCache.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileAsTextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SequenceFileAsTextRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FilterOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorCombiner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/CounterGroupFactory.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobQueueInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskAttemptContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/PeriodicStatsAccumulator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormatCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-common/scripts
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileOutputCommitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/TextOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/BinaryPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/block_forensics/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/MultiFilterRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/BinaryPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/OracleDataDrivenDBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobUnsuccessfulCompletionEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map/TokenCounterMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/MarkableIteratorInterface.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/OverrideRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/JobTokenSelector.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskAttemptID.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskAttemptID.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/index/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/WrappedRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/FileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/TokenCache.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobProfile.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/AbstractCounters.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/Limits.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/delegation/DelegationTokenSelector.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/TaggedInputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/ArrayListBackedIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/TupleWritable.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-shuffle
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/JobTokenIdentifier.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobQueueInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/DownwardProtocol.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskFailedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/MapAttemptFinishedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/db/DBInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/reduce
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobPriority.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/InnerJoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/OracleDBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileInputFilter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/CleanupQueue.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/HistoryViewer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/bin
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/protocol/ClientProtocolProvider.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ProcfsBasedProcessTree.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/OutputCommitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/server/tasktracker
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/OutputCollector.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/FileSystemCounterGroup.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/RecordWriter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/jobcontrol
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Operation.java
          • /hadoop/common/trunk/mapreduce
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskReport.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/filecache
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/KeyValueTextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/InputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/MapContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskUpdatedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/chain/ChainMapContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/ComposableInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/OuterJoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/split
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/SequenceFileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskCompletionEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/INSTALL
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MarkableIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/InputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorJobBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileAsTextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/CleanupQueue.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/TupleWritable.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/InputSampler.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobACLsManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/TaggedInputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/UserDefinedValueAggregatorDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/MultiFilterRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/jobcontrol
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/MRConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map/WrappedMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DataDrivenDBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/chain/ChainReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobClient.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/fieldsel/FieldSelectionHelper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/MapHost.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/ProgressSplitsBlock.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/UpwardProtocol.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db/DBWritable.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/QueueACL.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/raid/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HistoryEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/DeprecatedQueueConfigurationParser.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/LazyOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobPriority.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/ClientDistributedCacheManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/delegation/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/ChainMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/TaskInputOutputContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/ArrayListBackedIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/NLineInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobACLsManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/StreamBackedIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileAsBinaryInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueConfigurationParser.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TextOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Clock.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/DelegatingRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/OutputCollector.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/RecordWriter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JvmTask.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/AvroArrayUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/KeyFieldBasedComparator.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorBaseDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/JobTokenSecretManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/ID.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/InputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapTaskCompletionEventsUpdate.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test/java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/OutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/fieldsel/FieldSelectionMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/CombineFileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestLinuxResourceCalculatorPlugin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskStartedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/DelegatingMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBConfiguration.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BufferSorter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/LazyOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/streaming/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskInputOutputContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/jobcontrol/JobControl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/DoubleValueSum.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RunningJob.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/OracleDataDrivenDBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/jobcontrol/JobControl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/InverseMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorBaseDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/CompositeInputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobStatusChangedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/JobContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/UniqValueCount.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptUnsuccessfulCompletionEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-app/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/OracleDateSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/LimitExceededException.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ResourceBundles.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/EventWriter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JVMId.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/db/DBConfiguration.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/ChainMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/split/JobSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/BufferSorter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/Limits.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueHistogram.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/NLineInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/InputSampler.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/StringValueMin.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorCombiner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/SequenceFileAsBinaryOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/assembly
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/FileOutputCommitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/PipesNonJavaInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/TokenCountMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorJob.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Operation.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Application.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MergeSorter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestTaskTrackerLocalization.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MapOutput.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/OutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/TextSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Shuffle.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SortedRanges.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/filecache/ClientDistributedCacheManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobID.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/InvalidInputException.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestUberAM.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/MapContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MapContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/util
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-common
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map/InverseMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/LongValueSum.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/KeyValueTextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/CombineFileSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/SequenceFileAsBinaryOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TextOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/DelegationTokenRenewal.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/KeyValueLineRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/protocol/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/InvalidFileTypeException.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobClient.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobFinishedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/ComposableRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SequenceFileInputFilter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SequenceFileAsTextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/StringValueMax.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/FilterOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/db/DBWritable.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskTrackerInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/QueueInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/TotalOrderPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/FloatSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/mumak/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/BigDecimalSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/Submitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/delegation/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SortedRanges.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptFinishedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/LongValueSum.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/StreamBackedIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-hs
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/Chain.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/DelegatingInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/KeyFieldBasedPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/IdentityMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/assembly/all.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/InMemoryReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskAttemptContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TIPStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleHeader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MRConstants.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobSubmittedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/RunningJob.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/NullOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db/DBInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeThread.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobPriorityChangeEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/QueueInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MergeSorter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/FileSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/RecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/MarkableIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/CompositeInputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/delegation/DelegationTokenIdentifier.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/gridmix/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DataDrivenDBInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/InputSampler.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskFinishedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptStartedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/FrameworkCounterGroup.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/server/tasktracker/TTConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobEndNotifier.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/BooleanSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapRunner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/fieldsel
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/reduce/WrappedReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/InverseMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/QueueManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/KeyValueLineRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/MapAttemptFinishedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ResourceCalculatorPlugin.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/LongValueMin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/TokenCache.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/PipesMapRunner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SequenceFileRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/server/jobtracker/JTConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/split/JobSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobID.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/DelegatingInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobACL.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/MySQLDBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/EventFetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/MapFileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/InvalidInputException.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-common/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobEndNotifier.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/JobTokenIdentifier.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/chain/ChainMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/conf
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormatCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/DownwardProtocol.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MRConstants.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ID.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/Application.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JvmTask.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/FieldSelectionMapReduce.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/IntegerSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/mumak/src/test/org/apache/hadoop/mapred/MockSimulatorJobTracker.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/dev-support
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileAlreadyExistsException.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/LongValueMin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/ComposableRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/InMemoryWriter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/protocol/ClientProtocol.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileInputFilter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/avro/Events.avpr
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/InnerJoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Partitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/ResourceCalculatorPlugin.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/LinuxResourceCalculatorPlugin.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Queue.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/LongValueMax.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-common/src
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/CumulativePeriodicStats.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/QueueAclsInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Merger.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/IFile.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/CountersStrings.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/WrappedRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/JoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DBOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/WrappedRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Cluster.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/MRJobConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/Parser.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskCompletionEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/JoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobID.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/ComposableInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorJobBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/Chain.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/TaskAttemptContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/ExceptionReporter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/LongValueMax.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/IFileInputStream.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/DoubleValueSum.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/DelegationTokenRenewal.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Clock.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TIPStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MapHost.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobInitedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/AbstractCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/TextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskTrackerInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/InvalidJobConfException.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/CombineFileRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/UniqValueCount.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/KeyValueTextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/ResetableIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Counters.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueHistogram.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/FilterOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/reduce/IntSumReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Partitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/StringValueMin.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorJobBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/IdentityMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileAsBinaryOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/jobcontrol/ControlledJob.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Task.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/jobcontrol/JobControl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Reporter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/UserDefinedValueAggregatorDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapRunner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/AuditLogger.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/FileSystemCounterGroup.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/delegation
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DBWritable.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/OuterJoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/ReduceContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/Parser.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/SecureShuffleUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/KeyFieldHelper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/HashPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/split/SplitMetaInfoReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/ResetableIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/fieldsel/FieldSelectionMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/CombineFileSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobACL.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/LongSumReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Utils.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/map/RegexMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/MultipleInputs.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/chain/Chain.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/QueueAclsInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/DelegatingRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DataDrivenDBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/ivy/libraries.properties
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/MultipleTextOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/StringValueMax.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/ReduceTaskStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/filecache/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Cluster.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Counters.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/InvalidInputException.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db/DBConfiguration.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptFinishedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapOutputFile.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapOutputFile.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/TupleWritable.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/NullOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskAttemptContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileInputStream.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/server/jobtracker
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/CounterGroupBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/KeyValueLineRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Constants.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/MultiFilterRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map/MultithreadedMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/AuditLogger.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test/java/org
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/StatisticsCollector.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CounterGroup.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MarkableIteratorInterface.java
          Hudson added a comment - Integrated in Hadoop-Common-trunk-Commit #751 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/751/) MAPREDUCE-279. MapReduce 2.0. Merging MR-279 branch into trunk. Contributed by Arun C Murthy, Christopher Douglas, Devaraj Das, Greg Roelofs, Jeffrey Naisbitt, Josh Wills, Jonathan Eagles, Krishna Ramachandran, Luke Lu, Mahadev Konar, Robert Evans, Sharad Agarwal, Siddharth Seth, Thomas Graves, and Vinod Kumar Vavilapalli. vinodkv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1159166
          Files:
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BasicTypeSorterBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/Parser.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/filecache/DistributedCache.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-api
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/reduce/LongSumReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/OuterJoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/OverrideRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/ResourceBundles.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/ReduceAttemptFinishedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobInfoChangeEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/RecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleSequenceFileOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/DelegatingMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/OracleDataDrivenDBInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/KeyValueTextInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/AbstractCounters.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskTracker.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/LineRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/ArrayListBackedIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleScheduler.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/map/TokenCounterMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/IFileOutputStream.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileAsTextRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/LinuxResourceCalculatorPlugin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-common
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueRefresher.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/AbstractCounterGroup.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorBaseDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/Parser.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/MultithreadedMapRunner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/HashPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesMapRunner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/LimitExceededException.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileAsTextRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DBConfiguration.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskCompletionEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/filecache/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/TaggedInputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskAttemptContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskInputOutputContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptStartedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/MultipleInputs.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/TextOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/ProcfsBasedProcessTree.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/gridmix/src/test/org/apache/hadoop/mapred/gridmix/TestResourceUsageEmulators.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/CounterGroup.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ReduceTaskStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/QueueAclsInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueAclsInfo.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RawKeyValueIterator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskFailedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskType.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/FloatSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/MySQLDataDrivenDBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/GenericCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskReport.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskUpdatedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskReport.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/fieldsel/FieldSelectionReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db/DBOutputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/EventWriter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/RecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobID.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/InputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ReduceTask.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/InputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/CounterGroupFactory.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MROutputFiles.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MROutputFiles.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/InvalidFileTypeException.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Reducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/ReduceContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/split/JobSplitWriter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Reporter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormatCounter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-app
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileAsTextRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Merger.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Counters.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestJvmManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/CompositeRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/Events.avpr
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorBaseDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/OracleDBRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/OutputLogFilter.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/contrib/fairscheduler/ivy.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JvmContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-api/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/filecache/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/util/TestLinuxResourceCalculatorPlugin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/delegation/DelegationTokenSecretManager.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorJobBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapTaskStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/LineRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTaskStatus.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/LongSumReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/LinuxResourceCalculatorPlugin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Queue.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/split/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/ClusterMetrics.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleInputs.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/Chain.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/EventReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/DoubleValueSum.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/chain/ChainReduceContextImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/AvroArrayUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SkipBadRecords.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleClientMetrics.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskType.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/YARNRunner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorJob.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/JoinRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/KeyFieldBasedComparator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeInputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskLog.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/UniqValueCount.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeInputSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskStartedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/NLineInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/DelegatingInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HistoryViewer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueHistogram.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/FileSplit.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/StringValueMin.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/package-info.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/jobcontrol/JobControl.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBWritable.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SkipBadRecords.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-shuffle/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ProcessTree.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/OverrideRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskUmbilicalProtocol.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/OracleDateSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTaskCompletionEventsUpdate.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SpillRecord.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileRecordReader.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/ReduceAttemptFinishedEvent.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/OutputLogFilter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/LongValueSum.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/PipesReducer.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/DelegatingMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/ReduceTask.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DataDrivenDBInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/DelegatingInputFormat.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/CumulativePeriodicStats.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/BooleanSplitter.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainMapper.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/ReduceContext.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/UserDefinedValueAggregatorDescriptor.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/PeriodicStatsAccumulator.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/BinaryPartitioner.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/StatisticsCollector.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/CountersStrings.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/ID.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/BasicTypeSorterBase.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobPriority.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/JobTokenSelector.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/StringValueMax.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/filecache/DistributedCache.java
          • /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobConf.java
          • /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainMapContextImpl.java
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/InnerJoinRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Counter.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/ComposableRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/OutputHandler.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JvmContext.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/OracleDataDrivenDBInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MultiFileInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/KeyFieldBasedPartitioner.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/NullOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/LazyOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MapFileOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskAttemptContext.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SequenceFileAsBinaryInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Utils.java 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormatCounter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesNonJavaInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Mapper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-api/src /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobFinishedEvent.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/InvalidJobConfException.java /hadoop/common/trunk/hadoop-mapreduce/src/contrib/capacity-scheduler/ivy.xml /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobPriority.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/InputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/README /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/BigDecimalSplitter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/KeyFieldBasedComparator.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/HistoryEvent.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/ComposableInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/HashPartitioner.java 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DateSplitter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/ResetableIterator.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SpillRecord.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DBRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/protocol /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/MergeManager.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskCompletionEvent.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/DelegatingMapper.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/CompositeInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-jobclient/pom.xml /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/avro /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/LongValueSum.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/KeyFieldBasedPartitioner.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/StreamBackedIterator.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapReduceBase.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop 
/hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapReduceBase.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Reducer.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/ClusterMetrics.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DBSplitter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map/RegexMapper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/protocol/ClientProtocolProvider.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapFileOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainReducer.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/RawKeyValueIterator.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/NullOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/security/TestTokenCache.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/InputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileAsTextInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/CombineFileRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/LongValueMin.java 
/hadoop/common/trunk/hadoop-mapreduce/ivy.xml /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Counters.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/MySQLDataDrivenDBRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Job.java /hadoop/common/trunk/hadoop-mapreduce/src/contrib/vaidya/ivy.xml /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/StatePeriodicStats.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/InnerJoinRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/CombineFileInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/FileSystemCounter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskAttemptContextImpl.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/InMemoryReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/server/tasktracker/TTConfig.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/map/WrappedMapper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Reducer.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskID.java 
/hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/BinaryPartitioner.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Mapper.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/FieldSelectionMapReduce.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileAsBinaryInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/package-info.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/DeprecatedQueueConfigurationParser.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/KeyFieldBasedComparator.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/TotalOrderPartitioner.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Counter.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools 
/hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RamManager.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/ComposableInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapRunnable.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/MultipleSequenceFileOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/StatusReporter.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/jobcontrol/Job.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/TaskInputOutputContextImpl.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/LongValueMax.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/InputSampler.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/MultiFilterRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/QueueConfigurationParser.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleHeader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorJob.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/ReduceContextImpl.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileAsBinaryInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/ComposableRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskLogAppender.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/OuterJoinRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/IdentityReducer.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/MapContextImpl.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/BackupStore.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskLog.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/LongValueMin.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/RegexMapper.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/db/DBOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesReducer.java 
/hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/fieldsel/FieldSelectionReducer.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/server/jobtracker/JTConfig.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/jobcontrol/Job.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/ArrayListBackedIterator.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/KeyFieldBasedPartitioner.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/MySQLDBRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SequenceFileAsBinaryOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/delegation/DelegationTokenSelector.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Reducer.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RecordWriter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Master.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobSubmittedEvent.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/AbstractCounter.java 
/hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/fieldsel/FieldSelectionHelper.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/reduce/WrappedReducer.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskID.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/ProcessTree.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskAttemptID.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/CombineFileRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/CompositeInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/InMemoryWriter.java /hadoop/common/trunk/hadoop-mapreduce/pom.xml /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/HashPartitioner.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/JoinRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/HostUtil.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapRunnable.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/reduce/IntSumReducer.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorCombiner.java 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/TaskAttemptContextImpl.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskFinishedEvent.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/split/package-info.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/DoubleValueSum.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/LongValueMax.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/RecordWriter.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobInfo.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobInitedEvent.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/TextInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/OutputCommitter.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskLogAppender.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/MergeThread.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/FileInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/package-info.java 
/hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/UniqValueCount.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobStatusChangedEvent.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/TotalOrderPartitioner.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/MapOutput.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-jobclient /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/OutputHandler.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/RegexMapper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/package-info.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BackupStore.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/KeyFieldHelper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobContext.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-common/pom.xml /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/SequenceFileOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/pom.xml /hadoop/common/trunk/hadoop-mapreduce/src/contrib/vertica/ivy.xml 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/RamManager.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/StreamBackedIterator.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueHistogram.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorReducer.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/map/InverseMapper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/StringValueMin.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConfigurable.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/CombineFileSplit.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/EventFetcher.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/TaggedInputSplit.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/protocol/package-info.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/TupleWritable.java 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/split/JobSplitWriter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Mapper.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskUmbilicalProtocol.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/OutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/UpwardProtocol.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/ResetableIterator.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/ID.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/SecureShuffleUtils.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/LazyOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/JobContextImpl.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/split/SplitMetaInfoReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/ChainReducer.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/StatePeriodicStats.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/OverrideRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IndexCache.java 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobInfoChangeEvent.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/StatusReporter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MultiFileInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileSplit.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/FrameworkCounterGroup.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapFileOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorDescriptor.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MultiFileSplit.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/tools/CLI.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/OutputCommitter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Mapper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleTextOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/OutputCommitter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/StringValueMax.java 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskTrackerInfo.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/QueueInfo.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/TotalOrderPartitioner.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/FloatSplitter.java /hadoop/common/trunk/hadoop-mapreduce/src/contrib/mumak/ivy.xml /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/BigDecimalSplitter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/Submitter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/delegation/package-info.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SortedRanges.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptFinishedEvent.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/LongValueSum.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/StreamBackedIterator.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-hs /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/Chain.java 
/hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/DelegatingInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/KeyFieldBasedPartitioner.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/IdentityMapper.java /hadoop/common/trunk/hadoop-mapreduce/assembly/all.xml /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/InMemoryReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/TaskAttemptContext.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TIPStatus.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleHeader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MRConstants.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobSubmittedEvent.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/RunningJob.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/NullOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db/DBInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeThread.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobPriorityChangeEvent.java 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/QueueInfo.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MergeSorter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/FileSplit.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/RecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/MarkableIterator.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/CompositeInputSplit.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobContextImpl.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/delegation/DelegationTokenIdentifier.java /hadoop/common/trunk/hadoop-mapreduce/src/contrib/gridmix/ivy.xml /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManager.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DataDrivenDBInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TextInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/InputSampler.java 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskFinishedEvent.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptStartedEvent.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/package-info.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/FrameworkCounterGroup.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/server/tasktracker/TTConfig.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobEndNotifier.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobStatus.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/BooleanSplitter.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapRunner.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/fieldsel /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/reduce/WrappedReducer.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/InverseMapper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/QueueManager.java 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/KeyValueLineRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorMapper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/MapAttemptFinishedEvent.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ResourceCalculatorPlugin.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/LongValueMin.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobContextImpl.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/TokenCache.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/PipesMapRunner.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/SequenceFileRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/server/jobtracker/JTConfig.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/split/JobSplit.java 
/hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/JobID.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/DelegatingInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobACL.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/MySQLDBRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/EventFetcher.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/MapFileOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/InvalidInputException.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-common/pom.xml /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobEndNotifier.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/JobTokenIdentifier.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/chain/ChainMapper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/conf /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormatCounter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/DownwardProtocol.java 
/hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MRConstants.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ID.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/pipes/Application.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JvmTask.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/FieldSelectionMapReduce.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/IntegerSplitter.java /hadoop/common/trunk/hadoop-mapreduce/src/contrib/mumak/src/test/org/apache/hadoop/mapred/MockSimulatorJobTracker.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregator.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/dev-support /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileAlreadyExistsException.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/LongValueMin.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/ComposableRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/InMemoryWriter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/protocol/ClientProtocol.java 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/package-info.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/SequenceFileInputFilter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/avro/Events.avpr /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/InnerJoinRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Partitioner.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/ResourceCalculatorPlugin.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/util/LinuxResourceCalculatorPlugin.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Queue.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/LongValueMax.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-common/src /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/CumulativePeriodicStats.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/QueueAclsInfo.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Merger.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/IFile.java 
/hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/CountersStrings.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/WrappedRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/JoinRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DBOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/WrappedRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Cluster.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/MRJobConfig.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/Parser.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskCompletionEvent.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/JoinRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/package-info.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobID.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/ComposableInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorJobBase.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input 
/hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/Chain.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/TaskAttemptContextImpl.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/ExceptionReporter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileSplit.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/LongValueMax.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorReducer.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/IFileInputStream.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/DoubleValueSum.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/security/token/DelegationTokenRenewal.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Clock.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TIPStatus.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MapHost.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobInitedEvent.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/AbstractCounter.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/input/TextInputFormat.java 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobCounter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskTrackerInfo.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobCounter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/InvalidJobConfException.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/CombineFileRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/UniqValueCount.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/KeyValueTextInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/ResetableIterator.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Counters.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileSplit.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/ValueHistogram.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/output/FilterOutputFormat.java 
/hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/reduce/IntSumReducer.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Partitioner.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/StringValueMin.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorJobBase.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/IdentityMapper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileAsBinaryOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/jobcontrol/ControlledJob.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Task.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/jobcontrol/JobControl.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Reporter.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/UserDefinedValueAggregatorDescriptor.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapRunner.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/AuditLogger.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/FileSystemCounterGroup.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token/delegation 
/hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DBWritable.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/OuterJoinRecordReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/ReduceContextImpl.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/Parser.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/pom.xml /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/SecureShuffleUtils.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/partition/KeyFieldHelper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/HashPartitioner.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/split/SplitMetaInfoReader.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/join/ResetableIterator.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/fieldsel/FieldSelectionMapper.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/JobContext.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobContext.java /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib 
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/CombineFileSplit.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobACL.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/LongSumReducer.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/Utils.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/map/RegexMapper.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/MultipleInputs.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/chain/Chain.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/QueueAclsInfo.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/DelegatingRecordReader.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/db/DataDrivenDBRecordReader.java
• /hadoop/common/trunk/hadoop-mapreduce/ivy/libraries.properties
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/MultipleTextOutputFormat.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/aggregate/StringValueMax.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/ReduceTaskStatus.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/filecache/package-info.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileRecordReader.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeInputFormat.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/Cluster.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Counters.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/InvalidInputException.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db/DBConfiguration.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptFinishedEvent.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/MapOutputFile.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapOutputFile.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/lib/join/TupleWritable.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/lib/NullOutputFormat.java
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapred/TaskAttemptContextImpl.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileInputStream.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/server/jobtracker
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src
• /hadoop/common/trunk/hadoop-mapreduce/src/java/org/apache/hadoop/mapreduce/counters/CounterGroupBase.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/KeyValueLineRecordReader.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Constants.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/MultiFilterRecordReader.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/token
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map/MultithreadedMapper.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/AuditLogger.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/test/java/org
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/StatisticsCollector.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CounterGroup.java
• /hadoop/common/trunk/hadoop-mapreduce/hadoop-mr-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MarkableIteratorInterface.java
          Vinod Kumar Vavilapalli added a comment -

          I just merged the MR-279 branch into mapreduce trunk (finally).

          It's been a gigantic effort. Thanks to each and every one who made contributions, large and small!

          I am closing this ticket as resolved.

          Binglin Chang added a comment -

          Ultimately a version of these should be produced natively in some StateMachine method (toDot()?), and I think Chris Douglas may take that up eventually. However, some of the desirable info (e.g., which states send events to or receive them from other state machines) can't really be discovered automatically, so there will continue to be a place for hand-rolled graphs.

          What's the current progress of this work? I find visualization of the state machines really helpful when reading and learning the MRv2 code, both YARN and MRv2. I added some code in yarn-common to generate Graphviz dot files automatically while learning the YARN code yesterday; it works fine for me, and it may be useful for others too.
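          For anyone curious what such a generator involves, here is a minimal, hypothetical sketch (not the actual yarn-common patch; `StateGraphDumper`, `addTransition` and `toDot` are made-up names for illustration): it collects (from, event, to) transitions and serializes them as a Graphviz dot digraph, which can then be rendered with `dot -Tpng`.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration only -- not the actual yarn-common code.
// Collects (from, event, to) transitions and emits them as a Graphviz
// dot digraph, renderable with: dot -Tpng rm.dot -o rm.png
public class StateGraphDumper {
    private final List<String[]> transitions = new ArrayList<String[]>();

    public void addTransition(String from, String event, String to) {
        transitions.add(new String[] { from, event, to });
    }

    public String toDot(String graphName) {
        StringBuilder sb = new StringBuilder();
        sb.append("digraph ").append(graphName).append(" {\n");
        for (String[] t : transitions) {
            // one edge per transition, labeled with the triggering event
            sb.append("  \"").append(t[0]).append("\" -> \"").append(t[2])
              .append("\" [label=\"").append(t[1]).append("\"];\n");
        }
        sb.append("}\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        StateGraphDumper dumper = new StateGraphDumper();
        dumper.addTransition("NEW", "START", "RUNNING");
        dumper.addTransition("RUNNING", "FINISH", "FINISHED");
        System.out.print(dumper.toDot("ResourceManager"));
    }
}
```

          A real version would of course read the transition table out of the framework's StateMachine objects rather than hand-entered triples.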

          Binglin Chang added a comment -

          State graph for ResourceManager

          Sharad Agarwal added a comment -

          Thanks Binglin, it is incredibly useful. I have filed MAPREDUCE-2930 where you may want to contribute the patch. It will help to keep the graphs up to date.

          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #857 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/857/)
          Adding back hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources which was missed during the merge of MAPREDUCE-279.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166972
          Files :

          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #934 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/934/)
          Adding back hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources which was missed during the merge of MAPREDUCE-279.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166972
          Files :

          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #868 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/868/)
          Adding back hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources which was missed during the merge of MAPREDUCE-279.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166972
          Files :

          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #788 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/788/)
          Adding back hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources which was missed during the merge of MAPREDUCE-279.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166972
          Files :

          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #812 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/812/)
          Adding back hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources which was missed during the merge of MAPREDUCE-279.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166972
          Files :

          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #1179 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1179/)
          MAPREDUCE-279. Adding a changelog to branch-0.23.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1185488
          Files :

          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #1100 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1100/)
          MAPREDUCE-279. Adding a changelog to branch-0.23.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1185488
          Files :

          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Common-0.23-Commit #15 (See https://builds.apache.org/job/Hadoop-Common-0.23-Commit/15/)
          MAPREDUCE-279. Adding a changelog to branch-0.23.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1185489
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-0.23-Commit #16 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/16/)
          MAPREDUCE-279. Adding a changelog to branch-0.23.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1185489
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #1119 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1119/)
          MAPREDUCE-279. Adding a changelog to branch-0.23.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1185488
          Files :

          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-0.23-Commit #17 (See https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/17/)
          MAPREDUCE-279. Adding a changelog to branch-0.23.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1185489
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
          Arun C Murthy added a comment -

          Editorial pass over hadoop-0.23 content.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-0.23-Build #43 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/43/)
          MAPREDUCE-279. Adding a changelog to branch-0.23.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1185489
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #864 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/864/)
          MAPREDUCE-279. Adding a changelog to branch-0.23.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1185488
          Files :

          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-0.23-Build #55 (See https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/55/)
          MAPREDUCE-279. Adding a changelog to branch-0.23.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1185489
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #834 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/834/)
          MAPREDUCE-279. Adding a changelog to branch-0.23.

          acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1185488
          Files :

          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt

            People

            • Assignee: Unassigned
            • Reporter: Arun C Murthy
            • Votes: 6
            • Watchers: 111