MAPREDUCE-777 (Hadoop Map/Reduce)

A method for finding and tracking jobs from the new API

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.21.0
    • Component/s: client
    • Labels:
      None
    • Hadoop Flags:
      Incompatible change, Reviewed
    • Release Note:
      Enhance the Context Objects API to add features to find and track jobs.

      Description

      We need to create a replacement for the JobClient API in the new interface. In particular, the user needs to be able to query and track jobs that were launched by other processes.

      Attachments

      1. patch-777.txt
        156 kB
        Amareshwari Sriramadasu
      2. patch-777-1.txt
        157 kB
        Amareshwari Sriramadasu
      3. patch-777-2.txt
        158 kB
        Amareshwari Sriramadasu
      4. m-777.patch
        4 kB
        Owen O'Malley
      5. patch-777-3.txt
        121 kB
        Amareshwari Sriramadasu
      6. patch-777-4.txt
        202 kB
        Amareshwari Sriramadasu
      7. patch-777-5.txt
        204 kB
        Amareshwari Sriramadasu
      8. patch-777-6.txt
        203 kB
        Amareshwari Sriramadasu
      9. patch-777-7.txt
        203 kB
        Amareshwari Sriramadasu
      10. patch-777-8.txt
        207 kB
        Amareshwari Sriramadasu
      11. patch-777-9.txt
        305 kB
        Amareshwari Sriramadasu
      12. patch-777-10.txt
        313 kB
        Amareshwari Sriramadasu
      13. patch-777-11.txt
        313 kB
        Amareshwari Sriramadasu
      14. patch-777-12.txt
        312 kB
        Amareshwari Sriramadasu
      15. patch-777-13.txt
        313 kB
        Amareshwari Sriramadasu
      16. patch-777-14.txt
        316 kB
        Amareshwari Sriramadasu
      17. patch-777-15.txt
        317 kB
        Amareshwari Sriramadasu
      18. patch-777-16.txt
        317 kB
        Amareshwari Sriramadasu
      19. patch-777-17.txt
        317 kB
        Amareshwari Sriramadasu

          Activity

          Todd Lipcon added a comment -

          Introducing a factory governed by a conf parameter in here would be nice as well - since it's currently a static class it's very hard to interpose any custom code.

          Owen O'Malley added a comment -

          What is the use case that you are thinking of?

          Todd Lipcon added a comment -

          We have the need for a program to run an arbitrary client-provided jar and then monitor the jobs submitted by it. The easiest way to go about this is to interpose code in front of JobClient to catch the submission and hand off the job ID to the thread/process that needs to follow along. Without factory-izing JobClient, doing this is relatively tricky.

          Owen O'Malley added a comment -

          I can't believe I'm saying this, but rather than putting a plugin there, isn't it easier to just use aspectj to instrument the Job API?

          Todd Lipcon added a comment -

          I can't believe we did this, but that's exactly what we did
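
          For illustration only, a minimal annotation-style AspectJ aspect along the lines discussed above: it intercepts old-API job submission to capture the job ID without modifying JobClient. The aspect and pointcut are a sketch, not code from this issue.

            import org.apache.hadoop.mapred.RunningJob;
            import org.aspectj.lang.annotation.AfterReturning;
            import org.aspectj.lang.annotation.Aspect;

            @Aspect
            public class JobSubmissionTracker {
              // Runs after JobClient.submitJob(...) returns, without touching JobClient itself.
              @AfterReturning(
                  pointcut = "call(* org.apache.hadoop.mapred.JobClient.submitJob(..))",
                  returning = "job")
              public void jobSubmitted(RunningJob job) {
                // Hand the job ID off to the thread/process that needs to follow along.
                System.out.println("Submitted job: " + job.getID());
              }
            }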

          Amareshwari Sriramadasu added a comment -

          Attaching an early patch for review.

          Patch does the following:
          1. Deprecates RunningJob api. Job should be used instead. Made sure that job has all methods that RunningJob has.
          2. Ports the existing JobClient functionality to new api.
          This made org.apache.hadoop.mapred.HistoryViewer, org.apache.hadoop.mapred.LocalJobRunner, QueueAclsInfo and JobSubmissionProtocol classes public.
          Modified JobSubmissionProtocol to use new api.

          Owen, can you please look at the patch once and let me know if you want to add/remove/change any api?

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12416042/patch-777.txt
          against trunk revision 802645.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 2272 javac compiler warnings (more than the trunk's current 2232 warnings).

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/461/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/461/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/461/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/461/console

          This message is automatically generated.

          Amareshwari Sriramadasu added a comment -

          Patch updated with trunk

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12416762/patch-777-1.txt
          against trunk revision 804865.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 2272 javac compiler warnings (more than the trunk's current 2232 warnings).

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/485/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/485/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/485/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/485/console

          This message is automatically generated.

          Amareshwari Sriramadasu added a comment -

          Test failure TestRecoveryManager is not related to the patch. It is due to MAPREDUCE-880

          Amareshwari Sriramadasu added a comment -

          -1 javac. Looks spurious. I don't see any new javac warnings introduced in the patch.

          Amareshwari Sriramadasu added a comment -

          Cancelling patch to incorporate offline comments from Amar
          Comments include:
          1. Introduce Counters.downgrade() instead of constructor
          2.

          +      org.apache.hadoop.mapreduce.JobClient.TaskStatusFilter newFilter = 
          +        getNewFilter(filter);
          +      printTaskEvents(events, newFilter, profiling, mapRanges, reduceRanges);
          

          Use getNewFilter directly.

          3. Deprecate public methods in JobTracker that were changed for the new JobSubmissionProtocol
          4. Move Counters(org.apache.hadoop.mapred.Counters counters) to a method in old api

          Amareshwari Sriramadasu added a comment -

          Patch incorporating review comments, except comment (4):

          Move Counters(org.apache.hadoop.mapred.Counters counters) to a method in old api

          This needs the CounterGroup constructor(s) to be made public, so I did not move this method to the old API.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12416974/patch-777-2.txt
          against trunk revision 805324.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 2272 javac compiler warnings (more than the trunk's current 2232 warnings).

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/494/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/494/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/494/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/494/console

          This message is automatically generated.

          Tom White added a comment -

          I think we can support the use case Todd mentioned above with a relatively small change (and without resorting to aspects). Change the JobClient constructors to static factory methods:

          public JobClient() { ... }
          
          public JobClient(Configuration conf) throws IOException { ... }
          

          becomes

          public static JobClient get() { ... }
          
          public static JobClient get(Configuration conf) throws IOException { ... }
          

          Then at a later point we can change the implementation of the static methods to return a custom implementation of JobClient, without having to change JobClient's API.
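
          A hedged sketch of how such a factory could later be made configurable, which would also cover Todd's interposition use case. The conf key name is invented for illustration, and a real implementation would still need to initialize the returned client.

            import java.io.IOException;
            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.mapred.JobClient;
            import org.apache.hadoop.util.ReflectionUtils;

            public class JobClients {
              /** Picks the JobClient implementation from the configuration. */
              public static JobClient get(Configuration conf) throws IOException {
                // "mapred.jobclient.impl.class" is an illustrative key, not an existing one.
                Class<? extends JobClient> clazz = conf.getClass(
                    "mapred.jobclient.impl.class", JobClient.class, JobClient.class);
                // Instantiated via the no-arg constructor; the caller still has to initialize it.
                return ReflectionUtils.newInstance(clazz, conf);
              }
            }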

          Owen O'Malley added a comment -

          I'm not happy with this patch. I need to go through it in more depth, but:

          1. The setters mostly look right, although some of them are missing the assertion that the job is in the setup phase.

          2. The getters should move to JobContext.

          3. I think JobClient is a bad name for the job browser. Something like JobBrowser is probably clearer.

          Owen O'Malley added a comment -

          This is closer to what I had in mind. I think we need to take this chance to do a major clean up of the interface.

          Arun C Murthy added a comment -

          +1

          Philip Zeyliger added a comment -

          Overall, +1 on having this interface! Some thoughts:

          • Can getReasonForBlackList return an enum?
          • Is there a reason why getJobs returns Job[] and not Collection<Job>?
          • It seems like people may want to push filters down in getJobs.
          • Instead of get(Map,Reduce,SetupAndCleanup)TaskReports, should that just be a getTaskReport(TaskType)? The number of task types is likely to increase.

          – Philip

          Amareshwari Sriramadasu added a comment -

          Some questions/comments on proposed api:
          1.

          +
          +  /* mapred.Queue.QueueState needs to extend this class */
          +  public static enum QueueState {
          +    STOPPED("stopped"), RUNNING("running");
          

          An enum cannot be extended in Java. Owen, do you mean I should wrap this in a class?

          2.

            public Job getJob(JobID job) throws IOException { return null; }

          For returning a Job handle from a JobID, we should find a way to get the configuration of the job through JobSubmissionProtocol.

          3.

          + public QueueInfo getQueue(String name) throws IOException { return null; }

          + public Metrics getClusterStatus() throws IOException { return null; }

          For these APIs, JobSubmissionProtocol should be modified to return QueueInfo and Metrics, instead of the bigger objects JobQueueInfo and ClusterStatus, right?

          Arun C Murthy added a comment -

          # Instead of get(Map,Reduce,SetupAndCleanup)TaskReports, should that just be a getTaskReport(TaskType)? The number of task types is likely to increase.

          +1

          Sreekanth Ramakrishnan added a comment -

          With respect to

            public class QueueInfo {
              String getName() { return null; }
              String getSchedulingInfo() throws IOException { return null; }
              Job[] getJobs(int maxJobs) throws IOException { return null; }
              QueueState getState() throws IOException { return null; }
            }
          

          Currently the class org.apache.hadoop.mapred.JobQueueInfo is a client-only view of the information pertaining to the queue and is not used in the framework for any other purpose. Why don't we reuse it instead of creating a new class?

          Also, in the framework the concept of a queue is nothing but a tag associated with a Job; some schedulers need not honor the queue and can store jobs in a single queue rather than in separate queues. Are we planning to change that?

          Then, sending a list of jobs for every client request might not be required. There are currently two queue commands: queue -list, which prints the list of queues and their associated scheduling information, and queue -info <queuename> [-listjobs], where listing jobs is optional. With the proposed API we might end up sending the list of jobs every time even though the client does not request it.

          Finally, MAPREDUCE-853 is introducing a hierarchy of queues, and we should also try to handle those scenarios in this JIRA.

          Amareshwari Sriramadasu added a comment -

          Attaching an early patch for review.

          Patch adds the public classes org.apache.hadoop.mapreduce.Cluster and org.apache.hadoop.mapreduce.CLI.
          Moves org.apache.hadoop.mapred.JobSubmissionProtocol to the public org.apache.hadoop.mapreduce.ClientProtocol.

          Cluster maintains the life-cycle of the RPC proxy. An instance of Cluster creates the RPC proxy, and Cluster.close() should be called to stop the proxy.

          Job is passed a handle to the Cluster, which it uses to submit the job and query its status.
          All RunningJob methods are added to org.apache.hadoop.mapreduce.Job.
          Moved the job submission code to the private class JobSubmitter.

          org.apache.hadoop.mapreduce.CLI implements Tool and provides the functionality of the bin/hadoop job option.

          I'm still working on changing the old client to use the new code and removing duplication.
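
          To make the shape of the new client concrete, a minimal sketch of finding and tracking a job submitted by another process through the Cluster handle described above. It assumes the Cluster(Configuration) constructor and the getJob(JobID)/close() methods exactly as described in this comment; names may differ in the final patch.

            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.mapreduce.Cluster;
            import org.apache.hadoop.mapreduce.Job;
            import org.apache.hadoop.mapreduce.JobID;

            public class TrackForeignJob {
              public static void main(String[] args) throws Exception {
                Cluster cluster = new Cluster(new Configuration());
                try {
                  // Look up a job launched by some other process, e.g. "job_200909140915_0001".
                  Job job = cluster.getJob(JobID.forName(args[0]));
                  if (job == null) {
                    System.err.println("No such job: " + args[0]);
                    return;
                  }
                  System.out.println("map progress:    " + job.mapProgress());
                  System.out.println("reduce progress: " + job.reduceProgress());
                  System.out.println("complete:        " + job.isComplete());
                } finally {
                  cluster.close();  // stops the underlying RPC proxy
                }
              }
            }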

          Philip Zeyliger added a comment -

          Took a quick pass at your patch. Some comments, mostly documentation-related.

          + static Counters downgrade(org.apache.hadoop.mapreduce.Counters counters) {

          You might have some JavaDoc for this method. Also, variables would be clearer if everything were old_counter and new_counter, since it's hard to keep track of what's what.

          ClientProtocol

          Are we settled on the name ClientProtocol? It's quite generic sounding, and, without the package, hard to decipher. Since these protocols will be the names of the public-ish wire APIs, perhaps JobClientProtocol would be more descriptive?

          +public class CLI extends Configured implements Tool {

          Some of Hadoop uses apache.commons.cli to parse command line arguments. (And there's CLI2 too, referred to in Maven, though I don't see any usages of it.) You might consider using a command-line parsing library.

          You might also consider splitting up the run() method into separate methods (even classes) for each piece of functionality. This will make it much easier to test, and easier to parse, too.

          +public interface ClientProtocol extends VersionedProtocol {

          In the javadoc here documenting the history of this protocol, you might mention the rename.

          "Changed protocol to use new api"

          This is not very descriptive for someone unfamiliar with this ticket.

          Cheers,

          – Philip

          Arun C Murthy added a comment -

          I think this is on the right track... I'm happy to see this shaping up well!

          I'm a little unsure about JobSubmitter being a separate class; since a Job can 'submit' itself (o.a.h.mapreduce.Job.submit), it seems to me it shouldn't need another class (JobSubmitter) for that functionality. Maybe JobSubmitter (or JobSubmissionHelper) should have only static methods? Anyway, it's a minor issue. Thoughts?

          Arun C Murthy added a comment -

          In general, we should look at this as an opportunity to clean up the job-submission interface (currently JobClient) and the goal is not to be compatible on a feature-by-feature basis. I'll try and take a closer look at the interfaces added to org.apache.hadoop.mapreduce.Job soon, but I thought I should spell out the underlying vision.

          Amareshwari Sriramadasu added a comment -

          Patch changing the old JobClient to use the code in the mapreduce package. Now the old JobClient is just a wrapper for the public methods.
          Patch also incorporates some of the comments from Philip.

          Are we settled on the name ClientProtocol? It's quite generic sounding, and, without the package, hard to decipher. Since these protocols will be the names of the public-ish wire APIs, perhaps JobClientProtocol would be more descriptive.

          The protocol has more than Job; it also has methods to access the cluster.

          Some of Hadoop uses apache.commons.cli to parse command line arguments. (And there's CLI2 too, referred to in Maven, though I don't see any usages of it. You might consider using a command-line parsing library.

          I also thought of this, but it can be done in a separate JIRA.

          I'm a little unsure about JobSubmitter being a separate class, seems to me that since a Job can 'submit' itself

          Since the submission code was huge, I moved it to this private class.
          Moved the old submission code (writing old splits) here as well, so that the old JobClient doesn't need to know about any submission logic.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12419048/patch-777-4.txt
          against trunk revision 812546.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 36 new or modified tests.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.

          -1 javac. The applied patch generated 2292 javac compiler warnings (more than the trunk's current 2236 warnings).

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          -1 release audit. The applied patch generated 221 release audit warnings (more than the trunk's current 220 warnings).

          -1 core tests. The patch failed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/53/testReport/
          Release audit warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/53/artifact/trunk/patchprocess/releaseAuditDiffWarnings.txt
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/53/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/53/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/53/console

          This message is automatically generated.

          Amareshwari Sriramadasu added a comment -

          The -1 overall is because of some Hudson errors. Will re-submit the patch.

          Amareshwari Sriramadasu added a comment -

          Patch fixes a couple of bugs in LocalJobRunner and JobTracker.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12419169/patch-777-5.txt
          against trunk revision 813308.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 39 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 2292 javac compiler warnings (more than the trunk's current 2236 warnings).

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/55/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/55/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/55/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/55/console

          This message is automatically generated.

          Amareshwari Sriramadasu added a comment -

          Making the old JobClient set mapred.mapper.new-api and mapred.reducer.new-api to false if they are not already set.
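
          A small sketch of the guard described above; the helper class and method are invented for illustration, but the property names are the ones from this comment.

            import org.apache.hadoop.conf.Configuration;

            class NewApiFlags {
              // Force the old-API code path only when the user has not set the flags explicitly.
              static void setOldApiDefaults(Configuration conf) {
                if (conf.get("mapred.mapper.new-api") == null) {
                  conf.setBoolean("mapred.mapper.new-api", false);
                }
                if (conf.get("mapred.reducer.new-api") == null) {
                  conf.setBoolean("mapred.reducer.new-api", false);
                }
              }
            }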

          Amareshwari Sriramadasu added a comment -

          Removing some debugging log messages from earlier patch.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12419295/patch-777-7.txt
          against trunk revision 813660.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 36 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 2292 javac compiler warnings (more than the trunk's current 2236 warnings).

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/66/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/66/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/66/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/66/console

          This message is automatically generated.

          Philip Zeyliger added a comment -

          I may be crazy to harp on this, but "ClientProtocol" still reads as very generic to me. Perhaps JobTrackerClientProtocol, to at least indicate one of the components involved?

          – Philip

          Amareshwari Sriramadasu added a comment -

          Patch updated with trunk

          Arun C Murthy added a comment -

          Review comments:

          1. As far as possible we shouldn't expose old interfaces through the new ones (e.g. ClusterStatus, JobStatus, TaskReport) etc. in ClientProtocol, Cluster etc. I'm still debating if we should deprecate JobStatus/TaskReport and replace them with newer ones in org.apache.hadoop.mapreduce and making the old ones derive from the new ones. Maybe it's beyond the scope of this jira
          2. Cluster shouldn't expose getClient() api, that is a hack. We should have everyone using public stable apis on Cluster - if necessary JobClient should construct old interface return values (ClusterStatus) from Cluster.
          3. ClientProtocol.getClusterStatus should return Cluster.Metrics
          4. Cluster shouldn't have getUGI interface
          5. Move Cluster.{QueueState|QueueInfo} to separate files

          6. JobClient should have @deprecated javadoc and it should point users to Job and Cluster
          7. Job has too many new constructors, we should minimize them as far as possible
          8. Job's constructor always makes 'new JobConf(jobConf)', that seems undesirable in several cases - Owen?
          Owen O'Malley added a comment - - edited

          As far as possible we shouldn't expose old interfaces through the new ones (e.g. ClusterStatus, JobStatus, TaskReport) etc. in ClientProtocol, Cluster etc. I'm still debating if we should deprecate JobStatus/TaskReport and replace them with newer ones in org.apache.hadoop.mapreduce and making the old ones derive from the new ones. Maybe it's beyond the scope of this jira

          We can't expose the old classes in the new API. In particular, it is critical that we can have clients just use the new API with no references to the mapred package. To do that, as Arun says, we need to move the functionality into the new API and have the old API extend the new classes (and be deprecated). I think that has to be in the scope of this jira since this is the jira that is adding them to the new API.
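
          For illustration, a hedged sketch of the compatibility approach described here, with an old-API class reduced to a deprecated shim over its new-API counterpart (the class body is an assumption, not taken from the patch):

            package org.apache.hadoop.mapred;

            /** Old-API JobStatus, kept only for compatibility. */
            @Deprecated
            public class JobStatus extends org.apache.hadoop.mapreduce.JobStatus {
              // Accessors are inherited from the new class; any old-only helpers would
              // delegate to it rather than duplicating state.
            }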

          Owen O'Malley added a comment -

          8. Job's constructor always makes 'new JobConf(jobConf)', that seems undesirable in several cases - Owen?

          It should probably just say that the JobConf is cloned before changes are made.

          Arun C Murthy added a comment -

          Also, as a part of this jira we should drop JobProfile from ClientProtocol and just have ClientProtocol.getJobStatus() which has all the requisite info.

          Amareshwari Sriramadasu added a comment -

          Patch with review comments incorporated

          Amareshwari Sriramadasu added a comment -

          Uploaded Patch incorporates all the review comments.
          Also makes methods in ClientProtocol throw InterruptedException

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12419739/patch-777-9.txt
          against trunk revision 815628.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 39 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The patch appears to cause tar ant target to fail.

          -1 findbugs. The patch appears to cause Findbugs to fail.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/85/testReport/
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/85/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/85/console

          This message is automatically generated.

          Amareshwari Sriramadasu added a comment -

          Uploaded Patch incorporates all the review comments.
          Also makes methods in ClientProtocol throw InterruptedException

          Tom White added a comment -
          • The number of classes in the old org.apache.hadoop.mapred package is very large and daunting for users of MapReduce. We should only add classes to the new org.apache.hadoop.mapreduce if they are a part of the core public API for MapReduce. Internal classes with public visibility belong in another package. On this basis I would suggest moving (by analogy with HDFS packaging)
            • CLI to org.apache.hadoop.mapreduce.tools
            • ClientProtocol to org.apache.hadoop.mapreduce.protocol
          • The Job constructors should be changed to be static factory methods to make Job submission more flexible in future (see https://issues.apache.org/jira/browse/MAPREDUCE-777?focusedCommentId=12746014&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12746014)
          • Some of the value object classes have setters even though their state should only be read by user code. These are: JobStatus, QueueAclsInfo, QueueInfo, TaskCompletionEvent, TaskReport. These should be made immutable, or have package-private or protected setters.
          • Cluster has a mixture of methods that return arrays and those that return collections. Can we change them all to be consistent (preferably collections)?
          • Rename Cluster#getTTExpiryInterval() to the more readable getTaskTrackerExpiryInterval().
          • ClusterMetrics#getDeccommisionTrackers() is misspelled and should be getDecommissionedTaskTrackers(). Similarly change the instance variable numDecommisionedTrackers to numDecommissionedTrackers (double 's'). In fact, the get*TaskTrackers() methods would be better called get*TaskTrackerCount() since they don't return tasktracker objects, but a count of those objects.
          • CLI's usage string refers to JobClient.
          • JobStatus's javadoc refers to JobProfile, which is in the mapred package so we probably don't want to refer to it.
          • All public classes need javadoc to explain their role.
          Amareshwari Sriramadasu added a comment -

          Uploaded Patch incorporates all the review comments.
          Also makes methods in ClientProtocol throw InterruptedException

          Amareshwari Sriramadasu added a comment -

          Please ignore "Uploaded patch...." comment. That looks like a browser problem

          Amareshwari Sriramadasu added a comment -

          Patch with Tom's comments incorporated.

          Tom White added a comment -

          Thanks Amareshwari.

          ClusterMetrics#getDeccommisionedTaskTrackerCount() should be getDecommissionedTaskTrackerCount() (single 'c', double 's').

          Amareshwari Sriramadasu added a comment -

          Sorry! I missed that again. Corrected the spelling now.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12419840/patch-777-11.txt
          against trunk revision 815628.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 39 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 2289 javac compiler warnings (more than the trunk's current 2235 warnings).

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/93/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/93/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/93/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/93/console

          This message is automatically generated.

          Amareshwari Sriramadasu added a comment -

          Patch updated with the trunk

          Arun C Murthy added a comment -

          This is close...

          1. We should at least open a new jira for more tests: TestCluster, TestJob, etc. I see you've converted TestJobClient.
          2. o.a.h.mapreduce.JobStatus.{RUNNING|SUCCEEDED|...} etc. should be an enum, and it should have an 'int getValue()' which returns values compatible with o.a.h.mapred.JobStatus.{RUNNING|SUCCEEDED|...} (a sketch follows this list).
          3. Typo: arreyToBlackListInfo
          4. Job has a copy-paste javadoc which mentions NetworkedJob
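          A hedged sketch of item 2, showing the shape of an enum whose getValue() stays compatible with the old integer constants. The name ExampleState is a placeholder, not the committed enum:

            // Hypothetical enum name; the values are taken from the old constants so
            // getValue() stays compatible with o.a.h.mapred.JobStatus.
            public enum ExampleState {
              RUNNING(org.apache.hadoop.mapred.JobStatus.RUNNING),
              SUCCEEDED(org.apache.hadoop.mapred.JobStatus.SUCCEEDED),
              FAILED(org.apache.hadoop.mapred.JobStatus.FAILED),
              PREP(org.apache.hadoop.mapred.JobStatus.PREP),
              KILLED(org.apache.hadoop.mapred.JobStatus.KILLED);

              private final int value;

              ExampleState(int value) {
                this.value = value;
              }

              /** The integer value used by the old o.a.h.mapred.JobStatus constants. */
              public int getValue() {
                return value;
              }
            }
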
          Arun C Murthy added a comment -

          5. Queue.getJobQueueInfo calls Enum.name() (state.name()); should it use Enum.toString() instead?
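          For reference, a small self-contained example of the difference being pointed out: name() always returns the declared constant, while toString() may be overridden to a display form. The QueueState enum here is hypothetical:

            public class EnumNameVsToString {

              // Hypothetical enum standing in for the queue state under discussion.
              enum QueueState {
                RUNNING, STOPPED;

                @Override
                public String toString() {
                  return name().toLowerCase();   // display form, e.g. "running"
                }
              }

              public static void main(String[] args) {
                QueueState state = QueueState.RUNNING;
                System.out.println(state.name());       // always "RUNNING"
                System.out.println(state.toString());   // "running" with the override above
              }
            }
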

          Amareshwari Sriramadasu added a comment -

          Patch updated with the comments

          Iyappan Srinivasan added a comment -

          +1 from QA

          Tested all options of bin/hadoop job and checked them for correctness, and did the same for all options of bin/hadoop queue.

          1) JobClient <command> <args>
          [-submit <job-file>]
          [-status <job-id>]
          [-counter <job-id> <group-name> <counter-name>]
          [-kill <job-id>]
          [-set-priority <job-id> <priority>]. Valid values for priorities are: VERY_HIGH HIGH NORMAL LOW VERY_LOW
          -events <job-id> <from-event-#> <#-of-events>
          [-history <jobOutputDir>]
          [-list [all]]
          [-list-active-trackers]
          [-list-blacklisted-trackers]
          [-list-attempt-ids <job-id> <task-type> <task-state>]

          [-kill-task <task-attempt-id>]
          [-fail-task <task-attempt-id>]

          Generic options supported are
          -conf <configuration file>
          -D <property=value>
          -fs <local|namenode:port>
          -jt <local|jobtracker:port>
          -files <comma separated list of files>
          -libjars <comma separated list of jars>
          -archives <comma separated list of archives>

          For the scheduler, I started a capacity scheduler with the Linux task controller, with different queues and different permissions for different users.

          2) bin/hadoop queue <command> <args>
          [-list]
          [-info <job-queue-name> [-showJobs]]
          [-showacls]

          Generic options supported are
          -conf <configuration file> specify an application configuration file
          -D <property=value> use value for given property
          -fs <local|namenode:port> specify a namenode
          -jt <local|jobtracker:port> specify a job tracker
          -files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
          -libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
          -archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute
          machines.

          Other test scenarios were: testing it with JT restart, a running job, a waiting job, etc. The values can be seen by setting mapred.job.tracker.retire.jobs to false.

          Raised these generic bugs/improvements, which were already present in trunk before this patch:

          1) MAPREDUCE-983 bin/hadoop job -fs file:///sdsdad -list still works. It is not picking up the latest fs input

          2) MAPREDUCE-984 bin/hadoop job -kill command says "job successfully killed" even though the job has retired

          3) MAPREDUCE-985 job -kill-task <task-id> and -fail-task <task-id> are not task-ids, they are attempt-ids

          4) MAPREDUCE-994 bin/hadoop job -counter help options do not give information on permissible values.

          5) MAPREDUCE-993 bin/hadoop job events <jobid> <from-event#> <#-of-events> help message is confusing

          6) MAPREDUCE-992 bin/hadoop job -events <jobid> gives event links which do not work.

          Arun C Murthy added a comment -

          +1

          I'd commit this once Hudson gives it the once-over.

          Tom White added a comment -

          More comments, mainly naming:

          • There are occurrences of both "blacklist" and "blackList" in the public API (e.g. TaskTrackerInfo#getReasonForBlackList() and getBlacklistReport()). Either is correct since the word may be spelled as "blacklist" or "black list", but we need to be consistent throughout.
          • Cluster#getFs() would be better as getFileSystem() (particularly with the debate in HADOOP-6223). Also it would be good to have javadoc describing the fact it is returning the file system where job-specific files are placed.
          • JobStatus#{setup,map,reduce,cleanup}Progress() would be better as getters to be consistent with the rest of the class (a sketch follows this list).
          • TaskCompletionEvent#getTaskAttemptID() should be getTaskAttemptId() to be consistent with getEventId().
          • TaskCompletionEvent#setTaskID() should be setTaskAttemptId().
          • TaskReport's method names should be made consistent with this convention too.
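          A sketch of the naming convention these comments converge on; the interface and its method names are illustrative only, not the committed signatures:

            // Illustrative only: getter-style progress accessors and "Id" spelled
            // consistently with getEventId().
            public interface StatusNamingSketch {
              float getSetupProgress();
              float getMapProgress();
              float getReduceProgress();
              float getCleanupProgress();

              int getEventId();
              // getTaskAttemptId() rather than getTaskAttemptID(), and
              // setTaskAttemptId() rather than setTaskID()
            }
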
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12419858/patch-777-13.txt
          against trunk revision 816147.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 39 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The patch appears to cause tar ant target to fail.

          -1 findbugs. The patch appears to cause Findbugs to fail.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h3.grid.sp2.yahoo.net/43/testReport/
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h3.grid.sp2.yahoo.net/43/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h3.grid.sp2.yahoo.net/43/console

          This message is automatically generated.

          Amareshwari Sriramadasu added a comment -

          Patch incorporating comments.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12419878/patch-777-14.txt
          against trunk revision 816240.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 39 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The patch appears to cause tar ant target to fail.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          -1 contrib tests. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h3.grid.sp2.yahoo.net/45/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h3.grid.sp2.yahoo.net/45/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h3.grid.sp2.yahoo.net/45/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h3.grid.sp2.yahoo.net/45/console

          This message is automatically generated.

          Amareshwari Sriramadasu added a comment -

          MAPREDUCE-907 broke contrib tests, since InterruptedException was not handled.
          Patch throws out InterruptedException.

          Hemanth Yamijala added a comment -

          Amareshwari, the patch queue is getting longer. Can you please start a parallel run locally and post results?

          Amareshwari Sriramadasu added a comment -

          The -1 on javac is because of the deprecated classes.

          All contrib tests passed on my machine.

          Amareshwari Sriramadasu added a comment -

          Found one more place where InterruptedException is not handled. Strangely, it does not result in a compilation failure, but it does fail javac.

          Running test-patch and ant test locally, will update the results.

          Amareshwari Sriramadasu added a comment -

          test-patch result:

               [exec]
               [exec] -1 overall.
               [exec]
               [exec]     +1 @author.  The patch does not contain any @author tags.
               [exec]
               [exec]     +1 tests included.  The patch appears to include 39 new or modified tests.
               [exec]
               [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.
               [exec]
               [exec]     -1 javac.  The applied patch generated 2289 javac compiler warnings (more than the trunk's current 2235 warnings).
               [exec]
               [exec]     +1 findbugs.  The patch does not introduce any new Findbugs warnings.
               [exec]
               [exec]     +1 release audit.  The applied patch does not increase the total number of release audit warnings.
               [exec]
               [exec]
          

          The -1 on javac is because of the deprecated classes.

          Amareshwari Sriramadasu added a comment -

          All core and contrib tests passed.

          Amareshwari Sriramadasu added a comment -

          Patch with a minor change. Instead of throwing InterruptedException from the sqoop method, I catch the exception and throw IOException, as suggested by Arun.
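          A minimal sketch of that catch-and-wrap pattern; the method names here are placeholders, not Sqoop's actual code:

            import java.io.IOException;

            public class InterruptedToIOExceptionSketch {

              // Placeholder for a job-monitoring call that now declares InterruptedException.
              static void waitForJob() throws IOException, InterruptedException {
                Thread.sleep(10);
              }

              // A caller that cannot change its signature wraps the InterruptedException.
              public static void runJob() throws IOException {
                try {
                  waitForJob();
                } catch (InterruptedException ie) {
                  Thread.currentThread().interrupt();   // preserve the interrupt status
                  throw new IOException("Interrupted while waiting for the job", ie);
                }
              }

              public static void main(String[] args) throws IOException {
                runJob();
              }
            }
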

          Arun C Murthy added a comment -

          I just committed this. Thanks, Amareshwari!

          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #49 (See http://hudson.zones.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/49/)
          MAPREDUCE-777. Brand new apis to track and query jobs as a replacement for JobClient. Contributed by Amareshwari Sriramadasu.

          Nigel Daley added a comment -

          So now everyone is going to get a -1 on javadoc? What's being done about this?

          Nigel Daley added a comment -

          Committers and contributors need to look at failures more closely. The reason contrib got a -1 is that this patch broke the Eclipse plugin build! See http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h3.grid.sp2.yahoo.net/45/console

          MAPREDUCE-1003 was filed and has now been fixed for this issue.


            People

            • Assignee: Amareshwari Sriramadasu
            • Reporter: Owen O'Malley
            • Votes: 0
            • Watchers: 20
