Hadoop Map/Reduce: MAPREDUCE-1229

[Mumak] Allow customization of job submission policy

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.21.0, 0.22.0
    • Fix Version/s: 0.21.0
    • Component/s: contrib/mumak
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Currently, Mumak replays job submissions faithfully. To make Mumak useful for evaluation purposes, it would be great if we could support other job submission policies, such as sequential job submission or stress job submission.

      1. mapreduce-1229-20091121.patch
        25 kB
        Hong Tang
      2. mapreduce-1229-20091123.patch
        25 kB
        Hong Tang
      3. mapreduce-1229-20091130.patch
        27 kB
        Hong Tang
      4. mapreduce-1229-20091201.patch
        25 kB
        Hong Tang

        Activity

        Hong Tang added a comment -

        Patch that implements three policies: REPLAY (same as the original code), SERIAL (submit jobs one by one), and STRESS (submit jobs until the cluster is saturated).
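        For illustration, a minimal sketch of the three policies as a Java enum. The name SimulatorJobSubmissionPolicy appears in the review comments below; the comment on each constant only restates the description above, and this is not the committed code.

        // Minimal sketch, not the committed implementation.
        public enum SimulatorJobSubmissionPolicy {
          REPLAY,  // submit jobs at the timestamps recorded in the trace
          SERIAL,  // submit the next job only after the previous job completes
          STRESS;  // keep submitting jobs until the cluster is saturated
        }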

        Hong Tang added a comment -

        Manual sanity check of the results: under REPLAY, the included 19 job traces complete in 2 hrs. Under SERIAL, it takes 7 hrs, and under STRESS, it takes 1.6 hrs (all in simulated time).

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12425724/mapreduce-1229-20091121.patch
        against trunk revision 882790.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 10 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        -1 findbugs. The patch appears to introduce 1 new Findbugs warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed core unit tests.

        -1 contrib tests. The patch failed contrib unit tests.

        Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h3.grid.sp2.yahoo.net/149/testReport/
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h3.grid.sp2.yahoo.net/149/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h3.grid.sp2.yahoo.net/149/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h3.grid.sp2.yahoo.net/149/console

        This message is automatically generated.

        Hong Tang added a comment -

        Fixed the findbugs warning. Resubmitting to Hudson.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12425876/mapreduce-1229-20091123.patch
        against trunk revision 883452.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 10 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed core unit tests.

        -1 contrib tests. The patch failed contrib unit tests.

        Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/261/testReport/
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/261/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/261/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/261/console

        This message is automatically generated.

        Dick King added a comment -

        1: Should TestSimulator*JobSubmission check to see whether the total "runtime" was reasonable for the Policy?

        2: minor nit: Should SimulatorJobSubmissionPolicy/getPolicy(Configuration) use valueOf(policy.toUpperCase()) instead of looping through the types?

        3: medium sized nit: in SimulatorJobClient.isOverloaded() there are two literals, 0.9 and 2.0F, that ought to be static private named values.

        4: Here is my biggest point. The existing code cannot submit a job more often than once every five seconds when the jobs are spaced further apart than that and the policy is STRESS.

        Please consider adding code to call the processLoadProbingEvent core code when we run processJobCompleteEvent or processJobSubmitEvent. That includes potentially adding a new LoadProbingEvent. This can lead to an accumulation because each LoadProbingEvent replaces itself, so we should track the ones that are in flight in a PriorityQueue and only add a new LoadProbingEvent when the new event has a time stamp strictly earlier than the earliest one already in flight. This will limit us to two events in flight with the current adjustLoadProbingInterval (a sketch of this bookkeeping follows below).

        If you don't do that, then if a real dreadnought of a job gets dropped into the system and the probing interval gets long, it could take us a while to notice that we're okay to submit jobs in the case where the job has many tasks finishing at about the same time, and we could submit tiny jobs one at a time every five seconds when the cluster is clear enough to accommodate lots of jobs. When the cluster can handle N jobs in less than 5N seconds for some N, we won't overload it with the existing code.
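        A rough sketch of that bookkeeping. LoadProbingEvent and processLoadProbingEvent are names taken from this comment; the tracker class, its timestamp-keyed queue, and the method names below are assumptions for illustration only.

        import java.util.PriorityQueue;

        // Sketch of the proposed bookkeeping: keep the firing times of
        // in-flight LoadProbingEvents in a priority queue, and only schedule a
        // new probe if it would fire strictly earlier than the earliest probe
        // already in flight. Class and method names are illustrative.
        class LoadProbeTracker {
          private final PriorityQueue<Long> inFlight = new PriorityQueue<>();

          // Returns true if a probe at probeTime should be scheduled.
          boolean maybeSchedule(long probeTime) {
            Long earliest = inFlight.peek();
            if (earliest != null && earliest <= probeTime) {
              return false;  // an earlier or equal probe is already pending
            }
            inFlight.add(probeTime);
            return true;
          }

          // Called when a probe fires; removes it from the in-flight set.
          void onProbeFired(long probeTime) {
            inFlight.remove(probeTime);
          }
        }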

        Hong Tang added a comment -

        Attached a new patch that addresses Dick's comments.

        1: Should TestSimulator*JobSubmission check to see whether the total "runtime" was reasonable for the Policy?

        Currently, each policy is tested as a separate test case. It may be hard to combine them and compare the virtual runtime, which is only present as console output. I did some basic sanity checks manually after the run.

        2: minor nit: Should SimulatorJobSubmissionPolicy/getPolicy(Configuration) use valueOf(policy.toUpperCase()) instead of looping through the types?

        Updated in the patch based on the suggestion.
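        For reference, a sketch of what the valueOf-based lookup might look like as a method on the enum; the config key "mumak.job-submission.policy" and the REPLAY default are assumptions, not necessarily what the committed patch uses.

        import org.apache.hadoop.conf.Configuration;

        // Sketch only: the config key and default value are assumptions.
        public static SimulatorJobSubmissionPolicy getPolicy(Configuration conf) {
          String policy = conf.get("mumak.job-submission.policy", "REPLAY");
          return SimulatorJobSubmissionPolicy.valueOf(policy.toUpperCase());
        }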

        3: medium sized nit: in SimulatorJobClient.isOverloaded() there are two literals, 0.9 and 2.0F, that ought to be static private named values.

        Added final variables to represent the magic constants, and added comments.

        4: Here is my biggest point. The existing code cannot submit a job more often than once every five seconds when the jobs are spaced further apart than that and the policy is STRESS.

        Please consider adding code to call the processLoadProbingEvent core code when we run processJobCompleteEvent or processJobSubmitEvent. That includes potentially adding a new LoadProbingEvent. This can lead to an accumulation because each LoadProbingEvent replaces itself, so we should track the ones that are in flight in a PriorityQueue and only add a new LoadProbingEvent when the new event has a time stamp strictly earlier than the earliest one already in flight. This will limit us to two events in flight with the current adjustLoadProbingInterval.

        If you don't do that, then if a real dreadnought of a job gets dropped into the system and the probing interval gets long, it could take us a while to notice that we're okay to submit jobs in the case where the job has many tasks finishing at about the same time, and we could submit tiny jobs one at a time every five seconds when the cluster is clear enough to accommodate lots of jobs. When the cluster can handle N jobs in less than 5N seconds for some N, we won't overload it with the existing code.

        I changed the minimum load probing interval to 1 second (from 5 seconds). Note that when a job is submitted, it can take a few seconds before the JT assigns the map tasks to TTs with free map slots, so reducing this interval further could lead to artificial load spikes.

        I also added load checks after each job completion; if the cluster is underloaded, we submit another job (and reset the load-checking interval to the minimum value). This does bring a potential danger: many jobs could happen to complete at the same time and inject a lot of jobs into the system. But I think that risk is fairly low, so I would not worry much about it.
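        A condensed sketch of that job-completion check. Apart from isOverloaded(), every name here is a hypothetical stand-in, not taken from the patch.

        // Condensed sketch of the load check on job completion; all names
        // except isOverloaded() are hypothetical stand-ins.
        class StressSubmissionSketch {
          static final long MIN_PROBING_INTERVAL_MS = 1000;  // the new 1-second floor
          long probingIntervalMs = MIN_PROBING_INTERVAL_MS;

          void onJobComplete() {
            if (!isOverloaded()) {
              submitNextJob();                              // inject another job now
              probingIntervalMs = MIN_PROBING_INTERVAL_MS;  // reset the interval
            }
          }

          boolean isOverloaded() { return false; }  // stands in for the real check
          void submitNextJob() {}                   // submits the next trace job
        }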

        Hong Tang added a comment -

        After sitting on it overnight, I think I can simplify isOverloaded() by eliminating the check on the occupied map slot percentage and relying mainly on the pending map task count, which seems to be updated by the job tracker right after job submission. This would allow me to ramp up the load on the cluster at a rapid rate of one job per millisecond without worrying about overshooting.

        Will upload a patch shortly.
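        Judging from the log lines in the next comment ("Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity"), the simplified check compares incomplete map tasks against twice the map slot capacity. A sketch, with the parameter and constant names assumed:

        // Sketch of the simplified check, inferred from the log lines below;
        // the parameter names and the ratio constant name are assumptions.
        static final float OVERLOAD_MAPTASK_MAPSLOT_RATIO = 2.0f;

        boolean isOverloaded(float incompleteMapTasks, int mapSlotCapacity) {
          return incompleteMapTasks > OVERLOAD_MAPTASK_MAPSLOT_RATIO * mapSlotCapacity;
        }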

        Hong Tang added a comment -

        New patch incorporates the ideas outlined in my previous comments.

        Hong Tang added a comment -

        Sample test output after the patch - as we can see, the jobs are submitted to the cluster within the first 20 ms.

        Job job_200904211745_0002 is submitted at 1259697521583
        1259697521583 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (20.0 <= 2.0*4635)
        Job job_200904211745_0003 is submitted at 1259697521584
        1259697521584 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (23.0 <= 2.0*4635)
        Job job_200904211745_0004 is submitted at 1259697521585
        1259697521585 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (154.0 <= 2.0*4635)
        Job job_200904211745_0005 is submitted at 1259697521586
        1259697521586 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (174.0 <= 2.0*4635)
        Job job_200904211745_0006 is submitted at 1259697521587
        1259697521587 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (209.0 <= 2.0*4635)
        Job job_200904211745_0007 is submitted at 1259697521588
        1259697521588 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (3413.0 <= 2.0*4635)
        Job job_200904211745_0008 is submitted at 1259697521589
        1259697521589 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (6617.0 <= 2.0*4635)
        Job job_200904211745_0009 is submitted at 1259697521590
        1259697521590 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (6718.0 <= 2.0*4635)
        Job job_200904211745_0010 is submitted at 1259697521591
        1259697521591 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (6719.0 <= 2.0*4635)
        Job job_200904211745_0011 is submitted at 1259697521592
        1259697521592 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (7219.0 <= 2.0*4635)
        Job job_200904211745_0012 is submitted at 1259697521593
        1259697521593 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (7220.0 <= 2.0*4635)
        Job job_200904211745_0013 is submitted at 1259697521594
        1259697521594 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (7240.0 <= 2.0*4635)
        Job job_200904211745_0015 is submitted at 1259697521595
        1259697521595 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (7241.0 <= 2.0*4635)
        Job job_200904211745_0014 is submitted at 1259697521596
        1259697521596 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (7242.0 <= 2.0*4635)
        Job job_200904211745_0016 is submitted at 1259697521597
        1259697521597 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (7742.0 <= 2.0*4635)
        Job job_200904211745_0018 is submitted at 1259697521598
        1259697521598 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (8242.0 <= 2.0*4635)
        Job job_200904211745_0019 is submitted at 1259697521599
        1259697521599 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (8243.0 <= 2.0*4635)
        Job job_200904211745_0017 is submitted at 1259697521600
        1259697521600 Overloaded is false: incompleteMapTasks <= 2.0*mapSlotCapacity (8244.0 <= 2.0*4635)
        Job job_200904211745_0020 is submitted at 1259697521601
        
        Dick King added a comment -

        +1

        I took a look at the revised patch [mapreduce-1229-20091201.patch] and I like it.

        -dk

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12426574/mapreduce-1229-20091201.patch
        against trunk revision 885530.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 10 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed core unit tests.

        -1 contrib tests. The patch failed contrib unit tests.

        Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/283/testReport/
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/283/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/283/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/283/console

        This message is automatically generated.

        Chris Douglas added a comment -

        +1

        I committed this. Thanks, Hong!

        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk-Commit #139 (See http://hudson.zones.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/139/)
        MAPREDUCE-1229. Allow customization of job submission policy in Mumak.
        Contributed by Hong Tang

        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk #162 (See http://hudson.zones.apache.org/hudson/job/Hadoop-Mapreduce-trunk/162/)
        MAPREDUCE-1229. Allow customization of job submission policy in Mumak.
        Contributed by Hong Tang


          People

          • Assignee: Hong Tang
          • Reporter: Hong Tang
          • Votes: 0
          • Watchers: 1
