Hadoop YARN / YARN-3733

Fix DominantRC#compare() does not work as expected if cluster resource is empty

    Details

    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      Steps to reproduce
      =================
1. Install HA with 2 RMs and 2 NMs (3072 MB * 2 total cluster)
2. Configure map and reduce task size to 512 MB, after changing the scheduler minimum allocation to 512 MB
3. Configure the Capacity Scheduler with an AM limit of 0.5 (DominantResourceCalculator is configured)
4. Submit 30 concurrent tasks
5. Switch the RM

      Actual
      =====
AMs get allocated for 12 jobs and all 12 start running.
No other YARN child container is launched; all 12 jobs stay in RUNNING state forever.

      Expected
      =======
Only 6 jobs should be running at a time, since the maximum AM allocation is 0.5 (3072 MB).
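The expected limit can be checked with a minimal arithmetic sketch (numbers taken from the steps above; the helper method is hypothetical, not YARN code):

```java
public class AmLimitMath {
    // Hypothetical helper: how many AMs fit under the AM-resource-percent limit.
    static int maxConcurrentAms(int clusterMb, double amPercent, int amSizeMb) {
        int amLimitMb = (int) (clusterMb * amPercent); // 6144 * 0.5 = 3072 MB
        return amLimitMb / amSizeMb;                   // 3072 / 512 = 6 AMs
    }

    public static void main(String[] args) {
        // Two NMs of 3072 MB each, AM limit 0.5, 512 MB AM containers.
        System.out.println(maxConcurrentAms(3072 * 2, 0.5, 512)); // prints 6
    }
}
```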

      1. 0001-YARN-3733.patch
        5 kB
        Rohith Sharma K S
      2. 0002-YARN-3733.patch
        10 kB
        Rohith Sharma K S
      3. 0002-YARN-3733.patch
        11 kB
        Rohith Sharma K S
      4. YARN-3733.patch
        6 kB
        Rohith Sharma K S

        Activity

        rohithsharma Rohith Sharma K S added a comment -

Verified the RM logs from Bibin A Chundatt offline. The sequence of events that occurred is:

1. 30 applications are submitted to RM1 concurrently: pendingApplications=18 and activeApplications=12. The active applications move to the RUNNING state.
2. RM1 switches to standby and RM2 transitions to the Active state; the currently active RM is RM2.
3. The previously submitted 30 applications start recovering. As part of the recovery process, all 30 applications are submitted to the scheduler and all of them become active, i.e. activeApplications=30 and pendingApplications=0, which is not expected to happen.
4. The NMs register with the RM, and the running AMs register with the RM.
5. Since all 30 applications are activated, the scheduler tries to launch the ApplicationMasters of all of them, occupying the full cluster capacity.

Basically, the AM limit check in LeafQueue#activateApplications is not working as expected for DominantResourceCalculator. To confirm this, I wrote a simple program exercising both the Default and Dominant resource calculators with the memory configuration below. The output of the program is:
For DefaultResourceCalculator, the result is false, which limits the applications being activated when the AM resource limit is exceeded.
For DominantResourceCalculator, the result is true, which allows all the applications to be activated even if the AM resource limit is exceeded.

        2015-05-28 14:00:52,704 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: application AMResource <memory:4096, vCores:1> maxAMResourcePerQueuePercent 0.5 amLimit <memory:0, vCores:0> lastClusterResource <memory:0, vCores:0> amIfStarted <memory:4096, vCores:1>
        
        package com.test.hadoop;
        
        import org.apache.hadoop.yarn.api.records.Resource;
        import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
        import org.apache.hadoop.yarn.util.resource.DominantResourceCalculator;
        import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
        import org.apache.hadoop.yarn.util.resource.Resources;
        
        public class TestResourceCalculator {
        
          public static void main(String[] args) {
            // Default Resource Allocator
            ResourceCalculator defaultResourceCalculator =
                new DefaultResourceCalculator();
        
            // Dominant Resource Allocator
            ResourceCalculator dominantResourceCalculator =
                new DominantResourceCalculator();
        
            Resource lastClusterResource = Resource.newInstance(0, 0);
            Resource amIfStarted = Resource.newInstance(4096, 1);
            Resource amLimit = Resource.newInstance(0, 0);
        
    // Expected false; actual is also false (correct)
            System.out.println("DefaultResourceCalculator : "
                + Resources.lessThanOrEqual(defaultResourceCalculator,
                    lastClusterResource, amIfStarted, amLimit));
        
    // Expected false, but actual is true for DominantResourceCalculator (the bug)
            System.out.println("DominantResourceCalculator : "
                + Resources.lessThanOrEqual(dominantResourceCalculator,
                    lastClusterResource, amIfStarted, amLimit));
        
          }
        }
        
        
        rohithsharma Rohith Sharma K S added a comment -

Steps to reproduce the scenario quickly. Assume the max AM resource percent is configured to 0.5 and the cluster capacity is 10 GB once an NM registers, so the max AM resource limit is 5 GB.

1. Start the RM configured with DominantResourceCalculator (don't start any NM in the cluster).
2. Submit 10 applications of 1 GB each; all 10 applications get activated.
3. Start an NM. The RM launches all 10 applications' AMs, the cluster is full, and it hangs forever.
  When no NM is registered, submitted applications should not be activated, i.e. they should not participate in scheduling.
        sunilg Sunil G added a comment -

Is this happening only in the case of DominantResourceCalculator?

In DominantResourceCalculator,

          protected float getResourceAsValue(
              Resource clusterResource, Resource resource, boolean dominant) {
            // Just use 'dominant' resource
            return (dominant) ?
                Math.max(
                    (float)resource.getMemory() / clusterResource.getMemory(), 
                    (float)resource.getVirtualCores() / clusterResource.getVirtualCores()
                    ) 
                :
                  Math.min(
                      (float)resource.getMemory() / clusterResource.getMemory(), 
                      (float)resource.getVirtualCores() / clusterResource.getVirtualCores()
                      ); 
          }
        

Here clusterResource.getMemory() is 0. Because the division is done in float, no exception is thrown, and the result is a completely wrong calculation. Hence, in LeafQueue#activateApplications, all apps get activated.
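The behaviour described above can be seen with plain floats (a standalone illustration, not YARN code): integer division by zero would throw, but float division silently yields Infinity or NaN, and NaN makes every ordered comparison false:

```java
public class FloatDivisionDemo {
    public static void main(String[] args) {
        float l = (float) 4096 / 0; // non-zero / zero -> Infinity, no exception
        float r = (float) 0 / 0;    // zero / zero     -> NaN, no exception

        System.out.println(Float.isInfinite(l)); // true
        System.out.println(Float.isNaN(r));      // true

        // NaN makes every ordered comparison false, so a limit check
        // such as "amIfStarted <= amLimit" can silently pass.
        System.out.println(r <= l); // false
        System.out.println(r > l);  // false
    }
}
```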

        bibinchundatt Bibin A Chundatt added a comment -

Sunil G, currently I have checked with DominantResourceCalculator only.
I will try the same with the Default calculator as well, but as per the comment from Rohith Sharma K S, this happens only in this case.

Rohith Sharma K S and Sunil G, thanks a lot for your efforts in finding the root cause.

        rohithsharma Rohith Sharma K S added a comment -

        Attached the patch fixing the issue. Kindly review the patch.

        hadoopqa Hadoop QA added a comment -



        -1 overall



        Vote Subsystem Runtime Comment
        0 pre-patch 16m 14s Pre-patch trunk compilation is healthy.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 tests included 0m 0s The patch appears to include 1 new or modified test files.
        +1 javac 7m 28s There were no new javac warning messages.
        +1 javadoc 9m 38s There were no new javadoc warning messages.
        +1 release audit 0m 22s The applied patch does not increase the total number of release audit warnings.
        +1 checkstyle 0m 53s There were no new checkstyle issues.
        +1 whitespace 0m 0s The patch has no lines that end in whitespace.
        +1 install 1m 35s mvn install still works.
        +1 eclipse:eclipse 0m 35s The patch built with eclipse:eclipse.
        +1 findbugs 1m 33s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        -1 yarn tests 1m 56s Tests failed in hadoop-yarn-common.
            40m 21s  



        Reason Tests
        Failed unit tests hadoop.yarn.util.resource.TestResourceCalculator



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12735960/YARN-3733.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / ae14543
        hadoop-yarn-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8119/artifact/patchprocess/testrun_hadoop-yarn-common.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/8119/testReport/
        Java 1.7.0_55
        uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/8119/console

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -



        +1 overall



        Vote Subsystem Runtime Comment
        0 pre-patch 16m 12s Pre-patch trunk compilation is healthy.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 tests included 0m 0s The patch appears to include 1 new or modified test files.
        +1 javac 7m 35s There were no new javac warning messages.
        +1 javadoc 9m 37s There were no new javadoc warning messages.
        +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
        +1 checkstyle 0m 54s There were no new checkstyle issues.
        +1 whitespace 0m 0s The patch has no lines that end in whitespace.
        +1 install 1m 34s mvn install still works.
        +1 eclipse:eclipse 0m 37s The patch built with eclipse:eclipse.
        +1 findbugs 1m 33s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        +1 yarn tests 1m 58s Tests passed in hadoop-yarn-common.
            40m 27s  



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12736037/YARN-3733.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / d725dd8
        hadoop-yarn-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8125/artifact/patchprocess/testrun_hadoop-yarn-common.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/8125/testReport/
        Java 1.7.0_55
        uname Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/8125/console

        This message was automatically generated.

        devaraj.k Devaraj K added a comment -

        Thanks Bibin A Chundatt, Rohith Sharma K S and Sunil G for reporting and fixing, appreciate your efforts.

        Some comments on the patch.

        1.

        +    if (Float.isNaN(l) && Float.isNaN(r)) {
        +      return 0;
        +    } else if (Float.isNaN(l)) {
        +      return -1;
        +    } else if (Float.isNaN(r)) {
        +      return 1;
        +    }
        +
        +    // TODO what if both l and r infinity? Should infinity compared? how?
        +
        

Here l and r are derived from lhs, rhs and clusterResource, which are not infinite. Can we check lhs/rhs for emptiness and compare them before ending up with infinite values?

2. The newly added code is duplicated in two places; can you eliminate the duplication?

3. In the test class, can you add a message to all assertEquals() calls using this API:

        Assert.assertEquals(String message, expected, actual)
        
        sunilg Sunil G added a comment -

In the current patch, the new check for Float.isNaN() is done after the call to getResourceAsValue. Hence, if clusterResource is 0 (for memory or for vcores), there is a chance that we get infinity.

So we may need options like:

• a) Verify infinity by calling isInfinite(float v). Quoting from JDK 7:
  isInfinite
  public static boolean isInfinite(float v)
  Returns true if the specified number is infinitely large in magnitude, false otherwise.
  
• b) Handle an exception for these cases. But this does not feel like a good option, as we may break backward compatibility.
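Applying option (a) to the values from the debug log earlier in this thread, a standalone sketch (the `unusable` guard is hypothetical, not from the patch): with lastClusterResource <0,0>, the per-resource ratios become Infinity or NaN and can be detected before comparing:

```java
public class RatioCheckDemo {
    // Hypothetical guard: is a ratio unusable for comparison?
    static boolean unusable(float v) {
        return Float.isNaN(v) || Float.isInfinite(v);
    }

    public static void main(String[] args) {
        float clusterMem = 0f;  // empty cluster (no NM registered)
        float amMem = 4096f;    // amIfStarted memory from the log
        float limitMem = 0f;    // amLimit memory from the log

        System.out.println(unusable(amMem / clusterMem));    // true (Infinity)
        System.out.println(unusable(limitMem / clusterMem)); // true (NaN)
        System.out.println(unusable(0.5f));                  // false (a real ratio)
    }
}
```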
        rohithsharma Rohith Sharma K S added a comment -

Thanks Devaraj K and Sunil G for the review.

Can we check for lhs/rhs emptiness and compare these before ending up with infinite values?

If we check for emptiness, this would affect specific input values like clusterResource=<0,0>, lhs=<1,1> and rhs=<2,2>. Then which one is considered dominant? Because the dominant component cannot be retrieved directly from memory or CPU alone.

And I listed the possible combinations of inputs that can occur in YARN. These are:

Sl.no  clusterResource  lhs                  rhs                  Remark
1      <0,0>            <0,0>                <0,0>                Valid input; handled
2      <0,0>            <positive,positive>  <0,0>                NaN vs Infinity: the patch handles this scenario
3      <0,0>            <0,0>                <positive,positive>  NaN vs Infinity: the patch handles this scenario
4      <0,0>            <positive,positive>  <positive,positive>  Infinity vs Infinity: can this occur in YARN?
5      <0,0>            <positive,0>         <0,positive>         Is this a valid input? Can it occur in YARN?
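Scenarios 2 and 4 from the table can be reproduced with plain float ratios (a standalone sketch; `ratio` is a hypothetical stand-in for the dominant-share computation, not the YARN method):

```java
public class ScenarioDemo {
    // Dominant share against a cluster: max of the two per-resource ratios.
    static float ratio(float mem, float cpu, float clusterMem, float clusterCpu) {
        return Math.max(mem / clusterMem, cpu / clusterCpu);
    }

    public static void main(String[] args) {
        // Scenario 2: lhs <4096,1> vs rhs <0,0> on cluster <0,0>
        float l2 = ratio(4096, 1, 0, 0); // Infinity
        float r2 = ratio(0, 0, 0, 0);    // NaN
        System.out.println(Float.isInfinite(l2) + " " + Float.isNaN(r2));

        // Scenario 4: lhs <2,2> vs rhs <3,2> on cluster <0,0>
        float l4 = ratio(2, 2, 0, 0);    // Infinity
        float r4 = ratio(3, 2, 0, 0);    // Infinity
        System.out.println(l4 == r4);    // true: dominance cannot be decided
    }
}
```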
        rohithsharma Rohith Sharma K S added a comment -

        2. The newly added code is duplicated in two places, can you eliminate the duplicate code?

The second validation is not required in the NaN case; I will remove it in the next patch.

        rohithsharma Rohith Sharma K S added a comment -

        Verify infinity by calling isInfinite(float v). Quoting from jdk7

Since the infinity is derived from lhs and rhs, the two infinities cannot be differentiated for clusterResource=<0,0>, lhs=<1,1> and rhs=<2,2>: getResourceAsValue() returns infinity for both l and r, which cannot be compared.

        rohithsharma Rohith Sharma K S added a comment -

This fix needs to go into 2.7.1. Updated the target version to 2.7.1.

        sunilg Sunil G added a comment -

I feel "clusterResource=<0,0>, lhs=<1,1> and rhs=<2,2>" may happen, but we cannot differentiate which is the bigger infinity here, and that is not correct. Could we check for clusterResource=<0,0> prior to the getResourceAsValue() call and handle it from there?

        rohithsharma Rohith Sharma K S added a comment -

        Updated the summary as per defect.

        rohithsharma Rohith Sharma K S added a comment -

Updated the patch: it fixes the 2nd and 3rd scenarios from the table above (the scenarios of this issue) and refactors the test code.

As an overall solution covering input combinations like the 4th and 5th rows of the table, we need to explore further how to define the fraction and how to decide which resource is dominant. Any suggestions on this?

        sunilg Sunil G added a comment -

Hi Rohith Sharma K S,
Thanks for the detailed scenarios.

Scenario 4 is possible, correct? clusterResource=<0,0>: lhs=<2,2> and rhs=<3,2>.

Currently getResourceAsValue returns the max ratio of mem/vcores in the dominant case, else the min ratio.
If clusterResource is 0, could we directly return the max of mem/vcores in the dominant case, and the min otherwise? This would need a better algorithm once more resource types come in.
This is not completely perfect, as we treat memory and vcores leniently. Please share your thoughts.

        hadoopqa Hadoop QA added a comment -



        +1 overall



        Vote Subsystem Runtime Comment
        0 pre-patch 16m 6s Pre-patch trunk compilation is healthy.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 tests included 0m 0s The patch appears to include 1 new or modified test files.
        +1 javac 7m 33s There were no new javac warning messages.
        +1 javadoc 9m 36s There were no new javadoc warning messages.
        +1 release audit 0m 22s The applied patch does not increase the total number of release audit warnings.
        +1 checkstyle 0m 54s There were no new checkstyle issues.
        +1 whitespace 0m 0s The patch has no lines that end in whitespace.
        +1 install 1m 33s mvn install still works.
        +1 eclipse:eclipse 0m 32s The patch built with eclipse:eclipse.
        +1 findbugs 1m 33s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        +1 yarn tests 1m 57s Tests passed in hadoop-yarn-common.
            40m 10s  



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12736802/0001-YARN-3733.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / 990078b
        hadoop-yarn-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8166/artifact/patchprocess/testrun_hadoop-yarn-common.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/8166/testReport/
        Java 1.7.0_55
        uname Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/8166/console

        This message was automatically generated.

        leftnoteasy Wangda Tan added a comment -

        Took a look at the patch and discussion. Thanks for working on this Rohith Sharma K S.

        I think the approach Sunil G mentioned in https://issues.apache.org/jira/browse/YARN-3733?focusedCommentId=14568880&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14568880 makes sense to me. If the clusterResource is 0, we can compare the individual resource types. It could be:

        Returns >: when l.mem > r.mem || l.cpu > r.cpu
        Returns =: when (l.mem <= r.mem && l.cpu >= r.cpu) || (l.mem >= r.mem && l.cpu <= r.cpu)
        Returns <: when l.mem < r.mem || l.cpu < r.cpu
        

        This produces the same result as the INF approach in the patch, but can also compare when both l and r have values > 0. The reason I prefer this is that, while I'm sure the patch solves the am-resource-percent problem, the suggested approach also gives a more reasonable result if we need to compare non-zero resources when clusterResource is zero (for example, sorting applications by their requirements when clusterResource is zero).

        And to avoid future regression, could you add a test to verify the am-resource-limit problem is solved?
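        These per-resource rules can be sketched as a small standalone method. This is an illustrative sketch only, assuming two resource types (memory and vcores); the class and method names are hypothetical and are not Hadoop's actual DominantResourceCalculator API:

```java
// Illustrative sketch of the empty-cluster tie-break rules discussed above.
// Class/method names are hypothetical, not Hadoop's DominantResourceCalculator.
public class DominantCompareSketch {

    // Returns 1 if lhs dominates, -1 if rhs dominates, 0 if neither dominates.
    public static int compare(long lMem, long lCpu, long rMem, long rCpu) {
        // Mixed dominance (ahead on one resource, behind on the other): tie.
        if ((lMem < rMem && lCpu > rCpu) || (lMem > rMem && lCpu < rCpu)) {
            return 0;
        }
        // lhs is ahead on at least one resource and behind on neither.
        if (lMem > rMem || lCpu > rCpu) {
            return 1;
        }
        // rhs is ahead on at least one resource and behind on neither.
        if (lMem < rMem || lCpu < rCpu) {
            return -1;
        }
        return 0; // equal on both resources
    }

    public static void main(String[] args) {
        System.out.println(compare(1, 0, 0, 1)); // prints 0 (neither dominates)
        System.out.println(compare(1, 1, 1, 0)); // prints 1 (lhs dominates)
        System.out.println(compare(1, 0, 1, 1)); // prints -1 (rhs dominates)
    }
}
```

        Note that checking the mixed-dominance case first matters: with only the OR checks, an input like lhs=<1,0> vs rhs=<0,1> would incorrectly return 1 instead of 0.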

        rohithsharma Rohith Sharma K S added a comment -

        Thanks Sunil G and Wangda Tan for sharing your thoughts.

        I modified the logic a bit and reordered the if checks so that it handles all the possible input combinations in the table below. The problem was with the 5th and 7th inputs: the comparison returned 1 where 0 was expected for the 5th combination, i.e. the flow never reached the 2nd check since the 1st step ORs memory and cpu.

        Sl.no  cr     lhs    rhs    Output
        1      <0,0>  <1,1>  <1,1>   0
        2      <0,0>  <1,1>  <0,0>   1
        3      <0,0>  <0,0>  <1,1>  -1
        4      <0,0>  <0,1>  <1,0>   0
        5      <0,0>  <1,0>  <0,1>   0
        6      <0,0>  <1,1>  <1,0>   1
        7      <0,0>  <1,0>  <1,1>  -1

        The updated patch has the following changes:

        1. Changed the logic for comparing lhs and rhs resources when clusterResource is empty, as suggested.
        2. Added a test for AM limit usage.
        3. Added a test for all of the above input combinations.

        Kindly review the patch
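
        The seven input combinations above can be replayed with a small table-driven check. This is a self-contained illustration under the same assumed tie-break logic; the names are hypothetical and this is not the committed TestResourceCalculator test:

```java
// Hypothetical table-driven check of the empty-cluster comparison rules.
// Not the committed TestResourceCalculator; names are illustrative only.
public class EmptyClusterCompareTable {

    // Same assumed tie-break logic as discussed: mixed dominance is a tie.
    public static int compare(long lMem, long lCpu, long rMem, long rCpu) {
        if ((lMem < rMem && lCpu > rCpu) || (lMem > rMem && lCpu < rCpu)) {
            return 0;
        }
        if (lMem > rMem || lCpu > rCpu) {
            return 1;
        }
        if (lMem < rMem || lCpu < rCpu) {
            return -1;
        }
        return 0;
    }

    public static void main(String[] args) {
        // {lMem, lCpu, rMem, rCpu, expected} for rows 1-7 of the table.
        long[][] rows = {
            {1, 1, 1, 1, 0}, {1, 1, 0, 0, 1}, {0, 0, 1, 1, -1}, {0, 1, 1, 0, 0},
            {1, 0, 0, 1, 0}, {1, 1, 1, 0, 1}, {1, 0, 1, 1, -1},
        };
        for (long[] r : rows) {
            int got = compare(r[0], r[1], r[2], r[3]);
            if (got != r[4]) {
                throw new AssertionError(java.util.Arrays.toString(r) + " -> " + got);
            }
        }
        System.out.println("all 7 rows match"); // prints: all 7 rows match
    }
}
```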

        sunilg Sunil G added a comment -

        Thank you Rohith Sharma K S for the detailed information and patch.

        1. Could we add a test case to TestCapacityScheduler where only memory or only vcores exceed the limit?

            Resource amResource2 =
                Resource.newInstance(amResourceLimit.getMemory() + 1,
                    amResourceLimit.getVirtualCores());
        

        2. In TestCapacityScheduler#verifyAMLimitForLeafQueue, while submitting the second app, you could change the app name to "app-2".

        hadoopqa Hadoop QA added a comment -



        +1 overall



        Vote Subsystem Runtime Comment
        0 pre-patch 17m 20s Pre-patch trunk compilation is healthy.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 tests included 0m 0s The patch appears to include 2 new or modified test files.
        +1 javac 7m 34s There were no new javac warning messages.
        +1 javadoc 9m 37s There were no new javadoc warning messages.
        +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
        +1 checkstyle 1m 40s There were no new checkstyle issues.
        +1 whitespace 0m 0s The patch has no lines that end in whitespace.
        +1 install 1m 32s mvn install still works.
        +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
        +1 findbugs 3m 0s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        +1 yarn tests 1m 57s Tests passed in hadoop-yarn-common.
        +1 yarn tests 50m 12s Tests passed in hadoop-yarn-server-resourcemanager.
            93m 53s  



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12737171/0002-YARN-3733.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / c59e745
        hadoop-yarn-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8174/artifact/patchprocess/testrun_hadoop-yarn-common.txt
        hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/8174/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/8174/testReport/
        Java 1.7.0_55
        uname Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/8174/console

        This message was automatically generated.

        leftnoteasy Wangda Tan added a comment -

        Patch LGTM generally, will commit the patch once Sunil G +1.

        Show
        leftnoteasy Wangda Tan added a comment - Patch LGTM generally, will commit the patch once Sunil G +1.
        Hide
        rohithsharma Rohith Sharma K S added a comment -

        only memory or vcores are more in TestCapacityScheduler.

        All the combinations of inputs are verified in TestResourceCalculator. And in TestCapacityScheduler, app submission happens only with memory in MockRM.submitApp, so the default vcore minimum allocation of 1 is taken. So just changing memory to amResourceLimit.getMemory() + 2 should be enough.

        TestCapacityScheduler#verifyAMLimitForLeafQueue, while submitting second app, you could change the app name to "app-2".

        Agree.

        I will upload a patch soon

        rohithsharma Rohith Sharma K S added a comment -

        Updated the patch fixing the test-side comments. Kindly review the patch.

        hadoopqa Hadoop QA added a comment -



        +1 overall



        Vote Subsystem Runtime Comment
        0 pre-patch 17m 28s Pre-patch trunk compilation is healthy.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 tests included 0m 0s The patch appears to include 2 new or modified test files.
        +1 javac 7m 36s There were no new javac warning messages.
        +1 javadoc 9m 39s There were no new javadoc warning messages.
        +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
        +1 checkstyle 1m 40s There were no new checkstyle issues.
        +1 whitespace 0m 0s The patch has no lines that end in whitespace.
        +1 install 1m 36s mvn install still works.
        +1 eclipse:eclipse 0m 34s The patch built with eclipse:eclipse.
        +1 findbugs 2m 59s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        +1 yarn tests 1m 56s Tests passed in hadoop-yarn-common.
        +1 yarn tests 50m 24s Tests passed in hadoop-yarn-server-resourcemanager.
            94m 18s  



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12737453/0002-YARN-3733.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / b5f0d29
        hadoop-yarn-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8191/artifact/patchprocess/testrun_hadoop-yarn-common.txt
        hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/8191/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/8191/testReport/
        Java 1.7.0_55
        uname Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/8191/console

        This message was automatically generated.

        sunilg Sunil G added a comment -

        Patch looks good to me. +1

        For MockRM.submitApp, I think we need to support specifying both vcores and memory. I will file a separate ticket to handle this, if that's fine.

        rohithsharma Rohith Sharma K S added a comment -

        +1 for handling virtual cores. This will be a good improvement for testing DominantRC functionality precisely.

        leftnoteasy Wangda Tan added a comment -

        Great! Committing...

        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-trunk-Commit #7965 (See https://builds.apache.org/job/Hadoop-trunk-Commit/7965/)
        YARN-3733. Fix DominantRC#compare() does not work as expected if cluster resource is empty. (Rohith Sharmaks via wangda) (wangda: rev ebd797c48fe236b404cf3a125ac9d1f7714e291e)

        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
        • hadoop-yarn-project/CHANGES.txt
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-trunk-Commit #7970 (See https://builds.apache.org/job/Hadoop-trunk-Commit/7970/)
        Add missing test file of YARN-3733 (wangda: rev 405bbcf68c32d8fd8a83e46e686eacd14e5a533c)

        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java
        leftnoteasy Wangda Tan added a comment -

        Committed to trunk/branch-2/branch-2.7, thanks Rohith Sharma K S and review from Sunil G!

        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #219 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/219/)
        YARN-3733. Fix DominantRC#compare() does not work as expected if cluster resource is empty. (Rohith Sharmaks via wangda) (wangda: rev ebd797c48fe236b404cf3a125ac9d1f7714e291e)

        • hadoop-yarn-project/CHANGES.txt
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
          Add missing test file of YARN-3733 (wangda: rev 405bbcf68c32d8fd8a83e46e686eacd14e5a533c)
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java
        hudson Hudson added a comment -

        SUCCESS: Integrated in Hadoop-Yarn-trunk #949 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/949/)
        YARN-3733. Fix DominantRC#compare() does not work as expected if cluster resource is empty. (Rohith Sharmaks via wangda) (wangda: rev ebd797c48fe236b404cf3a125ac9d1f7714e291e)

        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
        • hadoop-yarn-project/CHANGES.txt
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
          Add missing test file of YARN-3733 (wangda: rev 405bbcf68c32d8fd8a83e46e686eacd14e5a533c)
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java
        hudson Hudson added a comment -

        SUCCESS: Integrated in Hadoop-Hdfs-trunk #2147 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2147/)
        YARN-3733. Fix DominantRC#compare() does not work as expected if cluster resource is empty. (Rohith Sharmaks via wangda) (wangda: rev ebd797c48fe236b404cf3a125ac9d1f7714e291e)

        • hadoop-yarn-project/CHANGES.txt
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
          Add missing test file of YARN-3733 (wangda: rev 405bbcf68c32d8fd8a83e46e686eacd14e5a533c)
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #208 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/208/)
        YARN-3733. Fix DominantRC#compare() does not work as expected if cluster resource is empty. (Rohith Sharmaks via wangda) (wangda: rev ebd797c48fe236b404cf3a125ac9d1f7714e291e)

        • hadoop-yarn-project/CHANGES.txt
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
          Add missing test file of YARN-3733 (wangda: rev 405bbcf68c32d8fd8a83e46e686eacd14e5a533c)
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Mapreduce-trunk #2165 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2165/)
        YARN-3733. Fix DominantRC#compare() does not work as expected if cluster resource is empty. (Rohith Sharmaks via wangda) (wangda: rev ebd797c48fe236b404cf3a125ac9d1f7714e291e)

        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
        • hadoop-yarn-project/CHANGES.txt
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
          Add missing test file of YARN-3733 (wangda: rev 405bbcf68c32d8fd8a83e46e686eacd14e5a533c)
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #217 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/217/)
        YARN-3733. Fix DominantRC#compare() does not work as expected if cluster resource is empty. (Rohith Sharmaks via wangda) (wangda: rev ebd797c48fe236b404cf3a125ac9d1f7714e291e)

        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
        • hadoop-yarn-project/CHANGES.txt
          Add missing test file of YARN-3733 (wangda: rev 405bbcf68c32d8fd8a83e46e686eacd14e5a533c)
        • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java
        vinodkv Vinod Kumar Vavilapalli added a comment -

        Pulled this into 2.6.1, after fixing one merge conflict. Ran compilation and TestCapacityScheduler, TestResourceCalculator before the push.


          People

          • Assignee: Rohith Sharma K S
          • Reporter: Bibin A Chundatt
          • Votes: 0
          • Watchers: 13