Hadoop YARN / YARN-5918

Handle Opportunistic scheduling allocate request failure when NM is lost

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.9.0, 3.0.0-alpha2
    • Component/s: None
    • Labels:
      None

      Description

      Allocate request failure during opportunistic container allocation when a NodeManager is lost:

      2016-11-20 10:38:49,011 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root     OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1479637990302_0002    CONTAINERID=container_e12_1479637990302_0002_01_000006  RESOURCE=<memory:1024, vCores:1>
      2016-11-20 10:38:49,011 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Removed node docker2:38297 clusterResource: <memory:4096, vCores:8>
      2016-11-20 10:38:49,434 WARN org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8030, call Call#35 Retry#0 org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from 172.17.0.2:51584
      java.lang.NullPointerException
              at org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService.convertToRemoteNode(OpportunisticContainerAllocatorAMService.java:420)
              at org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService.convertToRemoteNodes(OpportunisticContainerAllocatorAMService.java:412)
              at org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService.getLeastLoadedNodes(OpportunisticContainerAllocatorAMService.java:402)
              at org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService.allocate(OpportunisticContainerAllocatorAMService.java:236)
              at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
              at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
              at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:467)
              at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:990)
              at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:846)
              at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:789)
              at java.security.AccessController.doPrivileged(Native Method)
              at javax.security.auth.Subject.doAs(Subject.java:422)
              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1857)
              at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2539)
      2016-11-20 10:38:50,824 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e12_1479637990302_0002_01_000002 Container Transitioned from RUNNING to COMPLETED
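The stack trace above shows convertToRemoteNode dereferencing an RMNode that was already removed when the NM was lost. A minimal sketch of the null-guard idea discussed in the comments, using hypothetical simplified types (plain strings standing in for RMNode/RemoteNode), not the actual YARN code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch (hypothetical types) of the null guard in convertToRemoteNodes:
// a node id from the sorted snapshot may no longer be in the cluster map,
// so the lookup can return null and must be skipped instead of dereferenced.
public class ConvertNodesSketch {
  static List<String> convertToRemoteNodes(
      List<String> leastLoadedIds, Map<String, String> clusterNodes) {
    List<String> remoteNodes = new ArrayList<>();
    for (String nodeId : leastLoadedIds) {
      String rmNode = clusterNodes.get(nodeId); // null if the NM was lost
      if (rmNode != null) {                     // guard that prevents the NPE
        remoteNodes.add(rmNode);
      }
    }
    return remoteNodes;
  }

  public static void main(String[] args) {
    Map<String, String> cluster = new HashMap<>();
    cluster.put("docker1:1234", "docker1");
    // docker2:38297 was removed when its NM was lost, so it is absent here.
    List<String> sorted = Arrays.asList("docker1:1234", "docker2:38297");
    System.out.println(convertToRemoteNodes(sorted, cluster)); // [docker1]
  }
}
```

The cost of this guard is that fewer remote nodes than requested may be returned until the next sort pass refreshes the snapshot, which is the tradeoff debated in the comments below.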
      
      
      1. YARN-5918.0001.patch
        2 kB
        Bibin A Chundatt
      2. YARN-5918.0002.patch
        9 kB
        Bibin A Chundatt
      3. YARN-5918.0003.patch
        11 kB
        Bibin A Chundatt
      4. YARN-5918.0004.patch
        10 kB
        Bibin A Chundatt

        Activity

        bibinchundatt Bibin A Chundatt added a comment -

        Attaching patch for the same

        varun_saxena Varun Saxena added a comment -

        Thanks Bibin for reporting the issue. Can you add a test case here?

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 20s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
        +1 mvninstall 9m 11s trunk passed
        +1 compile 0m 43s trunk passed
        +1 checkstyle 0m 26s trunk passed
        +1 mvnsite 0m 50s trunk passed
        +1 mvneclipse 0m 20s trunk passed
        +1 findbugs 1m 18s trunk passed
        +1 javadoc 0m 29s trunk passed
        +1 mvninstall 0m 44s the patch passed
        +1 compile 0m 41s the patch passed
        +1 javac 0m 41s the patch passed
        +1 checkstyle 0m 23s the patch passed
        +1 mvnsite 0m 51s the patch passed
        +1 mvneclipse 0m 20s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 25s the patch passed
        +1 javadoc 0m 28s the patch passed
        -1 unit 46m 0s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 asflicense 0m 16s The patch does not generate ASF License warnings.
        66m 18s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5918
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12839723/YARN-5918.0001.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 5bccaa18a79f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 7584fbf
        Default Java 1.8.0_111
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-YARN-Build/13988/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/13988/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/13988/console
        Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        varun_saxena Varun Saxena added a comment -

        While adding null checks fixes the NPE, can something else be done, or does something else need to be done? If we fix the code as above, we will return fewer nodes for scheduling opportunistic containers than the yarn.opportunistic-container-allocation.nodes-used configuration allows, even though enough nodes are available. But this should be updated the very next second (per the default config), which may be fine.

        Cluster nodes are sorted in NodeQueueLoadMonitor every 1 second by default and stored in a list. Although we remove a node from the cluster nodes when it is lost, we do not remove it from the sorted nodes, because doing so would require iterating over the list. Can we keep a set instead? Also, when an allocate request comes and we get the least loaded nodes, we simply create a sublist from the sorted nodes. We could iterate over the list and check whether each node is still running to avoid the NPE, but that would be slower than creating a sublist, especially when the number of nodes configured for scheduling opportunistic containers is much larger than the default of 10.

        I guess we can check with the people working on distributed scheduling before deciding on a fix.
        cc Arun Suresh
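The sublist-versus-filter tradeoff described above can be sketched with simplified, hypothetical types (plain strings standing in for node ids, not the real NodeQueueLoadMonitor code): a plain sublist of the sorted snapshot is cheap but may include lost nodes, while filtering against the set of running nodes walks more of the list.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the two options discussed above: sublist of the periodically
// sorted snapshot (fast, may hand back a lost node) vs. filtering out lost
// nodes on every allocate call (always correct, but scans the sorted list).
public class LeastLoadedSketch {
  static List<String> sublist(List<String> sortedNodes, int k) {
    return sortedNodes.subList(0, Math.min(k, sortedNodes.size()));
  }

  static List<String> filterRunning(List<String> sortedNodes,
      Set<String> runningNodes, int k) {
    List<String> out = new ArrayList<>();
    for (String node : sortedNodes) { // worst case walks the whole list
      if (runningNodes.contains(node)) {
        out.add(node);
        if (out.size() == k) {
          break;
        }
      }
    }
    return out;
  }

  public static void main(String[] args) {
    List<String> sorted = Arrays.asList("n1", "n2", "n3");
    Set<String> running = new HashSet<>(Arrays.asList("n1", "n3")); // n2 lost
    System.out.println(sublist(sorted, 2));                // [n1, n2] (stale n2)
    System.out.println(filterRunning(sorted, running, 2)); // [n1, n3]
  }
}
```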

        asuresh Arun Suresh added a comment -

        Thanks for raising this Bibin A Chundatt and for chiming in Varun Saxena.

        If we fix code as above, we will return less nodes for scheduling opportunistic containers than yarn.opportunistic-container-allocation.nodes-used configuration even though enough nodes are available. But this should be updated the very next second (as per default config) which maybe fine.

        As you pointed out, this is actually fine.

        Although we remove node when a node is lost from cluster nodes, we do not remove it from sorted nodes. Because for doing it we will have to iterate over the list. Can we keep a set instead ?

        We had initially thought of using a SortedSet, but insertions and deletions were somewhat expensive, and a LinkedList cheaply satisfied our use case.

        Can you maybe add a test to TestNodeQueueLoadMonitor for this?
        +1 pending.

        bibinchundatt Bibin A Chundatt added a comment -

        The node gets removed from the scheduler while creating a remote node from the least-used node. Attaching a patch with a test case that simulates this.

        hadoopqa Hadoop QA added a comment -
        +1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 18s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 7m 40s trunk passed
        +1 compile 0m 35s trunk passed
        +1 checkstyle 0m 20s trunk passed
        +1 mvnsite 0m 40s trunk passed
        +1 mvneclipse 0m 18s trunk passed
        +1 findbugs 0m 59s trunk passed
        +1 javadoc 0m 23s trunk passed
        +1 mvninstall 0m 33s the patch passed
        +1 compile 0m 31s the patch passed
        +1 javac 0m 31s the patch passed
        +1 checkstyle 0m 18s the patch passed
        +1 mvnsite 0m 37s the patch passed
        +1 mvneclipse 0m 15s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 5s the patch passed
        +1 javadoc 0m 20s the patch passed
        +1 unit 43m 57s hadoop-yarn-server-resourcemanager in the patch passed.
        +1 asflicense 0m 17s The patch does not generate ASF License warnings.
        60m 27s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5918
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12840087/YARN-5918.0002.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 7d0a0e635840 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 83cc726
        Default Java 1.8.0_111
        findbugs v3.0.0
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14030/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14030/console
        Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        varun_saxena Varun Saxena added a comment -

        Thanks Bibin A Chundatt for the latest patch.

        We actually did not need such an elaborate test. We could probably have achieved the same by mocking the scheduler to simulate the NPE case. But it is not necessarily bad to have an end-to-end test simulating the complete scenario.

        A few comments:

        1. This test doesn't seem to belong in TestNodeQueueLoadMonitor. We can move it to some other test class, or create a new one if it doesn't fit anywhere.
        2. We iterate 30 times, sleeping for 100 ms in each run. This unnecessarily makes the test loop for 3 seconds. We can get rid of this loop and make the test run faster, as follows:
          • Set YarnConfiguration.NM_CONTAINER_QUEUING_SORTING_NODES_INTERVAL_MS in the configuration to 100 ms instead of the 1000 ms default.
          • Move the node-added and node-update events sent to the AM service outside the loop.
          • Get the OpportunisticContainerContext from FiCaSchedulerApp and check it against getNodeMap to make the test deterministic and reduce its run time.
          • After invoking the node add and update events, loop 10-20 times (say) sending allocate, with a sleep of, say, 50 ms, and break from the loop as soon as getNodeMap has 2 nodes. Then send a remove-node event to the scheduler and loop over allocate again until getNodeMap becomes 1.
        3. Not related to your patch: NodeQueueLoadMonitor has some LOG.debug statements without an isDebugEnabled guard. Maybe we can fix that here as well.
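The poll-and-break pattern suggested in point 2 can be sketched as follows. The helper below is hypothetical, not the actual test code (Hadoop's test utilities offer a similar GenericTestUtils.waitFor); the point is bounding the wait while returning as soon as the condition holds:

```java
// Hypothetical sketch of "poll with a short sleep, break early" instead of
// a fixed 30 x 100 ms loop: the wait ends as soon as the condition is met.
public class PollSketch {
  interface Condition {
    boolean met() throws Exception;
  }

  static boolean waitFor(Condition c, long intervalMs, int maxTries)
      throws Exception {
    for (int i = 0; i < maxTries; i++) {
      if (c.met()) {
        return true;            // break out as soon as the state is reached
      }
      Thread.sleep(intervalMs); // short sleep between allocate retries
    }
    return false;               // condition never held within the bound
  }

  public static void main(String[] args) throws Exception {
    final int[] nodeMapSize = {0};
    // Simulate the node map reaching 2 nodes after a couple of polls.
    Condition twoNodes = () -> ++nodeMapSize[0] >= 2;
    System.out.println(waitFor(twoNodes, 10, 20)); // true
  }
}
```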
        bibinchundatt Bibin A Chundatt added a comment -

        Attaching a patch addressing the comments

        asuresh Arun Suresh added a comment -

        Thanks for the patch and the test case Bibin A Chundatt.

        We already have o.a.h.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService. Let's move the new test into that class.

        +1 from me pending Jenkins

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 17s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 7m 24s trunk passed
        +1 compile 0m 34s trunk passed
        +1 checkstyle 0m 21s trunk passed
        +1 mvnsite 0m 40s trunk passed
        +1 mvneclipse 0m 18s trunk passed
        +1 findbugs 1m 1s trunk passed
        +1 javadoc 0m 22s trunk passed
        +1 mvninstall 0m 34s the patch passed
        +1 compile 0m 31s the patch passed
        +1 javac 0m 31s the patch passed
        +1 checkstyle 0m 18s the patch passed
        +1 mvnsite 0m 37s the patch passed
        +1 mvneclipse 0m 16s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 16s the patch passed
        +1 javadoc 0m 21s the patch passed
        -1 unit 43m 2s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 asflicense 0m 19s The patch does not generate ASF License warnings.
        59m 26s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.scheduler.fair.TestAppRunnability



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5918
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12840245/YARN-5918.0003.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux d1a436e4dc80 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 3541ed8
        Default Java 1.8.0_111
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14051/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14051/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14051/console
        Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        bibinchundatt Bibin A Chundatt added a comment -

        Attaching a patch after moving the test case

        hadoopqa Hadoop QA added a comment -
        +1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 17s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 6m 48s trunk passed
        +1 compile 0m 34s trunk passed
        +1 checkstyle 0m 21s trunk passed
        +1 mvnsite 0m 39s trunk passed
        +1 mvneclipse 0m 17s trunk passed
        +1 findbugs 1m 2s trunk passed
        +1 javadoc 0m 21s trunk passed
        +1 mvninstall 0m 31s the patch passed
        +1 compile 0m 31s the patch passed
        +1 javac 0m 31s the patch passed
        +1 checkstyle 0m 18s the patch passed
        +1 mvnsite 0m 38s the patch passed
        +1 mvneclipse 0m 15s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 5s the patch passed
        +1 javadoc 0m 19s the patch passed
        +1 unit 42m 13s hadoop-yarn-server-resourcemanager in the patch passed.
        +1 asflicense 0m 16s The patch does not generate ASF License warnings.
        57m 42s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5918
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12840271/YARN-5918.0004.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux d5e558a8c1ae 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 3541ed8
        Default Java 1.8.0_111
        findbugs v3.0.0
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14054/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14054/console
        Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        asuresh Arun Suresh added a comment -

        Committed this to trunk. Thanks for the patch Bibin A Chundatt and for the reviews Varun Saxena.

        hudson Hudson added a comment -

        SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10880 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10880/)
        YARN-5918. Handle Opportunistic scheduling allocate request failure when (arun suresh: rev 005850b28feb2f7bb8c2844d11e3f9d21b45d754)

        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/OpportunisticContainerAllocatorAMService.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestOpportunisticContainerAllocatorAMService.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/distributed/NodeQueueLoadMonitor.java
        asuresh Arun Suresh added a comment -

        Committing this to branch-2


          People

          • Assignee:
            bibinchundatt Bibin A Chundatt
            Reporter:
            bibinchundatt Bibin A Chundatt
          • Votes:
            0
            Watchers:
            5

            Dates

            • Created:
              Updated:
              Resolved:
