Hadoop Common / HADOOP-14062

ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when RPC privacy is enabled

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 2.8.0
    • Fix Version/s: 2.8.0, 3.0.0-alpha4
    • Component/s: None
    • Labels: None

      Description

      When privacy is enabled for RPC (hadoop.rpc.protection = privacy), ApplicationMasterProtocolPBClientImpl.allocate sometimes (but not always) fails with an EOFException. I've reproduced this with Spark 2.0.2 built against latest branch-2.8 and with a simple distcp job on latest branch-2.8.

      Steps to reproduce using distcp:

      1. Set hadoop.rpc.protection equal to privacy
      2. Write data to HDFS. I did this with Spark as follows:

      sc.parallelize(1 to (5*1024*1024)).map(k => Seq(k, org.apache.commons.lang.RandomStringUtils.random(1024, "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWxyZ0123456789")).mkString("|")).toDF().repartition(100).write.parquet("hdfs:///tmp/testData")
      

      3. Attempt to distcp that data to another location in HDFS. For example:

      hadoop distcp -Dmapreduce.framework.name=yarn hdfs:///tmp/testData hdfs:///tmp/testDataCopy
      

      I observed this error in the ApplicationMaster's syslog:

      2016-12-19 19:13:50,097 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1482189777425_0004, File: hdfs://<namenode_host>:8020/tmp/hadoop-yarn/staging/<hdfs_user>/.staging/job_1482189777425_0004/job_1482189777425_0004_1.jhist
      2016-12-19 19:13:51,004 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
      2016-12-19 19:13:51,031 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1482189777425_0004: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:22528, vCores:23> knownNMs=3
      2016-12-19 19:13:52,043 INFO [RMCommunicator Allocator] org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking ApplicationMasterProtocolPBClientImpl.allocate over null. Retrying after sleeping for 30000ms.
      java.io.EOFException: End of File Exception between local host is: "<application_master_host>/<ip_addr>"; destination host is: "<rm_host>":8030; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
      	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
      	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
      	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
      	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
      	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
      	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1486)
      	at org.apache.hadoop.ipc.Client.call(Client.java:1428)
      	at org.apache.hadoop.ipc.Client.call(Client.java:1338)
      	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
      	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
      	at com.sun.proxy.$Proxy80.allocate(Unknown Source)
      	at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:497)
      	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
      	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
      	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
      	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
      	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
      	at com.sun.proxy.$Proxy81.allocate(Unknown Source)
      	at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor.makeRemoteRequest(RMContainerRequestor.java:204)
      	at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:735)
      	at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:269)
      	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:281)
      	at java.lang.Thread.run(Thread.java:745)
      Caused by: java.io.EOFException
      	at java.io.DataInputStream.readInt(DataInputStream.java:392)
      	at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1785)
      	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1156)
      	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1053)
      

      Marking as "critical" since this blocks YARN users from encrypting RPC in their Hadoop clusters.

      1. HADOOP-14062.001.patch
        18 kB
        Steven Rand
      2. HADOOP-14062.002.patch
        22 kB
        Steven Rand
      3. HADOOP-14062.003.patch
        3 kB
        Steven Rand
      4. HADOOP-14062.004.patch
        3 kB
        Steven Rand
      5. HADOOP-14062-branch-2.8.0.004.patch
        18 kB
        Steven Rand
      6. HADOOP-14062-branch-2.8.0.005.patch
        4 kB
        Jian He
      7. HADOOP-14062-branch-2.8.0.005.patch
        4 kB
        Steven Rand
      8. HADOOP-14062-branch-2.8.0.dummy.patch
        1 kB
        Jian He
      9. yarn-rm-log.txt
        3.25 MB
        Steven Rand

        Activity

        jianhe Jian He added a comment -

        Steven Rand, do you have server side log where this exception happens?

        Steven Rand Steven Rand added a comment -

        Jian He, I've attached the log from the Resource Manager while the distcp job is running. It contains no errors or warnings, and as far as I can tell nothing out of the ordinary.

        Steven Rand Steven Rand added a comment -

        This issue also reproduces on latest branch-2.8.0 (most recent commit at the time of writing being e94a2bcfa82be0e24a49585b50dfdd0c3dfeb2e7). A cleaner repro than the one given above, by the way, is to simply run hadoop jar hadoop-2.8.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.0-SNAPSHOT-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 1000.

        I will try to debug this today, but I haven't made much progress in my previous attempts, so any help or tips would be greatly appreciated.

        Steven Rand Steven Rand added a comment -

        Relevant part of AM container log at DEBUG level:

        2017-01-13 14:27:45,422 DEBUG [RMCommunicator Allocator] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE
        
        2017-01-13 14:27:45,423 DEBUG [RMCommunicator Allocator] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB info:org.apache.hadoop.yarn.security.SchedulerSecurityInfo$1@76856ec2
        2017-01-13 14:27:45,423 DEBUG [RMCommunicator Allocator] org.apache.hadoop.yarn.security.AMRMTokenSelector: Looking for a token with service 10.0.22.125:8030
        2017-01-13 14:27:45,423 DEBUG [RMCommunicator Allocator] org.apache.hadoop.yarn.security.AMRMTokenSelector: Token kind is YARN_AM_RM_TOKEN and the token's service name is 10.0.22.125:8030
        2017-01-13 14:27:45,423 DEBUG [RMCommunicator Allocator] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN)  client to authenticate to service at default
        2017-01-13 14:27:45,423 DEBUG [RMCommunicator Allocator] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ApplicationMasterProtocolPB
        2017-01-13 14:27:45,423 DEBUG [RMCommunicator Allocator] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: Cg0KCQgDELeEw8iZKxABEOq0jBE=
        2017-01-13 14:27:45,423 DEBUG [RMCommunicator Allocator] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
        2017-01-13 14:27:45,423 DEBUG [RMCommunicator Allocator] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
        2017-01-13 14:27:45,423 DEBUG [RMCommunicator Allocator] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
        token: "charset=utf-8,username=\"Cg0KCQgDELeEw8iZKxABEOq0jBE=\",realm=\"default\",nonce=\"WR1cP6XR4aRmnTNDvPNIJVyQoVcKs/L31wZ/2aTq\",nc=00000001,cnonce=\"h2O79JUxneBTofVfLlzS4BnkU1k2QWeFV8K0f6V7\",digest-uri=\"/default\",maxbuf=65536,response=5c07c8484a146548ee71bc1347451c23,qop=auth-conf,cipher=\"3des\""
        auths {
          method: "TOKEN"
          mechanism: "DIGEST-MD5"
          protocol: ""
          serverId: "default"
        }
        
        2017-01-13 14:27:45,425 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth-conf
        2017-01-13 14:27:45,425 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1943634922) connection to <RM_HOSTNAME>/10.0.22.125:8030 from <YARN_USER> sending #25
        2017-01-13 14:27:45,426 DEBUG [IPC Client (1943634922) connection to <RM_HOSTNAME>/10.0.22.125:8030 from <YARN_USER>] org.apache.hadoop.ipc.Client: IPC Client (1943634922) connection to <RM_HOSTNAME>/10.0.22.125:8030 from <YARN_USER>: starting, having connections 2
        2017-01-13 14:27:45,426 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.security.SaslRpcClient: wrapping token of length:306
        2017-01-13 14:27:45,427 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: WRAP
        token: "i\236:\323o!\016C\032\032\303c%I\036\305\245\253J\271\311\226\213\246\234\260\341\315\371p\017\271\027k\223\367D\326\276\210e\330\237\207\247\r\002\037\004\031l\231q\203\232rnx\271\317h\017\247\357\325_}\233s,\250\340\002\345\3318\364\307\246\032+\fI\307\366\351\303l\006\030\tY\251\205WE\227;\276\022Z\210\363W\317\252\376$\316\243\214\313\004\317hv\033\333\211\230w\273s\375S\260\262\205}\343\033\362m\006\a\250\236e\266\034\362\352\317\211=F\367#H6\237\322\232$]\217%\340\260Y\330\034\302\266\315\201*=//\350\220U\276lk\345\253o\021\242e\365M&\200\037\235?O\224\371\317\2322{f\262\270\323\b\247\264\364\231\243\257\342\334e\036d\001ez\266`R\362\026~\340.A\356]E\331\005\214\033E\223\202\t\275\275h\355\v\240B\300\265\335\303a\021\326 \331\247\206f|\211H|E~\334;\210\371p\370\222^\241O\005;3\rNY\2456\006\257\222\220K;\363\222J\311\006\030\243\337\340pG/\357w\017\2276Z~\313\303\241\246\016\232_\345D\374\343o\266\231yF\371\000\001\000\000\000\000"
        
        2017-01-13 14:27:45,427 DEBUG [IPC Client (1943634922) connection to <RM_HOSTNAME>/10.0.22.125:8030 from <YARN_USER>] org.apache.hadoop.security.SaslRpcClient: reading next wrapped RPC packet
        2017-01-13 14:27:45,430 DEBUG [IPC Client (1943634922) connection to <RM_HOSTNAME>/10.0.22.125:8030 from <YARN_USER>] org.apache.hadoop.security.SaslRpcClient: unwrapping token of length:3574
        2017-01-13 14:27:45,431 DEBUG [IPC Client (1943634922) connection to <RM_HOSTNAME>/10.0.22.125:8030 from <YARN_USER>] org.apache.hadoop.ipc.Client: closing ipc connection to <RM_HOSTNAME>/10.0.22.125:8030: null
        java.io.EOFException
        	at java.io.DataInputStream.readInt(DataInputStream.java:392)
        	at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1785)
        	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1156)
        	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1053)
        2017-01-13 14:27:45,431 DEBUG [IPC Client (1943634922) connection to <RM_HOSTNAME>/10.0.22.125:8030 from <YARN_USER>] org.apache.hadoop.ipc.Client: IPC Client (1943634922) connection to <RM_HOSTNAME>/10.0.22.125:8030 from <YARN_USER>: closed
        2017-01-13 14:27:45,431 DEBUG [IPC Client (1943634922) connection to <RM_HOSTNAME>/10.0.22.125:8030 from <YARN_USER>] org.apache.hadoop.ipc.Client: IPC Client (1943634922) connection to <RM_HOSTNAME>/10.0.22.125:8030 from <YARN_USER>: stopped, remaining connections 1
        2017-01-13 14:27:45,431 INFO [RMCommunicator Allocator] org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking ApplicationMasterProtocolPBClientImpl.allocate over null. Retrying after sleeping for 30000ms.
        java.io.EOFException: End of File Exception between local host is: "<AM_HOST>/10.0.22.190"; destination host is: "<RM_HOSTNAME>":8030; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
        	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        	... rest of stack trace from above ...
        

        The part that sticks out to me is org.apache.hadoop.security.SaslRpcClient: unwrapping token of length:3574, indicating that the response received from the RM is of a valid length. However, the EOFException comes from int length = in.readInt();, meaning that after we've unwrapped the RPC response and created a DataInputStream from it, the length of the DataInputStream is less than 4 bytes.

        This seems to indicate a problem in SaslRpcClient (maybe SaslRpcClient#getInputStream?), but other RPC calls from the AM to the RM and back are working just fine (e.g., before we get around to allocating containers, the AM was able to successfully call registerApplicationMaster).

        Steven Rand Steven Rand added a comment -

        This is also an issue on 2.8.0-RC1. Jian He, do you think there's any chance of this being fixed before 2.8.0 is released?

        Steven Rand Steven Rand added a comment -

        As an additional data point, setting hadoop.rpc.protection to integrity is also enough to reproduce the issue (previously I'd only tried with privacy).

        Steven Rand Steven Rand added a comment -

        // I deleted my above comment because it was inaccurate.

        Looked into this more today with a debugger. I still haven't figured out quite what's going on, but thought it might be useful to update this with some more information.

        One of the first four bytes of the token variable returned by saslClient.unwrap [1] is consistently negative. Therefore DataInputStream#readInt thinks that the stream has prematurely ended, and throws an EOFException, when it's called by Client#readResponse [2].

        [1] https://github.com/apache/hadoop/blob/branch-2.8.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java#L610
        [2] https://github.com/apache/hadoop/blob/branch-2.8.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1156
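
        For context, the JDK's DataInputStream#readInt builds its result from four single-byte read() calls and treats any negative return value as end-of-stream. The InputStream#read contract requires a value in 0-255 (or -1 at EOF), so any read() implementation that hands back a raw signed byte will look like a premature EOF whenever a byte >= 0x80 is encountered. A minimal standalone sketch of that behavior (illustrative only, not Hadoop code):

        import java.io.ByteArrayInputStream;
        import java.io.DataInputStream;
        import java.io.EOFException;
        import java.io.IOException;
        import java.io.InputStream;

        // Illustrative sketch only. SignExtendingStream mimics a read() that returns the
        // raw signed byte instead of an int in [0, 255]; DataInputStream#readInt then sees
        // the negative value and reports EOF even though four bytes are available.
        public class ReadIntEofSketch {
          static class SignExtendingStream extends InputStream {
            private final ByteArrayInputStream delegate;
            SignExtendingStream(byte[] data) { delegate = new ByteArrayInputStream(data); }
            @Override
            public int read() throws IOException {
              int b = delegate.read();
              return b == -1 ? -1 : (byte) b; // bug: sign-extends bytes >= 0x80 to negative ints
            }
          }

          public static void main(String[] args) throws IOException {
            byte[] fourBytes = {(byte) 0x80, 0x01, 0x02, 0x03}; // first byte reads back negative
            try {
              int len = new DataInputStream(new SignExtendingStream(fourBytes)).readInt();
              System.out.println("readInt() returned " + len);
            } catch (EOFException e) {
              System.out.println("readInt() reported EOF despite four available bytes");
            }
          }
        }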

        Steven Rand Steven Rand added a comment -

        The problem is that Client$IpcStreams#readResponse is trying to get the length of the input stream after it's been unwrapped. Before being unwrapped, the input stream does in fact contain the length of the RPC message in its first four bytes, as we see in SaslRpcClient#readNextRpcPacket(): https://github.com/apache/hadoop/blob/branch-2.8.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java#L589.

        However, the stream that we pass to saslClient.unwrap can't contain those four bytes. From the javadoc: "incoming is the contents of the SASL buffer as defined in RFC 2222 without the leading four octet field that represents the length." And indeed, the length of the incoming RPC is 3586 in the case I'm looking at, but then token is only 3570 bytes when we pass it to saslClient.unwrap, so the first four bytes are definitely gone. (I'm not sure what the other 12 missing bytes are though.)

        Also, from what I can tell of the SASL code, the unwrap implementation won't put the 4-byte header back on.

        So it doesn't make sense to call int length = in.readInt(); on an unwrapped stream, as the first four bytes of that do not contain its length. I can submit a patch for this.
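
        For reference, a rough sketch of the framing being described (illustrative only, not the actual SaslRpcClient code): the on-the-wire packet carries a four-byte length followed by the wrapped SASL buffer, and per the SaslClient#unwrap javadoc both the bytes passed to unwrap and the plaintext it returns exclude that length field.

        import java.io.DataInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import javax.security.sasl.SaslClient;

        // Illustrative sketch of the SASL packet framing discussed above; not Hadoop code.
        public class SaslFramingSketch {
          static byte[] readOneUnwrappedPacket(InputStream socketIn, SaslClient saslClient)
              throws IOException {
            DataInputStream wire = new DataInputStream(socketIn);
            int wrappedLen = wire.readInt();        // four-byte length of the *wrapped* buffer
            byte[] wrapped = new byte[wrappedLen];
            wire.readFully(wrapped);
            // unwrap() takes the SASL buffer without the leading four-octet length field and
            // returns plaintext with no such prefix, so a readInt() over the returned bytes
            // reads RPC payload, not a length header.
            return saslClient.unwrap(wrapped, 0, wrapped.length);
          }
        }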

        Steven Rand Steven Rand added a comment -

        Also the reason why other RPC calls are working is evidently that they're not encrypted, which seems worrisome in its own right. For example, the code that receives an RPC response from the NameNode for a lease renewal request goes from Client$Connection$PingInputStream#read to SocketInputStream#read and never makes its way into SaslRpcClient code.

        Steven Rand Steven Rand added a comment -

        Some more testing with a debugger indicates that this problem is related to HADOOP-10940. Before then, in.readInt() in Client#receiveRpcResponse would call read(byte[] buf, int off, int len) with len=8192 in SaslRpcClient, which would give back the correct result. After that change, in.readInt() in Client$IpcStreams#readResponse just calls read() in SaslRpcClient, which then calls read(byte[], int off, int len) with len=1 four separate times.

        The salient part of that change seems to be that before, in was wrapped in a BufferedInputStream: https://github.com/apache/hadoop/commit/4b9845bc53f47a32b4dfb1f271e6e193ce813f79#diff-034bce8e7837d9f0c216f85d4c185755L833.
        And calling BufferedInputStream#read() calls fill(), which calls read(byte[], int, int) as described above – see http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/io/BufferedInputStream.java?av=f#246.

        After HADOOP-10940, we still wrap the SocketInputStream for in in a BufferedInputStream, but that itself is wrapped in a SaslRpcClient$WrappedInputStream, so we get the behavior where a single-byte read is performed four times.

        I'm not sure what the actual problem here is – is it that we're eventually calling the wrong read method in SaslRpcClient$WrappedInputStream, or that the first four bytes in the unwrapped input stream don't actually contain the length of the RPC message (and they should)?

        In any case, I'm attaching a patch that fixes this in the most naive way possible, but will defer to people who know this code better on what the correct fix is, and am happy to update the patch based on feedback. Naturally I should also add a test, but would like to have more confidence in the fix first. (So far I've tested by deploying the patch to a stack and successfully running TestDFSIO with SASL qop set to integrity.)
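
        For concreteness, a minimal sketch of the kind of change described here, namely putting a BufferedInputStream back around the SASL-unwrapped stream so that readInt() is served from a buffered read(byte[], int, int) rather than four single-byte read() calls (the attached patches may differ in detail):

        import java.io.BufferedInputStream;
        import java.io.DataInputStream;
        import java.io.InputStream;

        // Sketch only; the attached patch may differ. Buffering restores the pre-HADOOP-10940
        // behavior in which the four-byte length is fetched via read(byte[], int, int) on the
        // SASL-unwrapped stream instead of four single-byte read() calls.
        public class BufferedSaslInputSketch {
          static DataInputStream bufferUnwrappedInput(InputStream saslUnwrappedIn) {
            return new DataInputStream(new BufferedInputStream(saslUnwrappedIn));
          }
        }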

        Daryn Sharp or Kihwal Lee, would either of you be able to review this? It seems like you guys have a good understanding of this code and would be able to point me in the right direction.

        Steven Rand Steven Rand added a comment -

        Junping Du, I'm wondering whether you have any opinions here, since you've been working on the 2.8.0 release. I could be wrong, of course, but I'm concerned that this is a non-trivial regression from 2.7.3, and I think it'd be great if we could fix this (or determine that I'm just doing something wrong) before 2.8.0 is released.

        Steven Rand Steven Rand added a comment -

        // Attaching a better patch file now that I've gotten IntelliJ to behave better re: imports.

        jianhe Jian He added a comment -

        Steven Rand, the server log you provided does not have any exceptions - it's for a different time range. Are you able to get the corresponding server log from when the exception happens?
        I also converted this to a Hadoop Common JIRA.

        jianhe Jian He added a comment -

        Have you tested your patch with regard to the AM allocate call?
        Also, if possible, can you try to write a unit test? A unit test is useful to show the correctness of the patch.

        Steven Rand Steven Rand added a comment - - edited

        Jian He, the server logs are for a different time range, but correspond to another instance of the same problem happening. I have never seen any errors or warnings in the RM's log when this problem occurs – it appears to be entirely client-side. I can reproduce the issue again and attach AM and RM logs from the same time if it would be helpful, but the contents of the RM log will be the same as they are in the current attachment.

        I will write a unit test, but am also hoping for feedback on whether the approach taken in the current patch even makes sense. I can't tell whether the problem is that the unwrapped input stream needs to be wrapped in a BufferedInputStream, or whether it's that the first four bytes of the unwrapped input stream are supposed to be the length of the stream but instead are something else.

        EDIT: I forgot to say that yes, I have tested my patch using TestDFSIO and the patch resolves the issue as far as I can tell.

        jianhe Jian He added a comment - - edited

        Do you mean TestDFSIO fails without the patch and passes with the patch? How about the original issue you reported about the AM allocate call? Does that also fail without the patch and pass with the patch? Frankly, I'm also unsure about the correctness of the patch. Hence, a UT can prove it. This doesn't seem like an easy bug. Regardless of what the solution is, the UT will still be useful and hopefully the same, so the work won't be wasted.

        Steven Rand Steven Rand added a comment -

        Correct, TestDFSIO fails without the patch and succeeds with the patch. TestDFSIO fails because of the issue with ApplicationMasterProtocolPBClientImpl.allocate that I initially reported – it's the same problem, just reproduced with MapReduce instead of with Spark.

        I am working on a unit test now and will hopefully have a new patch by the end of tonight.

        Steven Rand Steven Rand added a comment -

        Jian He, I haven't found a good way to add a unit test for this issue to org.apache.hadoop.ipc.TestSaslRPC yet. I will keep trying tomorrow.

        I did notice that in org.apache.hadoop.yarn.client.api.impl.TestAMRMClient, if I add conf.set(CommonConfigurationKeysPublic.HADOOP_RPC_PROTECTION, "privacy"); to the setup() method, then these tests fail with the same EOFException as in the description of this JIRA:

        • testAMRMClient
        • testAMRMClientMatchStorage

        However, after I apply my patch, all of the tests in TestAMRMClient succeed, which seems like a good sign.
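
        For reference, the configuration change described above amounts to something like the following sketch (not the exact patch contents):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
        import org.apache.hadoop.yarn.conf.YarnConfiguration;

        // Sketch of the setup() change described above; not the exact patch contents.
        // HADOOP_RPC_PROTECTION is the constant behind hadoop.rpc.protection.
        public class EncryptedRpcTestConfSketch {
          static Configuration withRpcPrivacy() {
            Configuration conf = new YarnConfiguration();
            conf.set(CommonConfigurationKeysPublic.HADOOP_RPC_PROTECTION, "privacy");
            return conf; // hand this conf to the MiniYARNCluster and the AMRMClient under test
          }
        }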

        Steven Rand Steven Rand added a comment -

        I'm attaching a patch with a unit test, but would definitely want to refactor the way the test is done. Ideally I think I'd add a test to either TestSaslRPC or TestAMRMClient, but I haven't found a way of doing so yet, so the patch in its current form duplicates lots of code in TestAMRMClient.

        I'll work on improving it, but wanted to have at least something by the end of today, and this approach was the most obvious / easiest to implement.

        The test in TestAMRMClientWithEncryptedRpc fails without the change to Client, and succeeds with the change.

        jianhe Jian He added a comment -

        The patch is for 2.8. Does trunk not have this problem?

        Steven Rand Steven Rand added a comment -

        I'm not sure, but I'll check. I was targeting 2.8 since the 2.8.0 release is happening soon.

        Steven Rand Steven Rand added a comment -

        Sorry, I can't get trunk to compile cleanly in IntelliJ, so I can't run any tests. I'll try again tomorrow, and if I can't run the tests I can just build a dist, deploy that, and run TestDFSIO with hadoop.rpc.protection=privacy set.

        Steven Rand Steven Rand added a comment -

        Reproduced the issue on trunk, and also confirmed that the same change to Client makes the tests pass. Attaching the same patch for trunk.

        Today I'll try to find a nicer way of writing a unit test.

        jianhe Jian He added a comment -

        This indeed looks to be a regression from HADOOP-10940. But I cannot figure out why the other calls (register/unregister) are succeeding. Do you have any clue?
        Can you add a comment in the code to explain why this is needed? Also, the trunk patch has a compilation error.

        Steven Rand Steven Rand added a comment -

        No, I unfortunately have no clue. Honestly I'm fairly perplexed by the whole situation.

        I was hoping that Daryn Sharp might be able to weigh in, since from looking at the history for Client and SaslRpcClient, it seems like he has a good understanding of this code and was able to fix a break in SASL RPC encryption in HADOOP-9816.

        In any case, I'll add a comment and will fix the patch.

        Steven Rand Steven Rand added a comment -

        Attaching updated patches for branch-2.8.0 and trunk.

        jianhe Jian He added a comment -

        Steven Rand, for the test case, instead of copying all of TestAMRMClient, could you add one test inside TestAMRMClient which only does what is minimally required?
        Also, please cut the long comment into two lines, as it exceeds the usual 80-column limit.

        Steven Rand Steven Rand added a comment -

        Jian He, I've attached new patches which create a single new test in TestAMRMClient. There's some weirdness in that they have to create a new instance of MiniYARNCluster just for that one test, and I wasn't sure of the best way to handle it, so I'm happy to further edit the patches if there's a better way.

        jianhe Jian He added a comment -

        After I removed this code, the test seems to also pass. Do we need it?

            tearDown();
            createClientAndCluster(conf);
            // unless we start an application the cancelApp() method will fail when
            // it runs after this test
            startApp();
        
        Steven Rand Steven Rand added a comment -

        For me the tests also succeed if I comment out that code, but I think it's only because the new test happens to run last. When I add @FixMethodOrder(MethodSorters.NAME_ASCENDING) to the class, the test that runs immediately after the new test (testAllocationWithBlacklist) fails if I comment out that code. I think it's because that test calls amClient.init(conf), and since we call conf.unset(CommonConfigurationKeysPublic.HADOOP_RPC_PROTECTION); in the new test, there's a mismatch between the client and the server.

        So I think the two options are:

        1. Don't remove the code in your above comment
        2. Do remove that code, but also remove conf.unset(CommonConfigurationKeysPublic.HADOOP_RPC_PROTECTION);. When I do this all tests succeed regardless of order.

        Jian He, I'll defer to you on which option you prefer. Both seem okay to me. The first is a smaller change, since if we do the second, all tests after testAMRMClientWithSaslEncryption run with SASL RPC.

        jianhe Jian He added a comment -

        Yep, latest patch looks good to me.
        Could you upload a patch for trunk as well?

        Once you upload a patch, you can click the Submit Patch button, which will trigger the Jenkins report.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 17s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        0 mvndep 0m 15s Maven dependency ordering for branch
        +1 mvninstall 14m 54s trunk passed
        +1 compile 15m 52s trunk passed
        +1 checkstyle 2m 20s trunk passed
        +1 mvnsite 1m 51s trunk passed
        +1 mvneclipse 0m 45s trunk passed
        +1 findbugs 2m 16s trunk passed
        +1 javadoc 1m 15s trunk passed
        0 mvndep 0m 17s Maven dependency ordering for patch
        +1 mvninstall 1m 0s the patch passed
        +1 compile 12m 7s the patch passed
        +1 javac 12m 7s the patch passed
        +1 checkstyle 1m 59s the patch passed
        +1 mvnsite 1m 34s the patch passed
        +1 mvneclipse 0m 45s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 2m 25s the patch passed
        +1 javadoc 1m 17s the patch passed
        -1 unit 8m 12s hadoop-common in the patch failed.
        +1 unit 17m 12s hadoop-yarn-client in the patch passed.
        +1 asflicense 0m 38s The patch does not generate ASF License warnings.
        111m 47s



        Reason Tests
        Failed junit tests hadoop.security.TestKDiag



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue HADOOP-14062
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12852762/HADOOP-14062.003.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux a384c8dd0a96 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 003ae00
        Default Java 1.8.0_121
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11682/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11682/testReport/
        modules C: hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: .
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11682/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Steven Rand Steven Rand added a comment -

        Jian He, it looks like the test failure is unrelated, since other people are seeing it too in HADOOP-14030?

        The patch HADOOP-14062.003.patch is for trunk, and HADOOP-14062-branch-2.8.0.005.patch is for branch-2.8.0. Should I also make a separate patch file for branch-2?

        jianhe Jian He added a comment -

        Attached the same branch-2.8 patch to trigger the Jenkins report.

        jianhe Jian He added a comment -

        Should I also make a separate patch file for branch-2?

        No, not needed.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 16s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        0 mvndep 0m 15s Maven dependency ordering for branch
        +1 mvninstall 9m 1s branch-2.8.0 passed
        +1 compile 7m 2s branch-2.8.0 passed with JDK v1.8.0_121
        +1 compile 7m 24s branch-2.8.0 passed with JDK v1.7.0_121
        +1 checkstyle 1m 7s branch-2.8.0 passed
        +1 mvnsite 1m 26s branch-2.8.0 passed
        +1 mvneclipse 0m 30s branch-2.8.0 passed
        +1 findbugs 2m 18s branch-2.8.0 passed
        +1 javadoc 1m 1s branch-2.8.0 passed with JDK v1.8.0_121
        +1 javadoc 1m 17s branch-2.8.0 passed with JDK v1.7.0_121
        0 mvndep 0m 16s Maven dependency ordering for patch
        +1 mvninstall 1m 3s the patch passed
        +1 compile 5m 48s the patch passed with JDK v1.8.0_121
        +1 javac 5m 48s the patch passed
        +1 compile 6m 51s the patch passed with JDK v1.7.0_121
        +1 javac 6m 51s the patch passed
        +1 checkstyle 1m 9s the patch passed
        +1 mvnsite 1m 27s the patch passed
        +1 mvneclipse 0m 36s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 2m 46s the patch passed
        +1 javadoc 1m 7s the patch passed with JDK v1.8.0_121
        +1 javadoc 1m 22s the patch passed with JDK v1.7.0_121
        +1 unit 9m 8s hadoop-common in the patch passed with JDK v1.7.0_121.
        -1 unit 66m 19s hadoop-yarn-client in the patch failed with JDK v1.7.0_121.
        +1 asflicense 0m 29s The patch does not generate ASF License warnings.
        229m 18s



        Reason Tests
        JDK v1.8.0_121 Failed junit tests hadoop.yarn.client.TestGetGroups
          hadoop.yarn.client.api.impl.TestAMRMProxy
        JDK v1.8.0_121 Timed out junit tests org.apache.hadoop.yarn.client.cli.TestYarnCLI
          org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
          org.apache.hadoop.yarn.client.api.impl.TestYarnClient
          org.apache.hadoop.yarn.client.api.impl.TestNMClient
        JDK v1.7.0_121 Failed junit tests hadoop.yarn.client.TestGetGroups
          hadoop.yarn.client.api.impl.TestAMRMProxy
          hadoop.yarn.client.TestApplicationClientProtocolOnHA
        JDK v1.7.0_121 Timed out junit tests org.apache.hadoop.yarn.client.cli.TestYarnCLI
          org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
          org.apache.hadoop.yarn.client.api.impl.TestYarnClient
          org.apache.hadoop.yarn.client.api.impl.TestNMClient



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:5af2af1
        JIRA Issue HADOOP-14062
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12854059/HADOOP-14062-branch-2.8.0.005.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 274fbf74bdf3 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2.8.0 / 159b8b6
        Default Java 1.7.0_121
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_121 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_121
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11690/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_121.txt
        JDK v1.7.0_121 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11690/testReport/
        modules C: hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: .
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11690/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Steven Rand added a comment -

        Jian He, I can't tell how real the test failures are. I ran the ones mentioned above locally with the same patch for branch-2.8.0 and in most cases they succeeded:

        • I ran TestGetGroups once locally and it succeeded.
        • I ran TestAMRMProxy three times locally. Once it failed, twice it succeeded.
        • I ran TestYarnCLI once locally and it succeeded.
        • I ran TestAMRMClient several times locally both with and without my patch. Every time at least one test failed, but it was inconsistent which test or tests timed out.
        • I ran TestYarnClient once locally and it succeeded.
        • I ran TestNMClient once locally and it succeeded.
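        For reference, individual suites from the hadoop-yarn-client module can be run with the Maven surefire test filter, roughly as follows (a sketch assuming a branch-2.8.0 checkout that has already been built once at the top level; not necessarily the exact invocations used for the runs above):

          # Run a single suite from the yarn-client module
          cd hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client
          mvn test -Dtest=TestAMRMClient
          # Several suites can be listed in one invocation
          mvn test -Dtest=TestGetGroups,TestAMRMProxy,TestNMClient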

        How do you recommend I proceed?

        Jian He added a comment -

        I tried to run the tests locally; the tests below failed both with and without the patch. Is this what you see as well? I guess they're not related to the patch.

          TestYarnClient.testShouldNotRetryForeverForNonNetworkExceptions »  Unexpected ...
          TestYarnClient.testClientStop:149 » YarnRuntime java.net.BindException: Proble...
          TestGetGroups.setUpResourceManager:67 IO ResourceManager failed to start. Fina...
        

        I triggered a new Jenkins run.

        Steven Rand added a comment -

        For me, when I run with the patch on branch-2.8.0, I get:

          TestApplicationClientProtocolOnHA.testGetApplicationReportOnHA:69 »  test time...
          TestApplicationClientProtocolOnHA.testCancelDelegationTokenOnHA:213 »  test ti...
          TestApplicationClientProtocolOnHA.testGetClusterNodesOnHA:104->Object.wait:502->Object.wait:-2 »
        

        And then without the patch:

          TestApplicationClientProtocolOnHA.testCancelDelegationTokenOnHA:213 »  test ti...
          TestApplicationClientProtocolOnHA.testGetQueueInfoOnHA:113 »  test timed out a...
          TestApplicationClientProtocolOnHA.testForceKillApplicationOnHA:189 »  test tim...
          TestApplicationClientProtocolOnHA.testGetContainerReportOnHA:150 »  test timed...
          TestResourceTrackerOnHA.testResourceTrackerOnHA:64 »  test timed out after 150..
        

        But it doesn't seem deterministic – I get different failures on different test runs, both with and without the patch.

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 19s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        0 mvndep 0m 15s Maven dependency ordering for branch
        +1 mvninstall 6m 28s branch-2.8.0 passed
        +1 compile 5m 54s branch-2.8.0 passed with JDK v1.8.0_121
        +1 compile 6m 50s branch-2.8.0 passed with JDK v1.7.0_121
        +1 checkstyle 1m 5s branch-2.8.0 passed
        +1 mvnsite 1m 22s branch-2.8.0 passed
        +1 mvneclipse 0m 31s branch-2.8.0 passed
        +1 findbugs 2m 14s branch-2.8.0 passed
        +1 javadoc 1m 1s branch-2.8.0 passed with JDK v1.8.0_121
        +1 javadoc 1m 17s branch-2.8.0 passed with JDK v1.7.0_121
        0 mvndep 0m 16s Maven dependency ordering for patch
        +1 mvninstall 1m 2s the patch passed
        +1 compile 5m 50s the patch passed with JDK v1.8.0_121
        +1 javac 5m 50s the patch passed
        +1 compile 6m 50s the patch passed with JDK v1.7.0_121
        +1 javac 6m 50s the patch passed
        +1 checkstyle 1m 7s the patch passed
        +1 mvnsite 1m 27s the patch passed
        +1 mvneclipse 0m 36s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 2m 49s the patch passed
        +1 javadoc 1m 7s the patch passed with JDK v1.8.0_121
        +1 javadoc 1m 23s the patch passed with JDK v1.7.0_121
        +1 unit 8m 18s hadoop-common in the patch passed with JDK v1.7.0_121.
        -1 unit 66m 15s hadoop-yarn-client in the patch failed with JDK v1.7.0_121.
        +1 asflicense 0m 28s The patch does not generate ASF License warnings.
        224m 9s



        Reason Tests
        JDK v1.8.0_121 Failed junit tests hadoop.yarn.client.TestGetGroups
          hadoop.yarn.client.api.impl.TestAMRMProxy
        JDK v1.8.0_121 Timed out junit tests org.apache.hadoop.yarn.client.cli.TestYarnCLI
          org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
          org.apache.hadoop.yarn.client.api.impl.TestYarnClient
          org.apache.hadoop.yarn.client.api.impl.TestNMClient
        JDK v1.7.0_121 Failed junit tests hadoop.yarn.client.TestGetGroups
          hadoop.yarn.client.api.impl.TestAMRMProxy
        JDK v1.7.0_121 Timed out junit tests org.apache.hadoop.yarn.client.cli.TestYarnCLI
          org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
          org.apache.hadoop.yarn.client.api.impl.TestYarnClient
          org.apache.hadoop.yarn.client.api.impl.TestNMClient



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:5af2af1
        JIRA Issue HADOOP-14062
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12854059/HADOOP-14062-branch-2.8.0.005.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux e737bbeb6350 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2.8.0 / 159b8b6
        Default Java 1.7.0_121
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_121 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_121
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11699/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_121.txt
        JDK v1.7.0_121 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11699/testReport/
        modules C: hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: .
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11699/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Steven Rand added a comment -

        Jian He, I'm not really sure what to do about this since we can't get the tests to pass, but the failures don't seem related to the patch as far as I can tell. Do you have any suggestions for how to proceed?

        Jian He added a comment -

        Sorry, I forgot about this. I uploaded a dummy patch for branch-2.8.0; if it fails in the same way, we can ignore these failures and get this committed.

        Steven Rand added a comment -

        Looks like Jenkins hasn't tested the dummy patch – do I need to click "Submit Patch" to make it do that?

        Jian He added a comment -

        Yeah, I just did that. Thanks for the reminder.

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 29s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        0 mvndep 0m 17s Maven dependency ordering for branch
        +1 mvninstall 8m 48s branch-2.8.0 passed
        +1 compile 5m 50s branch-2.8.0 passed with JDK v1.8.0_121
        +1 compile 6m 49s branch-2.8.0 passed with JDK v1.7.0_121
        +1 checkstyle 1m 7s branch-2.8.0 passed
        +1 mvnsite 1m 23s branch-2.8.0 passed
        +1 mvneclipse 0m 32s branch-2.8.0 passed
        +1 findbugs 2m 17s branch-2.8.0 passed
        +1 javadoc 1m 1s branch-2.8.0 passed with JDK v1.8.0_121
        +1 javadoc 1m 16s branch-2.8.0 passed with JDK v1.7.0_121
        0 mvndep 0m 16s Maven dependency ordering for patch
        +1 mvninstall 1m 1s the patch passed
        +1 compile 5m 46s the patch passed with JDK v1.8.0_121
        +1 javac 5m 46s the patch passed
        +1 compile 6m 51s the patch passed with JDK v1.7.0_121
        +1 javac 6m 51s the patch passed
        +1 checkstyle 1m 7s the patch passed
        +1 mvnsite 1m 27s the patch passed
        +1 mvneclipse 0m 37s the patch passed
        -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
        +1 findbugs 2m 47s the patch passed
        +1 javadoc 1m 7s the patch passed with JDK v1.8.0_121
        +1 javadoc 1m 23s the patch passed with JDK v1.7.0_121
        +1 unit 8m 19s hadoop-common in the patch passed with JDK v1.7.0_121.
        -1 unit 66m 19s hadoop-yarn-client in the patch failed with JDK v1.7.0_121.
        +1 asflicense 0m 28s The patch does not generate ASF License warnings.
        226m 58s



        Reason Tests
        JDK v1.8.0_121 Failed junit tests hadoop.yarn.client.TestGetGroups
          hadoop.yarn.client.api.impl.TestAMRMProxy
        JDK v1.8.0_121 Timed out junit tests org.apache.hadoop.yarn.client.cli.TestYarnCLI
          org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
          org.apache.hadoop.yarn.client.api.impl.TestYarnClient
          org.apache.hadoop.yarn.client.api.impl.TestNMClient
        JDK v1.7.0_121 Failed junit tests hadoop.yarn.client.TestGetGroups
          hadoop.yarn.client.api.impl.TestAMRMProxy
          hadoop.yarn.client.TestApplicationClientProtocolOnHA
        JDK v1.7.0_121 Timed out junit tests org.apache.hadoop.yarn.client.cli.TestYarnCLI
          org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
          org.apache.hadoop.yarn.client.api.impl.TestYarnClient
          org.apache.hadoop.yarn.client.api.impl.TestNMClient



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:5af2af1
        JIRA Issue HADOOP-14062
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12855696/HADOOP-14062-branch-2.8.0.dummy.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 81fdef4b5671 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2.8.0 / 58fa75c
        Default Java 1.7.0_121
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_121 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_121
        findbugs v3.0.0
        whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/11765/artifact/patchprocess/whitespace-eol.txt
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11765/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_121.txt
        JDK v1.7.0_121 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11765/testReport/
        modules C: hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: .
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11765/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Steven Rand added a comment -

        Looks like tests are also broken with the dummy patch?

        Hudson added a comment -

        FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11373 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11373/)
        HADOOP-14062. ApplicationMasterProtocolPBClientImpl.allocate fails with (jianhe: rev 241c1cc05b71f8b719a85c06e3df930639630726)

        • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
        Jian He added a comment -

        I committed this to trunk, branch-2, branch-2.8, and branch-2.8.0.

        Thanks, Steven Rand, for the consistent contribution!
        Congrats on your first patch in Hadoop!

        Steven Rand added a comment -

        Thanks Jian He for reviewing!

        Manoj Govindassamy added a comment -

        Steven Rand/Jian He,

        Can you please verify whether your changes in TestAMRMClient are breaking the build?

        [ERROR] COMPILATION ERROR : 
        [ERROR] /Users/manoj/work/ups-hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java:[145,5] non-static variable yarnCluster cannot be referenced from a static context
        [ERROR] /Users/manoj/work/ups-hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java:[145,71] non-static variable nodeCount cannot be referenced from a static context
        [ERROR] /Users/manoj/work/ups-hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java:[146,5] non-static variable yarnCluster cannot be referenced from a static context
        ..
        ..
        [ERROR] symbol:   method tearDown()
        [ERROR] location: class org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
        [ERROR] /Users/manoj/work/ups-hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java:[876,5] cannot find symbol
        [ERROR] symbol:   method startApp()
        [ERROR] location: class org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
        [ERROR] /Users/manoj/work/ups-hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java:[881,5] cannot find symbol
        [ERROR] sym
        
        Steven Rand added a comment -

        Yes, it looks like the changes in YARN-6218 conflict with this change. For example, in that commit, the method tearDown was renamed to teardown, so in this patch we call a method that doesn't exist anymore. Want me to make a separate patch to fix this?
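        One way to confirm whether the module still compiles after a rename like that (a sketch assuming a current trunk checkout; the module path is taken from the reports above) is to build just the test sources of hadoop-yarn-client:

          # Compile main and test sources of the yarn-client module, plus the modules it depends on, without running tests
          mvn test-compile -pl hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client -am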

        Jian He added a comment -

        Manoj Govindassamy, thanks for reporting the issue. Sorry for the trouble. I reverted the patch from trunk and branch-2.
        Steven Rand, can you upload a new one?

        Steven Rand added a comment -

        Yes, will do, hopefully by the end of today.

        Hudson added a comment -

        SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11374 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11374/)
        Revert "HADOOP-14062. ApplicationMasterProtocolPBClientImpl.allocate (jianhe: rev 2be8947d12714c49ef7a90de82a351d086b435b6)

        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
        • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
        Steven Rand added a comment -

        Attaching a new patch for trunk.

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 29s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        0 mvndep 1m 48s Maven dependency ordering for branch
        +1 mvninstall 12m 36s trunk passed
        +1 compile 11m 29s trunk passed
        +1 checkstyle 2m 9s trunk passed
        +1 mvnsite 1m 48s trunk passed
        +1 mvneclipse 0m 49s trunk passed
        +1 findbugs 2m 27s trunk passed
        +1 javadoc 1m 22s trunk passed
        0 mvndep 0m 16s Maven dependency ordering for patch
        +1 mvninstall 1m 8s the patch passed
        +1 compile 10m 45s the patch passed
        +1 javac 10m 45s the patch passed
        +1 checkstyle 2m 7s the patch passed
        +1 mvnsite 1m 50s the patch passed
        +1 mvneclipse 0m 53s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 2m 46s the patch passed
        +1 javadoc 1m 28s the patch passed
        -1 unit 8m 28s hadoop-common in the patch failed.
        +1 unit 19m 35s hadoop-yarn-client in the patch passed.
        +1 asflicense 0m 51s The patch does not generate ASF License warnings.
        109m 43s



        Reason Tests
        Failed junit tests hadoop.security.TestKDiag



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue HADOOP-14062
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12856906/HADOOP-14062.004.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 98c3be7a4f94 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 33a38a5
        Default Java 1.8.0_121
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11784/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11784/testReport/
        modules C: hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: .
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11784/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Steven Rand added a comment -

        It looks like HADOOP-14030 strikes again. Should I re-trigger Jenkins to try to get a green build?

        Jian He added a comment -

        Committed the new patch to trunk and branch-2.


          People

          • Assignee: Steven Rand
          • Reporter: Steven Rand
          • Votes: 0
          • Watchers: 12
