  1. Hadoop Common
  2. HADOOP-12559

KMS connection failures should trigger TGT renewal

    Details

    • Target Version/s:
    • Hadoop Flags:
      Reviewed
    1. HADOOP-12559.00.patch
      2 kB
      Zhe Zhang
    2. HADOOP-12559.01.patch
      2 kB
      Zhe Zhang
    3. HADOOP-12559.02.patch
      1.0 kB
      Zhe Zhang
    4. HADOOP-12559.03.patch
      4 kB
      Zhe Zhang
    5. HADOOP-12559.04.patch
      4 kB
      Zhe Zhang
    6. HADOOP-12559.05.patch
      1.0 kB
      Zhe Zhang

      Issue Links

        Activity

        qwertymaniac Harsh J added a comment -

        On the client-end, or the server-end BTW?

        zhz Zhe Zhang added a comment -

        The fix should be on the client side. While working on the patch I realized this probably belongs in HDFS. I'll move the JIRA once the patch is in a more complete shape.

        qwertymaniac Harsh J added a comment -

        The reason I ask is that the NameNode also sees the same error (outside of a DFSClient):

        org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.generateEncryptedKey(KMSClientProvider.java:743)
        	at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
        	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.generateEncryptedDataEncryptionKey(FSNamesystem.java:2530)
        	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2664)
        	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2560)
        	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:585)
        	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:110)
        	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:395)
        	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
        	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
        	at java.security.AccessController.doPrivileged(Native Method)
        	at javax.security.auth.Subject.doAs(Subject.java:422)
        	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
        Caused by: java.util.concurrent.ExecutionException: java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        	at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
        	at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
        	at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
        	at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:132)
        	at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2381)
        	at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2351)
        	at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
        	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
        	at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
        	at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
        	at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
        	at org.apache.hadoop.crypto.key.kms.ValueQueue.getAtMost(ValueQueue.java:266)
        	at org.apache.hadoop.crypto.key.kms.ValueQueue.getNext(ValueQueue.java:226)
        	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.generateEncryptedKey(KMSClientProvider.java:738)
        	... 16 more
        Caused by: java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:488)
        	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.access$100(KMSClientProvider.java:83)
        	at org.apache.hadoop.crypto.key.kms.KMSClientProvider$EncryptedQueueRefiller.fillQueueForKey(KMSClientProvider.java:132)
        	at org.apache.hadoop.crypto.key.kms.ValueQueue$1.load(ValueQueue.java:181)
        	at org.apache.hadoop.crypto.key.kms.ValueQueue$1.load(ValueQueue.java:175)
        	at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
        	at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
        	... 24 more
        Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
        	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:196)
        	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
        	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
        	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
        	at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:482)
        	at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:477)
        	at java.security.AccessController.doPrivileged(Native Method)
        	at javax.security.auth.Subject.doAs(Subject.java:422)
        	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:477)
        	... 30 more
        Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        	at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        	at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
        	at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        	at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
        	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:285)
        	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:261)
        	at java.security.AccessController.doPrivileged(Native Method)
        	at javax.security.auth.Subject.doAs(Subject.java:422)
        	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:261)
        	... 40 more
        

        Would this fix cover this missing tgt issue catch+relogin+retry also?

        zhz Zhe Zhang added a comment -

        Harsh J Thanks for reporting the new issue. Which version was this on? In trunk, KMSClientProvider#generateEncryptedKey doesn't appear to throw an AuthenticationException.

        zhz Zhe Zhang added a comment -

        Initial patch to fix the issue.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
        +1 mvninstall 8m 22s trunk passed
        +1 compile 9m 12s trunk passed with JDK v1.8.0_66
        +1 compile 9m 36s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 18s trunk passed
        +1 mvnsite 1m 11s trunk passed
        +1 mvneclipse 0m 14s trunk passed
        +1 findbugs 1m 57s trunk passed
        +1 javadoc 0m 59s trunk passed with JDK v1.8.0_66
        +1 javadoc 1m 6s trunk passed with JDK v1.7.0_91
        +1 mvninstall 1m 39s the patch passed
        +1 compile 9m 2s the patch passed with JDK v1.8.0_66
        +1 javac 9m 2s the patch passed
        +1 compile 9m 36s the patch passed with JDK v1.7.0_91
        +1 javac 9m 36s the patch passed
        +1 checkstyle 0m 16s the patch passed
        +1 mvnsite 1m 7s the patch passed
        +1 mvneclipse 0m 14s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 2m 6s the patch passed
        +1 javadoc 0m 57s the patch passed with JDK v1.8.0_66
        +1 javadoc 1m 9s the patch passed with JDK v1.7.0_91
        +1 unit 8m 34s hadoop-common in the patch passed with JDK v1.8.0_66.
        +1 unit 8m 35s hadoop-common in the patch passed with JDK v1.7.0_91.
        +1 asflicense 0m 25s Patch does not generate ASF License warnings.
        77m 47s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12776647/HADOOP-12559.00.patch
        JIRA Issue HADOOP-12559
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 95ad45b7bebe 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 50edcb9
        findbugs v3.0.0
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8217/testReport/
        modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
        Max memory used 76MB
        Powered by Apache Yetus http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8217/console

        This message was automatically generated.

        qwertymaniac Harsh J added a comment -

        Zhe Zhang - I've isolated the issue to a problem with the TGT lifetime vs. when the NN renews it, so that trace can be ignored in this JIRA. I'll log a separate JIRA for it and follow up here later. Thanks for taking a look!

        qwertymaniac Harsh J added a comment -

        Although, if it's possible for the patch not to limit the re-login to decrypt calls only, it could also solve the NN TGT expiry issue.

        andrew.wang Andrew Wang added a comment -

        Thanks for working on this, Zhe. I agree with Harsh; it'd be nice to address this issue for all KMS operations. The fix looks pretty straightforward if it's just a matter of calling that check method everywhere.

        One other review note: let's not change the method signatures or formatting unless necessary.

        zhz Zhe Zhang added a comment -

        Thanks Andrew for taking a look!

        I believe adding the check in createConnection should do the trick – at least for the 2 issues we've found so far.

        The 2 changes to method signatures are to remove an unnecessary exception, and to make the line shorter than 80 chars. We can make these changes separately too.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
        +1 mvninstall 8m 5s trunk passed
        +1 compile 9m 1s trunk passed with JDK v1.8.0_66
        +1 compile 9m 30s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 17s trunk passed
        +1 mvnsite 1m 7s trunk passed
        +1 mvneclipse 0m 14s trunk passed
        +1 findbugs 1m 53s trunk passed
        +1 javadoc 0m 57s trunk passed with JDK v1.8.0_66
        +1 javadoc 1m 6s trunk passed with JDK v1.7.0_91
        +1 mvninstall 1m 42s the patch passed
        +1 compile 9m 6s the patch passed with JDK v1.8.0_66
        +1 javac 9m 6s the patch passed
        +1 compile 10m 5s the patch passed with JDK v1.7.0_91
        +1 javac 10m 5s the patch passed
        +1 checkstyle 0m 20s the patch passed
        +1 mvnsite 1m 9s the patch passed
        +1 mvneclipse 0m 17s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 2m 19s the patch passed
        +1 javadoc 1m 7s the patch passed with JDK v1.8.0_66
        +1 javadoc 1m 16s the patch passed with JDK v1.7.0_91
        -1 unit 8m 23s hadoop-common in the patch failed with JDK v1.8.0_66.
        +1 unit 8m 42s hadoop-common in the patch passed with JDK v1.7.0_91.
        +1 asflicense 0m 25s Patch does not generate ASF License warnings.
        78m 10s



        Reason Tests
        JDK v1.8.0_66 Failed junit tests hadoop.security.ssl.TestReloadingX509TrustManager



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12777606/HADOOP-12559.01.patch
        JIRA Issue HADOOP-12559
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 6474287ed593 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 915cd6c
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/8235/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt
        unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/8235/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8235/testReport/
        modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
        Max memory used 75MB
        Powered by Apache Yetus 0.1.0 http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8235/console

        This message was automatically generated.

        atm Aaron T. Myers added a comment -

        +1, patch looks good to me as well.

        Thanks, Zhe.

        zhz Zhe Zhang added a comment -

        Thanks ATM for reviewing. Could you take a look at this updated patch? It calls checkTGTAndReloginFromKeytab in createConnection to address the issue for all TGT-related calls. Somehow my 01 patch was stale and only addressed the issue for decryptEncryptedKey. Thanks!
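        To make the shape of the change concrete, here is a minimal, self-contained sketch of the idea (an illustration only, not the attached patch; the real createConnection opens a SPNEGO/delegation-token authenticated URL, for which a plain URL.openConnection() stands in here):

        import java.io.IOException;
        import java.net.HttpURLConnection;
        import java.net.URL;

        import org.apache.hadoop.security.UserGroupInformation;

        // Illustration of the approach: run the keytab relogin check right before
        // the KMS client opens an authenticated connection, so an expired TGT is
        // refreshed instead of surfacing as "Failed to find any Kerberos tgt".
        final class KmsConnectionSketch {
          static HttpURLConnection openKmsConnection(URL url, String method)
              throws IOException {
            // No-op unless the current user logged in from a keytab and the TGT
            // is expired or close to expiring; otherwise it re-logs in.
            UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod(method);
            return conn;
          }
        }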

        andrew.wang Andrew Wang added a comment -

        Yeah, I was wondering about that before; the newest patch LGTM, +1. Will let ATM chime in too.

        xyao Xiaoyu Yao added a comment -

        Thanks Zhe Zhang for working on this. The stacks I've seen so far are similar to the one Harsh J attached earlier.

        Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
        	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:196)
        	...

        When the JDK does not do the authentication implicitly, KerberosAuthenticator#doSpnegoSequence is called. doSpnegoSequence() assumes that the current default principal is in the Kerberos cache (normally set via kinit). Does the added currentUGI#checkTGTAndReloginFromKeytab() solve the problem by satisfying this assumption? If not, we might just be getting lucky that the JDK does the authentication. I would also suggest adding a unit test to ensure doSpnegoSequence() works correctly with the fix. There are also some pending discussions around this in HADOOP-10850, HADOOP-10453, etc., which help to fully understand the problem.

        Correct me if I'm wrong, but I thought the problem was that the following login inside doSpnegoSequence() did not have the correct keytab for hdfs (NN) in this case.

        AccessControlContext context = AccessController.getContext();
        Subject subject = Subject.getSubject(context);
        if (subject == null
            || (subject.getPrivateCredentials(KerberosKey.class).isEmpty()
                && subject.getPrivateCredentials(KerberosTicket.class).isEmpty())) {
          LOG.debug("No subject in context, logging in");
          subject = new Subject();
          LoginContext login = new LoginContext("", subject,
              null, new KerberosConfiguration());
          login.login();
        }
        
        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 1s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
        +1 mvninstall 7m 50s trunk passed
        +1 compile 8m 44s trunk passed with JDK v1.8.0_66
        +1 compile 9m 57s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 18s trunk passed
        +1 mvnsite 1m 9s trunk passed
        +1 mvneclipse 0m 14s trunk passed
        +1 findbugs 2m 7s trunk passed
        +1 javadoc 1m 1s trunk passed with JDK v1.8.0_66
        +1 javadoc 1m 12s trunk passed with JDK v1.7.0_91
        +1 mvninstall 1m 39s the patch passed
        +1 compile 9m 27s the patch passed with JDK v1.8.0_66
        +1 javac 9m 27s the patch passed
        +1 compile 9m 30s the patch passed with JDK v1.7.0_91
        +1 javac 9m 30s the patch passed
        +1 checkstyle 0m 17s the patch passed
        +1 mvnsite 1m 5s the patch passed
        +1 mvneclipse 0m 14s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 2m 7s the patch passed
        +1 javadoc 0m 57s the patch passed with JDK v1.8.0_66
        +1 javadoc 1m 8s the patch passed with JDK v1.7.0_91
        -1 unit 7m 56s hadoop-common in the patch failed with JDK v1.8.0_66.
        -1 unit 7m 52s hadoop-common in the patch failed with JDK v1.7.0_91.
        +1 asflicense 0m 23s Patch does not generate ASF License warnings.
        76m 12s



        Reason Tests
        JDK v1.8.0_66 Failed junit tests hadoop.fs.TestSymlinkLocalFSFileSystem
        JDK v1.7.0_91 Failed junit tests hadoop.crypto.key.TestValueQueue



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12777838/HADOOP-12559.02.patch
        JIRA Issue HADOOP-12559
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 9808392856b5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 8602692
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/8252/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/8252/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_91.txt
        unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/8252/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HADOOP-Build/8252/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_91.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8252/testReport/
        modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
        Max memory used 75MB
        Powered by Apache Yetus 0.1.0 http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8252/console

        This message was automatically generated.

        zhz Zhe Zhang added a comment -

        Thanks Xiaoyu Yao for the suggestion. I'm attaching a new patch with a unit test to emulate an expired TGT. Without calling checkTGTAndReloginFromKeytab, the getKeys call fails, complaining that the TGT has expired. I tried other KeyProvider calls with the same conclusion.

        The test uses a 6-minute MAX_TICKET_LIFETIME. With a smaller value, KDC initialization fails with the error "start time is later than end time".

        doSpnegoSequence() has an assumption that the current default principal in the Kerberos cache (normally set via kinit). Does the added currentUGI#checkTGTAndReloginFromKeytab() solve the problem by satisfying this assumption?

        This patch actually addresses an orthogonal issue: the current default principal is in the Kerberos cache, but the TGT has expired. Have you seen a case where the TGT has not expired, but doSpnegoSequence still fails? If so we should address that issue separately.
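        For reference, here is a minimal sketch of the expired-TGT test pattern described above. It assumes a secured KMS already started by the surrounding test harness at a placeholder kmsUri; the actual test in the attached patch may differ.

        import java.io.File;
        import java.net.URI;
        import java.util.Properties;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.crypto.key.KeyProvider;
        import org.apache.hadoop.crypto.key.kms.KMSClientProvider;
        import org.apache.hadoop.minikdc.MiniKdc;
        import org.apache.hadoop.security.UserGroupInformation;

        public class TgtExpirySketch {
          public static void main(String[] args) throws Exception {
            File workDir = new File("target/minikdc-sketch");
            workDir.mkdirs();
            File keytab = new File(workDir, "client.keytab");

            // Shorten the ticket lifetime so the TGT expires quickly (value in ms).
            Properties kdcConf = MiniKdc.createConf();
            kdcConf.setProperty(MiniKdc.MAX_TICKET_LIFETIME, "360000"); // 6 minutes
            MiniKdc kdc = new MiniKdc(kdcConf, workDir);
            kdc.start();
            kdc.createPrincipal(keytab, "client");

            // Enable Kerberos for UGI and log in from the keytab.
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab(
                "client@" + kdc.getRealm(), keytab.getAbsolutePath());

            // kmsUri would point at a secured KMS started by the test harness.
            URI kmsUri = URI.create("kms://http@localhost:16000/kms");
            KeyProvider provider = new KMSClientProvider(kmsUri, conf);

            Thread.sleep(6 * 60 * 1000 + 30 * 1000); // wait past TGT expiry

            // Without the relogin check this call fails with an expired-TGT
            // authentication error; with the fix it re-logs in and succeeds.
            provider.getKeys();

            kdc.stop();
          }
        }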

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 23m 14s trunk passed
        +1 compile 36m 6s trunk passed with JDK v1.8.0_66
        +1 compile 25m 42s trunk passed with JDK v1.7.0_91
        +1 checkstyle 1m 0s trunk passed
        +1 mvnsite 4m 3s trunk passed
        +1 mvneclipse 1m 10s trunk passed
        +1 findbugs 5m 51s trunk passed
        +1 javadoc 2m 57s trunk passed with JDK v1.8.0_66
        +1 javadoc 2m 37s trunk passed with JDK v1.7.0_91
        +1 mvninstall 3m 44s the patch passed
        +1 compile 30m 12s the patch passed with JDK v1.8.0_66
        +1 javac 30m 12s the patch passed
        +1 compile 21m 50s the patch passed with JDK v1.7.0_91
        +1 javac 21m 50s the patch passed
        +1 checkstyle 0m 55s the patch passed
        +1 mvnsite 3m 40s the patch passed
        +1 mvneclipse 1m 4s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 6m 34s the patch passed
        +1 javadoc 3m 17s the patch passed with JDK v1.8.0_66
        +1 javadoc 3m 11s the patch passed with JDK v1.7.0_91
        -1 unit 21m 4s hadoop-common in the patch failed with JDK v1.8.0_66.
        +1 unit 9m 57s hadoop-kms in the patch passed with JDK v1.8.0_66.
        -1 unit 20m 4s hadoop-common in the patch failed with JDK v1.7.0_91.
        -1 unit 9m 0s hadoop-kms in the patch failed with JDK v1.7.0_91.
        -1 asflicense 0m 55s Patch generated 1 ASF License warnings.
        241m 48s



        Reason Tests
        JDK v1.8.0_66 Failed junit tests hadoop.security.ssl.TestReloadingX509TrustManager
          hadoop.fs.TestLocalFsFCStatistics
          hadoop.fs.shell.find.TestAnd
          hadoop.io.compress.TestCodecPool
          hadoop.fs.shell.find.TestPrint
          hadoop.fs.shell.find.TestPrint0
          hadoop.test.TestTimedOutTestsListener
          hadoop.fs.shell.find.TestIname
          hadoop.fs.shell.find.TestName
          hadoop.fs.shell.find.TestFind
          hadoop.ipc.TestRPCWaitForProxy
          hadoop.ipc.TestProtoBufRpc
        JDK v1.7.0_91 Failed junit tests hadoop.security.ssl.TestReloadingX509TrustManager
          hadoop.fs.shell.find.TestAnd
          hadoop.io.compress.TestCodec
          hadoop.fs.shell.find.TestPrint
          hadoop.fs.shell.find.TestPrint0
          hadoop.test.TestTimedOutTestsListener
          hadoop.fs.shell.find.TestIname
          hadoop.fs.shell.find.TestName
          hadoop.fs.shell.find.TestFind
          hadoop.security.TestShellBasedIdMapping
          hadoop.ipc.TestRPCWaitForProxy
          hadoop.ipc.TestProtoBufRpc
          hadoop.crypto.key.kms.server.TestKMS
        JDK v1.7.0_91 Timed out junit tests org.apache.hadoop.fs.TestFilterFileSystem



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12778323/HADOOP-12559.03.patch
        JIRA Issue HADOOP-12559
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux e5703555dc34 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 4e7d32c
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/8265/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/8265/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_91.txt
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/8265/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-kms-jdk1.7.0_91.txt
        unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/8265/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HADOOP-Build/8265/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_91.txt https://builds.apache.org/job/PreCommit-HADOOP-Build/8265/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-kms-jdk1.7.0_91.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8265/testReport/
        asflicense https://builds.apache.org/job/PreCommit-HADOOP-Build/8265/artifact/patchprocess/patch-asflicense-problems.txt
        modules C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project
        Max memory used 75MB
        Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8265/console

        This message was automatically generated.

        zhz Zhe Zhang added a comment -

        Updating the patch to avoid changing the ticket lifetime for the entire TestKMS class.

        I'm still trying to figure out how to set a timeout smaller than 6 minutes. Any suggestions are very welcome.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 8m 15s trunk passed
        +1 compile 9m 14s trunk passed with JDK v1.8.0_66
        +1 compile 9m 53s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 23s trunk passed
        +1 mvnsite 1m 30s trunk passed
        +1 mvneclipse 0m 28s trunk passed
        +1 findbugs 2m 28s trunk passed
        +1 javadoc 1m 9s trunk passed with JDK v1.8.0_66
        +1 javadoc 1m 23s trunk passed with JDK v1.7.0_91
        +1 mvninstall 2m 3s the patch passed
        +1 compile 8m 45s the patch passed with JDK v1.8.0_66
        +1 javac 8m 45s the patch passed
        +1 compile 9m 45s the patch passed with JDK v1.7.0_91
        +1 javac 9m 45s the patch passed
        +1 checkstyle 0m 21s the patch passed
        +1 mvnsite 1m 32s the patch passed
        +1 mvneclipse 0m 27s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 2m 49s the patch passed
        +1 javadoc 1m 13s the patch passed with JDK v1.8.0_66
        +1 javadoc 1m 23s the patch passed with JDK v1.7.0_91
        -1 unit 7m 16s hadoop-common in the patch failed with JDK v1.8.0_66.
        +1 unit 8m 24s hadoop-kms in the patch passed with JDK v1.8.0_66.
        +1 unit 7m 52s hadoop-common in the patch passed with JDK v1.7.0_91.
        +1 unit 8m 27s hadoop-kms in the patch passed with JDK v1.7.0_91.
        +1 asflicense 0m 23s Patch does not generate ASF License warnings.
        96m 41s



        Reason Tests
        JDK v1.8.0_66 Failed junit tests hadoop.ipc.TestIPC



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12778563/HADOOP-12559.04.patch
        JIRA Issue HADOOP-12559
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 45181d6c4345 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / e63388f
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/8278/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt
        unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/8278/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8278/testReport/
        modules C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project
        Max memory used 76MB
        Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8278/console

        This message was automatically generated.

        xyao Xiaoyu Yao added a comment -

        Thanks Zhe Zhang for updating the patch with the additional changes. I agree with your analysis that this patch handles the case where the current user is authenticated by KERBEROS with its Kerberos principal available in the keytab but not in the TGT cache (not logged in, or expired). However, I think the currentUgi below should be actualUgi to handle the proxy user case.

        currentUgi.checkTGTAndReloginFromKeytab();
        

        The original comment I made was about a different use case, where the current user is authenticated by TOKEN, e.g., a user token passed from distcp mappers to an HDFS datanode when using webhdfs + KMS. When the DN talks to KMS with the user token, it won't be able to do SPNEGO-based authentication. The additional UGI#checkTGTAndReloginFromKeytab in KMSClientProvider will be a no-op in this case, as the token-based user won't have its Kerberos principal in the local keytab or TGT cache, and authentication fails later in doSpnego with a similar stack trace. I will open a separate JIRA for that.
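
        For illustration only, a minimal sketch of what the actualUgi suggestion could look like around the relogin call (how actualUgi is derived here, by falling back to the real user behind a proxy user, is an assumption for the example, not the committed patch):

        // Hypothetical sketch: relogin on the UGI that actually owns the Kerberos credentials.
        UserGroupInformation currentUgi = UserGroupInformation.getCurrentUser();
        UserGroupInformation actualUgi =
            (currentUgi.getAuthenticationMethod() ==
                UserGroupInformation.AuthenticationMethod.PROXY)
                ? currentUgi.getRealUser()  // proxy case: credentials live on the real user
                : currentUgi;               // plain Kerberos login
        // No-op for users without a keytab login (e.g. TOKEN-authenticated users).
        actualUgi.checkTGTAndReloginFromKeytab();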

        Regarding simulating a Kerberos ticket timeout, I can do that with 'kinit -l' on an MIT KDC as shown below. The issue seems to be a limitation of org.apache.directory.server.kerberos.kdc.KdcServer used by MiniKdc. If there is no obvious solution for that, I'm fine without a unit test, as long as we comment on this JIRA about the validation that has been done before commit.

        [ambari-qa@c6402 vagrant]$ kinit -l 1m -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa-hdp64@EXAMPLE.COM
        [ambari-qa@c6402 vagrant]$ klist
        Ticket cache: FILE:/tmp/krb5cc_1001
        Default principal: ambari-qa-hdp64@EXAMPLE.COM
        
        Valid starting     Expires            Service principal
        12/22/15 08:41:04  12/22/15 08:42:04  krbtgt/EXAMPLE.COM@EXAMPLE.COM
        	renew until 12/22/15 08:41:04
        
        zhz Zhe Zhang added a comment -

        Thanks for the helpful discussion, Xiaoyu. I don't think it's easy to bypass the KDC limitation and efficiently emulate a short TGT lifetime. Following the suggestion, I have removed the unit test in the v05 patch. It's a good catch that we should use actualUgi when renewing the TGT.

        I've verified with the following test code (in the context of TestKMS):

          @Test
          public void testTGTRenewal() throws Exception {
            tearDownMiniKdc();
            Properties kdcConf = MiniKdc.createConf();
            // Shorten the max ticket lifetime (in ms) so the TGT expires during the test.
            kdcConf.setProperty(MiniKdc.MAX_TICKET_LIFETIME, "360000");
            setUpMiniKdc(kdcConf);
        
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            final File testDir = getTestDir();
            conf = createBaseKMSConf(testDir);
            conf.set("hadoop.kms.authentication.type", "kerberos");
            conf.set("hadoop.kms.authentication.kerberos.keytab",
                keytab.getAbsolutePath());
            conf.set("hadoop.kms.authentication.kerberos.principal", "HTTP/localhost");
            conf.set("hadoop.kms.authentication.kerberos.name.rules", "DEFAULT");
        
            final String keyA = "key_a";
            final String keyD = "key_d";
            conf.set(KeyAuthorizationKeyProvider.KEY_ACL + keyA + ".ALL", "*");
            conf.set(KeyAuthorizationKeyProvider.KEY_ACL + keyD + ".ALL", "*");
        
            writeConf(testDir, conf);
        
            runServer(null, null, testDir, new KMSCallable<Void>() {
              @Override
              public Void call() throws Exception {
                final Configuration conf = new Configuration();
                conf.setInt(KeyProvider.DEFAULT_BITLENGTH_NAME, 64);
                final URI uri = createKMSUri(getKMSUrl());
                // Log in from the keytab so the client can re-login once the TGT expires.
                UserGroupInformation.
                    loginUserFromKeytab("client", keytab.getAbsolutePath());
                try {
                  KeyProvider kp = createProvider(uri, conf);
                  // Sleep past the max ticket lifetime so the cached TGT expires.
                  Thread.sleep(360000);
                  kp.getKeys();
                } catch (Exception ex) {
                  String errMsg = ex.getMessage();
                  System.out.println(errMsg);
                  if (errMsg.contains("Failed to find any Kerberos tgt")) {
                    Assert.fail("TGT expired");
                  }
                }
        
                return null;
              }
            });
          }
        

        The test passes with the patch, but fails without it, with the same complaint that Harsh reported above:

        org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        
        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
        +1 mvninstall 7m 37s trunk passed
        +1 compile 8m 2s trunk passed with JDK v1.8.0_66
        +1 compile 8m 43s trunk passed with JDK v1.7.0_91
        +1 checkstyle 0m 18s trunk passed
        +1 mvnsite 1m 0s trunk passed
        +1 mvneclipse 0m 15s trunk passed
        +1 findbugs 1m 45s trunk passed
        +1 javadoc 0m 51s trunk passed with JDK v1.8.0_66
        +1 javadoc 1m 1s trunk passed with JDK v1.7.0_91
        +1 mvninstall 1m 45s the patch passed
        +1 compile 7m 49s the patch passed with JDK v1.8.0_66
        +1 javac 7m 49s the patch passed
        +1 compile 8m 44s the patch passed with JDK v1.7.0_91
        +1 javac 8m 44s the patch passed
        +1 checkstyle 0m 14s the patch passed
        +1 mvnsite 1m 1s the patch passed
        +1 mvneclipse 0m 13s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 1m 55s the patch passed
        +1 javadoc 0m 52s the patch passed with JDK v1.8.0_66
        +1 javadoc 1m 3s the patch passed with JDK v1.7.0_91
        +1 unit 6m 41s hadoop-common in the patch passed with JDK v1.8.0_66.
        +1 unit 6m 58s hadoop-common in the patch passed with JDK v1.7.0_91.
        +1 asflicense 0m 23s Patch does not generate ASF License warnings.
        68m 15s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:0ca8df7
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12779152/HADOOP-12559.05.patch
        JIRA Issue HADOOP-12559
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 1e3fca6a225c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / df83230
        findbugs v3.0.0
        JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8292/testReport/
        modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
        Max memory used 76MB
        Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8292/console

        This message was automatically generated.

        xyao Xiaoyu Yao added a comment -

        Thanks Zhe Zhang for updating the patch with comments on the testing details. +1 for the v05 patch; I will commit it shortly.

        xyao Xiaoyu Yao added a comment -

        Thanks Zhe Zhang for the contribution and everyone for the reviews. I've committed the fix to trunk, branch-2, and branch-2.8.

        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-trunk-Commit #9028 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9028/)
        HADOOP-12559. KMS connection failures should trigger TGT renewal. (xyao: rev 993311e547e6dd7757025d5ffc285019bd4fc1f6)

        • hadoop-common-project/hadoop-common/CHANGES.txt
        • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
        zhz Zhe Zhang added a comment -

        Thx Xiaoyu, Harsh, Andrew, and ATM for the helpful reviews!

        busbey Sean Busbey added a comment -

        This seems like a pretty low-risk fix. Could we get this in the 2.7 and 2.6 lines? (presuming it impacts them as well)

        qwertymaniac Harsh J added a comment -

        (BTW, we'll also need HADOOP-12682 for tests if backporting.)

        zhz Zhe Zhang added a comment -

        Yes, I think it's a good idea to backport both to 2.7/2.6. I'll work on it today.

        zhz Zhe Zhang added a comment -

        Backported to 2.7 and 2.6. Working on the test JIRA now.

        zhz Zhe Zhang added a comment -

        Just backported HADOOP-12682 to branch-2.7 and branch-2.6 as well.

        busbey Sean Busbey added a comment -

        thanks!

        vinodkv Vinod Kumar Vavilapalli added a comment -

        Closing the JIRA as part of 2.7.3 release.


          People

          • Assignee: zhz Zhe Zhang
          • Reporter: zhz Zhe Zhang
          • Votes: 0
          • Watchers: 18
