Hadoop Common / HADOOP-13988

KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.8.0, 2.7.3
    • Fix Version/s: 2.9.0, 3.0.0-alpha4
    • Component/s: common, kms
    • Labels:
      None
    • Environment:

      HDP 2.5.3.0

      WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes

    • Hadoop Flags:
      Reviewed

      Description

      After upgrading to HDP 2.5.3.0, we noticed that not all of the KMSClientProvider issues had been resolved. We put a test build together and applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not solve the issue with requests coming from WebHDFS through Knox to a TDE zone.

      So we added some debug logging to our build and determined that what is effectively happening here is a double-proxy situation, which does not seem to work. We therefore propose the following fix in the getActualUgi method:

           }
           // Use current user by default
           UserGroupInformation actualUgi = currentUgi;
           if (currentUgi.getRealUser() != null) {
             // Use real user for proxy user
             if (LOG.isDebugEnabled()) {
               LOG.debug("using RealUser for proxyUser");
             }
             actualUgi = currentUgi.getRealUser();
             if (getDoAsUser() != null) {
               if (LOG.isDebugEnabled()) {
                 LOG.debug("doAsUser exists");
                 LOG.debug("currentUGI realUser shortName: {}",
                     currentUgi.getRealUser().getShortUserName());
                 LOG.debug("processUGI loginUser shortName: {}",
                     UserGroupInformation.getLoginUser().getShortUserName());
               }
               // Compare user names with equals(), not ==
               if (!currentUgi.getRealUser().getShortUserName().equals(
                   UserGroupInformation.getLoginUser().getShortUserName())) {
                 if (LOG.isDebugEnabled()) {
                   LOG.debug("currentUGI.realUser does not match UGI.processUser");
                 }
                 actualUgi = UserGroupInformation.getLoginUser();
                 if (LOG.isDebugEnabled()) {
                   LOG.debug("LoginUser for Proxy: {}", actualUgi.getShortUserName());
                 }
               }
             }
           } else if (!currentUgiContainsKmsDt() &&
               !currentUgi.hasKerberosCredentials()) {
             // Use login user for user that does not have either
             // Kerberos credential or KMS delegation token for KMS operations
             if (LOG.isDebugEnabled()) {
               LOG.debug("using loginUser no KMS Delegation Token no Kerberos Credentials");
             }
             actualUgi = UserGroupInformation.getLoginUser();
           }
           return actualUgi;
         }
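      For clarity, the decision flow of the proposed getActualUgi can be modeled as a self-contained function over the relevant conditions. This is an illustrative plain-Java sketch, not the Hadoop UserGroupInformation API; "current", "real", and "login" stand in for the corresponding UGIs.

```java
// Plain-Java model of the proposed getActualUgi decision flow.
public class UgiChoice {
    public static String chooseUgi(boolean hasRealUser,
                                   boolean hasDoAsUser,
                                   boolean realUserIsLoginUser,
                                   boolean hasKmsDt,
                                   boolean hasKerberosCreds) {
        String actual = "current";          // use current user by default
        if (hasRealUser) {
            actual = "real";                // use real user for proxy user
            if (hasDoAsUser && !realUserIsLoginUser) {
                // Double-proxy case (e.g. WebHDFS -> Knox -> DataNode):
                // fall back to the process login user.
                actual = "login";
            }
        } else if (!hasKmsDt && !hasKerberosCreds) {
            // No credentials usable for KMS operations.
            actual = "login";
        }
        return actual;
    }
}
```

      The first case models the double-proxy situation described above; the last models the existing no-credentials fallback.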
      
      
      1. HADOOP-13988.patch
        2 kB
        Greg Senia
      2. HADOOP-13988.patch
        3 kB
        Greg Senia
      3. HADOOP-13988.01.patch
        2 kB
        Xiaoyu Yao
      4. HADOOP-13988.02.patch
        2 kB
        Xiaoyu Yao
      5. HADOOP-13988.03.patch
        3 kB
        Xiaoyu Yao

        Issue Links

          Activity

          xyao Xiaoyu Yao added a comment -

          Open HADOOP-14029 to fix the non-secure proxy use case and resolve this one.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          -1 patch 0m 6s HADOOP-13988 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



          Subsystem Report/Notes
          JIRA Issue HADOOP-13988
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12849613/HADOOP-13988.03.patch
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11520/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xyao Xiaoyu Yao added a comment -

          The failure below seems to be caused by a Kerberos/DNS lookup issue, which is not related to this change.

          java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Cannot get a KDC reply)]; Host Details : local host is: "bcf70b846b20/172.17.0.2"; destination host is: "localhost":56245;
          
          xyao Xiaoyu Yao added a comment -

          Attaching a new patch that fixes the non-secure proxy-user case that was caught by the HDFS unit tests. It also fixes a unit-test bug in TestKMS. Please review, thanks!

          xyao Xiaoyu Yao added a comment - edited

          Greg Senia, the unit test failure seems different. Xiao Chen, it is caused by the proxy user in the non-secure case.
          We will need to check whether security is enabled before checking the Kerberos credential/DT, as below.

          --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          @@ -1097,7 +1097,8 @@ private UserGroupInformation getActualUgi() throws IOException {
                 actualUgi = currentUgi.getRealUser();
               }
           
          -    if (!containsKmsDt(actualUgi) &&
          +    if (UserGroupInformation.isSecurityEnabled() &&
          +        !containsKmsDt(actualUgi) &&
                   !actualUgi.hasKerberosCredentials()) {
                 // Use login user for user that does not have either
                 // Kerberos credential or KMS delegation token for KMS operations
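          The effect of this guard can be modeled the same way: without the check, a non-secure proxy user (no Kerberos credentials, no KMS-DT) would be silently replaced by the login user; with it, the real user is kept. A self-contained sketch with illustrative names, not the Hadoop API:

```java
// Models the guarded fallback: only switch to the login user when
// security is enabled and the chosen UGI has neither a KMS delegation
// token nor Kerberos credentials.
public class SecureFallback {
    public static String chooseUgi(boolean securityEnabled,
                                   boolean hasRealUser,
                                   boolean hasKmsDt,
                                   boolean hasKerberosCreds) {
        String actual = hasRealUser ? "real" : "current";
        if (securityEnabled && !hasKmsDt && !hasKerberosCreds) {
            actual = "login";
        }
        return actual;
    }
}
```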
          
          xiaochen Xiao Chen added a comment -

          Hm, what I saw on https://builds.apache.org/job/PreCommit-HDFS-Build/18275/testReport/org.apache.hadoop.hdfs/TestAclsEndToEnd/testGoodWithWhitelistWithoutBlacklist/ is something like this:

          2017-01-26 20:32:18,448 ERROR hdfs.TestAclsEndToEnd (TestAclsEndToEnd.java:run(1644)) - IOException thrown during doAs() operation
          java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, URL: http://localhost:36605/kms/v1/keys?doAs=keyadmin&user.name=keyadmin, status: 403, message: Forbidden
          	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:551)
          	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:732)
          	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:742)
          	at org.apache.hadoop.crypto.key.KeyProviderExtension.createKey(KeyProviderExtension.java:74)
          	at org.apache.hadoop.hdfs.DFSTestUtil.createKey(DFSTestUtil.java:1634)
          	at org.apache.hadoop.hdfs.DFSTestUtil.createKey(DFSTestUtil.java:1615)
          	at org.apache.hadoop.hdfs.TestAclsEndToEnd$1.execute(TestAclsEndToEnd.java:1532)
          	at org.apache.hadoop.hdfs.TestAclsEndToEnd$6.run(TestAclsEndToEnd.java:1640)
          	at org.apache.hadoop.hdfs.TestAclsEndToEnd$6.run(TestAclsEndToEnd.java:1636)
          	at java.security.AccessController.doPrivileged(Native Method)
          	at javax.security.auth.Subject.doAs(Subject.java:356)
          	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
          	at org.apache.hadoop.hdfs.TestAclsEndToEnd.doUserOp(TestAclsEndToEnd.java:1636)
          	at org.apache.hadoop.hdfs.TestAclsEndToEnd.createKey(TestAclsEndToEnd.java:1528)
          	at org.apache.hadoop.hdfs.TestAclsEndToEnd.doFullAclTest(TestAclsEndToEnd.java:415)
          	at org.apache.hadoop.hdfs.TestAclsEndToEnd.testGoodWithWhitelistWithoutBlacklist(TestAclsEndToEnd.java:369)
          
          gss2002 Greg Senia added a comment -

          Is this the error we are talking about:

          Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Cannot get a KDC reply)]; Host Details : local host is: "bcf70b846b20/172.17.0.2"; destination host is: "localhost":56245;
          Stack Trace

          java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Cannot get a KDC reply)]; Host Details : local host is: "bcf70b846b20/172.17.0.2"; destination host is: "localhost":56245;

          xyao Xiaoyu Yao added a comment -

          Thanks Xiao Chen for the heads up. Looking at it...

          xiaochen Xiao Chen added a comment -

          Thanks all for finding and fixing another KMSCP UGI issue...

          Git bisected branch-2 TestAclsEndToEnd failures to this jira. See https://builds.apache.org/job/PreCommit-HDFS-Build/18275/testReport/

          Both trunk and branch-2 are failing. Could someone take a look? Thanks.

          gss2002 Greg Senia added a comment -

          Xiaoyu Yao and Larry McCay thanks for all the help with this issue. I appreciate you guys digging in and helping get the right fix built.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11174 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11174/)
          HADOOP-13988. KMSClientProvider does not work with WebHDFS and Apache (xyao: rev a46933e8ce4c1715c11e3e3283bf0e8c2b53b837)

          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          xyao Xiaoyu Yao added a comment -

          Thanks Greg Senia for the contribution and all for the discussion/reviews. I've committed the fix to trunk and branch-2.

          lmccay Larry McCay added a comment -

          Okay - I understand now.
          Even though the Knox use case doesn't present a KMS delegation token as part of the request, other uses of KMSClientProvider will.
          Use cases such as YARN acquiring the KMS-DT to provide for use with an MR job need to be accommodated.

          Here is my +1.

          Thanks, Xiaoyu Yao!

          xyao Xiaoyu Yao added a comment -

          Thanks Larry McCay for the detail on the Knox use case. The Knox end user accesses WebHDFS via a proxy user created from a token user, knox (with an HDFS-DT).

          Knox doesn't use UGI at all.

          On the DN side, WebHDFS creates a UGI based on the deserialized cookie, which becomes the currentUGI. However, that UGI has neither a Kerberos credential nor a KMS delegation token. To access KMS for encrypted files, the right UGI is the DN's loginUser (with a local Kerberos credential), which fits the logic below in the latest patch.

           if (!containsKmsDt(actualUgi) && !actualUgi.hasKerberosCredentials()) {
          ...
           actualUgi = UserGroupInformation.getLoginUser();
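          The containsKmsDt check above boils down to scanning the UGI's credentials for a token of the KMS kind; the debug log later in this thread shows a UGI holding only HDFS_DELEGATION_TOKENs, which is why the fallback fires. A stand-alone sketch of that scan (the classes here are simplified stand-ins; the "kms-dt" kind string is an assumption based on KMSClientProvider.TOKEN_KIND in Hadoop 2.x):

```java
import java.util.List;

// Simplified stand-in for scanning a UGI's tokens for a KMS delegation
// token; the real code iterates ugi.getTokens() and compares Token.getKind().
public class KmsDtScan {
    public static boolean containsKmsDt(List<String> tokenKinds) {
        // "kms-dt" is assumed to be the KMS delegation token kind.
        return tokenKinds.contains("kms-dt");
    }
}
```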
          
          lmccay@apache.org larry mccay added a comment -

          Knox doesn't use UGI at all.
          It dispatches requests to WebHDFS via HttpClient.
          All interactions are either with a SPNEGO authentication to WebHDFS or via the hadoop.auth cookie/delegation token.
          It never acquires delegation tokens directly - only what is returned via WebHDFS calls.

          xyao Xiaoyu Yao added a comment -

          Thanks Jitendra Nath Pandey and Larry McCay for the review.

          Knox never interacts directly with KMS and neither does the Knox enduser.

          The Hadoop proxy-user mechanism does not recommend using a delegation token to proxy another user. Oozie, for example, uses Kerberos to proxy its end user. That's also the expected usage from HADOOP-13749.

          Knox can either use a UGI with Kerberos to create a proxy user for its end user, or impersonate the end user to get a KMS-DT and add it to the end user's UGI if the file accessed is in an encryption zone.
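          The proxy-user chain described here (the debug log later in this thread shows "UGI: gss2002 (auth:PROXY) via knox") can be modeled minimally: a proxy UGI wraps an effective user around a real (authenticating) user. These are simplified stand-in classes, not the Hadoop UserGroupInformation API:

```java
// Minimal stand-in for a proxy-user UGI chain, as produced by something
// like UserGroupInformation.createProxyUser in real Hadoop code.
public class MiniUgi {
    private final String shortName;
    private final MiniUgi realUser;   // null for a non-proxy UGI

    public MiniUgi(String shortName, MiniUgi realUser) {
        this.shortName = shortName;
        this.realUser = realUser;
    }
    public String getShortUserName() { return shortName; }
    public MiniUgi getRealUser()     { return realUser; }

    // Effective user "user" acting via the authenticating realUgi.
    public static MiniUgi createProxyUser(String user, MiniUgi realUgi) {
        return new MiniUgi(user, realUgi);
    }
}
```

          getActualUgi walks from the effective user to the real user (or login user) because only the latter holds credentials that KMS can authenticate.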

          lmccay Larry McCay added a comment -

          This patch does look good to me.
          However, I would like to better understand the difference between the two.
          I understand that it is related to when a request already has a KMS delegation token instead of having to authenticate with Kerberos.

          My question is how does the request coming from Knox ever get the KMS-DT?
          Knox never interacts directly with KMS and neither does the Knox enduser.

          This is important in order to understand how to provide such improvements and to review such patches.

          jnp Jitendra Nath Pandey added a comment -

          +1. The latest patch looks good.

          gss2002 Greg Senia added a comment -

          Xiaoyu Yao yes, same test case, using our data-ingest framework that makes a curl call to verify clean data stored in a TDE zone.

          xyao Xiaoyu Yao added a comment -

          Thanks Greg Senia for the detailed results. Are they from the similar workload you described before?

          "We have a data ingest framework that runs continuously in this environment and has run with no issues for the last week since applying the fixes and Knox to WebHDFS at a TDE file is returned correctly."

          gss2002 Greg Senia added a comment -

          Xiaoyu Yao and Larry McCay, here is the log output from the 02 patch:

          2017-01-23 10:29:17,424 DEBUG security.UserGroupInformation (UserGroupInformation.java:doAs(1744)) - PrivilegedActionException as:knox (auth:TOKEN) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
          2017-01-23 10:29:17,426 DEBUG security.UserGroupInformation (UserGroupInformation.java:logPrivilegedAction(1767)) - PrivilegedAction as:knox (auth:TOKEN) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:758)
          2017-01-23 10:29:17,437 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1774)) - UGI: gss2002 (auth:PROXY) via knox (auth:TOKEN)
          2017-01-23 10:29:17,437 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1776)) - +RealUGI: knox (auth:TOKEN)
          2017-01-23 10:29:17,437 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1778)) - +LoginUGI: dn/ha20t5001dn.tech.hdp.example.com@TECH.HDP.EXAMPLE.COM (auth:KERBEROS)
          2017-01-23 10:29:17,438 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1780)) - +UGI token:Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:tech, Ident: (HDFS_DELEGATION_TOKEN token 14676 for gss2002)
          2017-01-23 10:29:17,438 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1780)) - +UGI token:Kind: HDFS_DELEGATION_TOKEN, Service: 10.70.33.7:8020, Ident: (HDFS_DELEGATION_TOKEN token 14676 for gss2002)
          2017-01-23 10:29:17,438 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1780)) - +UGI token:Kind: HDFS_DELEGATION_TOKEN, Service: 10.70.33.6:8020, Ident: (HDFS_DELEGATION_TOKEN token 14676 for gss2002)
          2017-01-23 10:29:17,438 DEBUG kms.KMSClientProvider (KMSClientProvider.java:getActualUgi(1061)) - using loginUser no KMS Delegation Token no Kerberos Credentials
          2017-01-23 10:29:17,438 DEBUG security.UserGroupInformation (UserGroupInformation.java:logPrivilegedAction(1767)) - PrivilegedAction as:dn/ha20t5001dn.tech.hdp.example.com@TECH.HDP.EXAMPLE.COM (auth:KERBEROS) from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:524)
          2017-01-23 10:29:17,439 DEBUG security.UserGroupInformation (UserGroupInformation.java:getTGT(898)) - Found tgt Ticket (hex) =
          2017-01-23 10:29:17,439 DEBUG security.UserGroupInformation (UserGroupInformation.java:getTGT(898)) - Found tgt Ticket (hex) =

          Client Principal = dn/ha20t5001dn.tech.hdp.example.com@TECH.HDP.EXAMPLE.COM
          Server Principal = krbtgt/TECH.HDP.EXAMPLE.COM@TECH.HDP.EXAMPLE.COM
          Session Key = EncryptionKey: keyType=18 keyBytes (hex dump)=
          0000: 3E 58 9C C0 36 40 0F F2 F1 BB E7 A8 4B C7 EC 89 >X..6@......K...
          0010: 96 32 E3 28 B1 47 36 D0 99 DE C9 5E 28 7F 8F 48 .2.(.G6....^(..H

          Forwardable Ticket true
          Forwarded Ticket false
          Proxiable Ticket false
          Proxy Ticket false
          Postdated Ticket false
          Renewable Ticket false
          Initial Ticket false
          Auth Time = Mon Jan 23 09:11:28 EST 2017
          Start Time = Mon Jan 23 09:11:28 EST 2017
          End Time = Mon Jan 23 19:11:28 EST 2017
          Renew Till = null
          Client Addresses Null
          2017-01-23 10:29:17,555 INFO DataNode.clienttrace (DataXceiver.java:requestShortCircuitShm(468)) - cliID: DFSClient_NONMAPREDUCE_-1687232963_147, src: 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: 7de3b3475df3d2cee241e9a91ee83271, srvID: 0bb43433-8195-44fa-a76b-333e779542bf, success: true
          2017-01-23 10:29:17,557 INFO DataNode.clienttrace (DataXceiver.java:requestShortCircuitFds(369)) - src: 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_FDS, blockid: 1073781194, srvID: 0bb43433-8195-44fa-a76b-333e779542bf, success: true

          gss2002 Greg Senia added a comment -

          Xiaoyu Yao, the second test fix seems to be working. I will leave it in my environment for a few days to make sure the fix still works as Kerberos tickets expire.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 17m 8s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 13m 30s trunk passed
          +1 compile 10m 39s trunk passed
          +1 checkstyle 0m 30s trunk passed
          +1 mvnsite 1m 10s trunk passed
          +1 mvneclipse 0m 17s trunk passed
          +1 findbugs 1m 42s trunk passed
          +1 javadoc 0m 51s trunk passed
          +1 mvninstall 0m 41s the patch passed
          +1 compile 10m 47s the patch passed
          +1 javac 10m 47s the patch passed
          +1 checkstyle 0m 29s the patch passed
          +1 mvnsite 1m 8s the patch passed
          +1 mvneclipse 0m 17s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 49s the patch passed
          +1 javadoc 0m 48s the patch passed
          +1 unit 8m 43s hadoop-common in the patch passed.
          +1 asflicense 0m 37s The patch does not generate ASF License warnings.
          73m 8s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13988
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12848656/HADOOP-13988.02.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 624e6cd253a4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9bab85c
          Default Java 1.8.0_121
          findbugs v3.0.0
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11486/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11486/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 17m 3s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 14m 31s trunk passed
          +1 compile 9m 54s trunk passed
          +1 checkstyle 0m 31s trunk passed
          +1 mvnsite 1m 3s trunk passed
          +1 mvneclipse 0m 18s trunk passed
          +1 findbugs 1m 26s trunk passed
          +1 javadoc 0m 47s trunk passed
          +1 mvninstall 0m 36s the patch passed
          +1 compile 9m 14s the patch passed
          +1 javac 9m 14s the patch passed
          +1 checkstyle 0m 29s the patch passed
          +1 mvnsite 0m 58s the patch passed
          +1 mvneclipse 0m 18s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 32s the patch passed
          +1 javadoc 0m 47s the patch passed
          +1 unit 7m 41s hadoop-common in the patch passed.
          +1 asflicense 0m 34s The patch does not generate ASF License warnings.
          69m 34s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13988
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12848651/HADOOP-13988.01.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux e32051fa79b9 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9bab85c
          Default Java 1.8.0_121
          findbugs v3.0.0
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11483/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11483/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xyao Xiaoyu Yao added a comment -

          Minor update to use UGI#getLoginUser() and correct comments.

          xyao Xiaoyu Yao added a comment -

          The comment is no longer valid; we should remove it in the next patch.

           // Add existing credentials from current UGI, since provider is cached.
          
          xyao Xiaoyu Yao added a comment -

          Thanks Greg Senia for posting the patch and detailed results. It helps us better understand the problem here.

          The issue is that the real user of the proxy user (knox in this case) is a token user who has neither a KMS delegation token nor Kerberos credentials on the local DataNode. A cleaner fix would be to merge this with the logic below so that we can use the KMS delegation token or Kerberos credentials if the real user has one.

          Please review and try the attached v01 patch on your cluster and let us know the result. Thanks!
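          The credential-selection logic described above can be sketched as follows. This is a minimal, self-contained model, not the actual KMSClientProvider code: the Ugi stub class and its boolean flags are illustrative assumptions standing in for Hadoop's UserGroupInformation and its token/Kerberos checks.

```java
// Stub modeling the parts of UserGroupInformation this sketch needs.
class Ugi {
    final String shortName;
    final Ugi realUser;          // non-null for a proxy UGI
    final boolean hasKmsToken;   // holds a KMS delegation token
    final boolean hasKerberos;   // holds Kerberos credentials

    Ugi(String shortName, Ugi realUser, boolean hasKmsToken, boolean hasKerberos) {
        this.shortName = shortName;
        this.realUser = realUser;
        this.hasKmsToken = hasKmsToken;
        this.hasKerberos = hasKerberos;
    }
}

public class GetActualUgiSketch {
    // Prefer the real user for a proxy UGI, but only keep it if it actually
    // holds usable credentials; otherwise fall back to the process login user.
    static Ugi getActualUgi(Ugi currentUgi, Ugi loginUser) {
        Ugi actual = currentUgi;
        if (currentUgi.realUser != null) {
            actual = currentUgi.realUser;   // proxy case: start from real user
        }
        if (!actual.hasKmsToken && !actual.hasKerberos) {
            // Real user (e.g. knox via token) has no KMS delegation token and
            // no Kerberos credentials on this node; use the login user instead
            // (e.g. the dn/... principal).
            actual = loginUser;
        }
        return actual;
    }

    public static void main(String[] args) {
        Ugi login = new Ugi("hdfs", null, false, true);      // Kerberos login
        Ugi knox  = new Ugi("knox", null, false, false);     // token-only user
        Ugi proxy = new Ugi("gss2002", knox, false, false);  // gss2002 via knox
        System.out.println(getActualUgi(proxy, login).shortName); // prints "hdfs"
    }
}
```

          In the WebHDFS-through-Knox scenario from the logs, this selects the DataNode's Kerberos login identity, since knox (the real user) has no usable credentials locally.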

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 13m 26s trunk passed
          +1 compile 11m 11s trunk passed
          +1 checkstyle 0m 28s trunk passed
          +1 mvnsite 1m 2s trunk passed
          +1 mvneclipse 0m 18s trunk passed
          +1 findbugs 1m 30s trunk passed
          +1 javadoc 0m 50s trunk passed
          +1 mvninstall 0m 39s the patch passed
          +1 compile 10m 12s the patch passed
          +1 javac 10m 12s the patch passed
          +1 checkstyle 0m 30s the patch passed
          +1 mvnsite 1m 3s the patch passed
          +1 mvneclipse 0m 19s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 39s the patch passed
          +1 javadoc 0m 48s the patch passed
          +1 unit 7m 53s hadoop-common in the patch passed.
          +1 asflicense 0m 32s The patch does not generate ASF License warnings.
          54m 23s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13988
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12848433/HADOOP-13988.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 1892cdc91763 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 60865c8
          Default Java 1.8.0_111
          findbugs v3.0.0
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11477/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11477/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          gss2002 Greg Senia added a comment -

          Larry McCay, the logs above are from the patch uploaded an hour ago. Let me know if the code path looks wrong; from what I can see it is working correctly, and the !equals check is definitely working. If it wasn't, it would have failed.
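          As background on why the !equals check matters: the originally proposed fix (in the description) compared short user names with !=, which in Java compares object references, not string contents. A minimal illustration:

```java
public class StringCompareDemo {
    public static void main(String[] args) {
        String a = "knox";
        String b = new String("knox");   // same content, distinct object
        System.out.println(a != b);      // true: reference comparison
        System.out.println(!a.equals(b)); // false: content comparison
    }
}
```

          Since UserGroupInformation#getShortUserName may return distinct String objects with equal content, only equals() gives the intended comparison.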

          Also, here is the patch output from my last build about an hour ago with the updated patch from today:

          ETG-GSeni-MBP:hadoop-release gss2002$ patch -p1 < ../../kmsfixes/HADOOP-13558.02.patch
          patching file hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          Hunk #1 succeeded at 618 with fuzz 1 (offset -14 lines).
          Hunk #2 succeeded at 825 (offset -40 lines).
          patching file hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
          Hunk #1 succeeded at 31 (offset -1 lines).
          Hunk #2 succeeded at 902 with fuzz 2 (offset -111 lines).

          ETG-GSeni-MBP:hadoop-release gss2002$ patch -p1 < ../../kmsfixes/HADOOP-13749.00.patch
          patching file hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          Hunk #4 succeeded at 901 (offset 2 lines).
          Hunk #5 succeeded at 924 (offset 2 lines).
          Hunk #6 succeeded at 996 (offset 2 lines).
          Hunk #7 succeeded at 1042 (offset 2 lines).
          patching file hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          Hunk #1 succeeded at 1768 (offset -55 lines).
          patching file hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
          Hunk #1 succeeded at 1825 (offset -8 lines).
          Hunk #2 succeeded at 2149 (offset -5 lines).

          ETG-GSeni-MBP:hadoop-release gss2002$ patch -p1 < ../../HADOOP-13988.patch
          patching file hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          Hunk #1 succeeded at 1052 (offset -10 lines).
          patching file hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          Hunk #1 succeeded at 1774 (offset -67 lines).

          gss2002 Greg Senia added a comment -

          Yes, it's running in our cluster. I just put the newest patch out there; here is the log output from the DataNode receiving the request from Knox:

          2017-01-19 20:33:12,835 DEBUG security.UserGroupInformation (UserGroupInformation.java:logPrivilegedAction(1767)) - PrivilegedAction as:gss2002 (auth:PROXY) via knox (auth:TOKEN) from:org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:114)
          2017-01-19 20:33:12,835 DEBUG security.UserGroupInformation (UserGroupInformation.java:logPrivilegedAction(1767)) - PrivilegedAction as:gss2002 (auth:PROXY) via knox (auth:TOKEN) from:org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:114)
          2017-01-19 20:33:12,873 DEBUG security.SecurityUtil (SecurityUtil.java:setTokenService(421)) - Acquired token Kind: HDFS_DELEGATION_TOKEN, Service: 10.70.33.6:8020, Ident: (HDFS_DELEGATION_TOKEN token 14666 for gss2002)
          2017-01-19 20:33:12,873 DEBUG security.SecurityUtil (SecurityUtil.java:setTokenService(421)) - Acquired token Kind: HDFS_DELEGATION_TOKEN, Service: 10.70.33.6:8020, Ident: (HDFS_DELEGATION_TOKEN token 14666 for gss2002)
          2017-01-19 20:33:12,874 DEBUG security.SecurityUtil (SecurityUtil.java:setTokenService(421)) - Acquired token Kind: HDFS_DELEGATION_TOKEN, Service: 10.70.33.7:8020, Ident: (HDFS_DELEGATION_TOKEN token 14666 for gss2002)
          2017-01-19 20:33:12,874 DEBUG security.SecurityUtil (SecurityUtil.java:setTokenService(421)) - Acquired token Kind: HDFS_DELEGATION_TOKEN, Service: 10.70.33.7:8020, Ident: (HDFS_DELEGATION_TOKEN token 14666 for gss2002)
          2017-01-19 20:33:13,061 DEBUG security.UserGroupInformation (UserGroupInformation.java:logPrivilegedAction(1767)) - PrivilegedAction as:knox (auth:TOKEN) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:758)
          2017-01-19 20:33:13,061 DEBUG security.UserGroupInformation (UserGroupInformation.java:logPrivilegedAction(1767)) - PrivilegedAction as:knox (auth:TOKEN) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:758)
          2017-01-19 20:33:13,099 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1774)) - UGI: gss2002 (auth:PROXY) via knox (auth:TOKEN)
          2017-01-19 20:33:13,099 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1774)) - UGI: gss2002 (auth:PROXY) via knox (auth:TOKEN)
          2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1776)) - +RealUGI: knox (auth:TOKEN)
          2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1776)) - +RealUGI: knox (auth:TOKEN)
          2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1777)) - +RealUGI: shortName: knox
          2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1777)) - +RealUGI: shortName: knox
          2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1780)) - +LoginUGI: dn/ha20t5002dn.tech.hdp.example.com@TECH.HDP.EXAMPLE.COM (auth:KERBEROS)
          2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1780)) - +LoginUGI: dn/ha20t5002dn.tech.hdp.example.com@TECH.HDP.EXAMPLE.COM (auth:KERBEROS)
          2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1781)) - +LoginUGI shortName: hdfs
          2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1781)) - +LoginUGI shortName: hdfs
          2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1784)) - +UGI token:Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:tech, Ident: (HDFS_DELEGATION_TOKEN token 14666 for gss2002)
          2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1784)) - +UGI token:Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:tech, Ident: (HDFS_DELEGATION_TOKEN token 14666 for gss2002)
          2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1784)) - +UGI token:Kind: HDFS_DELEGATION_TOKEN, Service: 10.70.33.7:8020, Ident: (HDFS_DELEGATION_TOKEN token 14666 for gss2002)
          2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1784)) - +UGI token:Kind: HDFS_DELEGATION_TOKEN, Service: 10.70.33.7:8020, Ident: (HDFS_DELEGATION_TOKEN token 14666 for gss2002)
          2017-01-19 20:33:13,101 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1784)) - +UGI token:Kind: HDFS_DELEGATION_TOKEN, Service: 10.70.33.6:8020, Ident: (HDFS_DELEGATION_TOKEN token 14666 for gss2002)
          2017-01-19 20:33:13,101 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1784)) - +UGI token:Kind: HDFS_DELEGATION_TOKEN, Service: 10.70.33.6:8020, Ident: (HDFS_DELEGATION_TOKEN token 14666 for gss2002)
          2017-01-19 20:33:13,101 DEBUG kms.KMSClientProvider (KMSClientProvider.java:getActualUgi(1055)) - using RealUser for proxyUser
          2017-01-19 20:33:13,101 DEBUG kms.KMSClientProvider (KMSClientProvider.java:getActualUgi(1060)) - doAsUser exists
          2017-01-19 20:33:13,101 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1774)) - UGI: knox (auth:TOKEN)
          2017-01-19 20:33:13,101 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1780)) - +LoginUGI: dn/ha20t5002dn.tech.hdp.example.com@TECH.HDP.EXAMPLE.COM (auth:KERBEROS)
          2017-01-19 20:33:13,101 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1781)) - +LoginUGI shortName: hdfs
          2017-01-19 20:33:13,101 DEBUG kms.KMSClientProvider (KMSClientProvider.java:getActualUgi(1068)) - currentUGI.realUser does not match UGI processUser
          2017-01-19 20:33:13,101 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1774)) - UGI: dn/ha20t5002dn.tech.hdp.example.com@TECH.HDP.EXAMPLE.COM (auth:KERBEROS)
          2017-01-19 20:33:13,101 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1780)) - +LoginUGI: dn/ha20t5002dn.tech.hdp.example.com@TECH.HDP.EXAMPLE.COM (auth:KERBEROS)
          2017-01-19 20:33:13,102 DEBUG security.UserGroupInformation (UserGroupInformation.java:logAllUserInfo(1781)) - +LoginUGI shortName: hdfs
          2017-01-19 20:33:13,102 DEBUG security.UserGroupInformation (UserGroupInformation.java:logPrivilegedAction(1767)) - PrivilegedAction as:dn/ha20t5002dn.tech.hdp.example.com@TECH.HDP.EXAMPLE.COM (auth:KERBEROS) from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:524)
          2017-01-19 20:33:13,107 DEBUG security.UserGroupInformation (UserGroupInformation.java:getTGT(898)) - Found tgt Ticket (hex) =

          Client Principal = dn/ha20t5002dn.tech.hdp.example.com@TECH.HDP.EXAMPLE.COM
          Server Principal = krbtgt/TECH.HDP.EXAMPLE.COM@TECH.HDP.EXAMPLE.COM
          Session Key = EncryptionKey: keyType=18 keyBytes (hex dump)=

          Forwardable Ticket true
          Forwarded Ticket false
          Proxiable Ticket false
          Proxy Ticket false
          Postdated Ticket false
          Renewable Ticket false
          Initial Ticket false
          Auth Time = Thu Jan 19 20:22:30 EST 2017
          Start Time = Thu Jan 19 20:22:30 EST 2017
          End Time = Fri Jan 20 06:22:30 EST 2017
          Renew Till = null
          Client Addresses Null
          2017-01-19 20:33:13,122 DEBUG client.KerberosAuthenticator (KerberosAuthenticator.java:authenticate(192)) - JDK performed authentication on our behalf.
          2017-01-19 20:33:13,257 INFO DataNode.clienttrace (DataXceiver.java:requestShortCircuitShm(468)) - cliID: DFSClient_NONMAPREDUCE_513733485_146, src: 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: e7f6cfb0dd48d8112883cc97c9292c4d, srvID: faca0b23-bfbe-413c-a2db-cc23c8817e87, success: true
          2017-01-19 20:33:13,262 INFO DataNode.clienttrace (DataXceiver.java:requestShortCircuitFds(369)) - src: 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_FDS, blockid: 1073781194, srvID: faca0b23-bfbe-413c-a2db-cc23c8817e87, success: true
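          The debug trace above walks the decision path in getActualUgi: the current UGI is a proxy user (gss2002 via knox), its real user (knox) differs from the process login user (the DataNode's dn/... principal, short name hdfs), so the fix falls back to the login UGI for the KMS connection. A simplified stand-in sketch of that selection logic (tiny placeholder classes, not the real org.apache.hadoop.security.UserGroupInformation API; the fallback condition mirrors the check in the proposed patch):

```java
import java.util.Objects;

// Placeholder modelling only the fields the decision needs.
class Ugi {
    final String shortUserName;
    final Ugi realUser; // non-null when this UGI is a proxy user

    Ugi(String shortUserName, Ugi realUser) {
        this.shortUserName = shortUserName;
        this.realUser = realUser;
    }
}

public class ActualUgiSketch {
    // The process-wide login identity, e.g. the DataNode's dn/... principal.
    static Ugi loginUser = new Ugi("hdfs", null);

    static Ugi getActualUgi(Ugi currentUgi, String doAsUser) {
        Ugi actualUgi = currentUgi;
        if (currentUgi.realUser != null) {
            // Proxy user: normally act as the real user...
            actualUgi = currentUgi.realUser;
            if (doAsUser != null
                && !Objects.equals(currentUgi.realUser.shortUserName,
                                   loginUser.shortUserName)) {
                // ...but in the double-proxy case (knox proxying through the
                // DataNode) the real user (knox) is not the process user
                // (hdfs), so fall back to the login UGI, which holds the
                // Kerberos credentials needed to reach the KMS.
                actualUgi = loginUser;
            }
        }
        return actualUgi;
    }

    public static void main(String[] args) {
        Ugi knox = new Ugi("knox", null);
        Ugi endUser = new Ugi("gss2002", knox); // gss2002 (auth:PROXY) via knox
        System.out.println(getActualUgi(endUser, "gss2002").shortUserName); // prints "hdfs"
    }
}
```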

          lmccay Larry McCay added a comment -

          Greg Senia - I just want to be clear that your latest patch is what is running in your cluster, not your original one.
          The fix that was required affected the code path taken.

          gss2002 Greg Senia added a comment -

          Xiaoyu Yao We are currently running the fix patched into our HDP 2.5.3.0 build. We grabbed the HDP-2.5.3.0-tag from HWX github and recompiled with this fix and the two fixes this is dependent on. We have been running this fix for over a week now in our test environment with 2 NNs w/HA and their associated components 3 JN's and 2 ZKFC's, 2 RM's, 4 DN's/RS's/NM's, 2 HiveServer2/Metastores, 2 HBaseMasters and a node running Knox for WebHDFS, Oozie and HiveServer2 http access and 1 Node as an Oozie Server. We have a data ingest framework that runs continuously in this environment and has run with no issues for the last week since applying the fixes and Knox to WebHDFS at a TDE file is returned correctly. I will look at adjusting the above code in regards to logging.

          xyao Xiaoyu Yao added a comment -

          Greg Senia, the change looks good to me overall. I just have a few comments about the additional logging.
          Can you also describe the manual testing that has been done with the patch?

          1. Some of the if (LOG.isDebugEnabled()) guards are not needed, as we are using slf4j:
          lines 1065, 1072, 1083.

          2. Line 1075 can be moved into UGI#logAllUserInfo.

          3. Lines 1089-109: I think we want to log UGI#loginUser instead of UGI#loginUser#loginUser, which has already been covered in line 1075.
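          The guard question in point 1 comes down to when the message is built: with slf4j's parameterized form, LOG.debug("x: {}", v), the string is only formatted if debug logging is enabled, so a surrounding isDebugEnabled() check is redundant unless computing an argument is itself expensive. A toy logger (a stand-in written for illustration, not the real org.slf4j.Logger API) makes that visible:

```java
public class LazyLogDemo {
    static boolean debugEnabled = false;
    static int formatCount = 0; // how many times a message was actually built

    // Mimics slf4j's parameterized style: formatting is skipped when disabled.
    static void debug(String format, Object... args) {
        if (!debugEnabled) {
            return; // arguments were passed, but no formatting work happens
        }
        formatCount++;
        for (Object arg : args) {
            format = format.replaceFirst("\\{\\}", String.valueOf(arg));
        }
        System.out.println("DEBUG " + format);
    }

    public static void main(String[] args) {
        // With debug disabled, the parameterized call does no formatting work,
        // so an if (LOG.isDebugEnabled()) guard around it buys nothing.
        debug("currentUGI realUser shortName: {}", "knox");
        System.out.println("messages formatted: " + formatCount); // prints 0

        // A guard still pays off only when *computing an argument* is costly,
        // since arguments are evaluated before debug() is even called.
        if (debugEnabled) {
            debug("expensive: {}", buildExpensiveDump());
        }
    }

    static String buildExpensiveDump() {
        return "...imagine serializing every delegation token here...";
    }
}
```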

          gss2002 Greg Senia added a comment -

          Also, in regards to a test case, let me know what is needed, as this class doesn't have much test coverage around it.

          lmccay Larry McCay added a comment -

          I agree - that test fails intermittently and wouldn't be affected by this patch.

          gss2002 Greg Senia added a comment -

          Larry McCay made the changes; sorry for the delay. I think the test error is not related to my patch; can you verify as well:

          stGracefulFailoverMultipleZKfcs(org.apache.hadoop.ha.TestZKFailoverController) Time elapsed: 70.289 sec <<< ERROR!
          org.apache.hadoop.ha.ServiceFailedException: Unable to become active. Local node did not get an opportunity to do so from ZooKeeper, or the local node took too long to transition to active.
          at org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:693)
          at org.apache.hado

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 13s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 12m 28s trunk passed
          +1 compile 9m 36s trunk passed
          +1 checkstyle 0m 29s trunk passed
          +1 mvnsite 0m 59s trunk passed
          +1 mvneclipse 0m 18s trunk passed
          +1 findbugs 1m 24s trunk passed
          +1 javadoc 0m 48s trunk passed
          +1 mvninstall 0m 36s the patch passed
          +1 compile 9m 10s the patch passed
          +1 javac 9m 10s the patch passed
          +1 checkstyle 0m 28s the patch passed
          +1 mvnsite 0m 57s the patch passed
          +1 mvneclipse 0m 18s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 29s the patch passed
          +1 javadoc 0m 48s the patch passed
          -1 unit 8m 18s hadoop-common in the patch failed.
          +1 asflicense 0m 31s The patch does not generate ASF License warnings.
          50m 37s



          Reason Tests
          Failed junit tests hadoop.ha.TestZKFailoverController



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13988
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12847537/HADOOP-13988.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 4e272a3cb982 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / ed09c14
          Default Java 1.8.0_111
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11443/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11443/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11443/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 13s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 12m 43s trunk passed
          +1 compile 9m 33s trunk passed
          +1 checkstyle 0m 28s trunk passed
          +1 mvnsite 1m 0s trunk passed
          +1 mvneclipse 0m 18s trunk passed
          +1 findbugs 1m 25s trunk passed
          +1 javadoc 0m 46s trunk passed
          +1 mvninstall 0m 36s the patch passed
          +1 compile 9m 12s the patch passed
          +1 javac 9m 12s the patch passed
          -0 checkstyle 0m 28s hadoop-common-project/hadoop-common: The patch generated 1 new + 14 unchanged - 0 fixed = 15 total (was 14)
          +1 mvnsite 0m 58s the patch passed
          +1 mvneclipse 0m 18s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 30s the patch passed
          +1 javadoc 0m 47s the patch passed
          +1 unit 8m 25s hadoop-common in the patch passed.
          +1 asflicense 0m 31s The patch does not generate ASF License warnings.
          51m 2s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13988
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12847518/HADOOP-13988.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 6c1b6e6f63ed 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 2604e82
          Default Java 1.8.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/11441/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11441/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11441/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          gss2002 Greg Senia added a comment -

          Larry McCay I will fix shortly!

          lmccay Larry McCay added a comment -

          This has a typo too:

          +        // Check if the realUser patches the user used by process
          

          s/patches/matches/

          lmccay Larry McCay added a comment - - edited

          Looks like findbugs flagged the following:

          +        if (currentUgi.getRealUser().getShortUserName() != UserGroupInformation.getLoginUser().getShortUserName()) {
          

          That should use an !equals() call - right?
          May need to revisit that for your cluster.
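          The flagged comparison uses reference identity, which only matches interned strings; short user names are typically built at runtime (e.g. parsed out of a Kerberos principal), so they are distinct objects even when their contents are equal. A minimal illustration:

```java
// Why findbugs flags != on Strings: it compares object identity, not contents.
public class StringCompareDemo {
    public static void main(String[] args) {
        String login = "hdfs";
        // A runtime-constructed String is a different object with equal contents.
        String real = new String("hdfs");

        System.out.println(login == real);      // false: different objects
        System.out.println(login.equals(real)); // true: same contents

        // So the patch's check (quoting the names from the patch) should read:
        // if (!currentUgi.getRealUser().getShortUserName()
        //         .equals(UserGroupInformation.getLoginUser().getShortUserName())) {
    }
}
```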

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 14m 32s trunk passed
          +1 compile 11m 29s trunk passed
          +1 checkstyle 0m 27s trunk passed
          +1 mvnsite 1m 11s trunk passed
          +1 mvneclipse 0m 17s trunk passed
          +1 findbugs 1m 39s trunk passed
          +1 javadoc 0m 58s trunk passed
          +1 mvninstall 0m 53s the patch passed
          +1 compile 11m 5s the patch passed
          +1 javac 11m 5s the patch passed
          -0 checkstyle 0m 33s hadoop-common-project/hadoop-common: The patch generated 4 new + 14 unchanged - 0 fixed = 18 total (was 14)
          +1 mvnsite 1m 10s the patch passed
          +1 mvneclipse 0m 19s the patch passed
          -1 whitespace 0m 0s The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          -1 whitespace 0m 0s The patch has 15 line(s) with tabs.
          -1 findbugs 1m 57s hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 javadoc 0m 49s the patch passed
          +1 unit 9m 45s hadoop-common in the patch passed.
          +1 asflicense 0m 32s The patch does not generate ASF License warnings.
          59m 44s



          Reason Tests
          FindBugs module:hadoop-common-project/hadoop-common
            Comparison of String objects using == or != in org.apache.hadoop.crypto.key.kms.KMSClientProvider.getActualUgi() At KMSClientProvider.java:== or != in org.apache.hadoop.crypto.key.kms.KMSClientProvider.getActualUgi() At KMSClientProvider.java:[line 1113]



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13988
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12847413/HADOOP-13988.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 0bb3fca653a8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / d3170f9
          Default Java 1.8.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/11434/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/11434/artifact/patchprocess/whitespace-eol.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/11434/artifact/patchprocess/whitespace-tabs.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/11434/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11434/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11434/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          gss2002 Greg Senia added a comment -

          This patch also requires these JIRAs to be included.

          gss2002 Greg Senia added a comment -

          Initial Patch that is running in our test environment right now across 25 nodes.

          gss2002 Greg Senia added a comment -

          Larry McCay and Xiaoyu Yao I have my original patch I will attach it and we can modify and test from there.

          xyao Xiaoyu Yao added a comment -

          Thanks Greg Senia for reporting the issue and proposing the fix. The proposed fix makes sense to me.
          Based on that, I think we can simplify the change as below, assuming a proxy user coming from a Hadoop service will always have UserGroupInformation.AuthenticationMethod.PROXY set, while a proxy user created directly by a client will not.

          Also, we should add the additional tracing to UGI#logAllUserInfo().

           if (currentUgi.getRealUser() != null) {
                if (currentUgi.getAuthenticationMethod() ==
                    UserGroupInformation.AuthenticationMethod.PROXY) {
                  // Use the login user for a proxy user forwarded by another proxy server
                  // (getLoginUser is static, so call it on the class rather than an instance)
                  actualUgi = UserGroupInformation.getLoginUser();
                } else {
                  // Use the real user for a proxy user created directly by the client
                  actualUgi = currentUgi.getRealUser();
                }
            }
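For readers outside the Hadoop codebase, the branch structure above can be modeled with a self-contained sketch. The types below (Ugi, AuthMethod) are editor-invented stand-ins, not the real UserGroupInformation API: a UGI that has a real user and arrived with authentication method PROXY is treated as double-proxied (e.g. WebHDFS behind Knox) and falls back to the service's login user; otherwise the real user is used.

```java
// Editor's stand-in model of the proposed getActualUgi() decision.
// AuthMethod and Ugi are hypothetical types for illustration only.
enum AuthMethod { SIMPLE, KERBEROS, PROXY }

class Ugi {
    final String name;
    final Ugi realUser;       // non-null only for proxy UGIs
    final AuthMethod method;

    Ugi(String name, Ugi realUser, AuthMethod method) {
        this.name = name;
        this.realUser = realUser;
        this.method = method;
    }

    // Pick the UGI whose credentials should authenticate to the KMS.
    static Ugi actualUgi(Ugi current, Ugi loginUser) {
        Ugi actual = current;                  // default: current user
        if (current.realUser != null) {
            if (current.method == AuthMethod.PROXY) {
                // Forwarded by another proxy server (e.g. Knox):
                // authenticate as the local service's login user.
                actual = loginUser;
            } else {
                // Proxy user created directly by the client:
                // use the real user's credentials.
                actual = current.realUser;
            }
        }
        return actual;
    }
}
```

With these stand-ins, a Knox-forwarded request (method PROXY) resolves to the datanode's login user, while a proxy user created directly by a client resolves to its real user, matching the two branches in the proposed patch.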
          
          lmccay Larry McCay added a comment -

          Greg Senia - thank you for bringing this insight to a JIRA!

          I have observed this double proxying issue before and I think this may actually help it in other areas as well.
          Do you plan to provide a patch for it with appropriate tests as well?

          Show
          lmccay Larry McCay added a comment - Greg Senia - thank you for bringing this insight to a JIRA! I have observed this double proxying issue before and I think this may actually help it in other areas as well. Do you plan to provide a patch for it with appropriate tests as well?

            People

            • Assignee:
              xyao Xiaoyu Yao
            • Reporter:
              gss2002 Greg Senia
            • Votes: 0
            • Watchers: 10