HDFS-6127: WebHDFS tokens cannot be renewed in HA setup

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.0
    • Fix Version/s: 2.4.0
    • Component/s: ha
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      The TokenAspect class assumes that the service name of a token is always a host:port pair. In an HA setup, however, the service name is the logical nameservice ID (e.g. ha-hdfs:ha-2-secure), which breaks that assumption. As a result, WebHDFS tokens cannot be renewed in an HA setup.
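
      To see why this assumption breaks, consider the minimal sketch below. parseHostPort is a simplified stand-in for NetUtils.createSocketAddr, not the real implementation, and the hostname in main is invented; the HA nameservice ID fails with the same IllegalArgumentException that appears in the stack trace further down.

          import java.net.InetSocketAddress;

          public class HaServiceNameDemo {
            // Simplified stand-in for NetUtils.createSocketAddr: expects "host:port".
            static InetSocketAddress parseHostPort(String service) {
              int colon = service.lastIndexOf(':');
              int port;
              try {
                port = Integer.parseInt(service.substring(colon + 1));
              } catch (NumberFormatException e) {
                throw new IllegalArgumentException(
                    "Does not contain a valid host:port authority: " + service);
              }
              return InetSocketAddress.createUnresolved(service.substring(0, colon), port);
            }

            public static void main(String[] args) {
              // A non-HA token service is a host:port pair and parses fine.
              System.out.println(parseHostPort("nn1.example.com:50070"));
              // An HA token service is a logical nameservice ID; "ha-2-secure"
              // is not a port number, so parsing fails as in the log below.
              try {
                parseHostPort("ha-hdfs:ha-2-secure");
              } catch (IllegalArgumentException e) {
                System.out.println(e.getMessage());
              }
            }
          }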

      Attachments

      1. HDFS-6127.000.patch
        13 kB
        Haohui Mai
      2. HDFS-6127.001.patch
        15 kB
        Haohui Mai

        Activity

        Arpit Gupta added a comment -

        Here is the stack trace:

        /usr/lib/hadoop/bin/hadoop org.apache.hadoop.fs.slive.SliveTest -rename 14,uniform -packetSize 65536 -baseDir webhdfs://ha-2-secure/user/user/ha-slive -seed 12345678 -sleep 100,1000 -duration 600 -append 14,uniform -blockSize 16777216,33554432 -create 16,uniform -mkdir 14,uniform -maps 12 -ls 14,uniform -writeSize 1,134217728 -files 1024 -ops 10000 -read 14,uniform -replication 1,3 -appendSize 1,134217728 -reduces 6 -resFile /grid/0/tmp/hwqe/artifacts/ha-slive-00002-namenode2-1395127484.out -readSize 1,4294967295 -dirSize 16 -delete 14,uniform
        INFO|Initial wait for Service namenode2: 60
        14/03/18 07:24:44 INFO slive.SliveTest: Running with option list -rename 14,uniform -packetSize 65536 -baseDir webhdfs://ha-2-secure/user/user/ha-slive -seed 12345678 -sleep 100,1000 -duration 600 -append 14,uniform -blockSize 16777216,33554432 -create 16,uniform -mkdir 14,uniform -maps 12 -ls 14,uniform -writeSize 1,134217728 -files 1024 -ops 10000 -read 14,uniform -replication 1,3 -appendSize 1,134217728 -reduces 6 -resFile /grid/0/tmp/hwqe/artifacts/ha-slive-00002-namenode2-1395127484.out -readSize 1,4294967295 -dirSize 16 -delete 14,uniform
        14/03/18 07:24:44 INFO slive.SliveTest: Options are:
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Base directory = webhdfs://ha-2-secure/user/user/ha-slive/slive
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Data directory = webhdfs://ha-2-secure/user/user/ha-slive/slive/data
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Output directory = webhdfs://ha-2-secure/user/user/ha-slive/slive/output
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Result file = /grid/0/tmp/hwqe/artifacts/ha-slive-00002-namenode2-1395127484.out
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Grid queue = default
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Should exit on first error = false
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Duration = 600000 milliseconds
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Map amount = 12
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Reducer amount = 6
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Operation amount = 10000
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Total file limit = 1024
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Total dir file limit = 16
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Read size = 1,4294967295 bytes
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Write size = 1,134217728 bytes
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Append size = 1,134217728 bytes
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Block size = 16777216,33554432 bytes
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Random seed = 12345678
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Sleep range = 100,1000 milliseconds
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Replication amount = 1,3
        14/03/18 07:24:44 INFO slive.ConfigExtractor: Operations are:
        14/03/18 07:24:44 INFO slive.ConfigExtractor: READ
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  UNIFORM
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  14%
        14/03/18 07:24:44 INFO slive.ConfigExtractor: APPEND
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  UNIFORM
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  14%
        14/03/18 07:24:44 INFO slive.ConfigExtractor: MKDIR
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  UNIFORM
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  14%
        14/03/18 07:24:44 INFO slive.ConfigExtractor: LS
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  UNIFORM
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  14%
        14/03/18 07:24:44 INFO slive.ConfigExtractor: DELETE
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  UNIFORM
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  14%
        14/03/18 07:24:44 INFO slive.ConfigExtractor: RENAME
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  UNIFORM
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  14%
        14/03/18 07:24:44 INFO slive.ConfigExtractor: CREATE
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  UNIFORM
        14/03/18 07:24:44 INFO slive.ConfigExtractor:  16%
        14/03/18 07:24:44 INFO slive.SliveTest: Running job:
        14/03/18 07:24:44 WARN hdfs.DFSClient: dfs.client.test.drop.namenode.response.number is set to 1, this hacked client will proactively drop responses
        14/03/18 07:24:45 WARN hdfs.DFSClient: dfs.client.test.drop.namenode.response.number is set to 1, this hacked client will proactively drop responses
        14/03/18 07:24:45 WARN hdfs.DFSClient: dfs.client.test.drop.namenode.response.number is set to 1, this hacked client will proactively drop responses
        14/03/18 07:24:48 WARN token.Token: Cannot find class for token kind WEBHDFS delegation
        14/03/18 07:24:48 INFO security.TokenCache: Got dt for webhdfs://ha-2-secure; Kind: WEBHDFS delegation, Service: ha-hdfs:ha-2-secure, Ident: 00 06 68 72 74 5f 71 61 04 79 61 72 6e 00 8a 01 44 d4 14 0f d0 8a 01 44 f8 20 93 d0 0f 10
        14/03/18 07:24:49 INFO mapreduce.JobSubmitter: number of splits:12
        14/03/18 07:24:49 INFO Configuration.deprecation: mapred.job.queue.name is deprecated. Instead, use mapreduce.job.queuename
        14/03/18 07:24:49 INFO Configuration.deprecation: dfs.write.packet.size is deprecated. Instead, use dfs.client-write-packet-size
        14/03/18 07:24:49 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
        14/03/18 07:24:49 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1395125245496_0006
        14/03/18 07:24:49 WARN token.Token: Cannot find class for token kind WEBHDFS delegation
        14/03/18 07:24:49 WARN token.Token: Cannot find class for token kind WEBHDFS delegation
        Kind: WEBHDFS delegation, Service: ha-hdfs:ha-2-secure, Ident: 00 06 68 72 74 5f 71 61 04 79 61 72 6e 00 8a 01 44 d4 14 0f d0 8a 01 44 f8 20 93 d0 0f 10
        14/03/18 07:24:50 INFO impl.YarnClientImpl: Submitted application application_1395125245496_0006
        14/03/18 07:24:50 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/user/.staging/job_1395125245496_0006
        14/03/18 07:24:50 WARN security.UserGroupInformation: PriviledgedActionException as:user@REALM (auth:KERBEROS) cause:java.io.IOException: Failed to run job : Does not contain a valid host:port authority: ha-hdfs:ha-2-secure
        14/03/18 07:24:50 WARN security.UserGroupInformation: PriviledgedActionException as:user@REALM (auth:KERBEROS) cause:java.io.IOException: Failed to run job : Does not contain a valid host:port authority: ha-hdfs:ha-2-secure
        14/03/18 07:24:50 ERROR slive.SliveTest: Unable to run job due to error:
        java.io.IOException: Failed to run job : Does not contain a valid host:port authority: ha-hdfs:ha-2-secure
        at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:300)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:833)
        at org.apache.hadoop.fs.slive.SliveTest.runJob(SliveTest.java:204)
        at org.apache.hadoop.fs.slive.SliveTest.run(SliveTest.java:117)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.slive.SliveTest.main(SliveTest.java:314)
        14/03/18 07:24:50 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, java.lang.IllegalArgumentException: Does not contain a valid host:port authority: ha-hdfs:ha-2-secure
        java.lang.IllegalArgumentException: Does not contain a valid host:port authority: ha-hdfs:ha-2-secure
        at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
        at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
        at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
        at org.apache.hadoop.security.SecurityUtil.getTokenServiceAddr(SecurityUtil.java:347)
        at org.apache.hadoop.hdfs.web.TokenAspect$TokenManager.getInstance(TokenAspect.java:75)
        at org.apache.hadoop.hdfs.web.TokenAspect$TokenManager.cancel(TokenAspect.java:52)
        at org.apache.hadoop.security.token.Token.cancel(Token.java:387)
        at org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.cancel(DelegationTokenRenewer.java:152)
        at org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.access$200(DelegationTokenRenewer.java:58)
        at org.apache.hadoop.fs.DelegationTokenRenewer.removeRenewAction(DelegationTokenRenewer.java:241)
        at org.apache.hadoop.hdfs.web.TokenAspect.removeRenewAction(TokenAspect.java:154)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.close(WebHdfsFileSystem.java:922)
        at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2488)
        at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2505)
        at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
        
        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12635628/HDFS-6127.000.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 2 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/6444//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6444//console

        This message is automatically generated.

        Jing Zhao added a comment -

        The patch looks good to me. Some comments:

        1. Nit: HAUtil#getServiceUriFromToken and TokenManager#getInstance need formatting.
        2. The javadoc of HAUtil#getServiceUriFromToken needs to be updated after the change: the method can now handle URIs of other filesystems, not just HDFS.
        3. The change in TestWebhdfsForHA actually weakens our unit test, I think. We still need the fs.renew and fs.cancel calls as a regression test for HDFS-5339. A separate unit test with some copied code should be fine here (a sketch of the renew/cancel dispatch follows this comment).
          -      fs.renewDelegationToken(token);
          -      fs.cancelDelegationToken(token);
          +      token.renew(conf);
          +      token.cancel(conf);
          

        +1 after addressing the comments.
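
        For context on the renew/cancel path under discussion: Token.renew(conf) and Token.cancel(conf) do not call the filesystem directly. They locate a registered TokenRenewer whose handleKind matches the token's kind (TokenAspect$TokenManager in the WebHDFS case, per the stack trace above), and that renewer must reconstruct an endpoint from the token's service field. The class below is a rough sketch of that contract; ExampleRenewer and its token kind are invented for illustration and are not part of the patch.

            import java.io.IOException;

            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.io.Text;
            import org.apache.hadoop.security.token.Token;
            import org.apache.hadoop.security.token.TokenRenewer;

            // Hypothetical renewer illustrating the contract that
            // TokenAspect$TokenManager fulfills. The renewer, not the caller,
            // turns the token's service field back into an endpoint -- the step
            // that previously failed for logical "ha-hdfs:<nameservice>" names.
            public class ExampleRenewer extends TokenRenewer {
              private static final Text KIND = new Text("EXAMPLE delegation");

              @Override
              public boolean handleKind(Text kind) {
                // Token.renew/cancel probe renewers until one claims the kind.
                return KIND.equals(kind);
              }

              @Override
              public boolean isManaged(Token<?> token) throws IOException {
                return true; // this renewer can renew and cancel the token
              }

              @Override
              public long renew(Token<?> token, Configuration conf)
                  throws IOException, InterruptedException {
                // A real renewer resolves token.getService() to a filesystem
                // here; with this patch that resolution also accepts HA
                // service names instead of only host:port pairs.
                return Long.MAX_VALUE; // new expiration time
              }

              @Override
              public void cancel(Token<?> token, Configuration conf)
                  throws IOException, InterruptedException {
                // The same service-name resolution applies on cancellation.
              }
            }

        Renewers are discovered through the JDK ServiceLoader, so a class like this would also be listed in META-INF/services/org.apache.hadoop.security.token.TokenRenewer. That is why switching the test to token.renew(conf) and token.cancel(conf) still exercises the TokenManager code path.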

        Jing Zhao added a comment -

        The change in TestWebhdfsForHA actually weakens our unit test, I think

        Actually it will not, since fs.renew and fs.cancel will still be called in the end. Please just ignore this comment.

        Haohui Mai added a comment -

        The v1 patch addresses Jing's comments. I also use Mockito.verify to ensure that the corresponding methods are called.
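
        As a sketch of the Mockito.verify pattern mentioned above (the interface and method names are illustrative, not the actual test code):

            import static org.mockito.Mockito.mock;
            import static org.mockito.Mockito.verify;

            public class VerifyDemo {
              // Illustrative stand-in for the renew/cancel surface being checked.
              interface Renewable {
                long renew();
                void cancel();
              }

              public static void main(String[] args) {
                Renewable renewable = mock(Renewable.class);

                // The code under test would normally drive these interactions.
                renewable.renew();
                renewable.cancel();

                // verify fails if an expected interaction never happened, so the
                // test asserts that the methods were actually invoked rather than
                // only checking that no exception escaped.
                verify(renewable).renew();
                verify(renewable).cancel();
              }
            }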

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12635684/HDFS-6127.001.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 2 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/6449//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6449//console

        This message is automatically generated.

        Hudson added a comment -

        SUCCESS: Integrated in Hadoop-trunk-Commit #5363 (See https://builds.apache.org/job/Hadoop-trunk-Commit/5363/)
        HDFS-6127. WebHDFS tokens cannot be renewed in HA setup. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1579546)

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/TokenAspect.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/resources
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/resources/TestDatanodeWebHdfsMethods.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFSForHA.java
        Haohui Mai added a comment -

        I've committed the patch to trunk, branch-2, and branch-2.4. Thanks Jing Zhao for the review.

        Hudson added a comment -

        FAILURE: Integrated in Hadoop-Yarn-trunk #515 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/515/)
        HDFS-6127. WebHDFS tokens cannot be renewed in HA setup. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1579546)

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/TokenAspect.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/resources
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/resources/TestDatanodeWebHdfsMethods.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFSForHA.java
        Hudson added a comment -

        FAILURE: Integrated in Hadoop-Hdfs-trunk #1707 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1707/)
        HDFS-6127. WebHDFS tokens cannot be renewed in HA setup. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1579546)

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/TokenAspect.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/resources
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/resources/TestDatanodeWebHdfsMethods.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFSForHA.java
        Hudson added a comment -

        SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1732 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1732/)
        HDFS-6127. WebHDFS tokens cannot be renewed in HA setup. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1579546)

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/TokenAspect.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/resources
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/resources/TestDatanodeWebHdfsMethods.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFSForHA.java

          People

          • Assignee: Haohui Mai
          • Reporter: Arpit Gupta
          • Votes: 0
          • Watchers: 7
