Hadoop HDFS
HDFS-9525

hadoop utilities need to support provided delegation tokens

    Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 3.0.0-alpha1
    • Fix Version/s: 2.9.0, 3.0.0-alpha1
    • Component/s: security
    • Labels:
      None
    • Target Version/s:
    • Release Note:
      If the hadoop.token.files property is defined and set to one or more comma-delimited delegation token files, Hadoop will use those token files to connect to the services named in the tokens.
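As a minimal sketch of the comma-delimited contract described in the release note (Python; `parse_token_files` is a hypothetical helper, not a real Hadoop API):

```python
def parse_token_files(prop_value):
    """Split a comma-delimited hadoop.token.files value into file paths.

    Hypothetical helper mirroring the property's documented shape:
    trims surrounding whitespace and skips empty entries.
    """
    return [p.strip() for p in prop_value.split(",") if p.strip()]

# e.g. supplied on the command line as
#   -Dhadoop.token.files=/tmp/a.token,/tmp/b.token
paths = parse_token_files("/tmp/a.token, /tmp/b.token")
print(paths)
```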

      Description

      When using the webhdfs:// filesystem (especially from distcp), we need the ability to inject a delegation token rather than have webhdfs initialize its own. This would allow file system access across authentication zones.

      1. HDFS-7984.001.patch
        18 kB
        HeeSoo Kim
      2. HDFS-7984.002.patch
        18 kB
        HeeSoo Kim
      3. HDFS-7984.003.patch
        18 kB
        HeeSoo Kim
      4. HDFS-7984.004.patch
        18 kB
        HeeSoo Kim
      5. HDFS-7984.005.patch
        21 kB
        HeeSoo Kim
      6. HDFS-7984.006.patch
        21 kB
        HeeSoo Kim
      7. HDFS-7984.007.patch
        22 kB
        HeeSoo Kim
      8. HDFS-7984.patch
        5 kB
        HeeSoo Kim
      9. HDFS-9525.008.patch
        11 kB
        HeeSoo Kim
      10. HDFS-9525.009.patch
        12 kB
        Allen Wittenauer
      11. HDFS-9525.009.patch
        12 kB
        HeeSoo Kim
      12. HDFS-9525.branch-2.008.patch
        11 kB
        HeeSoo Kim
      13. HDFS-9525.branch-2.009.patch
        11 kB
        HeeSoo Kim

        Issue Links

          Activity

          cnauroth Chris Nauroth added a comment -

          Hi Allen. It's unclear to me what this issue is requesting. Can you please elaborate? It sounds like you want a customizable hook into the WebHDFS client's logic for delegation token handling. You mentioned initialization, but do you also need a hook into renewal and cancellation?

          I'm not sure what kind of use case would motivate this customization. Do you have some scenario where your process already has a delegation token, and you'd prefer not to get a new token for WebHDFS? If so, then any further details about this scenario would be helpful.

          Thanks!

          aw Allen Wittenauer added a comment -

          Sort of.

          Today, WebHDFS's authentication logic is predicated on the assumption that one is using SPNEGO, either within the same realm or across multiple realms with an established trust. If one has two Hadoop clusters in different realms with no trust, there is no way that I'm aware of to distcp between those two systems in a secure fashion. It should be possible to 'hdfs fetchdt' (or equivalent) a token from one cluster, copy it over to the other realm, and then supply that token as part of the job conf during the distcp on the foreign/other cluster to use as the authentication.

          Coupled with HDFS-7983, one can see where this would be useful beyond the strict cluster<->cluster case described above.

          erwaman Anthony Hsu added a comment -

          Using the WebHDFS REST API, one can provide a delegation token as follows:

          curl "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?delegation=<TOKEN>&op=..."
          

          It seems one should be able to provide a delegation token when using webhdfs:// URIs as well. I tried setting the HADOOP_TOKEN_FILE_LOCATION environment variable, but the WebHdfsFileSystem doesn't seem to use this provided token.
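The REST call above can be sketched programmatically. This is an illustration only, assembled with Python's standard library; the host, port, path, and token value are placeholders, while the `delegation` query parameter itself is the documented WebHDFS mechanism:

```python
from urllib.parse import urlencode

def webhdfs_url(host, port, path, token, op="OPEN"):
    """Build a WebHDFS REST URL that carries a delegation token.

    The 'delegation' query parameter passes the token to the WebHDFS
    REST API; all concrete values here are placeholders.
    """
    query = urlencode({"delegation": token, "op": op})
    return f"http://{host}:{port}/webhdfs/v1{path}?{query}"

url = webhdfs_url("nn.example.com", 50070, "/user/alice/data.txt", "ABCDEF")
print(url)
```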

          aw Allen Wittenauer added a comment - - edited

          Yup, Anthony Hsu. That's exactly what this issue is about... Keep in mind that one might need more than one token...

          erwaman Anthony Hsu added a comment -

          Actually, it seems that WebHdfsFileSystem does use the tokens in HADOOP_TOKEN_FILE_LOCATION (under the hood, it's all handled by UserGroupInformation). My mistake earlier was that I was fetching delegation tokens for hdfs:// rather than webhdfs://. Once I fixed this, setting HADOOP_TOKEN_FILE_LOCATION worked as expected.

          sookim HeeSoo Kim added a comment -

          Anthony is right.

          HADOOP_TOKEN_FILE_LOCATION supports reading a delegation token for WebHdfsFileSystem. However, HADOOP_TOKEN_FILE_LOCATION only supports one file, while the UGI can hold multiple tokens in its credentials. This patch supports multiple delegation token files as a parameter:

           -Dhadoop.token.files=filename1,filename2
          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 17m 41s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 1 new or modified test files.
          +1 javac 7m 53s There were no new javac warning messages.
          +1 javadoc 10m 31s There were no new javadoc warning messages.
          +1 release audit 0m 24s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 1m 6s The applied patch generated 1 new checkstyle issues (total was 108, now 109).
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 install 1m 29s mvn install still works.
          +1 eclipse:eclipse 0m 34s The patch built with eclipse:eclipse.
          +1 findbugs 1m 53s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          +1 common tests 6m 52s Tests passed in hadoop-common.
              48m 28s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12768446/HDFS-7984.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 86c9222
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/13177/artifact/patchprocess/diffcheckstylehadoop-common.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/13177/artifact/patchprocess/whitespace.txt
          hadoop-common test log https://builds.apache.org/job/PreCommit-HDFS-Build/13177/artifact/patchprocess/testrun_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13177/testReport/
          Java 1.7.0_55
          uname Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13177/console

          This message was automatically generated.

          hitliuyi Yi Liu added a comment -

          String fileLocation = System.getenv(HADOOP_TOKEN_FILE_LOCATION);
          ...
          Credentials cred = Credentials.readTokenStorageFile(

          The HADOOP_TOKEN_FILE_LOCATION already supports multiple tokens.
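Credentials#readTokenStorageFile reads Hadoop's own binary token-storage format. As a toy illustration only (this is NOT the real wire format), a single file holding several service-keyed tokens can be modeled with a count followed by length-prefixed pairs:

```python
import io
import struct

def write_tokens(fp, tokens):
    """Toy stand-in for Credentials#writeTokenStorageFile:
    writes a 4-byte count, then length-prefixed (service, token) pairs."""
    fp.write(struct.pack(">I", len(tokens)))
    for service, tok in tokens.items():
        for field in (service.encode(), tok):
            fp.write(struct.pack(">I", len(field)))
            fp.write(field)

def read_tokens(fp):
    """Toy stand-in for Credentials#readTokenStorageStream:
    reads back every (service, token) pair in the stream."""
    (count,) = struct.unpack(">I", fp.read(4))
    out = {}
    for _ in range(count):
        (n,) = struct.unpack(">I", fp.read(4))
        service = fp.read(n).decode()
        (n,) = struct.unpack(">I", fp.read(4))
        out[service] = fp.read(n)
    return out

buf = io.BytesIO()
write_tokens(buf, {"webhdfs://a:50070": b"tokA", "webhdfs://b:50070": b"tokB"})
buf.seek(0)
print(read_tokens(buf))
```

The point being illustrated: one stream can round-trip any number of tokens, each distinguished by its service field.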

          aw Allen Wittenauer added a comment -

          HADOOP_TOKEN_FILE_LOCATION is also a terrible interface.

          aw Allen Wittenauer added a comment -

          Oh, one other thing: there is no way for an end user to create a token file with multiple tokens inside it, short of building custom code to do it. (That issue is a separate, upcoming JIRA.)

          hitliuyi Yi Liu added a comment - - edited

          there is no way for an end user to create a token file with multiple tokens inside it, short of building custom code to do it.

          No, I think we do. The existing Credentials#writeTokenStorageFile persists all tokens of the credentials, and Credentials#readTokenStorageStream reads them all back. So what we need to do is add the different tokens to one Credentials. In your example there are two HDFS clusters; we can get a delegation token from each of them, and since the service fields of the two tokens will differ, we can add both to one Credentials, either directly or through the UGI API.
          Actually, even if we have multiple token files that each contain only one token, we can read them separately through Credentials#readTokenStorageFile and add them to one Credentials.

          Back to the original purpose of the JIRA: I don't see why we need to specify multiple delegation tokens for one webhdfs://. A delegation token lets a service access HDFS on behalf of a user, so one HDFS needs only one delegation token per user. For the distcp example you gave, I think the correct behavior is: the user specifies a delegation token for each webhdfs:// URI, and the MR task adds the two delegation tokens to the user's UGI Credentials. I believe this is already supported. I have not tried distcp across two different secured HDFS clusters, but if there is a bug, the correct fix is as described above, not to support multiple delegation tokens for one webhdfs://.
          We also should not use HADOOP_TOKEN_FILE_LOCATION to solve the problem.
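The "add different tokens to one Credentials" idea can be sketched as follows. This is a toy model, not the Hadoop Credentials class: tokens are represented as a plain mapping keyed by their service field, so tokens for distinct clusters coexist in one credential set:

```python
def merge_credentials(*token_maps):
    """Merge several {service: token} maps into one credentials map.

    Toy model of adding tokens from multiple Credentials objects into
    one: each token is keyed by its service field, so tokens for
    different clusters coexist; on a duplicate service, a later map
    wins, mirroring last-add-wins semantics.
    """
    merged = {}
    for m in token_maps:
        merged.update(m)
    return merged

creds = merge_credentials({"webhdfs://a:50070": "tokA"},
                          {"webhdfs://b:50070": "tokB"})
print(sorted(creds))
```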

          aw Allen Wittenauer added a comment -

          No, I think we do. When using the existing Credentials#writeTokenStorageFile ... (a bunch of other verbiage)

          This demonstrates the big disconnect between what we see and what our users see.

          You don't seriously expect some data scientist or ops person to write code for this, do you? Yes, there's an API, but where are the command line utilities to use it? Where's the example code? Oh that's right, we expect everyone to build their own utilities. Is it because the APIs are the only thing that ever stays stable? Unless we switch Java versions in the middle of a branch. Or, I guess, at least until we move the classes out of jars. Or, ...

          (... and let's not forget that this is in some of the LEAST user-friendly bits of the source. Even long time Hadoop devs shudder in fear when dealing with the UGI and token code ...)

          Back to the original purpose of the JIRA, I don't know why we need to specify multiple delegation tokens in one webhdfs://, the delegation token is used in some service to access HDFS on behalf of user, so one hdfs only needs one delegation token for one user.

          I think you're greatly simplifying the situation. In our use cases, we almost always have multiple realms in play where cross-realm is not and cannot be configured. We also don't trust our jobs to work with the given HDFS JARs since Hadoop backward compatibility is pretty much a joke at this point. (See above) So there are often two WebHDFS URLs given on the distcp command line.

          It's also not unusual to have a third cluster in play to act as an intermediary. So yes, there are definitely real-world use cases where supplying multiple DTs is needed.

          user specify delegation token in each webhdfs://,

          ... which, today, a user can only do via HADOOP_TOKEN_FILE_LOCATION, which I think everyone agrees is pretty terrible. Of course, that's after they build an application to actually create a file with multiple tokens.

          We also should not use HADOOP_TOKEN_FILE_LOCATION to solve the problem.

          ... which ultimately brings us back to this and a handful of other patches we're working on.

          sookim HeeSoo Kim added a comment -

          Merged HDFS-7984 and HDFS-9077.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          -1 pre-patch 24m 45s Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warnings.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 3 new or modified test files.
          +1 javac 9m 2s There were no new javac warning messages.
          +1 javadoc 11m 22s There were no new javadoc warning messages.
          +1 release audit 0m 26s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 2m 54s The applied patch generated 1 new checkstyle issues (total was 108, now 109).
          -1 whitespace 0m 1s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 install 1m 51s mvn install still works.
          +1 eclipse:eclipse 0m 38s The patch built with eclipse:eclipse.
          +1 findbugs 7m 21s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          +1 common tests 8m 58s Tests passed in hadoop-common.
          -1 hdfs tests 64m 55s Tests failed in hadoop-hdfs.
          +1 hdfs tests 0m 32s Tests passed in hadoop-hdfs-client.
              134m 33s  



          Reason Tests
          Timed out tests org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
            org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
            org.apache.hadoop.hdfs.TestReplication
            org.apache.hadoop.hdfs.TestPread
            org.apache.hadoop.hdfs.TestSafeMode
            org.apache.hadoop.hdfs.TestFileAppend4
            org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
            org.apache.hadoop.hdfs.TestRollingUpgrade
            org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
            org.apache.hadoop.hdfs.server.mover.TestStorageMover
            org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
            org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
            org.apache.hadoop.hdfs.server.namenode.TestDeleteRace
            org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
            org.apache.hadoop.hdfs.TestParallelUnixDomainRead



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12768802/HDFS-7984.002.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 3cc7377
          Pre-patch Findbugs warnings https://builds.apache.org/job/PreCommit-HDFS-Build/13199/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/13199/artifact/patchprocess/diffcheckstylehadoop-common.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/13199/artifact/patchprocess/whitespace.txt
          hadoop-common test log https://builds.apache.org/job/PreCommit-HDFS-Build/13199/artifact/patchprocess/testrun_hadoop-common.txt
          hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HDFS-Build/13199/artifact/patchprocess/testrun_hadoop-hdfs.txt
          hadoop-hdfs-client test log https://builds.apache.org/job/PreCommit-HDFS-Build/13199/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13199/testReport/
          Java 1.7.0_55
          uname Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13199/console

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          -1 pre-patch 28m 46s Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warnings.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 3 new or modified test files.
          +1 javac 10m 45s There were no new javac warning messages.
          +1 javadoc 13m 40s There were no new javadoc warning messages.
          +1 release audit 0m 30s The applied patch does not increase the total number of release audit warnings.
          +1 checkstyle 5m 35s There were no new checkstyle issues.
          -1 whitespace 0m 1s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 install 2m 1s mvn install still works.
          +1 eclipse:eclipse 0m 45s The patch built with eclipse:eclipse.
          +1 findbugs 8m 45s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          -1 common tests 10m 21s Tests failed in hadoop-common.
          -1 hdfs tests 81m 29s Tests failed in hadoop-hdfs.
          +1 hdfs tests 0m 39s Tests passed in hadoop-hdfs-client.
              163m 22s  



          Reason Tests
          Failed unit tests hadoop.metrics2.impl.TestMetricsSystemImpl
            hadoop.ipc.TestDecayRpcScheduler
            hadoop.ipc.TestRPCWaitForProxy
            hadoop.hdfs.server.namenode.ha.TestEditLogTailer
            hadoop.hdfs.TestPersistBlocks
            hadoop.hdfs.TestRollingUpgrade
            hadoop.hdfs.server.datanode.TestDirectoryScanner



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12769027/HDFS-7984.003.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / ab99d95
          Pre-patch Findbugs warnings https://builds.apache.org/job/PreCommit-HDFS-Build/13229/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/13229/artifact/patchprocess/whitespace.txt
          hadoop-common test log https://builds.apache.org/job/PreCommit-HDFS-Build/13229/artifact/patchprocess/testrun_hadoop-common.txt
          hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HDFS-Build/13229/artifact/patchprocess/testrun_hadoop-hdfs.txt
          hadoop-hdfs-client test log https://builds.apache.org/job/PreCommit-HDFS-Build/13229/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13229/testReport/
          Java 1.7.0_55
          uname Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13229/console

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 9s docker + precommit patch detected.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 3m 26s trunk passed
          +1 compile 5m 13s trunk passed with JDK v1.8.0_60
          +1 compile 4m 48s trunk passed with JDK v1.7.0_79
          +1 checkstyle 1m 9s trunk passed
          +1 mvneclipse 0m 44s trunk passed
          -1 findbugs 2m 9s hadoop-hdfs-project/hadoop-hdfs in trunk cannot run convertXmlToText from findbugs
          +1 javadoc 2m 51s trunk passed with JDK v1.8.0_60
          +1 javadoc 3m 51s trunk passed with JDK v1.7.0_79
          +1 mvninstall 2m 59s the patch passed
          +1 compile 5m 3s the patch passed with JDK v1.8.0_60
          +1 javac 5m 3s the patch passed
          +1 compile 4m 48s the patch passed with JDK v1.7.0_79
          +1 javac 4m 48s the patch passed
          +1 checkstyle 1m 6s the patch passed
          +1 mvneclipse 0m 44s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 6m 45s the patch passed
          +1 javadoc 2m 47s the patch passed with JDK v1.8.0_60
          +1 javadoc 3m 50s the patch passed with JDK v1.7.0_79
          -1 unit 7m 45s hadoop-common in the patch failed with JDK v1.8.0_60.
          -1 unit 63m 46s hadoop-hdfs in the patch failed with JDK v1.8.0_60.
          +1 unit 1m 3s hadoop-hdfs-client in the patch passed with JDK v1.8.0_60.
          +1 unit 8m 25s hadoop-common in the patch passed with JDK v1.7.0_79.
          -1 unit 64m 20s hadoop-hdfs in the patch failed with JDK v1.7.0_79.
          +1 unit 1m 1s hadoop-hdfs-client in the patch passed with JDK v1.7.0_79.
          -1 asflicense 0m 22s Patch generated 56 ASF License warnings.
          204m 40s



          Reason Tests
          JDK v1.7.0_79 Failed junit tests hadoop.metrics2.impl.TestMetricsSystemImpl
            hadoop.test.TestTimedOutTestsListener
            hadoop.hdfs.server.datanode.TestDataNodeMetrics
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010
            hadoop.hdfs.TestLeaseRecovery2
            hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
            hadoop.hdfs.server.blockmanagement.TestNodeCount
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
            hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
            hadoop.hdfs.server.balancer.TestBalancer



          Subsystem Report/Notes
          Docker Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-10-29
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12769574/HDFS-7984.004.patch
          JIRA Issue HDFS-7984
          Optional Tests asflicense javac javadoc mvninstall unit findbugs checkstyle compile
          uname Linux 9a83d0bf4e48 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-c3a2069/precommit/personality/hadoop.sh
          git revision trunk / e2267de
          Default Java 1.7.0_79
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_60 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/13280/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13280/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13280/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13280/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_79.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/13280/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13280/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13280/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_79.txt
          JDK v1.7.0_79 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13280/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/13280/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client U: .
          Max memory used 228MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13280/console

          This message was automatically generated.

          sookim HeeSoo Kim added a comment -

          The test failures are unrelated to the change made for this JIRA.

          aw Allen Wittenauer added a comment -
          • The new property needs documentation in core-default.xml.

          There really should be some example usage, etc, but given that there is zero end-user documentation on delegation tokens that I can find, there's really not a great place to put it. Let's open another JIRA for that though since that's a much larger scope.

          I'm also kinda surprised that there really isn't a decent, appropriate location to define the constant string for the config property. A quick pass through hadoop-common shows that these things are all over the place.

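          Per the request above, a core-default.xml entry for the new property might look like the following. This is a hedged sketch only — the description text and empty default are illustrative, not the committed wording:

          ```xml
          <property>
            <name>hadoop.token.files</name>
            <value></value>
            <description>
              Comma-delimited list of local file paths containing serialized
              delegation tokens. When set, Hadoop reads the tokens from these
              files and uses them to connect to the services named in each token.
            </description>
          </property>
          ```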
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 6s docker + precommit patch detected.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 3m 19s trunk passed
          +1 compile 4m 40s trunk passed with JDK v1.8.0_60
          +1 compile 4m 22s trunk passed with JDK v1.7.0_79
          +1 checkstyle 1m 2s trunk passed
          +1 mvneclipse 0m 41s trunk passed
          -1 findbugs 2m 5s hadoop-hdfs-project/hadoop-hdfs in trunk cannot run convertXmlToText from findbugs
          +1 javadoc 2m 29s trunk passed with JDK v1.8.0_60
          +1 javadoc 3m 23s trunk passed with JDK v1.7.0_79
          +1 mvninstall 2m 37s the patch passed
          +1 compile 4m 35s the patch passed with JDK v1.8.0_60
          +1 javac 4m 35s the patch passed
          +1 compile 4m 31s the patch passed with JDK v1.7.0_79
          +1 javac 4m 31s the patch passed
          -1 checkstyle 0m 56s Patch generated 2 new checkstyle issues in root (total was 465, now 466).
          +1 mvneclipse 0m 41s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          +1 findbugs 6m 7s the patch passed
          +1 javadoc 2m 23s the patch passed with JDK v1.8.0_60
          +1 javadoc 3m 14s the patch passed with JDK v1.7.0_79
          -1 unit 6m 14s hadoop-common in the patch failed with JDK v1.8.0_60.
          -1 unit 54m 6s hadoop-hdfs in the patch failed with JDK v1.8.0_60.
          +1 unit 0m 55s hadoop-hdfs-client in the patch passed with JDK v1.8.0_60.
          -1 unit 6m 55s hadoop-common in the patch failed with JDK v1.7.0_79.
          -1 unit 52m 29s hadoop-hdfs in the patch failed with JDK v1.7.0_79.
          +1 unit 0m 56s hadoop-hdfs-client in the patch passed with JDK v1.7.0_79.
          -1 asflicense 0m 21s Patch generated 56 ASF License warnings.
          174m 29s



          Reason Tests
          JDK v1.8.0_60 Failed junit tests hadoop.ipc.TestIPC
            hadoop.metrics2.sink.TestFileSink
            hadoop.hdfs.server.blockmanagement.TestNodeCount
          JDK v1.7.0_79 Failed junit tests hadoop.crypto.key.TestValueQueue
            hadoop.metrics2.sink.TestFileSink
            hadoop.hdfs.TestSafeModeWithStripedFile
            hadoop.hdfs.server.blockmanagement.TestNodeCount
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure



          Subsystem Report/Notes
          Docker Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-11-02
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12770129/HDFS-7984.005.patch
          JIRA Issue HDFS-7984
          Optional Tests asflicense javac javadoc mvninstall unit findbugs checkstyle compile xml
          uname Linux ac1973d1b0b3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build@2/patchprocess/apache-yetus-e77b1ce/precommit/personality/hadoop.sh
          git revision trunk / 9e7dcab
          Default Java 1.7.0_79
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_60 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/13337/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/13337/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13337/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13337/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13337/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_79.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13337/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_79.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/13337/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13337/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13337/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_79.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13337/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_79.txt
          JDK v1.7.0_79 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13337/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/13337/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client U: .
          Max memory used 227MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13337/console

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 6s docker + precommit patch detected.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 3m 10s trunk passed
          +1 compile 4m 26s trunk passed with JDK v1.8.0_60
          +1 compile 4m 10s trunk passed with JDK v1.7.0_79
          +1 checkstyle 0m 58s trunk passed
          +1 mvneclipse 0m 40s trunk passed
          -1 findbugs 1m 48s hadoop-hdfs-project/hadoop-hdfs in trunk cannot run convertXmlToText from findbugs
          +1 javadoc 2m 17s trunk passed with JDK v1.8.0_60
          +1 javadoc 3m 10s trunk passed with JDK v1.7.0_79
          +1 mvninstall 2m 40s the patch passed
          +1 compile 4m 15s the patch passed with JDK v1.8.0_60
          +1 javac 4m 15s the patch passed
          +1 compile 4m 15s the patch passed with JDK v1.7.0_79
          +1 javac 4m 15s the patch passed
          -1 checkstyle 1m 1s Patch generated 1 new checkstyle issues in root (total was 465, now 465).
          +1 mvneclipse 0m 39s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 findbugs 5m 47s the patch passed
          +1 javadoc 2m 19s the patch passed with JDK v1.8.0_60
          +1 javadoc 3m 8s the patch passed with JDK v1.7.0_79
          +1 unit 6m 23s hadoop-common in the patch passed with JDK v1.8.0_60.
          -1 unit 50m 19s hadoop-hdfs in the patch failed with JDK v1.8.0_60.
          +1 unit 0m 50s hadoop-hdfs-client in the patch passed with JDK v1.8.0_60.
          +1 unit 6m 58s hadoop-common in the patch passed with JDK v1.7.0_79.
          -1 unit 50m 5s hadoop-hdfs in the patch failed with JDK v1.7.0_79.
          +1 unit 0m 56s hadoop-hdfs-client in the patch passed with JDK v1.7.0_79.
          -1 asflicense 0m 21s Patch generated 58 ASF License warnings.
          165m 30s



          Reason Tests
          JDK v1.8.0_60 Failed junit tests hadoop.hdfs.TestDFSFinalize
            hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
            hadoop.hdfs.TestDFSUpgradeFromImage
            hadoop.hdfs.server.datanode.TestFsDatasetCache
          JDK v1.7.0_79 Failed junit tests hadoop.hdfs.TestEncryptionZonesWithKMS
            hadoop.hdfs.TestSafeMode
            hadoop.hdfs.server.namenode.TestDecommissioningStatus
            hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
            hadoop.hdfs.TestDFSUpgradeFromImage



          Subsystem Report/Notes
          Docker Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-11-03
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12770354/HDFS-7984.006.patch
          JIRA Issue HDFS-7984
          Optional Tests asflicense javac javadoc mvninstall unit findbugs checkstyle compile xml
          uname Linux f917f4ff156f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-1a9afee/precommit/personality/hadoop.sh
          git revision trunk / 957f031
          Default Java 1.7.0_79
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_60 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/13361/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/13361/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13361/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13361/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_79.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/13361/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13361/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_79.txt
          JDK v1.7.0_79 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13361/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/13361/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client U: .
          Max memory used 224MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13361/console

          This message was automatically generated.

          sookim HeeSoo Kim added a comment -

          The test failures are unrelated to the change made for this JIRA.
          There is a javadoc checkstyle error, but I have kept the code as-is for consistency.

          sookim HeeSoo Kim added a comment -

          Added the hadoop.token.files property to core-site.xml; the patch also checks whether each listed token file exists.
          It also changes RENEWDELEGATIONTOKEN to use delegationParam only when the job has credential information. If the job has no credentials, it still uses a SPNEGO connection to obtain the right credential.
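[Editor's note] As a rough sketch of the behavior described above — the real logic lives in Java inside UserGroupInformation, and the paths below are hypothetical — splitting a comma-delimited hadoop.token.files value and checking each entry could look like:

```shell
# Illustrative sketch only, not the actual implementation:
# split a comma-delimited hadoop.token.files value and verify
# that each listed delegation-token file actually exists.
token_files="/tmp/token1,/tmp/token2"   # hypothetical property value
IFS=',' read -r -a files <<< "$token_files"
for f in "${files[@]}"; do
  if [ -e "$f" ]; then
    echo "token file found: $f"
  else
    echo "token file missing: $f" >&2
  fi
done
```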

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 6s docker + precommit patch detected.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 3m 12s trunk passed
          +1 compile 4m 56s trunk passed with JDK v1.8.0_60
          +1 compile 4m 45s trunk passed with JDK v1.7.0_79
          +1 checkstyle 1m 3s trunk passed
          +1 mvnsite 2m 23s trunk passed
          +1 mvneclipse 0m 43s trunk passed
          +1 findbugs 5m 59s trunk passed
          +1 javadoc 2m 44s trunk passed with JDK v1.8.0_60
          +1 javadoc 3m 40s trunk passed with JDK v1.7.0_79
          +1 mvninstall 2m 53s the patch passed
          +1 compile 4m 57s the patch passed with JDK v1.8.0_60
          +1 javac 4m 57s the patch passed
          +1 compile 4m 42s the patch passed with JDK v1.7.0_79
          +1 javac 4m 42s the patch passed
          -1 checkstyle 1m 7s Patch generated 1 new checkstyle issues in root (total was 467, now 464).
          +1 mvnsite 2m 20s the patch passed
          +1 mvneclipse 0m 44s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          +1 findbugs 6m 37s the patch passed
          +1 javadoc 2m 45s the patch passed with JDK v1.8.0_60
          +1 javadoc 3m 37s the patch passed with JDK v1.7.0_79
          -1 unit 21m 58s hadoop-common in the patch failed with JDK v1.8.0_60.
          -1 unit 63m 46s hadoop-hdfs in the patch failed with JDK v1.8.0_60.
          +1 unit 1m 1s hadoop-hdfs-client in the patch passed with JDK v1.8.0_60.
          -1 unit 7m 20s hadoop-common in the patch failed with JDK v1.7.0_79.
          -1 unit 59m 0s hadoop-hdfs in the patch failed with JDK v1.7.0_79.
          +1 unit 1m 0s hadoop-hdfs-client in the patch passed with JDK v1.7.0_79.
          -1 asflicense 0m 21s Patch generated 56 ASF License warnings.
          215m 17s



          Reason Tests
          JDK v1.8.0_60 Failed junit tests hadoop.ipc.TestDecayRpcScheduler
            hadoop.ipc.TestIPC
            hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
            hadoop.hdfs.security.TestDelegationTokenForProxyUser
            hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
            hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
            hadoop.fs.viewfs.TestViewFsAtHdfsRoot
          JDK v1.8.0_60 Timed out junit tests org.apache.hadoop.http.TestHttpServerLifecycle
          JDK v1.7.0_79 Failed junit tests hadoop.ipc.TestDecayRpcScheduler
            hadoop.hdfs.server.blockmanagement.TestNodeCount
            hadoop.hdfs.TestBlockReaderLocal



          Subsystem Report/Notes
          Docker Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-11-13
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12772045/HDFS-7984.007.patch
          JIRA Issue HDFS-7984
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux 90742a59df1e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-fa12328/precommit/personality/hadoop.sh
          git revision trunk / 7ff280f
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/13498/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13498/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13498/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_60.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13498/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_79.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13498/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_79.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/13498/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13498/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13498/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_79.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13498/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_79.txt
          JDK v1.7.0_79 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13498/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/13498/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client U: .
          Max memory used 229MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13498/console

          This message was automatically generated.

          sookim HeeSoo Kim added a comment -

          The test failures are unrelated to the change made for this JIRA.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 9s docker + precommit patch detected.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 7m 38s trunk passed
          +1 compile 7m 42s trunk passed with JDK v1.8.0_66
          +1 compile 8m 40s trunk passed with JDK v1.7.0_85
          +1 checkstyle 1m 1s trunk passed
          +1 mvnsite 2m 26s trunk passed
          +1 mvneclipse 0m 40s trunk passed
          +1 findbugs 5m 37s trunk passed
          +1 javadoc 2m 35s trunk passed with JDK v1.8.0_66
          +1 javadoc 3m 13s trunk passed with JDK v1.7.0_85
          +1 mvninstall 3m 9s the patch passed
          +1 compile 7m 51s the patch passed with JDK v1.8.0_66
          -1 javac 18m 42s root-jdk1.8.0_66 with JDK v1.8.0_66 generated 2 new issues (was 779, now 779).
          +1 javac 7m 51s the patch passed
          +1 compile 8m 37s the patch passed with JDK v1.7.0_85
          -1 javac 27m 19s root-jdk1.7.0_85 with JDK v1.7.0_85 generated 2 new issues (was 772, now 772).
          +1 javac 8m 37s the patch passed
          -1 checkstyle 1m 5s Patch generated 1 new checkstyle issues in root (total was 467, now 464).
          +1 mvnsite 2m 27s the patch passed
          +1 mvneclipse 0m 42s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 findbugs 5m 56s the patch passed
          +1 javadoc 2m 21s the patch passed with JDK v1.8.0_66
          +1 javadoc 3m 14s the patch passed with JDK v1.7.0_85
          -1 unit 12m 33s hadoop-common in the patch failed with JDK v1.8.0_66.
          -1 unit 52m 48s hadoop-hdfs in the patch failed with JDK v1.8.0_66.
          +1 unit 0m 54s hadoop-hdfs-client in the patch passed with JDK v1.8.0_66.
          -1 unit 7m 7s hadoop-common in the patch failed with JDK v1.7.0_85.
          -1 unit 50m 4s hadoop-hdfs in the patch failed with JDK v1.7.0_85.
          +1 unit 1m 1s hadoop-hdfs-client in the patch passed with JDK v1.7.0_85.
          -1 asflicense 0m 20s Patch generated 58 ASF License warnings.
          201m 15s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.net.TestClusterTopology
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
            hadoop.hdfs.server.namenode.ha.TestDNFencing
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
          JDK v1.7.0_85 Failed junit tests hadoop.security.ssl.TestReloadingX509TrustManager
            hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:date2015-11-17
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12772045/HDFS-7984.007.patch
          JIRA Issue HDFS-7984
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux fe3d7f2c6cb3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-3f4279a/precommit/personality/hadoop.sh
          git revision trunk / dfbde3f
          findbugs v3.0.0
          javac root-jdk1.8.0_66: https://builds.apache.org/job/PreCommit-HDFS-Build/13541/artifact/patchprocess/diff-compile-javac-root-jdk1.8.0_66.txt
          javac root-jdk1.7.0_85: https://builds.apache.org/job/PreCommit-HDFS-Build/13541/artifact/patchprocess/diff-compile-javac-root-jdk1.7.0_85.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/13541/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13541/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13541/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13541/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_85.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13541/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/13541/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13541/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13541/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_85.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13541/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13541/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/13541/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client U: .
          Max memory used 77MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13541/console

          This message was automatically generated.

          aw Allen Wittenauer added a comment -

          Yetus isn't smart enough (yet) to pull apart line numbers so those deprecation warnings are not associated with this patch.

          aw Allen Wittenauer added a comment -

          If I understand the code change correctly, I'm sort of surprised this doesn't work:

          on user account1:

          hdfs fetchdt /tmp/token
          chmod a+r /tmp/token
          

          on user account2:

          hadoop fs -Dhadoop.token.file=/tmp/token -ls /user/account1
          

          Both hdfs and webhdfs are failing this simple test.

          sookim HeeSoo Kim added a comment -

          If you run the command as user account2, you should set the property through the HADOOP_CLIENT_OPTS environment variable.
          on user account2:

          HADOOP_CLIENT_OPTS="${HADOOP_CLIENT_OPTS} -Dhadoop.token.files=/tmp/token" hadoop fs -ls /user/account1
          

          When you use the -Dproperty=value pattern, the property must be specified before the Java main class (i.e., passed to the JVM itself), not after it.

          hadoop fs -Dhadoop.token.files=/tmp/token -ls /user/account1
          

          If you use the above command, -Dhadoop.token.files simply becomes part of the value of the sun.java.command property (an application argument) rather than a JVM system property:

          sun.java.command=org.apache.hadoop.fs.FsShell  -Dhadoop.token.files=/tmp/token -ls /user/account1
          
          aw Allen Wittenauer added a comment -

          OK, .hadooprc and HADOOP_CLIENT_OPTS DOES work. This really does seem like a bug in how hadoop fs/hdfs dfs handles properties on the CLI.

          +1 committing this to trunk.

          Thanks!
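[Editor's note] The working setup described above can be sketched as follows; the token path /tmp/token comes from the earlier example, and this is an illustration rather than official documentation:

```shell
# Hypothetical ~/.hadooprc sketch: make every hadoop/hdfs client
# invocation pass the token-files property to the JVM by appending
# it to HADOOP_CLIENT_OPTS.
export HADOOP_CLIENT_OPTS="${HADOOP_CLIENT_OPTS} -Dhadoop.token.files=/tmp/token"
```

With this in place, a plain `hadoop fs -ls /user/account1` picks up the delegation token without any -D option on the command line.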

          aw Allen Wittenauer added a comment -

          (Ok, this is technically a common patch, but the vast vast vast majority of code is in HDFS, so I guess I'll keep it there.)

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #8942 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8942/)
          HDFS-9525. hadoop utilities need to support provided delegation tokens (aw: rev 832b3cbde1c2f77b04c93188e3a94420974090cf)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
          • hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #677 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/677/)
          HDFS-9525. hadoop utilities need to support provided delegation tokens (aw: rev 832b3cbde1c2f77b04c93188e3a94420974090cf)

          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
          • hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
          daryn Daryn Sharp added a comment -

          Allen Wittenauer Heesoo Kim -1. Revert everything (I have a knack for screwing up git or I would do it) except the multiple token file support which is what this jira purported to do. Never make fundamental security changes under an innocent sounding title.

          1. You cannot get a token with a token. That effectively killed security. What's the purpose of having an expiration if I can steal a token and use it to get new tokens forever?
          2. When you see a test explicitly stating that you can't use a token to get a token, you don't delete it.
          3. When you see a test called testPrivateTokenExclusion, that deals with 3 tokens, with the comment "// Ensure only non-private tokens are returned", you don't change the assert from 1 to 3.
          4. In general, when you touch something security related and tests break - best case is unacceptable incompatibility. Worst case, this.

          I'm sorry for my tone. Tremendous effort was spent to stabilize webhdfs for production usage. Ignoring the security implications, handling of token acquisition, spnego contexts, and renewal was a terrible problem. If I've misinterpreted the patch, please correct me.

          aw Allen Wittenauer added a comment -

          I've reverted the patch.

          except the multiple token file support which is what this jira purported to do.

          Well, no. The whole underlying point of this JIRA is to fix WebHDFS which despite "[t]remendous effort" doesn't work with multiple Kerberos realms that don't have an established trust when using something like distcp.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #8957 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8957/)
          Revert "HDFS-9525. hadoop utilities need to support provided delegation (aw: rev 576b569b6c97bd5f57e52efdabdf8c2fa996a524)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java
          • hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
          daryn Daryn Sharp added a comment -

          The whole underlying point of this JIRA is to fix WebHDFS which ... doesn't work with multiple Kerberos realms

          1. There's nothing wrong with spnego.
          2. There's nothing wrong with webhdfs.
          3. You can't fix something that's not broken by breaking it.

          Kerberos wasn't designed to be used in a non-trusting environment. All web clients I'm aware of (e.g. curl, the JDK's) fail if a webserver in one realm redirects to another webserver in a second, non-trusted realm. They don't have multiple identity/TGT support.

          You've discovered that feeding in tokens via an external tool like fetchdt is rather painful. So you probably worked backwards from the end goal: used fetchdt as the remote identity to get a token, fed that via a token cache into your distcps, became dismayed that it eventually expired, possibly had to restart a daemon to read in a new token cache every week or two, and thus ultimately decided to break the security model to allow getting new tokens from prior tokens.

          What I would do is write a wrapper over distcp and change nothing in hadoop core. Off the top of my head, something like:

          // Log in as the local principal for the source cluster.
          UserGroupInformation.loginUserFromKeytab("local-principal", keytab);

          // Log in separately as the remote principal, then fetch delegation
          // tokens for the remote path into the login user's credentials.
          UserGroupInformation whyDoYouNotTrustMe =
              UserGroupInformation.loginUserFromKeytabAndReturnUGI("other-principal", keytab);
          whyDoYouNotTrustMe.doAs(
              new PrivilegedExceptionAction<Void>() {
                @Override
                public Void run() throws IOException {
                  remotePath.getFileSystem(conf).addDelegationTokens(
                      renewer, UserGroupInformation.getLoginUser().getCredentials());
                  return null;
                }
              });

          // Run distcp with the credentials gathered above.
          DistCpOptions options = new DistCpOptions(listFile, target);
          options.setXYZ(...);
          new DistCp(conf, options).execute();
          

          Snapping back to the token files: if that's the route you choose to follow, rather than a yack (yet another config key), why not add an option to fetchdt to append tokens to a file instead of overwriting the entire file?

          aw Allen Wittenauer added a comment -

          It'd be great if you read over the past comments instead of jumping to conclusions. Thanks.

          Also:

          Snapping back the token files if that's the route you chose to follow, rather than a yack (yet another config key), why not add an option to fetchdt to add tokens to a file instead of overwriting the entire file?

          See HADOOP-12563.

          daryn Daryn Sharp added a comment -

          I've read the comments but I don't see the connection between the discussion and the implementation. Hence why I asked for a correction if wrong.

          An enhanced fetchdt is probably the best solution to side step the lack of realm trust.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #688 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/688/)
          Revert "HDFS-9525. hadoop utilities need to support provided delegation (aw: rev 576b569b6c97bd5f57e52efdabdf8c2fa996a524)

          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          • hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
          sookim HeeSoo Kim added a comment -

          Daryn Sharp, Allen Wittenauer Thank you for your feedback.

          An enhanced fetchdt is probably the best solution to side step the lack of realm trust.

          That's right. We can use fetchdt to get a token from a cluster in an untrusted realm.
          However, WebHDFS still has a problem using a token that was obtained with fetchdt.

          I changed the code to support the following features.

          1. It supports multiple token files when we have fetched delegation tokens from the target filesystem using fetchdt.
          2. If we distcp from a non-Kerberos cluster to a Kerberos cluster, WebHDFS does not use the delegation token even when the UGI holds a WebHDFS token. This change makes WebHDFS use such a token on a non-Kerberos cluster.
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 1s The patch appears to include 2 new or modified test files.
          +1 mvninstall 7m 36s trunk passed
          +1 compile 7m 48s trunk passed with JDK v1.8.0_66
          +1 compile 8m 31s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 57s trunk passed
          +1 mvnsite 2m 26s trunk passed
          +1 mvneclipse 0m 40s trunk passed
          +1 findbugs 5m 23s trunk passed
          +1 javadoc 2m 16s trunk passed with JDK v1.8.0_66
          +1 javadoc 3m 11s trunk passed with JDK v1.7.0_91
          +1 mvninstall 3m 4s the patch passed
          +1 compile 7m 36s the patch passed with JDK v1.8.0_66
          -1 javac 17m 59s root-jdk1.8.0_66 with JDK v1.8.0_66 generated 2 new issues (was 729, now 729).
          +1 javac 7m 36s the patch passed
          +1 compile 8m 30s the patch passed with JDK v1.7.0_91
          -1 javac 26m 29s root-jdk1.7.0_91 with JDK v1.7.0_91 generated 2 new issues (was 723, now 723).
          +1 javac 8m 30s the patch passed
          -1 checkstyle 0m 57s Patch generated 1 new checkstyle issues in root (total was 345, now 346).
          +1 mvnsite 2m 23s the patch passed
          +1 mvneclipse 0m 41s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 findbugs 5m 52s the patch passed
          +1 javadoc 2m 16s the patch passed with JDK v1.8.0_66
          +1 javadoc 3m 11s the patch passed with JDK v1.7.0_91
          -1 unit 6m 31s hadoop-common in the patch failed with JDK v1.8.0_66.
          +1 unit 0m 49s hadoop-hdfs-client in the patch passed with JDK v1.8.0_66.
          -1 unit 50m 4s hadoop-hdfs in the patch failed with JDK v1.8.0_66.
          -1 unit 7m 3s hadoop-common in the patch failed with JDK v1.7.0_91.
          +1 unit 0m 57s hadoop-hdfs-client in the patch passed with JDK v1.7.0_91.
          -1 unit 51m 18s hadoop-hdfs in the patch failed with JDK v1.7.0_91.
          -1 asflicense 0m 20s Patch generated 58 ASF License warnings.
          191m 55s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.fs.shell.TestCopyPreserveFlag
            hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
            hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
          JDK v1.7.0_91 Failed junit tests hadoop.metrics2.impl.TestGangliaMetrics
            hadoop.hdfs.web.TestWebHDFS



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12777595/HDFS-9525.008.patch
          JIRA Issue HDFS-9525
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux 2a07d6857524 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 0c3a53e
          findbugs v3.0.0
          javac root-jdk1.8.0_66: https://builds.apache.org/job/PreCommit-HDFS-Build/13879/artifact/patchprocess/diff-compile-javac-root-jdk1.8.0_66.txt
          javac root-jdk1.7.0_91: https://builds.apache.org/job/PreCommit-HDFS-Build/13879/artifact/patchprocess/diff-compile-javac-root-jdk1.7.0_91.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/13879/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13879/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13879/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13879/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_91.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13879/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/13879/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13879/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13879/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_91.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13879/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13879/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/13879/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client U: .
          Max memory used 75MB
          Powered by Apache Yetus 0.1.0 http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13879/console

          This message was automatically generated.

          sookim HeeSoo Kim added a comment -

          The test failures are unrelated to the change made for this jira.
          Daryn Sharp and Allen Wittenauer, would you please review this new patch?

          Thanks,

          aw Allen Wittenauer added a comment -

          javac issues are directly related to YETUS-187.

          daryn Daryn Sharp added a comment -

          If we want to distcp from non-kerberos cluster to kerberos cluster, WebHDFS does not use the delegationToken even ugi has the webHDFS's token.

          I thought the issue at hand is how to access 2 kerberos clusters? If the other cluster is insecure, then just set ipc.client.fallback-to-simple-auth-allowed=true. Even though the key has ipc in it, it still applies to webhdfs too.
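
          For reference, that fallback key is set in core-site.xml; the fragment below is only an illustrative sketch (the property name is the one cited above, the description text is mine):

          <property>
            <name>ipc.client.fallback-to-simple-auth-allowed</name>
            <value>true</value>
            <description>Allow a secure client to fall back to simple auth when
              the remote server (including WebHDFS) is insecure.</description>
          </property>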

          It supports to use token for WebHDFS on non-kerberos cluster.

          This is the part that completely confuses me. If it's an insecure cluster, tokens aren't issued. Did you (finish what I started long ago) and issue tokens even with security off? If no, then what issued the token you are attempting to use on the insecure cluster?

          sookim HeeSoo Kim added a comment -

          I thought the issue at hand is how to access 2 kerberos clusters? If the other cluster is insecure, then just set ipc.client.fallback-to-simple-auth-allowed=true.

          Daryn Sharp That use case applies when the source is a Kerberos cluster and the target is a non-Kerberos (simple) cluster.
          However, our use case is the opposite: the source is a non-Kerberos (simple) cluster and the target is a Kerberos cluster.
          This is the use case:

          1. I get a token from the Kerberos-secured target cluster using fetchdt.
          2. The source cluster obtains the delegation token file somehow.
          3. On the source cluster, we set the delegation token file in the hadoop.token.files parameter.
          4. The source cluster then tries to connect to the Kerberos-secured target cluster.

          Even when I set up the delegation token file on the simple-auth source cluster, it does not use the token.
          I agree that if the source cluster does not have token information for the target, WebHDFS needs to request GETDELEGATIONTOKEN.
          However, if the source cluster already has the right service token, WebHDFS should use it.
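
          The workflow above can be sketched as shell commands; this is only a hedged illustration (host names, ports, and paths are hypothetical; hadoop.token.files is the property this jira adds):

          # 1. On a host that can authenticate to the secure target cluster,
          #    fetch a delegation token into a local file.
          hadoop fetchdt --webservice http://secure-nn.example.com:50070 /tmp/target.token

          # 2. Copy the token file to the insecure source cluster (e.g. with scp).

          # 3. On the source cluster, point hadoop.token.files at it and run distcp.
          hadoop distcp -D hadoop.token.files=/tmp/target.token \
              hdfs:///data/src webhdfs://secure-nn.example.com:50070/data/dst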

          sookim HeeSoo Kim added a comment -

          Patch for branch-2.

          wheat9 Haohui Mai added a comment -

          Does it make more sense to extend `HADOOP_TOKEN_FILE_LOCATION` to support multiple token files instead of introducing a new configuration variable?

          aw Allen Wittenauer added a comment -

          Does it make more sense to extend `HADOOP_TOKEN_FILE_LOCATION` to support multiple token files instead of introducing a new configuration variable?

          No. It's extremely useful to be able to do this from a workflow engine e.g., Oozie.

          wheat9 Haohui Mai added a comment -

          No. It's extremely useful to be able to do this from a workflow engine e.g., Oozie.

          I'm confused. Why is Oozie able to set the configuration but not the environment variable? From a mechanism point of view they are equivalent. It only makes a difference if Oozie can support just a single set of configurations across all workflows.

          aw Allen Wittenauer added a comment -

          Oozie was just an example.

          If I'm firing off several jobs at once via threading, being able to set this as config instead of an env var is significantly easier because it means I don't have to lock around it.

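The concurrency point can be illustrated with a small Python stand-in (not Hadoop code): an environment variable is process-global and shared by every thread, so concurrent jobs would need a lock around it, whereas a configuration value carried per job is independent:

```python
import threading

results = {}

def run_job(name, conf):
    # Each "job" carries its own configuration dict; no locking needed,
    # because nothing process-global is mutated.
    results[name] = conf["hadoop.token.files"]

threads = [
    threading.Thread(target=run_job,
                     args=("job1", {"hadoop.token.files": "/tmp/a.token"})),
    threading.Thread(target=run_job,
                     args=("job2", {"hadoop.token.files": "/tmp/b.token"})),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# By contrast, os.environ["HADOOP_TOKEN_FILE_LOCATION"] is a single
# process-wide slot: two threads setting it concurrently would race
# unless the caller serialized access with a lock.
print(results)
```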
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 8m 39s branch-2 passed
          +1 compile 7m 8s branch-2 passed with JDK v1.8.0_66
          +1 compile 7m 25s branch-2 passed with JDK v1.7.0_91
          +1 checkstyle 1m 8s branch-2 passed
          +1 mvnsite 2m 26s branch-2 passed
          +1 mvneclipse 0m 46s branch-2 passed
          -1 findbugs 2m 13s hadoop-common-project/hadoop-common in branch-2 has 5 extant Findbugs warnings.
          -1 findbugs 1m 59s hadoop-hdfs-project/hadoop-hdfs-client in branch-2 has 5 extant Findbugs warnings.
          +1 javadoc 2m 31s branch-2 passed with JDK v1.8.0_66
          +1 javadoc 3m 27s branch-2 passed with JDK v1.7.0_91
          +1 mvninstall 2m 14s the patch passed
          +1 compile 6m 34s the patch passed with JDK v1.8.0_66
          +1 javac 6m 34s the patch passed
          +1 compile 7m 5s the patch passed with JDK v1.7.0_91
          +1 javac 7m 5s the patch passed
          -1 checkstyle 1m 3s Patch generated 1 new checkstyle issues in root (total was 345, now 346).
          +1 mvnsite 2m 13s the patch passed
          +1 mvneclipse 0m 39s the patch passed
          -1 whitespace 0m 0s The patch has 61 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          +1 findbugs 6m 16s the patch passed
          +1 javadoc 2m 26s the patch passed with JDK v1.8.0_66
          +1 javadoc 3m 21s the patch passed with JDK v1.7.0_91
          +1 unit 7m 21s hadoop-common in the patch passed with JDK v1.8.0_66.
          +1 unit 0m 56s hadoop-hdfs-client in the patch passed with JDK v1.8.0_66.
          -1 unit 46m 15s hadoop-hdfs in the patch failed with JDK v1.8.0_66.
          +1 unit 7m 0s hadoop-common in the patch passed with JDK v1.7.0_91.
          +1 unit 0m 57s hadoop-hdfs-client in the patch passed with JDK v1.7.0_91.
          -1 unit 43m 31s hadoop-hdfs in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 25s Patch does not generate ASF License warnings.
          180m 0s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.hdfs.server.datanode.TestBlockScanner
            hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
          JDK v1.7.0_91 Failed junit tests hadoop.hdfs.TestAppendSnapshotTruncate
            hadoop.hdfs.TestHDFSFileSystemContract
            hadoop.hdfs.server.datanode.TestBlockReplacement



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:babe025
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12780625/HDFS-9525.branch-2.008.patch
          JIRA Issue HDFS-9525
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux d764a2822b4d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2 / fc6c940
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/14036/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/14036/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14036/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/14036/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14036/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14036/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14036/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14036/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14036/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client U: .
          Max memory used 73MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14036/console

          This message was automatically generated.

          aw Allen Wittenauer added a comment -

          I believe the issues have been dealt with. If there are no further comments, I'll commit this tomorrow. Thanks.

          daryn Daryn Sharp added a comment -

          Looking at latest patch now.

          daryn Daryn Sharp added a comment -

          I think I understand better what you are trying to do, and I think you might be able to accomplish your goals with little, if any, code change. I think the main source of frustration is trying to access a secure cluster with security disabled?

          If you are trying to access any secure cluster: enable security in the config. If you will also access an insecure cluster: also set ipc.client.fallback-to-simple-auth-allowed=true. Now you should be able to access any mixture of (in)secure clusters using hdfs or webhdfs.

          There's also an existing config "mapreduce.job.credentials.binary" that can be used to read in a token cache.

          Aside: If using webhdfs for both source and target, I'd advise against it. Webhdfs generates a much higher load on a cluster and is much less fault-tolerant than normal hdfs. Our rule of thumb is always pull data (run distcp on the target), read source with webhdfs (but only when RPC is acl-ed off), always write to local target with hdfs.

          If a code change is necessary, UGI should use Configuration#getTrimmedStrings and unconditionally call Credentials.readTokenStorageFile instead of allowing the user to specify an invalid setting. Only webhdfs related change is WebHdfsFileSystem.canRefreshDelegationToken should default to true.

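The two client-side settings suggested above would look like this in core-site.xml (a sketch of the recommended configuration, not taken from any patch in this issue):

```xml
<!-- Enable security on the client so it can talk to secure clusters... -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<!-- ...and allow it to fall back to simple auth for insecure clusters. -->
<property>
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>
```

With both set, one client configuration can address a mixture of secure and insecure clusters over hdfs or webhdfs.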
          sookim HeeSoo Kim added a comment -

          Daryn Sharp Thank you for your review.

          If using webhdfs for both source and target, I'd advise against it.

          I agree that webhdfs should be used on only one side, either the source or the target.

          Our rule of thumb is always pull data (run distcp on the target), read source with webhdfs (but only when RPC is acl-ed off), always write to local target with hdfs.

          I think that is very important information. I had been trying to find a solution that runs distcp from the source.

          Thank you for the code recommendation. Let me change the code and test it against the new use case.

          aw Allen Wittenauer added a comment -

          Let's not try to limit ourselves to just solving WebHDFS here. I think it's important to recognize that:

          • this goes beyond just distcp, esp wrt future potential applications (so mapreduce.job.credentials.binary isn't particularly useful if one isn't doing MR...)
          • post HADOOP-12563 there is a very real possibility of having more than just HDFS delegation tokens in use
          • there may be more than two clusters involved
          • there are plenty of places where there is a recommended configuration/usage, but Hadoop doesn't restrict users to just that recommendation

          Whenever Hadoop limits itself to solving the absolute, immediate problem rather than building for the future, it ends up a mess. (I'll be more than happy to give examples, but I figure I don't need to...) As a community, we've always succeeded and reached greater heights when keeping the door wide open.

          sookim HeeSoo Kim added a comment -

          I updated the code to use Configuration#getTrimmedStrings.

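For readers unfamiliar with it, Configuration#getTrimmedStrings splits a comma-delimited value, trims surrounding whitespace, and drops empty entries, which is exactly the shape hadoop.token.files needs. A rough Python stand-in of that parsing behavior (the function name is hypothetical):

```python
def get_trimmed_strings(value):
    """Roughly mimic Hadoop Configuration#getTrimmedStrings: split a
    comma-delimited value, trim whitespace, and drop empty entries."""
    if value is None:
        return []
    return [part.strip() for part in value.split(",") if part.strip()]

# A sloppy hadoop.token.files value still parses cleanly:
print(get_trimmed_strings(" /tmp/a.token , /tmp/b.token ,"))
# → ['/tmp/a.token', '/tmp/b.token']
```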
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          0 mvndep 1m 47s Maven dependency ordering for branch
          +1 mvninstall 7m 5s branch-2 passed
          +1 compile 5m 43s branch-2 passed with JDK v1.8.0_66
          +1 compile 6m 24s branch-2 passed with JDK v1.7.0_91
          +1 checkstyle 1m 3s branch-2 passed
          +1 mvnsite 2m 12s branch-2 passed
          +1 mvneclipse 0m 42s branch-2 passed
          -1 findbugs 1m 58s hadoop-common-project/hadoop-common in branch-2 has 5 extant Findbugs warnings.
          -1 findbugs 1m 48s hadoop-hdfs-project/hadoop-hdfs-client in branch-2 has 5 extant Findbugs warnings.
          +1 javadoc 2m 20s branch-2 passed with JDK v1.8.0_66
          +1 javadoc 3m 15s branch-2 passed with JDK v1.7.0_91
          0 mvndep 0m 24s Maven dependency ordering for patch
          +1 mvninstall 1m 53s the patch passed
          +1 compile 5m 30s the patch passed with JDK v1.8.0_66
          +1 javac 5m 30s the patch passed
          +1 compile 6m 29s the patch passed with JDK v1.7.0_91
          +1 javac 6m 29s the patch passed
          -1 checkstyle 1m 2s root: patch generated 1 new + 338 unchanged - 0 fixed = 339 total (was 338)
          +1 mvnsite 2m 13s the patch passed
          +1 mvneclipse 0m 39s the patch passed
          -1 whitespace 0m 0s The patch has 61 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          +1 findbugs 6m 11s the patch passed
          +1 javadoc 2m 24s the patch passed with JDK v1.8.0_66
          +1 javadoc 3m 17s the patch passed with JDK v1.7.0_91
          +1 unit 7m 25s hadoop-common in the patch passed with JDK v1.8.0_66.
          +1 unit 0m 58s hadoop-hdfs-client in the patch passed with JDK v1.8.0_66.
          -1 unit 45m 2s hadoop-hdfs in the patch failed with JDK v1.8.0_66.
          -1 unit 6m 25s hadoop-common in the patch failed with JDK v1.7.0_91.
          +1 unit 0m 55s hadoop-hdfs-client in the patch passed with JDK v1.7.0_91.
          -1 unit 43m 40s hadoop-hdfs in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 29s Patch does not generate ASF License warnings.
          172m 49s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.hdfs.TestCrcCorruption
          JDK v1.7.0_91 Failed junit tests hadoop.metrics2.impl.TestGangliaMetrics
            hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
            hadoop.hdfs.server.namenode.TestFSImageWithAcl



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:babe025
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12783659/HDFS-9525.branch-2.009.patch
          JIRA Issue HDFS-9525
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux af161cbcd2a3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2 / c19bdc1
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/14190/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/14190/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14190/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/14190/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14190/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14190/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_91.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14190/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14190/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14190/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_91.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14190/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14190/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Max memory used 75MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14190/console

          This message was automatically generated.

          aw Allen Wittenauer added a comment -

          -09:

          • reupload so that precommit will process it
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          0 mvndep 0m 37s Maven dependency ordering for branch
          +1 mvninstall 7m 39s trunk passed
          +1 compile 6m 24s trunk passed with JDK v1.8.0_66
          +1 compile 6m 50s trunk passed with JDK v1.7.0_91
          +1 checkstyle 1m 3s trunk passed
          +1 mvnsite 2m 28s trunk passed
          +1 mvneclipse 0m 41s trunk passed
          +1 findbugs 5m 29s trunk passed
          +1 javadoc 2m 17s trunk passed with JDK v1.8.0_66
          +1 javadoc 3m 15s trunk passed with JDK v1.7.0_91
          0 mvndep 0m 24s Maven dependency ordering for patch
          +1 mvninstall 2m 53s the patch passed
          +1 compile 5m 57s the patch passed with JDK v1.8.0_66
          +1 javac 5m 57s the patch passed
          +1 compile 6m 52s the patch passed with JDK v1.7.0_91
          +1 javac 6m 52s the patch passed
          -1 checkstyle 1m 1s root: patch generated 1 new + 341 unchanged - 0 fixed = 342 total (was 341)
          +1 mvnsite 2m 27s the patch passed
          +1 mvneclipse 0m 41s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 findbugs 6m 14s the patch passed
          +1 javadoc 2m 16s the patch passed with JDK v1.8.0_66
          +1 javadoc 3m 15s the patch passed with JDK v1.7.0_91
          -1 unit 6m 32s hadoop-common in the patch failed with JDK v1.8.0_66.
          +1 unit 0m 50s hadoop-hdfs-client in the patch passed with JDK v1.8.0_66.
          -1 unit 52m 51s hadoop-hdfs in the patch failed with JDK v1.8.0_66.
          +1 unit 7m 2s hadoop-common in the patch passed with JDK v1.7.0_91.
          +1 unit 0m 56s hadoop-hdfs-client in the patch passed with JDK v1.7.0_91.
          +1 unit 50m 0s hadoop-hdfs in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 25s Patch does not generate ASF License warnings.
          189m 1s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.ha.TestZKFailoverController
            hadoop.hdfs.server.datanode.TestBlockScanner
            hadoop.hdfs.qjournal.client.TestQuorumJournalManager



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12783854/HDFS-9525.009.patch
          JIRA Issue HDFS-9525
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux ea3d33500aab 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 34a3900
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14206/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14206/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14206/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14206/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14206/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14206/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Max memory used 77MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14206/console

          This message was automatically generated.

          aw Allen Wittenauer added a comment -

          Feedback has been addressed.

          Committing to trunk.

          Thanks all!

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9170 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9170/)
          HDFS-9525. hadoop utilities need to support provided delegation tokens (aw: rev d22c4239a40a1c7ed49c06038138f0e3f387b4a0)

          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
          • hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          daryn Daryn Sharp added a comment -

          -1 No, feedback was not addressed, a bug was introduced, and the tests were changed to verify the new bug occurs. Strikethru on the one point addressed.

          If a code change is necessary, UGI should use Configuration#getTrimmedStrings and unconditionally call Credentials.readTokenStorageFile instead of allowing the user to specify an invalid setting. Only webhdfs related change is WebHdfsFileSystem.canRefreshDelegationToken should default to true.
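
The loading approach described here (trimmed parsing of the comma-delimited setting, followed by an unconditional read of each file) can be sketched outside of Hadoop like this; `TokenFileLoader`, `trimmedStrings`, and the stub `readTokenStorageFile` are hypothetical stand-ins for illustration, not the actual Configuration or Credentials APIs:

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the suggested loading loop: trim the comma-delimited
// setting (as Configuration#getTrimmedStrings would) and attempt the read
// unconditionally, so a bad path fails loudly instead of being silently skipped.
public class TokenFileLoader {
  // Mimics Configuration#getTrimmedStrings for a single comma-delimited value.
  static List<String> trimmedStrings(String value) {
    List<String> out = new ArrayList<>();
    if (value == null) {
      return out;
    }
    for (String part : value.split(",")) {
      String trimmed = part.trim();
      if (!trimmed.isEmpty()) {
        out.add(trimmed);
      }
    }
    return out;
  }

  // Stand-in for Credentials.readTokenStorageFile: here we only verify the
  // file exists, throwing rather than skipping a bad entry.
  static void readTokenStorageFile(File f) throws FileNotFoundException {
    if (!f.isFile()) {
      throw new FileNotFoundException("Token file not found: " + f);
    }
    // ... real code would deserialize the tokens into the UGI's Credentials ...
  }

  public static void main(String[] args) throws Exception {
    for (String name : trimmedStrings(System.getProperty("hadoop.token.files"))) {
      readTokenStorageFile(new File(name));
    }
  }
}
```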

          The last and most important point was overlooked and webhdfs is broken. The tests used to:

          1. call getfilestatus and verify a token is sent
          2. clear the token with the comment // wipe out internal token to simulate auth always required
          3. call getfilestatus again to specifically verify no token is sent - because auth is expected

          This patch changed #3 to verify the opposite behavior: the same token as #1 is passed. Again, just change this.canRefreshDelegationToken = UserGroupInformation.isSecurityEnabled(); to this.canRefreshDelegationToken = true; and it will cause webhdfs to look for a token even if security is off. Nothing else in webhdfs should require a change.

          kihwal Kihwal Lee added a comment -

          Is anyone reverting it or reworking the fix?

          owen.omalley Owen O'Malley added a comment -

          Daryn Sharp I'm sorry, but I don't see what problem the patch introduced. It lets your webhdfs have a token even if your security is turned off as long as it was already in the UGI. Where is the problem?

          daryn Daryn Sharp added a comment -

          Allowing webhdfs to search for tokens with security off is a fine feature. The problem is the patch rearranged logic in getDelegationToken which introduced a subtle bug that existing tests caught. This should have been a red flag but the tests were changed. A feature for security off should never break security on tests.

          The 1-liner I posted should be all that's needed in webhdfs.

          aw Allen Wittenauer added a comment -

          it will cause webhdfs to look for a token even if security is off. Nothing else in webhdfs should require a change.

          If canRefreshDelegationToken is defaulted to true and no token is present in the UGI, then on insecure systems it will attempt to fetch a delegation token. Perhaps the

                  if (canRefreshDelegationToken) {
          

          should be

                  this.canRefreshDelegationToken = true;
                  ...
                  if (canRefreshDelegationToken && UserGroupInformation.isSecurityEnabled()) {
          

          would satisfy everyone.
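
The proposed gating can be reduced to a small predicate; `TokenRefreshGate` and `shouldFetchToken` are made-up names for this toy simulation, and the boolean parameter merely stands in for UserGroupInformation.isSecurityEnabled():

```java
// Toy model of the compromise above: canRefreshDelegationToken defaults to
// true, but an actual token fetch only happens when security is also on.
public class TokenRefreshGate {
  static boolean shouldFetchToken(boolean canRefreshDelegationToken,
                                  boolean securityEnabled) {
    return canRefreshDelegationToken && securityEnabled;
  }

  public static void main(String[] args) {
    // Insecure cluster: no fetch is attempted even with the new default.
    System.out.println(shouldFetchToken(true, false)); // false
    // Secure cluster: behaves as before.
    System.out.println(shouldFetchToken(true, true));  // true
  }
}
```

Tokens already present in the UGI (for example, loaded from hadoop.token.files) would still be used either way; only the active fetch is gated.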

          stevel@apache.org Steve Loughran added a comment -

          Catching up on this by way of looking at UGI and seeing some new code there that I wasn't expecting.

          sysprops vs config options

          "hadoop.token.files" is not a core-default file, it is a system property.

          Adding a core-default entry here is misleading, as it will make people believe that they can set token files this way. Remove and fix the javadocs to match.

          documentation

          We now have yet another undocumented configuration point for Hadoop security, while I am still trying to understand what was there already. Please document it in the Hadoop security docs.

          logging and error reporting

          Add some more logging too. Print out the files before they are loaded? Please.

          Finally, why skip files that aren't there or aren't files? Isn't that a sign of an error? At the very least, WARN. Otherwise, someone —and I fear it shall be me— will end up trying to debug why a launched YARN app hasn't picked up credentials from oozie, with the cause being a typo in the path which was never logged at all.

          integration with HADOOP_TOKEN_FILE_LOCATION,

          w.r.t HADOOP_TOKEN_FILE_LOCATION, that has the advantage of working with non-java apps. What may be nice would be for both HADOOP_TOKEN_FILE_LOCATION and "hadoop.token.files" to have the same processing logic.

          you'd go

          String files = System.getProperty("hadoop.token.files", System.getenv("HADOOP_TOKEN_FILE_LOCATION"))
          

          the env would get picked up, and the sysprop would override it. Then have one follow-on codepath with the logging I mentioned earlier.

          As it is, there's now the situation that both options can be set. Is that really what is wanted?
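
The sysprop-over-env resolution described above can be sketched in plain Java; `TokenSourceResolver` and `resolve` are hypothetical names used only for this illustration, parameterized so the logic can be exercised without touching real system properties or the environment:

```java
// Sketch of the single resolution step proposed above: the system property
// "hadoop.token.files" wins, and HADOOP_TOKEN_FILE_LOCATION is only the
// fallback, so both sources share one downstream code path.
public class TokenSourceResolver {
  // Hypothetical helper mirroring System.getProperty(key, defaultValue).
  static String resolve(String sysProp, String envVar) {
    return (sysProp != null) ? sysProp : envVar;
  }

  public static void main(String[] args) {
    String files = resolve(System.getProperty("hadoop.token.files"),
                           System.getenv("HADOOP_TOKEN_FILE_LOCATION"));
    System.out.println("token files setting: " + files);
  }
}
```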

          stevel@apache.org Steve Loughran added a comment -

          should i open a new JIRA to cover my issues?

          sookim HeeSoo Kim added a comment -

          Steve Loughran Thank you for your feedback.

          "hadoop.token.files" is not a core-default file, it is a system property.

          The "hadoop.token.files" property can be defined in two places.
          One is the core-default file and the other is a system property. The code is intentional, since we considered both use cases.
          In general, at runtime, the user uses the system property.
          However, if the user fetches the token periodically somehow and stores it in a specific directory on their system, they can also set the token filename in the core-default file. This code handles the case where the file does not exist: even if the file does not exist, it won't break the job, which will continue to work without the user-specified credential files.

          Add some more logging too. Print out the files before they are loaded? Please.

          I thought of it as an extension of HADOOP_TOKEN_FILE_LOCATION.

          Finally, why skip files that aren't there or aren't files? Isn't that a sign of an error?

          As I explained above, it won't break the job even if the token files are not available.
          We don't know whether the credential has expired or whether the token file exists.
          It allows the job to keep working even when it does not have the right credential for the service.
          For instance, if it needs to access the WebHDFS filesystem and the credential in hadoop.token.files is not available, it will fall back to SPNEGO to renew the token. Therefore, the job can continue working without stopping.

          Otherwise, someone —and I fear it shall be me— will end up trying to debug why a launched YARN app hasn't picked up credentials from oozie, with the cause being a typo in the path which was logged at all

          When the credentials are shipped to the distributed system, the Credentials class holds multiple tokens. They are stored in the single file named by HADOOP_TOKEN_FILE_LOCATION. If the initial client application reads the credential tokens successfully, the tokens can be distributed to other jobs.

          String files = System.getProperty("hadoop.token.files", System.getEnv("HADOOP_TOKEN_FILE_LOCATION"))
          the env would get picked up, the sysprop override. Then have one follow on codepath with the logging I mentioned earlier.
          As it is, there's now the situation that both options can be set. Is that really what is wanted?

          The main intention is to read credentials from files whenever possible.
          It allows multiple token filenames and would not break the previous configuration.
          For instance, YARN uses the HADOOP_TOKEN_FILE_LOCATION property as the default credential filename, and that credential file holds multiple tokens. I think it is better to support multiple token filenames.

          stevel@apache.org Steve Loughran added a comment -

          I understand more, but I'm worried that if a token file is missing, and you don't have the credentials, there's going to be no explanation of what has gone wrong.

          Currently, the env-var mechanism is usually used as an alternative to being kinited in or having a keytab. It only makes sense to downgrade if you are —or can be— logged in. Without that option, then when you try to access the webhdfs filesystem, the user sees a message about no TGT, and doesn't understand the root cause was that even though a location was set in the cluster config/command line, it wasn't actually there.

          That's what I'm worried about: finding problems early, and why I'm advocating throwing an FNFE if a named file isn't there.

          If you think differently, then at the very least,

          1. the fact that a named file is missing needs to be logged @ info.
          2. kdiag should check for all the files and fail fast if they are missing.
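
A minimal fail-fast pre-check of the kind suggested above might look like the following; `TokenFileCheck` and `verifyTokenFiles` are invented names, not part of KDiag or UGI:

```java
import java.io.File;
import java.io.FileNotFoundException;

// Sketch of a fail-fast pre-check over the configured token files: a missing
// entry raises FileNotFoundException immediately instead of surfacing later
// as an unrelated "no TGT" error.
public class TokenFileCheck {
  static void verifyTokenFiles(String commaList) throws FileNotFoundException {
    if (commaList == null || commaList.isEmpty()) {
      return; // nothing configured, nothing to verify
    }
    for (String name : commaList.split(",")) {
      File f = new File(name.trim());
      if (!f.isFile()) {
        throw new FileNotFoundException("Configured token file missing: " + f);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    verifyTokenFiles(System.getProperty("hadoop.token.files"));
    System.out.println("all configured token files present");
  }
}
```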
          raviprak Ravi Prakash added a comment -

          Thanks for the spirited discussion everyone!

          Steve Loughran : https://github.com/apache/hadoop/blame/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L842 seems like it's logging @ info in case the file is missing. Is this not what you are thinking of?
          Kdiag is a cool tool! I've filed HADOOP-13018 for your 2nd point.

          I'm going to merge the branch-2 patch if there are no additional comments.

          stevel@apache.org Steve Loughran added a comment -

          I must have misread it -sorry.

          Looking forward to the kdiag contrib

          raviprak Ravi Prakash added a comment -

          Thanks everyone. I've committed the HDFS-9525.branch-2.009.patch to branch-2. This should go out in the 2.9.0 release.


            People

            • Assignee:
              sookim HeeSoo Kim
            • Reporter:
              aw Allen Wittenauer
            • Votes:
              1
            • Watchers:
              22
