Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.0.0, 2.0.0-alpha
    • Fix Version/s: 2.0.2-alpha
    • Component/s: None
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      fuse_dfs should have support for Kerberos authentication. This would allow FUSE to be used in a secure cluster.
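As background (not part of this patch's text), Kerberos security in a Hadoop cluster is typically enabled through core-site.xml, and a fuse_dfs mount on such a cluster would rely on those same settings. A sketch of the standard properties, with illustrative values:

```xml
<!-- Sketch only: standard Hadoop security settings a Kerberos-enabled
     cluster uses; a secure fuse_dfs mount reads the same configuration. -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```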

      1. HDFS-3568.005.patch
        44 kB
        Colin Patrick McCabe
      2. HDFS-3568.004.patch
        43 kB
        Colin Patrick McCabe
      3. HDFS-3568.003.patch
        43 kB
        Colin Patrick McCabe
      4. HDFS-3568.002.patch
        43 kB
        Colin Patrick McCabe
      5. HDFS-3568.001.patch
        43 kB
        Colin Patrick McCabe

        Issue Links

          Activity

          Matt Foley added a comment -

          Opened HDFS-3700 for port to 1.2.0, so this jira can be properly closed.

          Colin Patrick McCabe added a comment -

          Re-resolving this as 'fixed' rather than 'implemented', at Harsh's request.

          Colin Patrick McCabe added a comment -

          Why was this marked 'Implemented'? If it's committed to a release branch/trunk, let's reopen and re-resolve as 'Fixed'.

          I marked this as "implemented" rather than "fixed" because it's a new feature, not a bug that had to be fixed.
          But I'd be happy to change the resolution if you think it makes more sense to say "fixed." Is there an Apache wiki page or document with some guidelines about which resolution to use in which scenario?

          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #1133 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1133/)
          HDFS-3568. fuse_dfs: add support for security. Contributed by Colin McCabe. (Revision 1359824)

          Result = SUCCESS
          atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1359824
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_connect.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRecovery.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #1100 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1100/)
          HDFS-3568. fuse_dfs: add support for security. Contributed by Colin McCabe. (Revision 1359824)

          Result = FAILURE
          atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1359824
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_connect.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRecovery.java
          Harsh J added a comment -

          Why was this marked 'Implemented'? If it's committed to a release branch/trunk, let's reopen and re-resolve as 'Fixed'.

          Colin Patrick McCabe added a comment -

          Ah, looks like there's a second link in the Jenkins message which leads to the findbugs common warnings. So yeah, no bug in test-patch.sh.

          Aaron T. Myers added a comment -

          The findbugs warning is in Common, not HDFS, so if you follow this link you'll see it (no bug in test-patch):

          https://builds.apache.org/job/PreCommit-HDFS-Build/2767//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html

          Colin Patrick McCabe added a comment -

          The weird thing is, if you follow https://builds.apache.org/job/PreCommit-HDFS-Build/2767//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html, there is no mention of the new findbugs warning.

          So it looks like another bug in test-patch.sh.

          Aaron T. Myers added a comment -

          Looks like I missed a legitimate findbugs warning that was introduced by this patch. Colin has filed a patch at HADOOP-8585 to address this.

          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #2441 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2441/)
          HDFS-3568. fuse_dfs: add support for security. Contributed by Colin McCabe. (Revision 1359824)

          Result = SUCCESS
          atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1359824
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_connect.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRecovery.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #2508 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2508/)
          HDFS-3568. fuse_dfs: add support for security. Contributed by Colin McCabe. (Revision 1359824)

          Result = SUCCESS
          atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1359824
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_connect.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRecovery.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #2457 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2457/)
          HDFS-3568. fuse_dfs: add support for security. Contributed by Colin McCabe. (Revision 1359824)

          Result = FAILURE
          atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1359824
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_connect.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRecovery.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #2507 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2507/)
          HDFS-3568. fuse_dfs: add support for security. Contributed by Colin McCabe. (Revision 1359824)

          Result = SUCCESS
          atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1359824
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_connect.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRecovery.java
          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #2439 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2439/)
          HDFS-3568. fuse_dfs: add support for security. Contributed by Colin McCabe. (Revision 1359824)

          Result = SUCCESS
          atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1359824
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_connect.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/hdfs.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRecovery.java
          Aaron T. Myers added a comment -

          I've just committed this to trunk and branch-2. Leaving this issue open for commit to branch-1, or, if you'd prefer, we can open a new JIRA for the back-port if there's additional work that needs to be done.

          Thanks a lot for the contribution, Colin.

          Colin Patrick McCabe added a comment -

          The findbugs warnings are about org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager, which was not changed by this patch.

          The test failure is unrelated as well. It looks like https://issues.apache.org/jira/browse/HDFS-3532

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12535772/HDFS-3568.005.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.TestDatanodeBlockScanner

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2767//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/2767//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/2767//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2767//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12535772/HDFS-3568.005.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.TestDatanodeBlockScanner

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2765//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/2765//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/2765//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2765//console

          This message is automatically generated.

          Colin Patrick McCabe added a comment -

          I'm not sure what's going on with Jenkins. It seems to be aborting before it actually runs any tests. On the off chance that this is because of the lack of test changes in this patch, here's a patch which does change a test.

          The reason we have no tests for this patch is that there are no tests for FUSE, and no tests that use a KDC (Kerberos Key Distribution Center). Since this patch uses both of those, meaningful unit testing is impossible at this point.

          I am working on a FUSE unit test, so that should improve in the near future.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12535712/HDFS-3568.004.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2763//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12535712/HDFS-3568.004.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2762//console

          This message is automatically generated.

          Colin Patrick McCabe added a comment -
          • rebase
          Colin Patrick McCabe added a comment -

          hdfsFreeBuilder is dead code, is it used later in the tests?

          You would need this if you had a builder, but then found some reason to delete the builder without constructing an HDFS instance. It really needs to be in the API because otherwise this would be impossible.

          The current code doesn't do anything that could fail between creating the builder and using it to build an HDFS instance, so fuse_dfs doesn't use it at this time. But it's good to have that option.

          Eli Collins added a comment -

          Approach and patch look good to me.

          hdfsFreeBuilder is dead code, is it used later in the tests?

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12535498/HDFS-3568.003.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2756//console

          This message is automatically generated.

          Colin Patrick McCabe added a comment -
          • rename kerberos.ticket.cache.path to hadoop.security.kerberos.ticket.cache.path
          • warn if there is more than one kerberos principal associated with a ticket cache file
          Aaron T. Myers added a comment -

          The latest patch looks pretty good to me. In addition to examining the code, I also tested it manually and confirmed that it largely works as expected, save for one thing which I think we should address in a follow-up JIRA.

          I noticed during my testing that if one kinits as some principal "foo" and then does some operation on fuse_dfs, then kdestroys and kinits as some principal "bar", subsequent operations done via fuse_dfs will still use cached credentials for "foo". The reason for this is that fuse_dfs caches FileSystem instances using the UID of the user running the command as the key into the cache. I think this isn't a big deal, though, since it's pretty uncommon for a single user to want to use credentials for several different principals on the same box.

          Colin, if you agree, would you mind filing a follow-up JIRA for the above issue?

          Two small comments with the current patch, +1 once these are addressed:

          1. In the following code, I think you might also want to assert that loginPrincipals.size() == 1, and at least log a WARN if it's > 1:
            +      Set<Principal> loginPrincipals = loginSubject.getPrincipals();
            +      if (loginPrincipals.isEmpty()) {
            +        throw new RuntimeException("No login principals found!");
            +      }
            +      User ugiUser = new User(loginPrincipals.iterator().next().getName(),
            +          AuthenticationMethod.KERBEROS, login);
            
          2. I think we should change the config key "kerberos.ticket.cache.path" to "hadoop.security.kerberos.ticket.cache.path", to be more inline with the other security configs.
          Eli Collins added a comment -

          HDFS-2546 is not required for this patch, I removed the link.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12535018/HDFS-3568.002.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks
          org.apache.hadoop.hdfs.TestDFSClientRetries
          org.apache.hadoop.hdfs.TestDatanodeBlockScanner
          org.apache.hadoop.hdfs.TestHDFSTrash

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2739//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/2739//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2739//console

          This message is automatically generated.

          Colin Patrick McCabe added a comment -

          Thanks, atm. Here's a new patch incorporating those suggestions.

          • factor the common UGI-getting code out into UserGroupInformation#getBestUGI
          • fromKerberosTicketCache is now getUGIFromTicketCache
          • fuse: update error message in FUSE connect
          • libhdfs: greatly simplify jstrToCStr
          • libhdfs: Use EINTERNAL consistently (not -EINTERNAL)
          • some whitespace fixes

          note: I didn't forget about the docs for hdfsBuilderSetNameNode... I'm still thinking about the best thing to do with it. Suggestions welcome!

          Colin Patrick McCabe added a comment -

          I don't think there's any need to throw an exception if security is disabled when calling fromKerberosTicketCache. The other methods in the class just return early or return default values when security is disabled, e.g. reloginFromKeytab.

          It's a little awkward because I'd essentially have to duplicate this code yet again:

          if (user == null) {
            ugi = UserGroupInformation.getCurrentUser();
          } else {
            ugi = UserGroupInformation.createRemoteUser(user);
          }
          

          The other alternative would be to return null and have the function callers proceed with their normal flow of control in this case. So something like this in FileSystem#get:

              String ticketCachePath =
                  conf.get(CommonConfigurationKeys.KERBEROS_TICKET_CACHE_PATH);
              if (ticketCachePath != null) {
                ugi = UserGroupInformation.fromKerberosTicketCache(ticketCachePath);
              }
              if (ugi == null) {
                if (user == null) {
                  ugi = UserGroupInformation.getCurrentUser();
                } else {
                  ugi = UserGroupInformation.createRemoteUser(user);
                }
              }
          

          In hdfsConfGet, why do you return "EINTERNAL" in some cases and "-EINTERNAL" in others?

          Yeah, I really should be sure to use the same convention everywhere... I think we've gone with "always positive." It's just that the kernel coding convention is that error numbers are always negative, so I keep forgetting that ours is different...

          Are you positive that it's acceptable for all LoginContext objects to share the same reference to a HadoopConfiguration object? Prior to this patch, each LoginContext would get its own new reference to a HadoopConfiguration object. (I don't know that it is definitely a problem, I'm just not positive either way.)

          The base class, javax.security.auth.login.Configuration contains only static variables. And HadoopConfiguration itself contains only static variables. So it's hard to see what harm could come from having them share the same object.

          Is there really no built-in function which already implements "jStrToCstr" ? (I don't know that there is, I'm just surprised that there isn't.)

          Ah, it looks like there is a built-in function. Will fix.

          I recommend you rename hdfsBuilderSetNameNode to hdfsBuilderSetNameNodeHostname.

          The problem is that the 'nn' parameter can have 4 different types of values:

          • NULL - meaning always use LocalFileSystem
          • The word "default" - meaning read the configuration, and do that
          • the hostname of the NameNode
          • the IP address of the NameNode

          Since only one of those four is a hostname, calling it "...SetHostname" makes me a little queasy. But if the consensus is that we should call it that, then I'm open to that.

          I agree with the rest of the comments, will fix.

          Aaron T. Myers added a comment -

          Thanks a lot for the patch, Colin. A few comments:

          1. I recommend refactoring the if/else if/else block that gets a UGI object, since it's repeated in two places.
          2. It's not abundantly obvious what the purpose of the DynamicConfiguration class is. Please add a class comment for it.
          3. Looks like you have a vestigial @param in the method comment for "fromKerberosTicketCache".
          4. I suggest you rename fromKerberosTicketCache to something like "getUGIFromTicketCache"
          5. I don't think there's any need to throw an exception if security is disabled when calling fromKerberosTicketCache. The other methods in the class just return early or return default values when security is disabled, e.g. reloginFromKeytab.
          6. I find checking for "!iter.hasNext()" a little goofy. How about just "loginPrincipals.isEmpty()" ?
          7. Are you positive that it's acceptable for all LoginContext objects to share the same reference to a HadoopConfiguration object? Prior to this patch, each LoginContext would get its own new reference to a HadoopConfiguration object. (I don't know that it is definitely a problem, I'm just not positive either way.)
          8. Instead of the error message "Unable to determine hadoop.security.authentication", I suggest "Unable to determine the configured value for hadoop.security.authentication."
          9. Is there really no built-in function which already implements "jStrToCstr" ? (I don't know that there is, I'm just surprised that there isn't.)
          10. I recommend you rename hdfsBuilderSetNameNode to hdfsBuilderSetNameNodeHostname.
          11. In hdfsConfGet, why do you return "EINTERNAL" in some cases and "-EINTERNAL" in others?
          12. Looks like there's an errant whitespace change in the function comment for hdfsConnectAsUser in hdfs.h.
          13. "@param nn The NameNode. See hdfsBuilderSetNameNode for details." This isn't terribly helpful, especially since there are no comments for hdfsBuilderSetNameNode. You should also mention that this is expecting the NN host (either hostname or IP.)
          Colin Patrick McCabe added a comment -

          The general approach here is to allow libhdfs users to specify a Kerberos ticket cache file to use to connect. This ticket cache file is what gets renewed when you call kinit. For each UNIX user, there should be one associated ticket cache file. fuse_dfs locates this file and uses it to connect to the HDFS filesystem.

          The advantage of using the ticket cache file directly is that it limits the scope of potential compromises. Only users who have kinit'ed will have a ticket cache file present. So even if a user succeeds in hacking his own fuse_dfs daemon, he will only get access to the files of users who have kinit'ed on his system.

          Some other advantages: there is no additional configuration required from system administrators besides Kerberos itself. This mode of operation is consistent with other Kerberos-enabled programs, which require a valid Kerberos login to function.

          This patch has three main parts.

          • The Java part adds to UserGroupInformation the ability to connect using a Kerberos ticket cache.
          • libhdfs now accepts a kerberos ticket cache parameter when connecting to an hdfsFS. Because the number of different hdfsFS constructors was exploding exponentially, I also added a builder system. libhdfs also now has a function which can pull a configuration string from the HDFS Configuration object.
          • the fuse_dfs part checks to see if Kerberos is configured (using hdfsConfGet). If so, it uses the Kerberos ticket cache infrastructure mentioned previously. There is also some code in fuse_dfs to locate the ticket cache file for a particular UID.
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12534067/HDFS-3568.001.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2724//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/2724//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2724//console

          This message is automatically generated.

          Colin Patrick McCabe added a comment -

          patch to support security in fuse_dfs.

          • add UserGroupInformation#fromKerberosTicketCache. As the name suggests, it allows us to create a UGI from a ticket cache path.
          • fuse_connect.c: discover whether Kerberos is configured. If so, locate the appropriate Kerberos ticket cache path and pass that to hdfsBuilderConnect.
          • hdfs.c: add a few utility functions such as hadoopConfSet, jStrToCstr, hadoopConfGet
          • hdfs.c: add the hdfsBuilder interface. The basic idea is that rather than creating 2^N different 'connect' functions (where N is the number of possible configuration options), we create a builder and then set some options on it. Then we call hdfsBuilderConnect.
          • Because the hdfsBuilder type is not exported, new fields can be added to it later as needed, without breaking backwards compatibility. The old hdfsConnect* functions are kept for now for compatibility reasons.
          • This patch unifies the connect functions into one function, rather than having totally separate code paths for each.
          • This patch also adds the hdfsConfGet API, which returns the value associated with a given Configuration key (or NULL if there is no such key). This is needed so that fuse_dfs can determine if Kerberos is enabled.
          Colin Patrick McCabe added a comment -

          Thanks for pointing HDFS-2546 out to me, Harsh. It does look related. Hopefully we'll be able to come up with a libhdfs API that will work well for both.

          Harsh J added a comment -

          Thanks Todd.

          Colin/other admins - Please remove the 'duplicates' link, as I am unable to. I've re-linked it as 'requires' instead.

          Todd Lipcon added a comment -

          I'd think HDFS-2546 is probably a prerequisite of this work, but not a duplicate; fuse-dfs is a user of libhdfs.

          Harsh J added a comment -

          HDFS-2546 seems to be closely related to, or a duplicate of, this work. If so, please close one of them as a dupe of the other - whichever makes sense to you.

          Colin Patrick McCabe added a comment -

          If it's run as root, fuse_dfs can get access to the Kerberos ticket cache file of the user performing a FUSE operation. FUSE can then create a FileSystem instance with that Kerberos ticket cache.

          In the future, it would also be good to use privilege separation to contain the power of a fuse_dfs instance running as root.
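          A minimal sketch of the per-user cache lookup this comment describes, assuming the conventional MIT Kerberos file-cache layout (/tmp/krb5cc_<uid>). Both the helper name and the fixed path template are assumptions for illustration; real deployments may relocate the cache via KRB5CCNAME or use a non-file cache type.

          ```c
          #include <assert.h>
          #include <stdio.h>
          #include <string.h>

          /* Derive the conventional MIT Kerberos default cache path for a uid.
           * A process running as root (as fuse_dfs would be) could call this
           * with the uid of the user issuing the FUSE operation, then pass the
           * result to the connect call. Returns 0 on success, -1 if the output
           * buffer is too small. */
          int ticket_cache_path_for_uid(unsigned int uid,
                                        char *out, size_t out_len) {
              int n = snprintf(out, out_len, "/tmp/krb5cc_%u", uid);
              return (n > 0 && (size_t)n < out_len) ? 0 : -1;
          }
          ```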


            People

            • Assignee:
              Colin Patrick McCabe
            • Reporter:
              Colin Patrick McCabe
            • Votes:
              0
            • Watchers:
              11
