Hadoop Common / HADOOP-13081

add the ability to create multiple UGIs/subjects from one kerberos login

    Details

    • Type: Improvement
    • Status: Reopened
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: 3.0.0-alpha1
    • Component/s: security
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      We have a scenario where we log in with kerberos as a certain user for some tasks, but also want to add tokens to the resulting UGI that would be specific to each task. We don't want to authenticate with kerberos for every task.
      I am not sure how this can be accomplished with the existing UGI interface. Perhaps some clone method would be helpful, similar to createProxyUser minus the proxy stuff; or it could just relogin anew from the ticket cache.
      getUGIFromTicketCache seems like the best option in the existing code, but there doesn't appear to be a consistent way of handling the ticket cache location: that method, which I only see called in tests, uses a config setting that is not used anywhere else, and the env variable for the location that the main ticket-cache-related methods use is not set uniformly on all paths. Therefore, finding the correct ticket cache and passing it via the config setting to getUGIFromTicketCache seems even hackier than doing the clone via reflection. Moreover, getUGIFromTicketCache ignores the user parameter on the main path: it logs a warning for multiple principals and then logs in with the first one available.
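The goal described above can be sketched with plain JDK `javax.security.auth` types, without any Hadoop classes (class, method, and token names here are illustrative): one authenticated principal obtained once, and several per-task Subjects that share that principal but carry independent credential sets.

```java
import java.security.Principal;
import javax.security.auth.Subject;

public class PerTaskSubjects {
  // One per-task Subject: it shares the already-authenticated principal,
  // but owns an independent private-credential set for its own tokens.
  static Subject taskSubject(Principal user, String taskToken) {
    Subject s = new Subject();
    s.getPrincipals().add(user);
    s.getPrivateCredentials().add(taskToken);
    return s;
  }

  public static void main(String[] args) {
    // A single "login": in the real scenario the principal would come
    // from one Kerberos authentication; here it is a stub.
    Principal user = () -> "alice";

    Subject task1 = taskSubject(user, "token-for-task-1");
    Subject task2 = taskSubject(user, "token-for-task-2");

    // Same identity, but task1's token is invisible to task2.
    System.out.println(task2.getPrivateCredentials().contains("token-for-task-1")); // false
  }
}
```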

      1. HADOOP-13081.patch
        1 kB
        Sergey Shelukhin
      2. HADOOP-13081.03.patch
        4 kB
        Sergey Shelukhin
      3. HADOOP-13081.03.patch
        4 kB
        Sergey Shelukhin
      4. HADOOP-13081.02.patch
        4 kB
        Sergey Shelukhin
      5. HADOOP-13081.02.patch
        4 kB
        Sergey Shelukhin
      6. HADOOP-13081.01.patch
        1 kB
        Sergey Shelukhin

          Activity

          Sergey Shelukhin added a comment - Chris Nauroth Siddharth Seth fyi
          Sergey Shelukhin added a comment -

          Simple patch. Can someone please review? (and also assign to me; looks like I don't have permissions to assign)
          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 9m 9s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 6m 48s trunk passed
          +1 compile 6m 37s trunk passed
          +1 checkstyle 0m 24s trunk passed
          +1 mvnsite 0m 58s trunk passed
          +1 mvneclipse 0m 11s trunk passed
          +1 findbugs 1m 21s trunk passed
          +1 javadoc 0m 55s trunk passed
          +1 mvninstall 0m 39s the patch passed
          +1 compile 6m 33s the patch passed
          +1 javac 6m 33s the patch passed
          -1 checkstyle 0m 24s hadoop-common-project/hadoop-common: The patch generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100)
          +1 mvnsite 0m 52s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          -1 findbugs 1m 32s hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 javadoc 0m 52s the patch passed
          -1 unit 6m 57s hadoop-common in the patch failed.
          +1 asflicense 0m 21s The patch does not generate ASF License warnings.
          45m 27s



          Reason Tests
          FindBugs module:hadoop-common-project/hadoop-common
            org.apache.hadoop.security.UserGroupInformation defines clone() but doesn't implement Cloneable At UserGroupInformation.java:implement Cloneable At UserGroupInformation.java:[lines 631-634]
          Failed junit tests hadoop.ha.TestZKFailoverController



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:2c91fd8
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12806779/HADOOP-13081.patch
          JIRA Issue HADOOP-13081
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux f16841efe339 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 4ca8859
          Default Java 1.8.0_91
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/9612/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/9612/artifact/patchprocess/whitespace-eol.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/9612/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9612/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/9612/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9612/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9612/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Sergey Shelukhin added a comment - Chris Nauroth ping?
          Chris Nauroth added a comment -

          Hello Sergey Shelukhin. This is definitely in my review queue, but I don't expect I'll get to it soon. UGI changes always need careful scrutiny, even if it's a small patch. One initial question: did you investigate adding a unit test, possibly to TestUserGroupInformation?

          I will get to this eventually, but I'd also welcome others to review it if I'm not giving you the SLA you want.

          Kai Zheng added a comment -

          A question: since it's cloning a UGI, shouldn't it clone the subject exactly, rather than incompletely? Note that it's possible to construct a Subject instance from another one; see its constructors.

          +   
          +  /**
          +   * @return clone of the UGI with a new subject.
          +   */
          +  @InterfaceAudience.Public
          +  @InterfaceStability.Evolving
          +  public UserGroupInformation clone() {
          +    Subject subject = new Subject();
          +    subject.getPrincipals().addAll(this.getSubject().getPrincipals());
          +    // The ctor will set other fields automatically from the principals.
          +    return new UserGroupInformation(subject);
          +  }
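The Subject constructor Kai refers to is in the JDK itself, so the "complete copy" approach can be sketched with no Hadoop dependency (the class and method names below are illustrative): it takes all three sets (principals, public credentials, private credentials), so the copy is complete rather than principals-only.

```java
import java.security.Principal;
import javax.security.auth.Subject;

public class SubjectCopyDemo {
  // Copy constructor form: the new Subject gets its own sets,
  // initialized from all three sets of the original.
  static Subject copyOf(Subject original) {
    return new Subject(false,
        original.getPrincipals(),
        original.getPublicCredentials(),
        original.getPrivateCredentials());
  }

  public static void main(String[] args) {
    Subject original = new Subject();
    original.getPrincipals().add((Principal) () -> "alice");
    original.getPrivateCredentials().add("a-private-credential");

    Subject copy = copyOf(original);
    System.out.println(copy.getPrincipals().size());         // 1
    System.out.println(copy.getPrivateCredentials().size()); // 1
  }
}
```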
          
          Sergey Shelukhin added a comment -

          Updated the patch accordingly.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 28s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 7m 13s trunk passed
          +1 compile 8m 1s trunk passed
          +1 checkstyle 0m 24s trunk passed
          +1 mvnsite 0m 59s trunk passed
          +1 mvneclipse 0m 12s trunk passed
          +1 findbugs 1m 30s trunk passed
          +1 javadoc 0m 48s trunk passed
          +1 mvninstall 0m 46s the patch passed
          +1 compile 7m 42s the patch passed
          +1 javac 7m 42s the patch passed
          -1 checkstyle 0m 34s hadoop-common-project/hadoop-common: The patch generated 2 new + 100 unchanged - 0 fixed = 102 total (was 100)
          +1 mvnsite 0m 54s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          -1 whitespace 0m 0s The patch has 21 line(s) that end in whitespace. Use git apply --whitespace=fix.
          -1 findbugs 1m 42s hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 javadoc 0m 46s the patch passed
          -1 unit 8m 17s hadoop-common in the patch failed.
          +1 asflicense 0m 23s The patch does not generate ASF License warnings.
          41m 36s



          Reason Tests
          FindBugs module:hadoop-common-project/hadoop-common
            org.apache.hadoop.security.UserGroupInformation defines clone() but doesn't implement Cloneable At UserGroupInformation.java:implement Cloneable At UserGroupInformation.java:[lines 632-633]
          Failed junit tests hadoop.metrics2.impl.TestGangliaMetrics



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:2c91fd8
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12810160/HADOOP-13081.01.patch
          JIRA Issue HADOOP-13081
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 2c986d017442 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 709a814
          Default Java 1.8.0_91
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/9762/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/9762/artifact/patchprocess/whitespace-eol.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/9762/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9762/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9762/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9762/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Sergey Shelukhin added a comment -

          ping?

          Sergey Shelukhin added a comment - Chris Nauroth ping?
          Chris Nauroth added a comment -

          I am offline, returning 7/18. I'll try to reprioritize this when I return, or if anyone else wants to review in my absence, please feel free.

          Sergey Shelukhin added a comment -

          Also can someone please assign this to me?

          Sergey Shelukhin added a comment -

          ping? We are doing this via reflection in Hive now, in certain scenarios, and it appears to work as intended.

          Chris Nauroth added a comment -

          Hello Sergey Shelukhin. Some feedback on the current patch:

          1. On 6/11, I commented asking if it's feasible to add a unit test to TestUserGroupInformation. What are your thoughts?
          2. FindBugs flagged that the method is named clone, but the class doesn't implement Cloneable and doesn't follow the traditional recipe for a clone method (e.g. calling the superclass). Maybe rename the method to something like copySubject?
          3. Could you add JavaDoc describing the intent of this method? You could describe how it supports adding different credentials to different UGI instances without forcing them all to re-authenticate through Kerberos.
          4. Checkstyle flagged that there were a few lines longer than 80 characters.
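The FindBugs complaint in item 2 can be reproduced in miniature: a method named clone() on a class that neither implements Cloneable nor calls super.clone() breaks the conventional clone contract, whereas a copy method with a distinct name (copySubject is the suggestion above; shown here on a toy class whose names are illustrative) avoids the ambiguity.

```java
import java.util.ArrayList;
import java.util.List;

public class CopyDemo {
  private final List<String> tokens = new ArrayList<>();

  public void addToken(String t) { tokens.add(t); }
  public List<String> getTokens() { return tokens; }

  // Not named clone(): no Cloneable contract is implied, so no
  // FindBugs CN_IDIOM-style warning and no surprise for callers.
  public CopyDemo copySubject() {
    CopyDemo copy = new CopyDemo();
    copy.tokens.addAll(this.tokens);
    return copy;
  }

  public static void main(String[] args) {
    CopyDemo ugi = new CopyDemo();
    ugi.addToken("shared-kerberos-ticket");
    CopyDemo perTask = ugi.copySubject();
    perTask.addToken("task-specific-token");
    // The copy starts with the shared state but diverges independently.
    System.out.println(ugi.getTokens().size());     // 1
    System.out.println(perTask.getTokens().size()); // 2
  }
}
```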
          Sergey Shelukhin added a comment -

          Yes, it's possible to add the test. It fails, however, probably due to problems with mocks; I will finish it tomorrow (need to run now). Fixed the rest.

          Sergey Shelukhin added a comment -

          Added the test and some additional logic to clone Hadoop credentials, which are apparently reused from the set, rather than adding to the set.
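Why cloning the Credentials objects matters can be shown with a self-contained sketch (the Creds class below is an illustrative stand-in for Hadoop's mutable Credentials): copying a set copies references only, so a mutable object reached through the copy is still the shared original unless it is deep-copied.

```java
import java.util.HashSet;
import java.util.Set;

public class ShallowCopyDemo {
  // Stand-in for a mutable credentials object; Hadoop's Credentials is
  // mutable in the same way. The class name is illustrative.
  static class Creds {
    final Set<String> tokens = new HashSet<>();
  }

  // Deep copy: fresh Creds objects with the same contents, so later
  // mutation of the originals does not leak into the copy.
  static Set<Creds> deepCopy(Set<Creds> old) {
    Set<Creds> fresh = new HashSet<>();
    for (Creds c : old) {
      Creds copy = new Creds();
      copy.tokens.addAll(c.tokens);
      fresh.add(copy);
    }
    return fresh;
  }

  public static void main(String[] args) {
    Creds shared = new Creds();
    Set<Creds> original = new HashSet<>();
    original.add(shared);

    // new HashSet<>(original) copies references only: both sets point
    // at the same Creds, so a token added later is visible through both.
    Set<Creds> shallow = new HashSet<>(original);
    Set<Creds> deep = deepCopy(original);

    shared.tokens.add("late-token");
    System.out.println(shallow.iterator().next().tokens.size()); // 1
    System.out.println(deep.iterator().next().tokens.size());    // 0
  }
}
```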

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 14s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 6m 41s trunk passed
          +1 compile 7m 6s trunk passed
          +1 checkstyle 0m 28s trunk passed
          +1 mvnsite 1m 3s trunk passed
          +1 mvneclipse 0m 15s trunk passed
          +1 findbugs 1m 31s trunk passed
          +1 javadoc 0m 50s trunk passed
          +1 mvninstall 0m 43s the patch passed
          +1 compile 7m 51s the patch passed
          -1 javac 7m 51s root generated 1 new + 709 unchanged - 1 fixed = 710 total (was 710)
          -0 checkstyle 0m 27s hadoop-common-project/hadoop-common: The patch generated 6 new + 150 unchanged - 0 fixed = 156 total (was 150)
          +1 mvnsite 0m 56s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          -1 whitespace 0m 0s The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          -1 whitespace 0m 0s The patch has 4 line(s) with tabs.
          +1 findbugs 1m 38s the patch passed
          +1 javadoc 0m 46s the patch passed
          +1 unit 8m 45s hadoop-common in the patch passed.
          +1 asflicense 0m 22s The patch does not generate ASF License warnings.
          41m 17s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12821080/HADOOP-13081.02.patch
          JIRA Issue HADOOP-13081
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 176ab1e88879 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 95f2b98
          Default Java 1.8.0_101
          findbugs v3.0.0
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/10122/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/10122/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/10122/artifact/patchprocess/whitespace-eol.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/10122/artifact/patchprocess/whitespace-tabs.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10122/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10122/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Sergey Shelukhin added a comment -

          Updating the patch to remove tabs

          Chris Nauroth added a comment -

          Nice catch on cloning the Hadoop Credentials instances! Those are mutable.

              Set<Object> set = new HashSet<>(old.size());
          

          I assume this was intended to size the new set identically to the old one to prevent reallocations while adding elements. Unfortunately, the constructor argument is the initial capacity, not the expected size. With the default load factor of 0.75, passing the old set's size is highly likely to cause one reallocation while adding elements.

          I think using the no-args constructor is acceptable here, given that the sets are expected to be small. If you feel it's really important to control the size, then you could use something like Guava's logic:

          https://github.com/google/guava/blob/v11.0.2/guava/src/com/google/common/collect/Maps.java#L106-L120

          (Please don't call Guava directly here though. We're starting to try to limit our dependency on it.)

          I think this will be ready to go after addressing that and the last round of pre-commit nitpicks.
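The capacity-vs-size point above can be checked with a plain-JDK snippet (class and helper names are illustrative; the helper mirrors the Guava-style computation linked in the comment by deriving the capacity from the load factor):

```java
import java.util.HashSet;
import java.util.Set;

public class CapacityDemo {
  // With the default load factor of 0.75, a HashSet built with
  // new HashSet<>(n) resizes once more than n * 0.75 elements are added.
  // To hold n elements without resizing, the initial capacity must be
  // at least ceil(n / 0.75).
  static int capacityFor(int expectedSize) {
    return (int) Math.ceil(expectedSize / 0.75);
  }

  public static void main(String[] args) {
    int n = 100;
    // new HashSet<>(100) rounds up to capacity 128, threshold 96 < 100:
    // adding 100 elements triggers exactly the reallocation described.
    Set<Integer> undersized = new HashSet<>(n);
    // capacityFor(100) = 134 rounds up to 256, threshold 192 >= 100.
    Set<Integer> rightSized = new HashSet<>(capacityFor(n));
    for (int i = 0; i < n; i++) {
      undersized.add(i);
      rightSized.add(i);
    }
    System.out.println(capacityFor(100)); // 134
  }
}
```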

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 13s Docker mode activated.
          +1 @author 0m 1s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 6m 45s trunk passed
          +1 compile 6m 42s trunk passed
          +1 checkstyle 0m 24s trunk passed
          +1 mvnsite 0m 54s trunk passed
          +1 mvneclipse 0m 12s trunk passed
          +1 findbugs 1m 16s trunk passed
          +1 javadoc 0m 44s trunk passed
          +1 mvninstall 0m 37s the patch passed
          +1 compile 7m 54s the patch passed
          -1 javac 7m 54s root generated 1 new + 709 unchanged - 1 fixed = 710 total (was 710)
          +1 checkstyle 0m 27s the patch passed
          +1 mvnsite 1m 0s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          -1 whitespace 0m 0s The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 findbugs 1m 44s the patch passed
          +1 javadoc 0m 49s the patch passed
          -1 unit 17m 17s hadoop-common in the patch failed.
          +1 asflicense 0m 22s The patch does not generate ASF License warnings.
          48m 57s



          Reason Tests
          Timed out junit tests org.apache.hadoop.http.TestHttpServerLifecycle



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12821087/HADOOP-13081.02.patch
          JIRA Issue HADOOP-13081
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 96ca3687bfbc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 95f2b98
          Default Java 1.8.0_101
          findbugs v3.0.0
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/10123/artifact/patchprocess/diff-compile-javac-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/10123/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/10123/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10123/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10123/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          sershe Sergey Shelukhin added a comment -

          Updated

          cnauroth Chris Nauroth added a comment -

          +1 for patch 03, pending pre-commit run.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 20s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 8m 41s trunk passed
          +1 compile 8m 9s trunk passed
          +1 checkstyle 0m 25s trunk passed
          +1 mvnsite 1m 2s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 1m 29s trunk passed
          +1 javadoc 0m 49s trunk passed
          +1 mvninstall 0m 44s the patch passed
          +1 compile 7m 35s the patch passed
          +1 javac 7m 35s the patch passed
          -0 checkstyle 0m 23s hadoop-common-project/hadoop-common: The patch generated 1 new + 150 unchanged - 0 fixed = 151 total (was 150)
          +1 mvnsite 0m 56s the patch passed
          +1 mvneclipse 0m 15s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 findbugs 1m 28s the patch passed
          +1 javadoc 0m 53s the patch passed
          +1 unit 7m 29s hadoop-common in the patch passed.
          +1 asflicense 0m 21s The patch does not generate ASF License warnings.
          42m 43s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12821435/HADOOP-13081.03.patch
          JIRA Issue HADOOP-13081
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux cc794176aac8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9f473cf
          Default Java 1.8.0_101
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/10146/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/10146/artifact/patchprocess/whitespace-eol.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10146/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10146/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          sershe Sergey Shelukhin added a comment -

          fixed checkstyle... sigh

          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 6m 52s trunk passed
          +1 compile 6m 47s trunk passed
          +1 checkstyle 0m 24s trunk passed
          +1 mvnsite 0m 53s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 1m 20s trunk passed
          +1 javadoc 0m 46s trunk passed
          +1 mvninstall 0m 39s the patch passed
          +1 compile 6m 58s the patch passed
          +1 javac 6m 58s the patch passed
          +1 checkstyle 0m 25s the patch passed
          +1 mvnsite 0m 55s the patch passed
          +1 mvneclipse 0m 11s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 27s the patch passed
          +1 javadoc 0m 46s the patch passed
          +1 unit 7m 7s hadoop-common in the patch passed.
          +1 asflicense 0m 21s The patch does not generate ASF License warnings.
          37m 44s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12821458/HADOOP-13081.03.patch
          JIRA Issue HADOOP-13081
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 407a418a9aab 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9f473cf
          Default Java 1.8.0_101
          findbugs v3.0.0
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10147/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10147/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          +1 for patch 03. I have committed this to trunk, branch-2 and branch-2.8. Sergey Shelukhin, thank you for the patch.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-trunk-Commit #10195 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10195/)
          HADOOP-13081. add the ability to create multiple UGIs/subjects from one (cnauroth: rev 0458a2af6e925d023882714e8b7b0568eca7a775)

          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          daryn Daryn Sharp added a comment -

          Just noticed this due to a conflict. This is very broken and should be reverted.

          Neither the hadoop Credentials copy ctor nor the subject's cred sets are thread-safe.

          The subject's creds cannot be iterated without synchronizing on the set. Best case, you'll get a ConcurrentModificationException; worst case, a copy of the set in an inconsistent state. For instance, the GSSAPI adds and removes service tickets from the subject. A snapshot at the wrong time will have a stale service ticket, and reusing that service ticket in the new ugi will cause replay attack exceptions. Or, if another thread is attempting to relogin, the subject in the ugi being copied will not contain any kerberos creds.

          The subject's cred sets aren't actually sets; they are backed by a linked list. GSSAPI often relies on the ordering of tickets, and cloning into a hash set loses that implied ordering. Crazy exceptions occur when the client starts requesting tickets from the KDC with a TGS instead of a TGT. Other ipc bugs cause the process to be unable to authenticate until a restart (e.g., we ran into this with Oozie). I have an internal patch I need to push out.

          Relogin of a clone ugi will wipe out the kerberos credentials in the original ugi. The hadoop User principal contains the login context which references the original subject.

          Perhaps I missed it, but what is a concrete use case? The description and the javadoc don't make sense to me: "... allowing multiple users with different tokens to reuse the UGI without re-authenticating with Kerberos". Using tokens makes kerberos irrelevant.

          If the intention is mixing a ugi with kerberos creds for user1 and tokens for user2, that's playing with fire, especially if user1 is a privileged user. The ugi should only contain user2's tokens for allowed services; otherwise there's the security risk of being user1 to some services. Proxy users exist for this reason.

          Why isn't UGI.createRemoteUser(username) and ugi.addToken(token) sufficient if no further kerberos auth is intended, or a proxy user that contains the intended tokens if you need a mix of token and kerberos auth?
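          For reference, the token-only alternative described above might look like the following sketch against the real UGI API. This assumes hadoop-common on the classpath; how the caller obtained the delegation token is deliberately left out:

```java
// Sketch of a token-only UGI via createRemoteUser + addToken, as
// suggested above. The class name is illustrative; the UGI calls are the
// real org.apache.hadoop.security API. No Kerberos credentials are
// attached: this UGI authenticates with its tokens only.
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

public class RemoteUserSketch {
  static void runAs(String username, Token<?> token,
                    PrivilegedExceptionAction<Void> work) throws Exception {
    UserGroupInformation ugi =
        UserGroupInformation.createRemoteUser(username);
    ugi.addToken(token);  // user-specific token, scoped to this UGI only
    ugi.doAs(work);
  }
}
```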

          cnauroth Chris Nauroth added a comment -

          Daryn Sharp, great review. Thank you.

          To summarize (at least from my own understanding), we need to revert the current patch and post a new revision that at least addresses the following:

          • Address thread safety. Is it sufficient to make the whole method body synchronized (subject), similar to addCredentials, getCredentials, etc.?
          • Clone into an insertion order preserving Set implementation (LinkedHashSet).
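          A minimal sketch of what those two fixes could look like when copying a Subject's credential sets, using only the JDK (this is illustrative, not the actual patch, and it does not address the LoginContext/relogin issue):

```java
// Plain-JDK sketch: copy a Subject's credential sets while
// (a) synchronizing on each set, since the Subject's sets require
// external synchronization for safe iteration, and (b) using
// LinkedHashSet so the insertion order GSSAPI relies on is preserved.
// Class and method names are illustrative only.
import java.util.LinkedHashSet;
import java.util.Set;
import javax.security.auth.Subject;

public class SubjectCloneSketch {

  static Subject cloneSubject(Subject original) {
    Set<Object> pubCreds = copyInOrder(original.getPublicCredentials());
    Set<Object> privCreds = copyInOrder(original.getPrivateCredentials());
    return new Subject(false,                       // writable clone
        new LinkedHashSet<>(original.getPrincipals()),
        pubCreds, privCreds);
  }

  static Set<Object> copyInOrder(Set<Object> creds) {
    synchronized (creds) {  // required for safe iteration of the set
      return new LinkedHashSet<>(creds);  // preserves insertion order
    }
  }

  public static void main(String[] args) {
    Subject s = new Subject();
    s.getPublicCredentials().add("tgt");
    s.getPublicCredentials().add("service-ticket");
    Subject copy = cloneSubject(s);
    System.out.println(copy.getPublicCredentials().size()); // 2
  }
}
```

          Even with this, the copied snapshot can still be stale in the ways described above (e.g., a service ticket removed by GSSAPI right after the copy), so synchronization alone does not resolve every concern raised.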

          Relogin of a clone ugi will wipe out the kerberos credentials in the original ugi. The hadoop User principal contains the login context which references the original subject.

          I had thought this part was OK, resulting in successful relogin for both original and cloned UGI. Was I wrong? ("wipe out" sounds bad.) If this part is not OK, then is the proposed change to replace the User principal in the clone with a new instance, which in turn owns its own LoginContext instance?

          Sergey Shelukhin, could you please comment more about the concrete use case, and specifically address why it couldn't be solved with remote users or proxy users?

          sershe Sergey Shelukhin added a comment - - edited

          Chris Nauroth the concrete use case is where a service runs multiple pieces of work on behalf of users; it can be set to log in as a particular user using Kerberos (specifically when running these), but the users can also add their own tokens.
          We cannot add tokens to a single kerberos-based UGI because they will all mix; we also cannot log in for every piece of work in most cases, as it would overload the KDC.
          Ideally, we should be able to reuse the kerberos login and create a separate UGI with it for each user, adding the user-specific tokens.

          cnauroth Chris Nauroth added a comment -

          Thank you, Sergey.

          I think Daryn is asserting that you could potentially achieve this by running the portion of code that needs Kerberos auth inside one UGI.doAs, and then run the portion of code that needs delegation token auth inside a different UGI.doAs, where that second UGI was built with createRemoteUser and addToken. Would that work, or is there some reason that the actions can't be separated, and you really need both the Kerberos credentials and the delegation token all at once in a single UGI.doAs?

          sershe Sergey Shelukhin added a comment -

          We don't have control over which parts of the code need kerberos or tokens; I suspect that usually only one would be needed but we don't know which one.

          sershe Sergey Shelukhin added a comment -

          Btw, we do already have the implementation using reflection

          drankye Kai Zheng added a comment -

          This is very interesting. Thanks all for the details.

          If Java assumes the credentials set is not thread-safe and is order-sensitive, it may be hard to clone the UGI/subject correctly. It can also cause trouble if the derived UGIs fall out of sync thereafter.

          Just curious:
          Daryn Sharp, I thought the change introduced here only provided a new method doing the UGI cloning, plus the corresponding test; it should not affect existing methods. Was this new method called somewhere, and did it cause the trouble you saw?

          Sergey Shelukhin, in what specific case will you run into Kerberos-ticket-related code after you do the UGI cloning and the second doAs?

          At least, after reverting, we should probably add a comment in the class header stating that UGI isn't expected to be cloned, for the reasons found here.

          sershe Sergey Shelukhin added a comment - - edited

          The synchronization issues and order preservation seem fixable. UGI already iterates credentials (e.g. in getTGT or getCredentialsInternal) while synchronizing only on itself or the subject.
          The User principal only uses the LoginContext to relogin. We could clear it in the clone and posit that clones cannot be used to relogin (this is rather arbitrary, admittedly...)

          cnauroth Chris Nauroth added a comment -

          I have reverted from trunk, branch-2 and branch-2.8. Daryn Sharp, can you please comment if the plan stated above looks good to you?

          aw Allen Wittenauer added a comment - - edited

          I have reverted from trunk, branch-2 and branch-2.8.

          This puts alpha1 in a weird state. At a minimum, a release note is going to be required to clarify what code is in what branch. (FWIW, I did the same thing with another patch and pretty much regret it. High five!)

          cnauroth Chris Nauroth added a comment -

          Yikes, good point. I missed that this had shipped in 3.0.0-alpha1. I'm now thinking the simplest thing is to restore it (revert my revert) and then apply fixes as a new patch/new JIRA.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10516 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10516/)
          Revert "HADOOP-13081. add the ability to create multiple UGIs/subjects (cnauroth: rev 1e0ea27e9602efba102b2145d0240ecc9d5845a1)

          • (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
          daryn Daryn Sharp added a comment -

          Restoring a new & broken API because it shipped in an alpha isn't a good reason to put it back. Nothing in an alpha is supposed to carry any guarantee of delivery in a final release. Not only is the implementation broken, but it's very dangerous for inexperienced developers.

          the concrete use case is where a service runs multiple pieces of work on behalf of users; it can be set to log in as a particular user using Kerberos (specifically when running these), but the users can also add their own tokens.

          Why is a proxy user insufficient? Let's explore your use case:

          1. Kerberos login as superman
          2. Clone the UGI
          3. Add tokens for daryn
          4. Access remote services for which no token is available, or the token was added incorrectly - oops, I'm superman to that service
          5. Traverse a symlink from NN1 to NN2, oops, now I'm superman on NN2

          If you are running a secure cluster, then why would you possibly risk exposing yourself to privilege escalation?

          With proxy users I will never be superman. Period. The real user may authenticate, but I'll never be the real user.

          1. Kerberos login as superman
          2. Create a proxy UGI for daryn. It's a wrapper over the superman ugi; other proxy users share the common ugi. No multiple logins.
          3. Add tokens to the proxy ugi
          4. Access any remote services - I will be the user in a token, or superman will kerberos auth and request an authz to be daryn. Unless the service is configured to allow superman to be me, the connection is rejected.
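The proxy-user flow above can be modeled in plain Java. This is a simplified sketch, not the real UserGroupInformation implementation; all class, field, and token names here are illustrative. It shows the two properties Daryn relies on: every proxy shares one Kerberos login by reference (no multiple logins), and the identity presented to a remote service is always the proxy user, never the bare real user.

```java
import java.util.ArrayList;
import java.util.List;

public class ProxyUserSketch {
    // One Kerberos login, shared by reference across all proxy users.
    static final class KerberosLogin {
        final String principal;
        KerberosLogin(String principal) { this.principal = principal; }
    }

    // A proxy identity: wraps the shared real-user login, holds its own tokens.
    static final class ProxyUser {
        final String user;                 // who the work runs as
        final KerberosLogin realUser;      // who authenticates
        final List<String> tokens = new ArrayList<>();
        ProxyUser(String user, KerberosLogin realUser) {
            this.user = user;
            this.realUser = realUser;
        }
        // The identity a remote service attributes actions to.
        String effectiveUser() { return user; }
    }

    public static void main(String[] args) {
        KerberosLogin superman = new KerberosLogin("superman@EXAMPLE.COM");
        ProxyUser daryn = new ProxyUser("daryn", superman);
        ProxyUser other = new ProxyUser("other", superman);
        daryn.tokens.add("HDFS_DELEGATION_TOKEN for daryn");

        // Both proxies share the single login object: no multiple logins.
        if (daryn.realUser != other.realUser) throw new AssertionError("logins not shared");
        // Work done through daryn's proxy is never attributed to superman.
        if (!"daryn".equals(daryn.effectiveUser())) throw new AssertionError("wrong identity");
        // Tokens are per-proxy, not shared between proxies.
        if (!other.tokens.isEmpty()) throw new AssertionError("tokens leaked");
        System.out.println("one shared login; per-proxy tokens; effective user is the proxy user");
    }
}
```

The real API calls in Hadoop would be a keytab login followed by UserGroupInformation.createProxyUser and addToken on the proxy UGI, but those require hadoop-common and a secure cluster, so they are only named here, not sketched.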
          daryn Daryn Sharp added a comment -

          Ideally, we should be able to reuse the kerberos login and create a separate UGI with it for each user, adding the user-specific tokens.

          I should have quoted this. You just defined a proxy user. This is what oozie and other services use to do actions on behalf of other users.

          aw Allen Wittenauer added a comment -

          Restoring a new & broken API because it shipped in an alpha isn't a good reason to put it back.

          It's about release notes and change log maintenance. This JIRA needs to stay as fixed in 3.0.0-alpha1, regardless of outcome.

          daryn Daryn Sharp added a comment -

          So what are we doing with this jira? I have not heard a compelling use case for adding/keeping a dangerous api when I believe the current api is sufficient and just misunderstood. I want to make sure I understand the use case before rejecting the jira.

          cnauroth Chris Nauroth added a comment -

          This JIRA will take one of two directions depending on discussion:

          • If this becomes effectively a "Won't Fix", then re-resolve this as fixed in 3.0.0-alpha1 and open a new JIRA to track removal of the API in 3.0.0-alpha2, for the sake of accuracy in release notes.
          • If the change is accepted in some form, then re-resolve this as fixed in 3.0.0-alpha1 and open a new JIRA targeted to 3.0.0-alpha2 and 2.8.0 to track the corrected patch.

          Sergey Shelukhin, I think you're best equipped to provide justification for this API, because you're effectively already doing it via reflection in Hive.

          We don't have control over which parts of the code need kerberos or tokens; I suspect that usually only one would be needed but we don't know which one.

          Can you describe why you don't have control? Intuitively, I'd expect to see isolated pieces of code that need to make service calls on behalf of the user with a delegation token (a separate UGI), and then other parts of the code acting as the privileged user. Maybe I'm not understanding the full context.

          sershe Sergey Shelukhin added a comment -

          Sorry, I've posted this in the wrong JIRA apparently:

          The scenario is this: we accept work on behalf of clients that is, generally speaking, authorized at a higher level (right now these are fragments of Hive jobs, except that unlike MR they all run in-process, and we are also building the external client, which is the crux of the matter). In the normal case, the service doing the auth (HiveServer2 in the case of Hive) gathers the tokens and passes them on to the service running the fragment; the external client may supply some tokens too. However, for some clients it is apparently difficult (or not implemented yet) to gather tokens, so in perimeter-security setups we want to be able to configure access such that they can reach all of HDFS (for example; it could be some other service that their code touches that we have no idea about, hypothetically). The reasoning is that once a work item has passed through the authorization our service does, they don't care about HDFS security any more. In that case, our service would log in from a keytab and run their item in that context. However, we neither want to require a super-user that can access all possible services (e.g. HBase), nor disable HDFS security altogether. So the user work items would access HDFS (or HBase or whatever) as a user with lots of access, by design, and access other services via tokens.
          This feature is off by default, obviously, and by default their code talks to services entirely via tokens.
          I understand that running as such a user is not an ideal situation, but it is apparently a valid scenario for some cases.
          So, what we do now is create a master UGI/Subject; for every task, if this is enabled, we clone it via reflection and add the tokens. We haven't tested this extensively yet, since the external client is not production-ready, but it appears to work in some tests.

          I hope this makes sense; feel free to ask for clarification.
          We are using reflection to get the subject and construct the UGI from subject.
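The clone-via-reflection approach described above amounts to building a new Subject seeded from the login Subject's principal and credential sets, then wrapping it in a new UGI. Below is a JDK-only sketch of the Subject part; the UGI wrapper and Hadoop's real token types (org.apache.hadoop.security.token.Token) are omitted, and SimplePrincipal and Token are illustrative stand-ins. The Subject constructor copies the supplied sets into fresh internal sets, which is exactly why per-task tokens added to the clone do not leak back into the shared login Subject.

```java
import java.security.Principal;
import javax.security.auth.Subject;

public class SubjectCloneSketch {
    // Minimal stand-ins for a Kerberos principal and a delegation token.
    static final class SimplePrincipal implements Principal {
        private final String name;
        SimplePrincipal(String name) { this.name = name; }
        public String getName() { return name; }
    }
    static final class Token {
        final String service;
        Token(String service) { this.service = service; }
    }

    // What the reflection-based clone boils down to: a new Subject whose
    // principal and credential sets start as copies of the login Subject's.
    static Subject cloneSubject(Subject login) {
        return new Subject(false,
                login.getPrincipals(),
                login.getPublicCredentials(),
                login.getPrivateCredentials());
    }

    public static void main(String[] args) {
        Subject login = new Subject();
        login.getPrincipals().add(new SimplePrincipal("superman@EXAMPLE.COM"));

        Subject task = cloneSubject(login);
        task.getPrivateCredentials().add(new Token("hdfs://nn1"));

        // The clone carries the shared principal plus its own task token...
        if (task.getPrincipals().size() != 1) throw new AssertionError("principal not copied");
        if (task.getPrivateCredentials().size() != 1) throw new AssertionError("token missing");
        // ...while the original login Subject gained nothing.
        if (!login.getPrivateCredentials().isEmpty()) throw new AssertionError("token leaked");
        System.out.println("clone holds the task token; login subject unchanged");
    }
}
```

Note that, as Daryn points out above, the credential objects themselves are shared by reference between the clones, which is what makes this pattern risky compared to a proxy UGI.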

          andrew.wang Andrew Wang added a comment -

          Hi folks, looks like the revert was not tracked in a separate JIRA, so I filed HADOOP-13803 and resolved it so that trunk and the fix versions are in a consistent state.

          Since we're treating trunk as a release branch during the 3.0.0 alpha process, please try to keep the state of trunk and the 3.0.0-X fix versions consistent. Thanks!


            People

            • Assignee: sershe Sergey Shelukhin
            • Reporter: sershe Sergey Shelukhin
            • Votes: 0
            • Watchers: 11

              Dates

              • Created:
              • Updated:

                Development