Hadoop Common / HADOOP-10607

Create an API to Separate Credentials/Password Storage from Applications

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.6.0
    • Component/s: security
    • Labels: None

      Description

      As with the filesystem API, we need to provide a generic mechanism to support multiple credential storage mechanisms that are potentially from third parties.

      We need the ability to eliminate the storage of passwords and secrets in clear text within configuration files or within code.

      Toward that end, I propose an API that is configured using a list of URLs of CredentialProviders. The implementation will look for implementations using the ServiceLoader interface and thus support third party libraries.

      Two providers will be included in this patch: one using the credentials cache in MapReduce jobs, and the other using Java KeyStores from either HDFS or the local file system.

      A CredShell CLI will also be included in this patch, which provides the ability to manage the credentials within the stores.
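      For context, a minimal sketch of how a provider path like the one described above is configured in core-site.xml. The property name and the user:// and jceks:// URI schemes match the committed CredentialProvider work; the file paths and host name below are purely illustrative:

```xml
<!-- core-site.xml: comma-separated list of credential provider URIs.
     The paths and host below are illustrative examples, not from this patch. -->
<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>user:///,jceks://file/tmp/test.jceks,jceks://hdfs@nn1.example.com/my/path/test.jceks</value>
</property>
```

      Credentials in these stores would then be managed with the CredShell CLI described above (for example, `hadoop credential create my.alias -provider jceks://file/tmp/test.jceks`, with `my.alias` a hypothetical alias name) and resolved by consumers through the provider API rather than through clear-text configuration values.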

      Attachments

      1. 10607-branch-2.patch
        64 kB
        Larry McCay
      2. 10607-12.patch
        69 kB
        Larry McCay
      3. 10607-11.patch
        68 kB
        Larry McCay
      4. 10607-10.patch
        67 kB
        Larry McCay
      5. 10607-9.patch
        60 kB
        Larry McCay
      6. 10607-8.patch
        60 kB
        Larry McCay
      7. 10607-7.patch
        60 kB
        Larry McCay
      8. 10607-6.patch
        60 kB
        Larry McCay
      9. 10607-5.patch
        56 kB
        Larry McCay
      10. 10607-4.patch
        56 kB
        Larry McCay
      11. 10607-3.patch
        56 kB
        Larry McCay
      12. 10607-2.patch
        56 kB
        Larry McCay
      13. 10607.patch
        56 kB
        Larry McCay

          Activity

          Larry McCay added a comment -

          FYI - I have created an umbrella jira - HADOOP-10904 - to track these efforts as they are discovered. You can track progress there.

          Alejandro Abdelnur added a comment -

          Larry McCay, passwords in the SSL config make sense, thanks for bringing that up again (I'd missed it before).

          Larry McCay added a comment -

          Thanks for the additional examples, Steve Loughran.
          Alejandro Abdelnur - we have provided examples of where it will be used in hadoop core already.
          The HADOOP-10791 jira may or may not make it easier to provide an implementation that doesn't store the secret in clear text anywhere. If this is provided in an acceptable way without the credential provider then we may not need it there. Otherwise, we may truly need to uptake it in hadoop auth. We need to determine whether random secrets kept only in memory are acceptable for eliminating the storage of clear text secrets. As you pointed out on 10791, we may need to find a way to uptake either the credential provider or key provider API without pulling in burdensome dependencies.

          In the meantime, I will also be looking at addressing the SSL configuration that currently stores clear text passwords and other places.

          Alejandro Abdelnur added a comment -

          Steve, Larry, Owen,

          I'm not disputing the usefulness of the CredentialsProvider at all. I also have concrete usecases outside of Hadoop.

          Still, my concern is that Hadoop is being used as a dumping ground for random stuff for the convenience of other projects. Hadoop should only ship code that Hadoop itself uses.

          I understand the convenience, for all downstream projects, to have this API in Hadoop itself: It is available for free.

          IMO, the CredentialProvider should not be shipped in a Hadoop release until Hadoop makes use of it. It can live in trunk until then.

          I don't have anything to add; I've explained my position clearly. Now it is your call to decide whether the CredentialProvider should be shipped with Hadoop at the moment or not.

          Steve Loughran added a comment -

          SLIDER-254 and SLIDER-263 are examples of YARN apps that need this across Hadoop clusters.

          Larry McCay added a comment -

          Well, I am waiting on the HADOOP-10791 patch as I said, and am hoping to add concrete usage there as well as in other places. In the meantime, having it available on the classpath for others on branch-2 is really helpful. I am not sure that I understand all the discussion on those logistics, which is why I have been sticking to the technical and use case answers.

          Concrete usage is certainly not far behind.

          Alejandro Abdelnur added a comment -

          Larry,

          That makes sense, thanks.

          Now, regarding the concrete usage in Hadoop, I still don't see it at the moment and that is why I say it should stay in trunk.

          Do you want this in Hadoop so it is available to components down the stack via the Hadoop classpath, and nothing else?

          Larry McCay added a comment -

          The keystore provider is useful even without a central authenticating server for many use cases.
          Ideally, and eventually, we will have a Kerberos authenticating server to serve such credentials, but in the meantime the keystore is a way to persist the password without it being in clear text. Coupled with file permissions, this is stronger protection than file permissions alone. Later migration to a central credential server will be easily accomplished through the use of the API. We are taking baby steps to get where we need to be while satisfying user requirements in a reasonable manner along the way.

          Alejandro Abdelnur added a comment -

          Owen,

          Apologies, I didn’t mean to puzzle you with my puzzling (smile).

          hadoop-auth started outside of Hadoop as Alfredo. The initial use cases were for Hadoop itself and Oozie, and because of that we brought it in.

          I see the value in the CredentialProvider, I just don't see a concrete use in Hadoop at the moment other than that we could use it for this or that, but we are not using it for anything.

          Until we have a concrete usecase, I think we should keep it in trunk.

          Larry,

          In its current form, the CredentialProvider implementations are not really useful as it is not a service and it cannot be used by an app running in the cluster, right? Or am I missing something?

          That was the case with the KeyProvider and that is why I took on the KMS work and now we are using it for HDFS encryption.

          Larry McCay added a comment -

          Sorry, I didn't mean to imply that that is the only use case inside of Hadoop that we have requirements for. The fact is that there are a number of places within Hadoop that rely solely on file permissions being set properly. This will meet security policy requirements for some deployments and auditors but not for others. If there is a password/secret in clear text in a file, then it is a ding on the audit.

          As I said, there are a number of specific use cases; the following are off the top of my head:

          • signing secret - as we have already mentioned above
          • SSL configuration
          • certain applications when run on Yarn will need to have various secrets provided to them or pre-provisioned as part of application deployment.

          So, to be clear, we have internal Hadoop usage in mind as well as external components that would pick up this functionality.

          As it seems inappropriate for Hadoop itself to depend on an external ecosystem component for such a facility, I think it most appropriate for hadoop-common.

          Thoughts?

          Owen O'Malley added a comment -

          Alejandro,
          I'm puzzled why you are puzzled. We've always added components and functionality to Hadoop that are useful to upstream components. A mechanism for managing passwords without storing them in plain text is a wonderful addition. There are many places in the Hadoop ecosystem where passwords are stored in config files, such as hadoop-auth and the Hive metastore. Giving them a common structure for removing them is a good thing.

          Alejandro Abdelnur added a comment -

          Larry McCay, in HADOOP-10791 you commented:

          So, how does the signature get validated if it is a randomized secret? It has to be stored somewhere, no? If the random impl eliminates storing clear text secrets for this then we may not need the credential api impl after all.

          Just to be clear, I'm not opposed to the UserCredentials API. I'm opposed to making it part of a release and of a public Hadoop API if there is no use in Hadoop itself. If that ends up being the case, its home may be a project that uses it.

          Larry, maybe it would help if you explained the current use case for this API and why it is convenient to have it in Hadoop while not being used in Hadoop, if there is such a use case.

          Larry McCay added a comment -

          It is certainly going to be used in Hadoop directly, which is one of the reasons that I am watching and waiting for HADOOP-10791: to make sure that we still need to address that use case and that we can plug in to the abstraction being added there. We have both external and internal requirements for this API.

          Alejandro Abdelnur added a comment -

          Owen, I'm a bit puzzled here; I don't think we should add things to Hadoop that are not used by Hadoop itself. If we see this as being used by Hadoop itself, then it should stay in Hadoop's trunk and move to a release branch when that happens. If we don't see this being used by Hadoop itself, it does not belong in Hadoop and we should completely remove it from all Hadoop branches.

          Owen O'Malley added a comment -

          Andrew, it has to get released before it can be used by external components. Is there a technical concern with it getting in the 2.6 release?

          Larry McCay added a comment -

          Hi Andrew Wang - I will look into changing my preferences and configuring git diff as you describe. I thought that I was managing it manually well enough. Thanks for the hints!

          Andrew Wang added a comment -

          Hey guys, a few q's and comments:

          • Why was this merged to branch-2? AFAIK this isn't being used by any Hadoop components yet, so it doesn't belong in a release branch. I'd like to revert it out of branch-2 until there is such a consumer.
          • CredentialShell is using the double dash style for flags. I'm going to broaden the scope of HADOOP-10793 to fix this for both KeyShell and CredentialShell.
          • Larry, I think your IDE is auto-wrapping with tabs. I think this is default behavior with Eclipse. Another thing you can do is configure `git diff` to highlight whitespace errors like these for the future. Maybe we can fix some of these tabs in HADOOP-10793 too, or in a new JIRA. Normally I'm against whitespace only changes, but this is mostly new code so there's little chance of conflicts.
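          As a side note, the whitespace highlighting Andrew describes can be enabled with git configuration along these lines (a sketch; the exact settings chosen are up to the developer, and `tab-in-indent` is what would flag auto-wrapped tab indentation like that mentioned above):

```ini
# ~/.gitconfig (illustrative): make `git diff` highlight whitespace errors
[core]
    # flag trailing whitespace, spaces before tabs, and tab indentation
    whitespace = blank-at-eol,space-before-tab,tab-in-indent
[apply]
    # warn (rather than error) when applying patches with whitespace problems
    whitespace = warn
```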
          Larry McCay added a comment -

          Patch to merge to branch-2 added.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1806 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1806/)
          HADOOP-10607. Create API to separate credential/password storage from
          applications. (Larry McCay via omalley) (omalley: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1603491)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ProviderUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProviderFactory.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialShell.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/JavaKeyStoreProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/UserProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.alias.CredentialProviderFactory
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderFactory.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredShell.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProviderFactory.java
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Hdfs-trunk #1779 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1779/)
          HADOOP-10607. Create API to separate credential/password storage from
          applications. (Larry McCay via omalley) (omalley: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1603491)

          (Same file list as the Hadoop-Mapreduce-trunk #1806 notification above.)
          Show
          Hudson added a comment - SUCCESS: Integrated in Hadoop-Hdfs-trunk #1779 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1779/ ) HADOOP-10607 . Create API to separate credential/password storage from applications. (Larry McCay via omalley) (omalley: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1603491 ) /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ProviderUtils.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProvider.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProviderFactory.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialShell.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/JavaKeyStoreProvider.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/UserProvider.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.alias.CredentialProviderFactory 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProvider.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderFactory.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredShell.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProvider.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProviderFactory.java
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #588 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/588/)
          HADOOP-10607. Create API to separate credential/password storage from
          applications. (Larry McCay via omalley) (omalley: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1603491)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ProviderUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProviderFactory.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialShell.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/JavaKeyStoreProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/UserProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.alias.CredentialProviderFactory
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderFactory.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredShell.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProviderFactory.java
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-trunk-Commit #5723 (See https://builds.apache.org/job/Hadoop-trunk-Commit/5723/)
          HADOOP-10607. Create API to separate credential/password storage from
          applications. (Larry McCay via omalley) (omalley: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1603491)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ProviderUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProviderFactory.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialShell.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/JavaKeyStoreProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/UserProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.alias.CredentialProviderFactory
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderFactory.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredShell.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProvider.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProviderFactory.java
          Owen O'Malley added a comment -

          I just committed this. Thanks, Larry!

          Owen O'Malley added a comment -

          +1

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12650097/10607-12.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 5 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4052//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4052//console

          This message is automatically generated.

          Larry McCay added a comment -

          Adds the additional context to IOExceptions that are caught within the getPassword method. Also separated the credential provider API and the fallback-to-config code into discrete methods, to address the readability and cumbersome style of having them together.

          Leaving the credential as a char[] for now for reasons described earlier in the comments.

          Larry McCay added a comment -

          I will address the exception and the return style points in another revision of the patch.

          The use of char[]s over Strings is an accepted security best practice: Strings are immutable and therefore remain in memory until GC cleans them up. Character arrays may be used and immediately overwritten to reduce the window of time that the actual password is in memory. While in practice it is often difficult to eliminate all String use for passwords, we shouldn't exacerbate the issue by handing out passwords as Strings. At least consumers should be able to zero out the password char[]s when they are done with them.

          If we really want to change this then I would suggest we do so in a follow up jira.
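          The char[] hygiene described above can be sketched as follows. This is an illustrative example, not code from the patch; PasswordDemo and authenticate are hypothetical names.

          ```java
          import java.util.Arrays;

          public class PasswordDemo {
              // Hypothetical consumer: uses the password, then zeroes the array so
              // the secret does not linger on the heap until garbage collection.
              static boolean authenticate(char[] password) {
                  try {
                      return password.length > 0; // stand-in for real credential use
                  } finally {
                      Arrays.fill(password, '\0'); // overwrite immediately after use
                  }
              }

              public static void main(String[] args) {
                  char[] secret = {'s', '3', 'c', 'r', 'e', 't'};
                  System.out.println(authenticate(secret)); // true
                  // The caller's copy is zeroed too -- arrays are shared by reference.
                  System.out.println(Arrays.equals(secret, new char[6])); // true
              }
          }
          ```

          A String holding the same secret would offer no equivalent of the Arrays.fill step, which is the window-of-exposure argument for char[] in the API.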

          Owen O'Malley added a comment -

          Actually, even better would be putting the try/catch block in the loop and telling the user which provider failed.

          Owen O'Malley added a comment -

          Comments:

          • Rethrowing the exception loses the original stack. You are better off doing:
            catch (IOException ioe) {
              throw new IOException("Can't get key " + name + " from key providers.", ioe);
            }
            

            which keeps the original stack and adds the additional context.

          • Configuration.getPassword can just return the result once it finds it, rather than assigning pass and having each following section of code guard itself with a pass == null check.
          • I know that some Java contexts expect char[] for passwords, but String seems like it would be much more natural in the CredentialProvider API.
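          The exception-chaining point can be sketched as follows. This is illustrative only; RethrowSketch, fetch, and getKey are hypothetical names, not the actual Configuration code.

          ```java
          import java.io.IOException;

          public class RethrowSketch {
              // Illustrative provider lookup that fails.
              static char[] fetch(String name) throws IOException {
                  throw new IOException("underlying provider failure");
              }

              static char[] getKey(String name) throws IOException {
                  try {
                      return fetch(name);
                  } catch (IOException ioe) {
                      // Passing ioe as the cause preserves the original stack trace
                      // while adding context about which key lookup failed.
                      throw new IOException("Can't get key " + name
                              + " from key providers.", ioe);
                  }
              }
          }
          ```

          Constructing the new IOException without the cause argument would discard the original stack, which is the problem the review comment flags.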
          Larry McCay added a comment -

          I don't believe that this metric failure is related to this patch.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12649866/10607-11.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 5 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common:

          org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4047//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4047//console

          This message is automatically generated.

          Larry McCay added a comment -

          Added missing apache license header.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12649841/10607-10.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 5 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          -1 release audit. The applied patch generated 1 release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4045//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/4045//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4045//console

          This message is automatically generated.

          Larry McCay added a comment -

          Addresses the points in Owen O'Malley's review.

          Larry McCay added a comment -

          I should have a patch to address those points at some point today.
          Thanks for the review, Owen!

          Owen O'Malley added a comment -

          A few points:

          • I'd make the getPassword throw IOException and not hide it.
          • I'd like a configuration that disables the config-based plain text passwords.
          • Instead of copying the unnestUrl method, just use the static one from the KeyProvider.
          • Remove the static method CredentialProvider.findProvider. I don't think we need it.
          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12648265/10607-9.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4004//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4004//console

          This message is automatically generated.

          Larry McCay added a comment -

          Fixed hadoop shell script error

          Larry McCay added a comment -

          Removed extraneous whitespace that I inadvertently added.

          Larry McCay added a comment -

          The TestZKFailoverControllerStress failure seems unrelated to this patch.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12647891/10607-7.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common:

          org.apache.hadoop.ha.TestZKFailoverControllerStress

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/3994//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3994//console

          This message is automatically generated.

          Larry McCay added a comment -

          Resolved findbug issue.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12647855/10607-6.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/3992//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/3992//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3992//console

          This message is automatically generated.

          Larry McCay added a comment -

          New revision of the patch that removes the backward-compatibility alias token syntax and method, and adds a Configuration.getPassword(String name) that gets a password from either the credential provider API or, as a fallback, the clear text value in the config file. Also addressed the rest of Owen O'Malley's comments - except for the ConfigurationCredentialProvider implementation. See previous comments about this. We can add this in a follow-up patch if still desired.
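          The lookup-then-fallback behavior described here can be sketched roughly as follows; the two maps and all names are illustrative stand-ins for this sketch, not the actual patch code:

```java
import java.util.HashMap;
import java.util.Map;

// Rough sketch of the Configuration.getPassword(String) fallback described
// above: consult the credential providers first, then fall back to the clear
// text value in the config file. Both maps are hypothetical stand-ins.
public class GetPasswordSketch {
    static Map<String, char[]> providerStore = new HashMap<>(); // stands in for the provider path
    static Map<String, String> configFile = new HashMap<>();    // stands in for core-site.xml

    static char[] getPassword(String name) {
        char[] fromProvider = providerStore.get(name); // 1. CredentialProvider API
        if (fromProvider != null) {
            return fromProvider;
        }
        String clearText = configFile.get(name);       // 2. clear text fallback
        return clearText == null ? null : clearText.toCharArray();
    }

    public static void main(String[] args) {
        configFile.put("javax.jdo.option.ConnectionPassword", "clear-text-pass");
        // No provider entry yet: falls back to the config file value.
        System.out.println(new String(getPassword("javax.jdo.option.ConnectionPassword")));
        // Once a provider holds the alias, it wins over the config file.
        providerStore.put("javax.jdo.option.ConnectionPassword", "vaulted-pass".toCharArray());
        System.out.println(new String(getPassword("javax.jdo.option.ConnectionPassword")));
    }
}
```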

          Larry McCay added a comment -

          Owen O'Malley

          As I go to implement the ConfigurationCredentialProvider, it gets a little weirder.
          What would it mean to create or delete a credential in the provider, since there isn't enough info to implement flush()?
          How about getAliases?

          It really only makes sense as a read-only provider, which already exists as conf.get().

          I think that I am going to leave the ConfigurationCredentialProvider out for this iteration - we can always follow up with another Jira.

          Is that reasonable?

          Larry McCay added a comment -

          Owen O'Malley I am concerned that requiring a new ConfigurationCredentialProvider to fallback to clear text passwords doesn't have as much value as it first seems.

          It is a great way to have a meaningful default provider and would work wonderfully when there is no configured provider path.
          However, as soon as someone configures a specific provider to use, it becomes very easy to leave the fallback provider out of the path configuration. It would be really natural to just add the single provider to the configuration, and cumbersome to have to add two to the configured provider path. Once a path is provided, the ease of the default provider is completely gone.

          I've currently coded the Configuration.getPassword method to try the CredentialProvider API first and, if the alias is not resolved, to fall back to config. With that in place, I'm just not sure that we need the config provider.

          I guess the question is whether we want it easy to fallback to config or make it a very explicit action to have some clear text and some not.

          What do you think?

          Larry McCay added a comment -

          Very good!

          I will hopefully have a new patch by end of day tomorrow.

          Owen O'Malley added a comment -

          Looks good except that I'd avoid the special value of ALIASED. We don't have any mandatory properties in our configs.

          Larry McCay added a comment -

          Okay let's summarize an approach here...

          If we have a ConfigurationCredentialProvider that simply looks for the credential in configuration then:

          • this can be the default provider which will allow for passwords in clear text and work out of the box
          • we can place a real credential provider in front of it in the provider path and allow for password aliases to be resolved and then fall back to Configuration

          If we add a new method to Configuration - getPassword(String name) then:

          • we essentially extend the configuration file to include the credentials available through the provider API
          • we will leverage the CredentialProvider API to get the password whether it be in a store or in the configuration file without the consuming code or even the Configuration code knowing where it comes from

          If we leverage the existing configuration property names as the aliases into the credential store then:

          • we can simply remove the password config elements from files when not in clear text or
          • add a value of ALIASED or something that indicates that the value is elsewhere (in case the property is mandatory for some elements)

          Is this accurate?

          Larry McCay added a comment -

          Owen O'Malley - I can buy the getPassword method - that makes sense.

          What I am wondering now is whether we need alias names beyond the config property names at all.
          If, when we call getPassword, the implementation first checks for an alias of that name and finds it, then it doesn't matter what the value is in the config file. We could suggest that it be ALIASED or something that shows that it is intentionally not a clear text password.

          I think that will get us what we want without the ugly alias token syntax.
          What do you think?

          Owen O'Malley added a comment -

          I think that it would be good to add a method in Configuration that is getPassword(String key).

          That method will do the credential provider lookup and translate it.

          Perhaps we should have the identity credential provider log a warning when it is invoked so that admins are aware when they have plaintext passwords in their config files.

          I think that the right final state is where you only have unadorned aliases where there are currently secrets.

          Larry McCay added a comment -

          I also think that there is value in being able to look at a configuration element and know whether it is an alias or a clear text password.

          Larry McCay added a comment -

          So, you are suggesting that we have a backward compatibility provider that always returns the provided alias name as the credential value? In other words, it is a clear text provider.

          I think that I have 2 issues with that:

          1. If there are well-known alias/credential pairs in the credential store that don't have configuration elements, will they also just return the provided name as the value?
          2. Would there never be a valid use case where one configuration element is backward-compatible clear text and another is an alias that must be resolved? Being able to change them incrementally, or to test in development when adding something new, seems valuable.

          Essentially, it is a pretty big switch to throw - all or nothing.

          Owen O'Malley added a comment -

          Larry, here are some additional comments:

          • CredentialEntry.toString should use the characters as-is rather than printing the hex.
          • I'd suggest removing getCredentialEntryFromConfigValue. I think we can have a better backwards compatibility story.
            • create an IdentityProvider that returns the alias as the password.
            • make IdentityCredentialProvider the default

          Thus, hive-site.xml can use "javax.jdo.option.ConnectionPassword" as "mysecret" and the default of the IdentityCredentialProvider will return "mysecret" as the password. When the user updates their provider to a more secure alternative, they would change "mysecret" to "hive-db-password" and set the password in their provider for "hive-db-password".

          Does that sound reasonable?
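          The migration story described here amounts to an identity lookup; a minimal sketch follows (the interface and names are illustrative stand-ins, not the patch's actual API):

```java
// Sketch of the IdentityCredentialProvider idea described above: the
// "credential" returned is simply the alias characters themselves, so an
// existing clear text config value keeps working unchanged. The interface
// here is an illustrative stand-in, not the actual patch API.
public class IdentityProviderSketch {
    interface CredentialResolver {
        char[] getCredential(String alias);
    }

    // Identity provider: hands back the alias as the password.
    static final CredentialResolver IDENTITY = alias -> alias.toCharArray();

    public static void main(String[] args) {
        // Before migration: the config value IS the secret, and identity passes it through.
        System.out.println(new String(IDENTITY.getCredential("mysecret")));
        // After migration, "hive-db-password" would instead be resolved by a real store.
    }
}
```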

          Larry McCay added a comment -

          Will do, Owen O'Malley thanks for the review.
          Good catch on the array problem - I'll try to add a unit test for that as well!

          Show
          Larry McCay added a comment - Will do, Owen O'Malley thanks for the review. Good catch on the array problem - I'll try and and a unit test for that as well!
          Hide
          Owen O'Malley added a comment -

          Larry, some comments:

          • please change CredShell to CredentialShell
          • in CredShell.promptForCredential you clobber the array before returning it.
          • it would be really nice for CredShell to have more unit tests. I'm not quite sure how to get there.
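          The array-clobbering bug flagged in the second bullet is easy to reproduce in isolation; a hypothetical sketch of the pattern (not the actual CredShell code):

```java
import java.util.Arrays;

// Sketch of the bug pattern flagged above: zeroing a char[] for hygiene
// before returning it hands the caller a scrubbed buffer. Method names are
// illustrative, not the actual CredShell code.
public class ClobberSketch {
    // Buggy: scrubs the very array it is about to return.
    static char[] promptBuggy() {
        char[] entered = {'s', 'e', 'c', 'r', 'e', 't'};
        Arrays.fill(entered, '\0'); // clobbers the result before returning it
        return entered;
    }

    // Fixed: copy first, scrub only the original buffer, return the copy.
    static char[] promptFixed() {
        char[] entered = {'s', 'e', 'c', 'r', 'e', 't'};
        char[] result = Arrays.copyOf(entered, entered.length);
        Arrays.fill(entered, '\0'); // safe: the caller's copy is untouched
        return result;
    }

    public static void main(String[] args) {
        System.out.println("buggy first char is NUL: " + (promptBuggy()[0] == '\0'));
        System.out.println(new String(promptFixed()));
    }
}
```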
          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12645438/10607-5.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/3955//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3955//console

          This message is automatically generated.

          Larry McCay added a comment -

          gitx has been flaking out on me - the javadoc warning is fixed now. ugh.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12645376/10607-4.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.
          See https://builds.apache.org/job/PreCommit-HADOOP-Build/3951//artifact/trunk/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/3951//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3951//console

          This message is automatically generated.

          Larry McCay added a comment -

          Fixed javadoc warning

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12645221/10607-3.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.
          See https://builds.apache.org/job/PreCommit-HADOOP-Build/3947//artifact/trunk/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/3947//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3947//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12645101/10607-2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          -1 javac. The patch appears to cause the build to fail.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3943//console

          This message is automatically generated.

          Larry McCay added a comment -

          Removed java 7 reference to fix build failure.

          Larry McCay added a comment -

          Some java 7 symbols crept into the patch - I will remove them and resubmit.

          Larry McCay added a comment -

          I did consider that Alejandro Abdelnur - I actually answered this question before you asked it.
          GOTO my-original-response-to-wrong-question.

          It would certainly be easier to add to the KMS server that way.
          However, I still feel that the ability to evolve independently of the KeyProvider API, and the additional baggage for CredentialProviders that don't want to be KeyProviders, outweighs the benefits of consolidating them. Especially if we make sure that KeyProviders can be used as CredentialProviders with an adapter.

          I'm interested in your thoughts on it though.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12644760/10607.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          -1 javac. The patch appears to cause the build to fail.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3936//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment - - edited

          I see the point about a lot of baggage in KeyStore that is not needed.

          Instead of adding a new interface, have you considered doing it in the KeyProvider itself? After all, the credentials are a key. Then the KMS could easily add REST support for that too.

          Larry McCay added a comment -

          Yes, this is true - though the KeyStore API contains a lot of stuff unrelated to what we actually need.
          It is a perfectly valid implementation to plug in as a provider type but forcing the API on all stores seems unnecessary.

          SafeNet and RSA do not limit their offerings to the KeyStore API - they do provide it as a way to plug in for those that would like to use it as the integration point, and such offerings would be able to plug in with the JavaKeystoreProvider in this API.

          Others, however, offer REST APIs for acquiring secrets, and having to wrap that access in a KeyStore implementation just doesn't feel right - especially when you would have to stub out the unnecessary methods.

          Alejandro Abdelnur added a comment -

          BTW, I'm trying to see if it makes sense to reuse existing stable APIs from the JDK instead of creating a new one.

          Alejandro Abdelnur added a comment -

          Larry,

          Doing a KeyStore doesn't mean you have to store things in a jks file. The KeyStore implementation will decide where to store them.

          After a quick look the KeyStore API (http://docs.oracle.com/javase/7/docs/api/java/security/KeyStore.html), all the methods the patch is proposing are there. And the CredentialEntry would extend the KeyStore.Key.

          The actual implementation would be a KeyStoreSpi and it would be bootstrapped by a JCE provider that only provides the keystore (thus it doesn't need to be a signed JAR).

          This is similar to what commercial products like SafeNet and RSA do for their HSM integration.
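          A keystore-only JCE provider along these lines can be sketched with the standard java.security APIs. Everything here (the "MEMKS" type name, the in-memory map) is illustrative, and note how many of the SPI methods end up as stubs for a pure credential store:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.security.Key;
import java.security.KeyStore;
import java.security.KeyStoreSpi;
import java.security.Provider;
import java.security.cert.Certificate;
import java.util.Collections;
import java.util.Date;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Map;
import javax.crypto.spec.SecretKeySpec;

// Sketch of a keystore-only JCE provider as described above. The "MEMKS"
// type name and the in-memory map are illustrative stand-ins.
public class KeyStoreSpiSketch {
    public static class MapKeyStoreSpi extends KeyStoreSpi {
        private final Map<String, Key> entries = new HashMap<>();

        @Override public Key engineGetKey(String alias, char[] pw) { return entries.get(alias); }
        @Override public void engineSetKeyEntry(String alias, Key key, char[] pw, Certificate[] chain) { entries.put(alias, key); }
        @Override public void engineDeleteEntry(String alias) { entries.remove(alias); }
        @Override public Enumeration<String> engineAliases() { return Collections.enumeration(entries.keySet()); }
        @Override public boolean engineContainsAlias(String alias) { return entries.containsKey(alias); }
        @Override public int engineSize() { return entries.size(); }
        @Override public boolean engineIsKeyEntry(String alias) { return entries.containsKey(alias); }
        @Override public Date engineGetCreationDate(String alias) { return new Date(); }
        @Override public void engineLoad(InputStream in, char[] pw) { /* in-memory: nothing to load */ }
        @Override public void engineStore(OutputStream out, char[] pw) { /* in-memory: nothing to store */ }
        // Certificate-oriented methods: pure stubs for a credential store.
        @Override public void engineSetKeyEntry(String alias, byte[] key, Certificate[] chain) { throw new UnsupportedOperationException(); }
        @Override public void engineSetCertificateEntry(String alias, Certificate cert) { throw new UnsupportedOperationException(); }
        @Override public boolean engineIsCertificateEntry(String alias) { return false; }
        @Override public Certificate engineGetCertificate(String alias) { return null; }
        @Override public Certificate[] engineGetCertificateChain(String alias) { return null; }
        @Override public String engineGetCertificateAlias(Certificate cert) { return null; }
    }

    public static void main(String[] args) throws Exception {
        // A provider that supplies only a KeyStore does not need a signed JAR.
        Provider p = new Provider("MemProvider", 1.0, "in-memory keystore sketch") {
            { put("KeyStore.MEMKS", MapKeyStoreSpi.class.getName()); }
        };
        KeyStore ks = KeyStore.getInstance("MEMKS", p);
        ks.load(null, null);
        ks.setKeyEntry("db.password", new SecretKeySpec("s3cret".getBytes(), "RAW"), null, null);
        System.out.println(new String(ks.getKey("db.password", null).getEncoded()));
    }
}
```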

          Show
          Alejandro Abdelnur added a comment - Larry, Doing a KeyStore doesn't mean you have to store things in a jks file. The KeyStore implementation will decide where to store them. After a quick look the KeyStore API ( http://docs.oracle.com/javase/7/docs/api/java/security/KeyStore.html ), all the methods the patch is proposing are there. And the CredentialEntry would extend the KeyStore.Key. The actual implementation would be a KeyStoreSpi and it would be bootstrapped by a JCE provider that only provides the keystore (thus it doesn't need to be a sign JAR). This is similar to what commercial products like SafeNet and RSA do for their HSM integration.
          Larry McCay added a comment -

          Alejandro Abdelnur - Oh, man - sorry about that.
          There are enterprise credential/secret servers in the form of HSMs and other products.
          Don't you believe that there would be desire for credential providers other than jks?

          Alejandro Abdelnur added a comment -

          Larry, I was referring to the JDK KeyStore, not the Hadoop KeyProvider.

          Larry McCay added a comment -

          Also, Alejandro Abdelnur - I was thinking that higher-level consumers like KMS could still use both, and I was hoping that inclusion in the KMS would make sense to you.

          Let me know your thoughts on that as well.

          Larry McCay added a comment -

          Hi Alejandro Abdelnur - I considered this for some time and came to the following conclusions:

          1. they serve similar but different purposes and consumers
          2. there is no need for versioning for credentials
          3. they need to be able to evolve separately
          4. they should be able to converge on some shared code for the pluggable providers
          5. not all KeyProviders can be used as credential providers
          6. credential providers need not add the baggage of the metadata associated with keys
          7. we do need to make sure that KeyProviders can be plugged in as CredentialProviders for when they can serve both purposes

          The biggest driver for reusing the KeyProvider API in my mind was #7 and we can address that with an adapter for when a particular KeyProvider would fit well as a credential provider as well.

          What do you think?
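
          [Editor's note] The adapter proposed for point #7 could look roughly like the sketch below. The two interfaces are simplified, hypothetical stand-ins for the real KeyProvider and CredentialProvider signatures (which this sketch does not reproduce); only the shape of the adaptation is the point.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

/** Simplified stand-ins for the two APIs; names are illustrative only. */
interface KeyProviderLike {
  byte[] getKeyMaterial(String name) throws IOException;
}

interface CredentialProviderLike {
  char[] getCredential(String alias) throws IOException;
}

/** Adapter exposing a key provider as a credential provider (point #7). */
class KeyProviderCredentialAdapter implements CredentialProviderLike {
  private final KeyProviderLike keys;

  KeyProviderCredentialAdapter(KeyProviderLike keys) { this.keys = keys; }

  @Override
  public char[] getCredential(String alias) throws IOException {
    byte[] material = keys.getKeyMaterial(alias);
    if (material == null) {
      return null;  // no such entry in the underlying key store
    }
    // Credentials are passwords, so surface the raw material as text and
    // drop the key-specific metadata (versions, cipher, length) per #2/#6.
    return new String(material, StandardCharsets.UTF_8).toCharArray();
  }
}
```

          With such an adapter, any KeyProvider that stores opaque secrets can be wired in wherever a CredentialProvider is expected, while the two APIs keep evolving separately (points #1-#3).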

          Alejandro Abdelnur added a comment -

          Larry, is there any reason not to use the KeyStore API directly?

          Larry McCay added a comment -

          Initial patch contribution

          Larry McCay added a comment -

          The credential provider API follows the same general provider SPI pattern as the key provider API.
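
          [Editor's note] That SPI pattern resolves providers from configured URIs and discovers factory implementations through java.util.ServiceLoader, so third-party jars can contribute providers just by shipping a META-INF/services entry. A rough, hypothetical sketch of the lookup (the class name and method names here are illustrative, not the patch's actual signatures):

```java
import java.net.URI;
import java.util.ServiceLoader;

/** Hypothetical sketch of a ServiceLoader-driven factory SPI. */
public abstract class CredentialProviderFactorySketch {

  /** Return a provider for this URI, or null if the scheme isn't ours. */
  public abstract Object createProvider(URI providerUri);

  // ServiceLoader finds every factory registered on the classpath under
  // META-INF/services/CredentialProviderFactorySketch.
  private static final ServiceLoader<CredentialProviderFactorySketch> FACTORIES =
      ServiceLoader.load(CredentialProviderFactorySketch.class);

  /** The first registered factory that recognizes the URI's scheme wins. */
  public static Object getProvider(URI uri) {
    for (CredentialProviderFactorySketch factory : FACTORIES) {
      Object provider = factory.createProvider(uri);
      if (provider != null) {
        return provider;
      }
    }
    return null;  // no registered factory claims this scheme
  }
}
```

          Configuration would then just be a list of provider URIs, with each URI's scheme selecting the factory that handles it.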


            People

            • Assignee:
              Larry McCay
              Reporter:
              Larry McCay
            • Votes:
              0
              Watchers:
              20
