Hadoop Common / HADOOP-12942

hadoop credential commands non-obviously use password of "none"

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: security
    • Labels: None

      Description

      The "hadoop credential create" command, when using a jceks provider, defaults to using the value of "none" for the password that protects the jceks file. This is not obvious in the command or in documentation - to users or to other hadoop developers - and leads to jceks files that essentially are not protected.

      In this example, I'm adding a credential entry with name of "foo" and a value specified by the password entered:

      # hadoop credential create foo -provider localjceks://file/bar.jceks
      Enter password: 
      Enter password again: 
      foo has been successfully created.
      org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
      

      However, the password that protects the file bar.jceks is "none", and there is no obvious way to change that. The practical way of supplying the password at this time is something akin to

      HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
      

      That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the command.
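
For illustration, the weakness can be reproduced with nothing but the JDK: a JCEKS store protected by the well-known default password is readable by anyone who supplies "none". (The file name, alias, and sample secret below are made up for the demonstration; this sketch uses the plain `java.security.KeyStore` API rather than Hadoop's provider classes.)

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import javax.crypto.spec.SecretKeySpec;

public class JceksNoneDemo {
    public static void main(String[] args) throws Exception {
        char[] storePass = "none".toCharArray();  // the hardcoded default

        // Write a credential into a JCEKS store protected by "none".
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, storePass);
        SecretKeySpec secret =
                new SecretKeySpec("s3cret-password!".getBytes("UTF-8"), "AES");
        ks.setEntry("foo", new KeyStore.SecretKeyEntry(secret),
                new KeyStore.PasswordProtection(storePass));
        File f = new File("bar.jceks");
        try (FileOutputStream out = new FileOutputStream(f)) {
            ks.store(out, storePass);
        }

        // Anyone who knows the well-known default can read it back.
        KeyStore ks2 = KeyStore.getInstance("JCEKS");
        try (FileInputStream in = new FileInputStream(f)) {
            ks2.load(in, storePass);
        }
        KeyStore.SecretKeyEntry e = (KeyStore.SecretKeyEntry)
                ks2.getEntry("foo", new KeyStore.PasswordProtection(storePass));
        System.out.println(new String(e.getSecretKey().getEncoded(), "UTF-8"));
    }
}
```

The encryption is real, but with a fixed, published password it provides no more confidentiality than file permissions do.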

      This is more than a documentation issue. I believe that the password ought to be required. We have three implementations at this point, the two JavaKeystore ones and the UserCredential. The latter is "transient" which does not make sense to use in this context. The former need some sort of password, and it's relatively easy to envision that any non-transient implementation would need a mechanism by which to protect the store that it's creating.

      The implementation gets interesting because the password in the AbstractJavaKeyStoreProvider is determined in the constructor, and changing it after the fact would get messy. So this probably means that the CredentialProviderFactory should have another factory method like the first that additionally takes the password, and an additional constructor exist in all the implementations that takes the password.

      Then we just ask for the password in getCredentialProvider() and that gets passed down via the factory to the implementation. The code does have logic in the factory to try multiple providers, but I don't really see how multiple providers could rationally be used in the command shell context.
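
A minimal sketch of that factory shape, with hypothetical names (this is not the actual CredentialProviderFactory API): a second method accepts the password and hands it to the provider constructor, while the existing-style method delegates with the legacy default.

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

public class FactorySketch {
    interface CredentialProvider {
        boolean isTransient();
    }

    // Stand-in for a keystore-backed provider: takes the password up front,
    // mirroring how AbstractJavaKeyStoreProvider fixes it in the constructor.
    static class JavaKeyStoreProvider implements CredentialProvider {
        private final char[] password;
        JavaKeyStoreProvider(URI uri, char[] password) {
            this.password = password;  // used to load/protect the keystore
        }
        public boolean isTransient() { return false; }
    }

    // Existing-style factory method keeps the old behavior...
    static List<CredentialProvider> getProviders(URI uri) {
        return getProviders(uri, "none".toCharArray());  // legacy default
    }

    // ...and the new overload lets the shell pass a user-supplied password.
    static List<CredentialProvider> getProviders(URI uri, char[] password) {
        List<CredentialProvider> result = new ArrayList<>();
        result.add(new JavaKeyStoreProvider(uri, password));
        return result;
    }

    public static void main(String[] args) throws Exception {
        List<CredentialProvider> ps = getProviders(
                new URI("localjceks://file/bar.jceks"), "s3cret".toCharArray());
        System.out.println(ps.size());
    }
}
```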

      This issue was brought to light when a user stored credentials for a Sqoop action in Oozie; upon trying to figure out where the password was coming from we discovered it to be the default value of "none".

      1. HADOOP-12942.001.patch
        40 kB
        Mike Yoder
      2. HADOOP-12942.002.patch
        48 kB
        Mike Yoder
      3. HADOOP-12942.003.patch
        52 kB
        Mike Yoder
      4. HADOOP-12942.004.patch
        52 kB
        Mike Yoder
      5. HADOOP-12942.005.patch
        59 kB
        Mike Yoder
      6. HADOOP-12942.006.patch
        60 kB
        Mike Yoder
      7. HADOOP-12942.007.patch
        60 kB
        Mike Yoder
      8. HADOOP-12942.008.patch
        60 kB
        Mike Yoder

        Issue Links

          Activity

          yoderme Mike Yoder added a comment -

          Need some advice with this one Larry McCay. I'm going to attempt a patch for this.

          lmccay Larry McCay added a comment -

          The problem is that the password to protect the store needs to be protected somehow as well.
          The only real fix to this issue that I can envision is to have a credential server that we authenticate to rather than have a password like this.
          Otherwise, we just keep moving the problem.

          The environment variable doesn't really work either except for when both sides of the equation - the provisioner and the consumer of the secret - can set the environment variable without it being exposed in a script, in a file or provided on the command line and thus available from things like ps. For instance, we provision a password using the CLI or programmatically with the API and then it needs to be acquired by a MR job. How does the environment variable get set for the MR job? The idea was that setting up the job would involve collecting known passwords and moving them into the User provider. Again, the code that is setting up the job will need to have the env variable....

          Until we have a secure credential server, the main protections with the keystore credential files are still file permissions. The keystore must be set with appropriate file permissions. In addition, it is more or less obfuscated with the encryption of the keystore and the default password.

          I would love it if you have an idea for something else.

          lmccay Larry McCay added a comment -

          For what it's worth, I considered an approach that would require having an encrypted master secret file and that all keystores be protected with the password that is encrypted in that file. But alas, that gets us back to the question of how we protect the key/secret used to encrypt the master? I think that as long as we are file based that we are using file permissions as the main protection.

          yoderme Mike Yoder added a comment -

          Otherwise, we just keep moving the problem.

          Oh, I agree. It's turtles all the way down. And you're right - as part of this work I'm looking at where this is used (in the use case we saw, at least) and how we can protect the password. I'm not sure we will be able to solve that problem, though.

          more or less obfuscated

          I don't know if encrypting with the same hardcoded password meets the level of even "obfuscation". Of course, you could probably direct the same charge against using a password that's easy to find.

          I would love it if you have an idea for something else.

          Yeah, me too.

          I think that one of the problems I want to call out here is that the command, as is, gives the user a false sense of security. Since there's no way to obviously specify the credential provider password, it's easy for the user to believe that whatever is going on behind the scenes is secure, because hey we must know what we're doing. If our position is that the security of that jceks file is no better than that of a plaintext file then I think we've done the user a disservice.

          I mean, let's imagine that the command outputted a warning saying "hey, that provider you just used encrypted the file with a hardcoded default password". Of course that will prompt the user to not be happy and demand a patch or something. But at least we'd be up front about the issue.

          Better, I think, to do the right thing from the perspective of this command, and then work on making the later consumers of the provider do "something". But you're right, we have to think hard about end to end security with the password. I don't know if we will have a really good answer, though.

          lmccay Larry McCay added a comment -

          Yes, exactly.

          We can certainly document this aspect more clearly.
          There is a Credential API page in the docs already that will be published with 2.8 - I will take a look and see how much I said about that aspect.

          Something that has occurred to me is that we could possibly leverage the KMS for the "master" secret idea.

          We could:

          • add a command that provisions an encrypted master secret to a well-known location in HDFS
          • add code in the credential provider factory that acquires the key from KMS and decrypts the password from the master file
          • If the master secret can be found and decrypted then that can be used for the keystore password - if not, it falls back to "none" with a warning
          • the credential provider factory would then also be used within the credential provider API runtime use and would do the same thing

          We would have to think through possible recursive issues with requiring access to HDFS in order to get credentials from keystores in HDFS. The fact that the master is in a file rather than a keystore may eliminate that problem though.

          Obviously, this approach would require KMS to be in use and a new manual step to provision a master secret.
          It may be slightly odd that this is all just for the keystore based providers and wouldn't be needed for a credential server based solution but I think that can be justified.

          lmccay Larry McCay added a comment -

          The other thing to keep in mind is that the current situation is still better than clear text files.

          • like clear text files the main protection is file permissions
          • unlike clear text files - given a file permission breach, there are no standard tools that can get to the password value in the keystore

          Even with the password to the keystore, keytool cannot get to the value and display it.
          You could certainly get to it using the credential provider API directly but this is better than clear text files that are solely protected with file permissions.

          yoderme Mike Yoder added a comment -

          We could:

          This is becoming bigger than the intended scope of this jira.

          Add a command that provisions an encrypted master secret to a well-known location in HDFS

          We'd have to carefully think through which users would be able to perform this action, and whether something like this could be automated instead. And where that "well-known location" might be - could it be configured? (I think we'd have to.) And what about recursion issues if that location was inside an Encryption Zone?

          Obviously, this approach would require KMS to be in use and a new manual step to provision a master secret.

          I think what you propose is workable, but these new requirements do concern me. We'd also have to think through which users could perform this action (both this action and making the key in the KMS). There are a lot of moving parts. Seems like a case for a credential server (or credential server functionality in the KMS).

          Back to the issue in this jira - regardless of the difficulty of handling the credential store password throughout the entire workflow, I still believe that the credential shell should ask for that password. It's got to be better than silently using "none" everywhere. And given that the key store provider has the ability to get the password from a file, it seems like it would be possible to put the password into a file for basically all use cases.

          lmccay Larry McCay added a comment -

          Let's walk through that proposal.

          I think that the password file is marginally more secure because both files would have to be accessible in order to access the keystore and some folks may be willing to manage more files in order to get that additional protection. In addition, gaining access to one of those password files will only provide access to keystores that an attacker has access to and are protected by that particular password.

          The AbstractJavaKeyStoreProvider already has support for a password file and can easily be used - we definitely need to document this clearly.

          I have heard reluctance from folks in the past about having commands prompt for passwords, since it would certainly break their scriptability. We would have to add a switch that enables prompting for a password - if we were to add it to the credential create subcommand.

          This same password file pattern is used in lots of scenarios though: KMS, javakeystore providers for the key provider API, oozie, signing secret providers, etc. I wonder whether a separate command for it would make sense.
          Keep in mind that we would need to do a number of things for this.

          1. prompt for the password
          2. persist it
          3. set appropriate permissions on the file
          4. somehow determine the filename to use (probably based on the password file name configuration) which would need to be provided by the user as well
          5. allow for use of the same password file for multiple keystores or scenarios
          6. allow for random-ish generated password without prompt

          So, something like:

          hadoop pwdfile -pwdfile.property.name hadoop.security.credstore.java-keystore-provider.password-file [-generate true] [-permissions 400]

          This would check the Configuration for the provided pwdfile.property.name to get the file to persist the password to.
          If generate is set to true then it doesn't prompt and generates a password to use - otherwise, prompts for a password.
          (I could also see the opposite approach which would be default to generate unless a -interactive --i type switch is provided.)
          If permissions are provided the file is created with those permissions otherwise, defaults to 400.
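
The steps above could be sketched roughly like this (the class and method names are invented, and the file name is hardcoded where the real command would resolve it from Configuration via the supplied pwdfile.property.name):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermissions;
import java.security.SecureRandom;

public class PwdFileSketch {
    // Generate a random-ish password (step 6) instead of prompting (step 1).
    static String generatePassword(int length) {
        final String alphabet =
                "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
        SecureRandom rnd = new SecureRandom();
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(alphabet.charAt(rnd.nextInt(alphabet.length())));
        }
        return sb.toString();
    }

    // Persist the password (step 2) with restrictive permissions (step 3)
    // to a path the caller resolved from configuration (step 4).
    static void writePasswordFile(Path file, String password) throws Exception {
        Files.write(file, password.getBytes(StandardCharsets.UTF_8));
        // "r--------" is 400: owner read-only, the proposed default.
        Files.setPosixFilePermissions(file,
                PosixFilePermissions.fromString("r--------"));
    }

    public static void main(String[] args) throws Exception {
        Path file = Paths.get("credstore.pwd");  // hypothetical resolved name
        writePasswordFile(file, generatePassword(16));
        System.out.println(Files.getPosixFilePermissions(file));
    }
}
```

Because the file name comes from configuration (step 4), the same file can naturally be shared by multiple keystores or scenarios (step 5).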

          yoderme Mike Yoder added a comment -

          I have heard reluctance from folks in the past for having commands prompt for passwords and would certainly break the scriptability of it. We would have to add a switch that enabled the prompting for a password - if we were to add it to the credential create subcommand.

          Agreed. Today as you know the credential create command prompts for a password but there is an undocumented "-value" argument that can be used. I'd stick with the same scheme where either a prompt or command line argument were possible.

          This same password file is used in lots of scenarios though: KMS, javakeystore providers for key provider API, oozie, signing secret providers,e tc. I wonder whether a separate command for it would make sense.

          Conceptually, yes, but aren't config values different? I'm aware of two:

          • alias/AbstractJavaKeyStoreProvider: hadoop.security.credstore.java-keystore-provider.password-file
          • key/JavaKeyStoreProvider: hadoop.security.keystore.java-keystore-provider.password-file

          Keep in mind that we would need to do a number of things for this.
          1. prompt for the password
          2. persist it
          3. set appropriate permissions on the file
          4. somehow determine the filename to use (probably based on the password file name configuration) which would need to be provided by the user as well
          5. allow for use of the same password file for multiple keystores or scenarios
          6. allow for random-ish generated password without prompt

          I think it's even more complicated. The user could want to use the environment variable when the credential is consumed, and so would want to provide it to the command but would not want to deal with anything file-related.

          Also it's conceivable that the user could have constructed the file themselves; although this doesn't seem particularly user friendly.

          So we have scenarios for hadoop credential create|list|etc that look like

          1. Here is the credstore password from a prompt
          2. Here is the credstore password on the command line
          3. The credstore password is already in a file in the "expected" location (set up either by hand or via your new pwdfile command).

          Making a command to manage the password file makes sense. I think that we shouldn't ask the user to give it the property name though: you could modify KeyShell and CredentialShell to have a new subcommand of 'pwdfile', thusly:

          • hadoop credential pwdfile [args]
          • hadoop key pwdfile [args]

          And they could share an implementation. This way the user does not have to remember "hadoop.security.credstore.java-keystore-provider.password-file" or the like. This also means that the provider selected needs a new interface to create said file, if applicable.

          I like the auto-generate-password option for the file. I think the default would be to still prompt for the password, though. So yeah, adding a pwdfile command seems like a good idea.

          The thing about the existing design that I'm going back and forth on is that the CredentialShell is high-level, and selects a provider and then simply passes information to the provider. The password is implied and not passed directly, so the CredentialShell has no notion of whether the underlying provider actually has a password.

          So, for example, it would be daft of CredentialShell to accept a password on the command line if one is provided in a file, and it would be even more daft if no password was specified on the command line and the password wasn't in the password file either. Furthermore it would be silly to accept a password when the underlying provider does not need a password at all for proper operation (example: the UserProvider). There has to be some amount of communication between the CredentialShell and the provider in order to get the "is a password required" and "where precisely is the password" cases correct.

          To make this even more interesting, in the various providers with a key store, the keyStore is either created or opened in the constructor, requiring that all the information be presented up front - without scope for the back and forth of "do you need a password and where" from the provider.

          So... one way to deal with this is to move the keyStore.load() call out of the constructor and defer it until the first get/set/delete credential entry call. Then expose interfaces along the lines of "does this provider already have the password somehow?" and "set the password directly". We'd have to add default behavior in CredentialProvider (and KeyProvider) and then implement in the ones that matter.

          The downside to this approach is that we move around a few error conditions. However everything can throw an IOException, so maybe this isn't a big deal. Seem reasonable? Alternative proposals?
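
A rough sketch of that deferred-load idea, with assumed method names ("needsPassword", "setPassword") and an in-memory map standing in for the real keystore; the real change would move keyStore.load() into the equivalent of ensureLoaded():

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class LazyProviderSketch {
    static class LazyKeyStoreProvider {
        private char[] password;          // may arrive after construction
        private boolean loaded = false;
        private final Map<String, String> store = new HashMap<>();

        // "Does this provider already have the password somehow?"
        boolean needsPassword() { return password == null; }

        // "Set the password directly."
        void setPassword(char[] password) { this.password = password; }

        // keyStore.load() analogue, deferred to first use; errors that used
        // to surface in the constructor now surface here as IOException.
        private void ensureLoaded() throws IOException {
            if (loaded) return;
            if (password == null) {
                throw new IOException("no keystore password available");
            }
            loaded = true;  // real code would open/decrypt the store here
        }

        String getCredentialEntry(String alias) throws IOException {
            ensureLoaded();
            return store.get(alias);
        }

        void createCredentialEntry(String alias, String value)
                throws IOException {
            ensureLoaded();
            store.put(alias, value);
        }
    }

    public static void main(String[] args) throws IOException {
        LazyKeyStoreProvider p = new LazyKeyStoreProvider();
        System.out.println(p.needsPassword());  // shell knows it must prompt
        p.setPassword("s3cret".toCharArray());
        p.createCredentialEntry("foo", "bar");
        System.out.println(p.getCredentialEntry("foo"));
    }
}
```

The shell can now construct the provider first, ask it whether a password is needed, prompt only if so, and let the provider fail with an IOException on first use if no password ever arrives.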

          Show
          yoderme Mike Yoder added a comment - I have heard reluctance from folks in the past for having commands prompt for passwords and would certainly break the scriptability of it. We would have to add a switch that enabled the prompting for a password - if we were to add it to the credential create subcommand. Agreed. Today as you know the credential create command prompts for a password but there is an undocumented "-value" argument that can be used. I'd stick with the same scheme where either a prompt or command line argument were possible. This same password file is used in lots of scenarios though: KMS, javakeystore providers for key provider API, oozie, signing secret providers,e tc. I wonder whether a separate command for it would make sense. Conceptually, yes, but aren't config values different? I'm aware of two: alias/AbstractJavaKeyStoreProvider: hadoop.security.credstore.java-keystore-provider.password-file key/JavaKeyStoreProvider: hadoop.security.keystore.java-keystore-provider.password-file Keep in mind that we would need to do a number of things for this. 1. prompt for the password 2. persist it 3. set appropriate permissions on the file 4. somehow determine the filename to use (probably based on the password file name configuration) which would need to be provided by the user as well 5. allow for use of the same password file for multiple keystores or scenarios 6. allow for random-ish generated password without prompt I think it's even more complicated. The user could want to use the environment variable when the credential is consumed, and so would want to provide it to the command but would not want to deal with anything file-related. Also it's conceivable that the user could have constructed the file themselves; although this doesn't seem particularly user friendly. 
So we have scenarios for hadoop credential create|list|etc that look like Here is the credstore password from a prompt Here is the credstore password on the command line The credstore password is already in a file in the "expected" location (set up either by hand or via your new pwdfile command). Making a command to manage the password file makes sense. I think that we shouldn't ask the user to give it the property name though: you could modify KeyShell and CredentialShell to have a new subcommand of 'pwdfile', thusly: hadoop credential pwdfile [args] hadoop key pwdfile [args] And they could share an implementation. This way the user does not have to remember "hadoop.security.credstore.java-keystore-provider.password-file" or the like. This also means that the provider selected needs a new interface to create said file, if applicable. I like the auto-generate-password option for the file. I think the default would be to still prompt for the password, though. So yeah, adding a pwdfile command seems like a good idea. The thing about the existing design that I'm going back and forth on is that the CredentialShell is high-level, and selects a provider and then simply passes information to the provider. The password is implied and not passed directly, so the CredentialShell has no notion of whether or not the underlying provider actually has a password or not. So, for example, it would be daft of CredentialShell to accept a password on the command line if one is provided in a file, and it would also be even more daft if no password was specifed on the command line and the password wasn't in the password file either. Furthermore it would be silly to accept a password when the underlying provider does not need a password at all for proper operation (example: the UserProvider). There has to be some amount of communication between the CredentialShell and the provider in order to get the "is a password required" and "where precisely is the password" cases correct. 
To make this even more interesting, in the various providers with a key store, the keyStore is either created or opened in the constructor, requiring that all the information be presented up front - without scope for the back and forth of "do you need a password and where" from the provider. So... one way to deal with this is to move the keyStore.load() call out of the constructor and defer it until the first get/set/delete credential entry call. Then expose interfaces along the lines of "does this provider already have the password somehow?" and "set the password directly". We'd have to add default behavior in CredentialProvider (and KeyProvider) and then implement in the ones that matter. The downside to this approach is that we move around a few error conditions. However everything can throw an IOException, so maybe this isn't a big deal. Seem reasonable? Alternative proposals?
          lmccay Larry McCay added a comment -

          There are at least the following additional password files throughout the ecosystem - I'm sure that there are probably more:

          Hadoop:
          hadoop-http-auth-signature-secret
          hadoop.security.group.mapping.ldap.ssl.keystore.password.file
          hadoop.security.group.mapping.ldap.bind.password.file

          HBase JMX Remote:
          HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.password.file=$HBASE_HOME/conf/jmxremote.passwd"

          HBase Web UIs
          TLS/SSL Server Keystore File Password - Password for the server keystore file used for encrypted web UIs.
          TLS/SSL Server Keystore Key Password - Password that protects the private key contained in the server keystore used for encrypted web UIs.

          HBase REST Server
          HBase REST Server TLS/SSL Server JKS Keystore File Password - The password for the HBase REST Server JKS keystore file.
          HBase REST Server TLS/SSL Server JKS Keystore Key Password - The password that protects the private key contained in the JKS keystore used when HBase REST Server is acting as a TLS/SSL server.

          HBase Thrift Server
          HBase Thrift Server over HTTP TLS/SSL Server JKS Keystore File Password - The password for the HBase Thrift Server JKS keystore file.
          HBase Thrift Server over HTTP TLS/SSL Server JKS Keystore Key Password - The password that protects the private key contained in the JKS keystore used when HBase Thrift Server over HTTP is acting as a TLS/SSL server.

          Oozie SSL/TLS
          Oozie TLS/SSL Server JKS Keystore File Password - Password for the keystore.

          I'd rather not add this work to the Key and Credential provider commands.
          The keystore providers are both just consumers of the same password file pattern found elsewhere throughout Hadoop.

          I believe that it is generally part of the administrative platforms like Ambari and Cloudera Manager, but if you would like a CLI management tool then I think that may add some value. As I described, it could take care of the permission settings, etc., which would otherwise all be separate manual steps from the command line.
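For reference, the underlying "password in a file referenced from config" pattern amounts to something like the following sketch (a simplified stand-in: the real providers resolve the path through the Hadoop Configuration object rather than a plain Map, and the property lookup here is illustrative):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

// Sketch of the "password in a file that is referenced from config"
// pattern. Simplified: the real providers resolve the path through the
// Hadoop Configuration object rather than a plain Map.
class PasswordFileResolver {
    // conf maps a property name such as
    // "hadoop.security.credstore.java-keystore-provider.password-file"
    // to a filesystem path holding the clear-text password.
    static char[] resolve(Map<String, String> conf, String property)
            throws IOException {
        String path = conf.get(property);
        if (path == null) {
            return null;   // no password file configured
        }
        // password files conventionally hold a single line;
        // strip the surrounding whitespace/newline before use
        String pw = new String(Files.readAllBytes(Paths.get(path)),
                               StandardCharsets.UTF_8).trim();
        return pw.toCharArray();
    }
}
```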

          yoderme Mike Yoder added a comment -

          Oh goodness. When you expand it to the general paradigm of "a password in a file"... yeah, I do recognize most of those. I was just thinking of the concept as applied to the providers in the discussion so far. Let me start without the pwdfile command at all. On some level, an "echo asdf > file && chmod 400 file" isn't that hard. Or at least not implement it in the first pass - it's a separate problem from the rest.

          lmccay Larry McCay added a comment -

          Agreed.

          lmccay Larry McCay added a comment -

          It's actually a "password in a file that is referenced from config" and is a specific pattern in hadoop.

          yoderme Mike Yoder added a comment -

          At last, a patch. I fixed both the KeyShell and CredentialShell, since they have the same problem. I also noticed that the CredentialShell threw an NPE with the "-help" commands, so I fixed that while I was in there. The new code will prompt for the password for the provider if one is needed, and it will also accept "-password xxx" on the command line. Note that there is a backwards compatibility issue here: the user has to give a password where none was required before. I don't see a way around this, however, since not having a real password was the root cause of this bug. I did set it up so that if the user just hits 'enter' (no password) when prompted, the default "none" is used instead, which is the prior behavior.
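The prompt-with-fallback behavior described in this comment might look something like the following sketch (a hypothetical helper, not the actual KeyShell/CredentialShell code from the patch; the empty-input fallback preserves the old "none" default):

```java
import java.io.Console;

// Sketch of the prompt-with-fallback behavior: an empty response keeps
// the old default of "none". Hypothetical helper, not the actual
// KeyShell/CredentialShell code from the patch.
class PasswordPrompt {
    static final String DEFAULT_PASSWORD = "none";

    // Separated from the Console plumbing so the fallback is testable.
    static String orDefault(String entered) {
        return (entered == null || entered.isEmpty())
                ? DEFAULT_PASSWORD : entered;
    }

    static String prompt() {
        Console console = System.console();
        if (console == null) {
            return DEFAULT_PASSWORD;   // no terminal attached
        }
        char[] pw = console.readPassword("Enter password: ");
        return orDefault(pw == null ? null : new String(pw));
    }
}
```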

          lmccay Larry McCay added a comment -

          Mike Yoder - I'm not clear on the intent here. If we are leveraging the password-in-a-file approach then why are we going to prompt the user for a password? It should be in the config if that is what is going to be used. Additionally, how is the MR job going to be assured of having the password to access the keystore?

          If you are setting a password without it being first provisioned in the file then you are setting them up for a credential store that can't be opened. The current behavior should find the provisioned keystore password from the file and create the credential store appropriately with no need to prompt the user. This is the intended behavior by design and keeps the config aligned with the keystore password.

          I also see the backward compatibility issue as a non-starter. The current behavior isn't a bug; "none" is a real password, and there is code that is likely depending on that behavior. If we want to add the ability to prompt for the password then it has to be the other way round: if there is no password provided on the command line and no --interactive (-i) switch, then you have to use "none".

          I'm sorry if I misunderstood our previous discussion on this but I am -1 on this as it stands - unless I also misunderstand the patch.

          lmccay Larry McCay added a comment -

          I see that I did misunderstand your conclusion earlier. I apologize for that.
          I don't see any point in prompting for a password that will successfully create a keystore and not fail until access is attempted from a running MR job.

          I think that the core issue can be addressed with a warning at credential creation time that the default password is being used and that they might want to consider provisioning a password in a file and add that filename to the config.

          Current behavior + what we should consider best practice according to this JIRA is:

          1. provision a password to a file for your credential providers
          2. set this file location as the configured password file (really part of provisioning above...)
          3. use the hadoop credential CLI to provision the actual credential required by MR jobs, etc
          4. if the default password is used, warn the user

          The combination of the warning from the CLI and better documentation should be all that we need here.
          I wouldn't be opposed to a -strict switch that doesn't allow the default password to be used either.
          So, when that is set, we don't fall back to the default but fail out of the CLI with an appropriate error message.
          An explicit switch to be -strict about it retains backward compatibility too.

          Prompting for a password that has not been provisioned yet will lead to runtime problems.
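The proposed -strict switch could be sketched as follows (illustrative names, not from any actual patch: without -strict the CLI warns and falls back to the default, with -strict it fails early at provisioning time):

```java
// Sketch of the proposed -strict switch: without it the CLI warns and
// falls back to the default password; with it the CLI fails early.
// Names are illustrative, not from the actual patch.
class StrictPolicy {
    static String choosePassword(String provisioned, boolean strict) {
        if (provisioned != null) {
            return provisioned;   // found in the password file or env var
        }
        if (strict) {
            throw new IllegalStateException(
                "no keystore password provisioned and -strict was given");
        }
        System.err.println("WARNING: using default password \"none\"; "
            + "consider provisioning a password file and configuring "
            + "its location");
        return "none";   // prior behavior, preserved for compatibility
    }
}
```

An explicit opt-in switch like this keeps existing scripts working while letting careful users fail fast.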

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 6m 36s trunk passed
          +1 compile 5m 43s trunk passed with JDK v1.8.0_74
          +1 compile 6m 33s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 22s trunk passed
          +1 mvnsite 0m 56s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 1m 33s trunk passed
          +1 javadoc 0m 54s trunk passed with JDK v1.8.0_74
          +1 javadoc 1m 1s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 40s the patch passed
          +1 compile 5m 40s the patch passed with JDK v1.8.0_74
          +1 javac 5m 40s the patch passed
          +1 compile 6m 39s the patch passed with JDK v1.7.0_95
          +1 javac 6m 39s the patch passed
          -1 checkstyle 0m 22s hadoop-common-project/hadoop-common: patch generated 19 new + 103 unchanged - 4 fixed = 122 total (was 107)
          +1 mvnsite 0m 56s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 46s the patch passed
          +1 javadoc 0m 51s the patch passed with JDK v1.8.0_74
          +1 javadoc 1m 4s the patch passed with JDK v1.7.0_95
          -1 unit 6m 40s hadoop-common in the patch failed with JDK v1.8.0_74.
          -1 unit 6m 54s hadoop-common in the patch failed with JDK v1.7.0_95.
          -1 asflicense 0m 24s Patch generated 2 ASF License warnings.
          57m 21s



          Reason Tests
          JDK v1.8.0_74 Failed junit tests hadoop.crypto.key.TestKeyProviderFactory
          JDK v1.8.0_74 Timed out junit tests org.apache.hadoop.util.TestNativeLibraryChecker
          JDK v1.7.0_95 Failed junit tests hadoop.crypto.key.TestKeyProviderFactory
          JDK v1.7.0_95 Timed out junit tests org.apache.hadoop.util.TestNativeLibraryChecker



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12795453/HADOOP-12942.001.patch
          JIRA Issue HADOOP-12942
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 5e95b12a28ca 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / e8fc81f
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_74 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/8930/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/8930/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_74.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/8930/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/8930/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_74.txt https://builds.apache.org/job/PreCommit-HADOOP-Build/8930/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8930/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HADOOP-Build/8930/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8930/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          yoderme Mike Yoder added a comment -

          I guess I didn't explain my intent to prompt the user for a password very clearly. My (admittedly simplistic) thinking was "hey, there's no password; we should therefore make sure there's a password."

          If we are leveraging the password-in-a-file approach then why are we going to prompt the user for a password? It should be in the config if that is what is going to be used.

          So if there is in fact a password in a file referred to in the config, it takes priority and the user will never be prompted for a password. That's why the providers' needsPassword() has to exist. We aren't doing anything new with the password-in-a-file approach with this patch; it has been there and continues to be there.

          Additionally, how is the MR job going to be assured of having the password to access the keystore?

          They aren't - but they never were assured of this in the first place. If you're reading from a file pointed to by the config, you're assuming that the same config will exist in the context in which it's later used (and that the file exists, too). If you're using an environment variable, you're assuming the environment variable is going to exist in the future context in which it's later used. Neither of these is guaranteed.

          If you are setting a password without it being first provisioned in the file then you are setting them up for a credential store that can't be opened.

          There is a higher probability of that with my patch, yes. I believe this to be better than setting the user up for unintentional insecure storage of secrets. I don't know how to handle this better, and I'm not sure that we can since we don't know how the cred store will be accessed in the future.

          The current behavior should find the provisioned keystore password from the file and create the credential store appropriately with no need to prompt the user. This is the intended behavior by design and keeps the config aligned with the keystore password.

          I see what you're getting at, but I guess I have not felt that they are as "aligned" as you feel they are.

          So instead of prompting the user for a password, you would instead check for either the password-in-a-file or the environment variable, and if they don't exist, error out with a message stating that the provider couldn't find the password and here's how to provide it?

          That would achieve the same sort of goal, but it just seemed easier and a better interface to just ask the user for the password. I suppose my patch doesn't give the user any hints on how to set things up so future stuff could read the keystore, though, which isn't great.

          The current behavior isn't a bug "none" is a real password

          See, if I agreed with this I never would have filed this JIRA. I feel that it is a bug to give the user the impression that a value is being securely stored when in fact it is not. A hardcoded "none" provides no protection.

          I also see the backward compatibility issue as a non-starter

          I view the current interface as having the bug - that interface being the non-obvious use of a password of "none". As such, the interface ought to change, and as such that means a backwards compatibility issue. But... ergh. If we must keep the interface safe for scripts and the like... how about the following algorithm:

          • if there are no new command line arguments
            • if file or env var found
              • great, continue as before
            • else
              • print a big WARNING that they are using a password of "none" and instructions on how to set it; continue as before
          • else if "-password" or "-askMeTheProviderPassword" is found on the command line, obtain the provider password, and
            • if the provider already has a password via file or env var, print a WARNING that the file or env var exists, and that the user supplied password will be ignored
            • else pass the given password into the provider.

          This gives us backwards compatibility, notification to the user that they're doing something insecure, and a way to provide the password in the command itself. Your thoughts?
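The algorithm above can be sketched as follows (illustrative names; the "-password"/prompt handling is reduced to a single cliPassword parameter):

```java
// Sketch of the backward-compatible algorithm proposed above. Names are
// illustrative; the "-password"/prompt handling is reduced to a single
// cliPassword parameter.
class PasswordSelection {
    static String select(String cliPassword, String fileOrEnvPassword) {
        if (cliPassword == null) {          // no new command line arguments
            if (fileOrEnvPassword != null) {
                return fileOrEnvPassword;   // great, continue as before
            }
            System.err.println("WARNING: using a password of \"none\"; "
                + "see the documentation for how to provision a real one");
            return "none";                  // continue as before
        }
        if (fileOrEnvPassword != null) {
            System.err.println("WARNING: a password file or environment "
                + "variable already exists; the supplied password will "
                + "be ignored");
            return fileOrEnvPassword;
        }
        return cliPassword;                 // pass into the provider
    }
}
```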

          yoderme Mike Yoder added a comment -

          Oh, hey, I didn't see your second comment before posting. We're getting closer...

          You say

          provision a password to a file for your credential providers

          So that means that the config file would have to change so that the name of the file is provided... and the command can't do that itself. Right? This has to be an independent step taken by the user I assume.

          use the hadoop credential CLI to provision the actual credential required by MR jobs, etc

          by this you mean creating the file with the password, assuming that the config file mentions a file?

          I wouldn't be opposed to a -strict switch that doesn't allow the default password to be used either.

          Yeah that's a good idea.

          Prompting for a password that has not been provisioned yet will lead to runtime problems.

          Well, it does give the user the flexibility to set up the password in the file or use the environment variable at their leisure at a later date.

          lmccay Larry McCay added a comment -

          They aren't - but they never were assured of this in the first place.

          They were assured of the default password always being available.
          If we are going to define the best practice as using the password-in-a-file approach then it should be documented clearly that this config setting needs to be in place otherwise the default will be used.

          There is a higher probability of that with my patch, yes. I believe this to be better than setting the user up for unintentional insecure storage of secrets. I don't know how to handle this better, and I'm not sure that we can since we don't know how the cred store will be accessed in the future.

          Not if we warn them AND allow them to be protected from it with a -strict flag.

          See, if I agreed with this, I never would have filed this jira. I feel that it is a bug to give the user the impression that a value is being securely stored when in fact it is not. The hardcoded "none" provides no protection.

          I can agree that the non-obvious aspect of this JIRA is a bug.

          A hardcoded password actually provides more protection than that afforded to the password-in-the-file itself.
          It will only be protected by file permissions.

          Printing a warning, documenting it clearly and providing a -strict switch would satisfy the non-obviousness bug.

          If we didn't have this existing pattern of referencing password-files and had a way to connect all the dots for the password that we get through the prompt then this would be perfect. Unfortunately, I think that prompting like this adds unnecessary complexity in order to add a password that we know isn't provisioned in the config and will likely lead to a failure at runtime rather than provisioning time. I believe that we want to fail early.

          I should clearly state that I don't think that the password-in-a-file actually makes security much better but I can appreciate the idea of using it. But we really are just moving the problem around until we have a credential server.

          We are talking about protecting an encoded credential store, which is protected with file permissions, with a password that is stored in clear text in a file that is itself protected only with file permissions. Add to that the complexity of ensuring that the password file is configured properly in all environments, and it becomes an availability problem.

          My preference is to try and clearly define a best practice until we have a credential server that uses existing functionality and makes it very clear that a default password is being used otherwise.

          lmccay Larry McCay added a comment -

          HA - yeah, we are stepping on each other....

          So that means that the config file would have to change so that the name of the file is provided... and the command can't do that itself. Right? This has to be an independent step taken by the user I assume.

          Right.

          by this you mean creating the file with the password, assuming that the config file mentions a file?

          Correct.

          Well, it does give the user the flexibility to set up the password in the file or use the environment variable at their leisure at a later date.

          Absolutely, and in other systems, I would agree with this. I would much rather fail early than later here though.

          yoderme Mike Yoder added a comment -

          New patch removes password-entry code and replaces it with warnings/errors about the password. A new "-strict" flag is introduced, which will cause the commands to fail without a password.
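
          The behavior described - warn by default, fail under "-strict" - can be sketched as follows. This is a hypothetical helper, not the patch code itself:

```python
def enforce_password_policy(source, strict=False):
    """Sketch of the -strict semantics: if the keystore password fell
    back to the hardcoded default, warn without -strict and fail with it.
    Returns a list of warning lines for the command to print."""
    if source != "default":
        return []  # a real password was provisioned; nothing to report
    if strict:
        raise ValueError(
            "No keystore password was provisioned and -strict was given")
    return ['WARNING: continuing with default provider password "none"']
```

          This fails at provisioning time rather than at job runtime, which matches the "fail early" preference expressed above.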

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 6m 36s trunk passed
          +1 compile 5m 49s trunk passed with JDK v1.8.0_77
          +1 compile 6m 40s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 22s trunk passed
          +1 mvnsite 0m 56s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 1m 35s trunk passed
          +1 javadoc 0m 51s trunk passed with JDK v1.8.0_77
          +1 javadoc 1m 3s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 41s the patch passed
          +1 compile 5m 42s the patch passed with JDK v1.8.0_77
          +1 javac 5m 42s the patch passed
          +1 compile 6m 45s the patch passed with JDK v1.7.0_95
          +1 javac 6m 45s the patch passed
          +1 checkstyle 0m 22s hadoop-common-project/hadoop-common: patch generated 0 new + 38 unchanged - 70 fixed = 38 total (was 108)
          +1 mvnsite 0m 58s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 48s the patch passed
          +1 javadoc 0m 54s the patch passed with JDK v1.8.0_77
          +1 javadoc 1m 4s the patch passed with JDK v1.7.0_95
          -1 unit 16m 53s hadoop-common in the patch failed with JDK v1.8.0_77.
          +1 unit 8m 5s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 22s Patch does not generate ASF License warnings.
          69m 13s



          Reason Tests
          JDK v1.8.0_77 Timed out junit tests org.apache.hadoop.http.TestHttpServerLifecycle



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12796629/HADOOP-12942.002.patch
          JIRA Issue HADOOP-12942
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 07bde5f83451 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 81d04ca
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9008/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_77.txt
          unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/9008/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_77.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9008/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9008/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          lmccay Larry McCay added a comment -

          Hi Mike Yoder - Sorry for the delay in review. I am really underwater with other things. I have taken a quick look at this and think that it is looking a lot better.

          Comments:

          1. The WARN messages (without the -strict flag) read too much like ERRORs. They should explain that a unique password could be used by provisioning it in one of the following ways, rather than saying it was expected to be found here or there. It is perfectly legitimate to use the static/hardcoded password. I think this can be done with minor changes to the language.
          2. There is a bit more language that leans too strongly toward ERROR. For instance, in some places it is communicated that a provider requires a password and none is given. Interesting word play aside, there is one given; it happens to be the default one. Basically, I would like the semantics of the -strict flag to indicate a desire for a unique or custom password, and the absence of the -strict flag to mean accepting the default password.
          3. It would also be good to let the user know that when a custom password is being used that it must be available to the runtime consumers of it as well. The trick is communicating all of this without spitting out a book.
          4. I'm not sure that the hardcoded password needs to be emitted on the command line in order to satisfy "obviousness". I would rather see it be referred to as the default password. The default password can easily be documented in the command or credential provider docs so that it can be found, when needed.
          5. I think we should take this opportunity to revisit the 700 file permissions and change it to 600 - unless there is some reason that 700 is needed.

          That's as deep as I could dig in today. There are a couple of things that I would like to dig deeper into, like the consolidation of some caught keystore exceptions and a minor loss of context. I can't tell whether some refactoring made the previous exceptions impossible or what.

          yoderme Mike Yoder added a comment -

          Thanks for having a look.

          WARN messages (without strict flag): read too much like ERRORs [...] It is perfectly legitimate to use the static/hardcoded password.

          See, here's where we disagree. Using the CredentialProvider or KeyProvider indicates that the user cares about security. Otherwise they wouldn't use the feature at all - for example just providing a cleartext password instead of getting it through the CredentialProvider. So if the user cares about security, they are going to care that the provider is actually protecting the information.

          Or to come at this a different way - I can think of no other secure system involving a password where the use of a default hardcoded password is common.

          So yeah, given my assumptions above, the WARN messages are pretty severe on purpose. It's difficult for me to fathom a (security-conscious) user who, upon learning that they were using a static hardcoded password, would say "meh".

          a provider requires a password

          Well, it requires a password for an attempt at secure operation.

          It would also be good to let the user know that when a custom password is being used that it must be available to the runtime consumers of it as well. The trick is communicating all of this without spitting out a book.

          Quite true. How about the following two new lines:

          WARNING: The provider cannot find a password in the expected locations.
          Please supply a password using one of the following two mechanisms:
              o In the environment variable ...
              o In a file referred to by the configuration entry ...
          Please note that when this provider is used in the future, the password must
          also be available to it in the same manner.
          Continuing with default provider password "none"
          

          I'm not sure that the hardcoded password needs to be emitted on the command line in order to satisfy "obviousness".

          My thinking was that the user might want to figure out what the default password is, and so if the information is public, I might as well be helpful right on the command line.

          I think we should take this opportunity to revisit the 700 file permissions and change it to 600

          OK, makes me a little nervous to lump that in, but sure.

          the consolidation of some caught keystore exceptions

          There was one place I changed

          -    } catch (NoSuchAlgorithmException e) {
          -      throw new IOException("Can't load keystore " + getPathAsString(), e);
          -    } catch (CertificateException e) {
          -      throw new IOException("Can't load keystore " + getPathAsString(), e);
          -    }
          

          to this

          +    } catch (GeneralSecurityException e) {
          +      throw new IOException("Can't load keystore " + getPathAsString(), e);
          +    }
          

          just to collapse the two duplicates into one.
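
          The collapse is behavior-preserving because both of the removed exception types are subclasses of GeneralSecurityException, which is easy to verify:

```java
import java.security.GeneralSecurityException;
import java.security.NoSuchAlgorithmException;
import java.security.cert.CertificateException;

public class CatchCollapse {
    public static void main(String[] args) {
        // Both types previously caught in separate clauses extend
        // GeneralSecurityException, so one catch clause covers both.
        System.out.println(GeneralSecurityException.class
                .isAssignableFrom(NoSuchAlgorithmException.class));
        System.out.println(GeneralSecurityException.class
                .isAssignableFrom(CertificateException.class));
    }
}
```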

          lmccay Larry McCay added a comment -

          This statement is a good place to start:

          Or to come at this a different way - I can think of no other secure system involving a password where the use of a default hardcoded password is common.

          This actually is done in a number of systems that I have seen. Calling this a secure mechanism would be too strong, even with your proposed change. We are talking about levels of protection.

          What the keystore-based providers do is make the clear-text-password-in-a-file approach a bit better. Both mechanisms require file permissions to be thwarted in order to get to the file. However, a clear text password in a file can then easily be read with standard tools, while there are no standard tools that can get to the password in a keystore. Keytool doesn't even allow you to read a secret stored this way - whether you have the password or not. Not that you can't write a tool to get to it, but it is still better than a clear text password in a file. Claiming that protecting the keystore with a clear text password in a file or environment variable makes it a secure system is a stretch.

          The other benefit of the credential provider API is that it provides an abstraction and API to easily migrate to a better provider when it comes online.

          Warning that the default password is being used is sufficient when the -strict flag is absent. The language should make it clear that they have accepted the default password but not make it seem like they have done something wrong. Dumping the default password into the command line seems a little too public to me. What do you think about pointing them to documentation to find out more about the default password? We will need to put docs together for this anyway and we can pull what I have in trunk and branch-2 in with new language for the password protection of the keystore providers.

          Please note that when this provider is used in the future, the password must also be available to it in the same manner.

          I fear that "in the future" is too vague. It reads like a temporal thing instead of an "all consumers" thing. It needs to be clear that any clients or jobs that require this password must have access to the file and the password. I also think it only needs to be presented as part of the ERROR language. If they are accepting the use of the default password then other consumers will not have a problem.

          This is actually the trickiest part of having a non-default password and will be the root cause of lots of support calls/tickets. This is why I am reluctant to make the use of the default password seem like an error.

          I am looking to provide a patch that will move the secrets required by a job into the Credentials file during job setup and will make this a moot point eventually. Once we have that in place, the only parties that need access to the keystores are the ones setting up environments for jobs/applications.

          lmccay Larry McCay added a comment -

          Warning should be something like:

          WARNING: You have accepted the use of the default password by not configuring a password in either:

          o The environment variable ...
          o File referred to by the configuration entry ...
          Please review the documentation regarding provider passwords available here...
          Continuing with default provider password.

          lmccay Larry McCay added a comment -
          WARNING: You have accepted the use of the default password by not configuring a password in either:
              o The environment variable ...
              o File referred to by the configuration entry ...
          Please review the documentation regarding provider passwords available here...
          Continuing with default provider password.
          
          yoderme Mike Yoder added a comment -

          OK, I'll take your last suggestion. Will have the patch in a bit.

          lmccay Larry McCay added a comment -

          We may want to wait till we have a link for the "documentation regarding provider passwords available here..." which I am working on before putting together another patch.

          lmccay Larry McCay added a comment -
          The link to the Keystore Passwords section will be: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html#Keystore_Passwords - once HADOOP-13011 is committed.
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          +1 mvninstall 7m 19s trunk passed
          +1 compile 8m 29s trunk passed with JDK v1.8.0_77
          +1 compile 7m 42s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 22s trunk passed
          +1 mvnsite 0m 59s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 1m 43s trunk passed
          +1 javadoc 1m 7s trunk passed with JDK v1.8.0_77
          +1 javadoc 1m 10s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 46s the patch passed
          +1 compile 8m 40s the patch passed with JDK v1.8.0_77
          +1 javac 8m 40s the patch passed
          +1 compile 7m 31s the patch passed with JDK v1.7.0_95
          +1 javac 7m 31s the patch passed
          -1 checkstyle 0m 22s hadoop-common-project/hadoop-common: patch generated 1 new + 39 unchanged - 70 fixed = 40 total (was 109)
          +1 mvnsite 1m 2s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 57s the patch passed
          +1 javadoc 1m 7s the patch passed with JDK v1.8.0_77
          +1 javadoc 1m 9s the patch passed with JDK v1.7.0_95
          +1 unit 10m 29s hadoop-common in the patch passed with JDK v1.8.0_77.
          +1 unit 10m 8s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 24s Patch does not generate ASF License warnings.
          74m 22s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12798407/HADOOP-12942.003.patch
          JIRA Issue HADOOP-12942
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux bd0fe18c537d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 35f0770
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/9076/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9076/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9076/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          lmccay Larry McCay added a comment -

          Hi Mike Yoder - thanks for the new patch.
          I will try and review it tonight or tomorrow.

          Looks like you got flagged for adding a new checkstyle violation - even though you fixed 70.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 9s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          +1 mvninstall 7m 43s trunk passed
          +1 compile 8m 13s trunk passed with JDK v1.8.0_77
          +1 compile 7m 11s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 23s trunk passed
          +1 mvnsite 1m 1s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 1m 40s trunk passed
          +1 javadoc 0m 59s trunk passed with JDK v1.8.0_77
          +1 javadoc 1m 5s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 48s the patch passed
          +1 compile 7m 29s the patch passed with JDK v1.8.0_77
          +1 javac 7m 29s the patch passed
          +1 compile 7m 15s the patch passed with JDK v1.7.0_95
          +1 javac 7m 15s the patch passed
          +1 checkstyle 0m 20s hadoop-common-project/hadoop-common: patch generated 0 new + 38 unchanged - 70 fixed = 38 total (was 108)
          +1 mvnsite 0m 57s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 51s the patch passed
          +1 javadoc 0m 53s the patch passed with JDK v1.8.0_77
          +1 javadoc 1m 6s the patch passed with JDK v1.7.0_95
          -1 unit 8m 11s hadoop-common in the patch failed with JDK v1.8.0_77.
          -1 unit 8m 26s hadoop-common in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 23s Patch does not generate ASF License warnings.
          67m 39s



          Reason Tests
          JDK v1.8.0_77 Failed junit tests hadoop.net.TestClusterTopology
            hadoop.security.ssl.TestReloadingX509TrustManager
          JDK v1.7.0_95 Failed junit tests hadoop.ha.TestZKFailoverController



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12798569/HADOOP-12942.004.patch
          JIRA Issue HADOOP-12942
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux f8c7ca5a3aba 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / e0cb426
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9085/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_77.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9085/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/9085/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HADOOP-Build/9085/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9085/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9085/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          yoderme Mike Yoder added a comment -

          So it's not just the absolute number of checkstyle violations, it knows which ones were yours. Ow!

          Regarding the latest patch: it differs from the previous patch, which did pass the unit tests, by only 4 whitespace characters. The hadoop.security.ssl.TestReloadingX509TrustManager test passes for me locally; the failure looks unrelated.

          yoderme Mike Yoder added a comment -

          Patch 005 is identical to 004, but adds documentation in CommandsManual.md.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          +1 mvninstall 6m 45s trunk passed
          +1 compile 6m 9s trunk passed with JDK v1.8.0_77
          +1 compile 6m 34s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 22s trunk passed
          +1 mvnsite 0m 57s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 1m 34s trunk passed
          +1 javadoc 0m 52s trunk passed with JDK v1.8.0_77
          +1 javadoc 1m 5s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 40s the patch passed
          +1 compile 5m 41s the patch passed with JDK v1.8.0_77
          +1 javac 5m 41s the patch passed
          +1 compile 6m 37s the patch passed with JDK v1.7.0_95
          +1 javac 6m 37s the patch passed
          +1 checkstyle 0m 21s hadoop-common-project/hadoop-common: patch generated 0 new + 38 unchanged - 70 fixed = 38 total (was 108)
          +1 mvnsite 0m 56s the patch passed
          +1 mvneclipse 0m 14s the patch passed
          -1 whitespace 0m 0s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 findbugs 1m 47s the patch passed
          +1 javadoc 0m 51s the patch passed with JDK v1.8.0_77
          +1 javadoc 1m 1s the patch passed with JDK v1.7.0_95
          -1 unit 16m 54s hadoop-common in the patch failed with JDK v1.8.0_77.
          +1 unit 8m 7s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 20s Patch does not generate ASF License warnings.
          69m 16s



          Reason Tests
          JDK v1.8.0_77 Failed junit tests hadoop.security.ssl.TestReloadingX509TrustManager
            hadoop.net.TestDNS
          JDK v1.8.0_77 Timed out junit tests org.apache.hadoop.http.TestHttpServerLifecycle



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799991/HADOOP-12942.005.patch
          JIRA Issue HADOOP-12942
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux ec97943f4fb8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 7da5847
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/9142/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9142/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_77.txt
          unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/9142/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_77.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9142/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9142/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          lmccay Larry McCay added a comment -

          Hi Mike Yoder - this is looking pretty good.

          • However, I don't like that the warnings are displayed on commands other than create. In fact, the warning really should only be displayed when the keystore is being created, because at that point it doesn't exist yet.
            That said, I could be convinced that users should be warned when they are adding a new credential to a provider that is using the default password.
          • It also seems that there are a couple of lines with trailing whitespace in the command manual change.

          I think if we can change the above we are good to go!
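
          The gating being discussed above can be sketched as follows. This is a hypothetical illustration, not the actual patch: the names shouldWarn, command, and passwordConfigured are invented here, and the real change lives in the credential shell and provider classes.

          ```java
          import java.io.File;

          public class CredWarningSketch {
              // Hypothetical helper: emit the default-password warning only when a
              // brand-new keystore is about to be created without a configured password.
              static boolean shouldWarn(String command, File keystore,
                                        boolean passwordConfigured) {
                  return "create".equals(command)
                      && !keystore.exists()
                      && !passwordConfigured;
              }

              public static void main(String[] args) {
                  File missing = new File("/nonexistent/bar.jceks");
                  // create + no password + no existing store -> warn
                  System.out.println(shouldWarn("create", missing, false)); // true
                  // non-create commands never warn under this scheme
                  System.out.println(shouldWarn("list", missing, false));   // false
              }
          }
          ```

          A keystore that already exists was created (and protected) earlier, so warning again on every list or delete would only add noise.
          
          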

          yoderme Mike Yoder added a comment -

          Patch 6: now only show the warnings on the create command.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          +1 mvninstall 7m 3s trunk passed
          +1 compile 6m 1s trunk passed with JDK v1.8.0_91
          +1 compile 6m 48s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 27s trunk passed
          +1 mvnsite 1m 2s trunk passed
          +1 mvneclipse 0m 15s trunk passed
          +1 findbugs 1m 37s trunk passed
          +1 javadoc 0m 54s trunk passed with JDK v1.8.0_91
          +1 javadoc 1m 5s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 41s the patch passed
          +1 compile 5m 54s the patch passed with JDK v1.8.0_91
          +1 javac 5m 54s the patch passed
          +1 compile 6m 50s the patch passed with JDK v1.7.0_95
          +1 javac 6m 50s the patch passed
          -1 checkstyle 0m 25s hadoop-common-project/hadoop-common: The patch generated 5 new + 109 unchanged - 78 fixed = 114 total (was 187)
          +1 mvnsite 0m 57s the patch passed
          +1 mvneclipse 0m 14s the patch passed
          -1 whitespace 0m 0s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 findbugs 1m 49s the patch passed
          +1 javadoc 0m 55s the patch passed with JDK v1.8.0_91
          +1 javadoc 1m 7s the patch passed with JDK v1.7.0_95
          -1 unit 19m 16s hadoop-common in the patch failed with JDK v1.8.0_91.
          +1 unit 7m 48s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 23s The patch does not generate ASF License warnings.
          72m 50s



          Reason Tests
          JDK v1.8.0_91 Timed out junit tests org.apache.hadoop.http.TestHttpServerLifecycle



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:cf2ee45
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12803090/HADOOP-12942.006.patch
          JIRA Issue HADOOP-12942
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux acf7768804cc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 996a210
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/9346/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/9346/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9346/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/9346/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_91.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9346/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9346/console
          Powered by Apache Yetus 0.3.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          lmccay Larry McCay added a comment -

          Mike Yoder - can you address the checkstyle and whitespace issues above?

          yoderme Mike Yoder added a comment -

          Hopefully patch 8 fixes the checkstyle and whitespace issues. I would have thought they'd have been detected in patch 6, but... oh well.

          lmccay Larry McCay added a comment -

          Same thing happened to me yesterday.
          I think that reporting on checkstyle errors in test classes must have just been turned back on or something.

          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 12s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          +1 mvninstall 6m 45s trunk passed
          +1 compile 5m 53s trunk passed with JDK v1.8.0_91
          +1 compile 6m 43s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 27s trunk passed
          +1 mvnsite 1m 0s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 1m 36s trunk passed
          +1 javadoc 0m 55s trunk passed with JDK v1.8.0_91
          +1 javadoc 1m 6s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 41s the patch passed
          +1 compile 5m 52s the patch passed with JDK v1.8.0_91
          +1 javac 5m 52s the patch passed
          +1 compile 6m 50s the patch passed with JDK v1.7.0_95
          +1 javac 6m 50s the patch passed
          +1 checkstyle 0m 25s hadoop-common-project/hadoop-common: The patch generated 0 new + 111 unchanged - 78 fixed = 111 total (was 189)
          +1 mvnsite 0m 56s the patch passed
          +1 mvneclipse 0m 14s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 50s the patch passed
          +1 javadoc 0m 56s the patch passed with JDK v1.8.0_91
          +1 javadoc 1m 6s the patch passed with JDK v1.7.0_95
          +1 unit 7m 44s hadoop-common in the patch passed with JDK v1.8.0_91.
          +1 unit 7m 54s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 23s The patch does not generate ASF License warnings.
          60m 55s



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:cf2ee45
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12803345/HADOOP-12942.008.patch
          JIRA Issue HADOOP-12942
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 54186218ac46 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 6e56578
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9365/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9365/console
          Powered by Apache Yetus 0.3.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          lmccay Larry McCay added a comment -

          +1 - I will commit this to trunk, branch-2 and branch-2.8.
          Thanks for the patch, Mike Yoder!

          lmccay Larry McCay added a comment -

          This has been committed to trunk, branch-2 and branch-2.8.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9746 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9746/)
          HADOOP-12942. hadoop credential commands non-obviously use password of "none" (lmccay: rev acb509b2fa0bbe6e00f8a90aec37f63a09463afa)

          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/JavaKeyStoreProvider.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProviderFactory.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderFactory.java
          • hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialShell.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredShell.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProvider.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalJavaKeyStoreProvider.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyShell.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java
          lmccay Larry McCay added a comment -

          Mike Yoder - Can you set the fix version on this to 2.8? It doesn't seem that I am able to do it.

          cnauroth Chris Nauroth added a comment -

          Larry McCay, I have set fix version to 2.8.0. I also added you to the Committers role in JIRA, so you should be set for future JIRA edits.

          lmccay Larry McCay added a comment -

          Thanks, Chris Nauroth!
          I thought that I used to be able to do that.

          cnauroth Chris Nauroth added a comment -

          I thought that I used to be able to do that.

          Apache Infrastructure recently has needed to tighten up permissions on JIRA as a spam counter-measure. I suspect things that used to be accessible to you through the Contributors role went away. Adding you to the Committers role should solve it, and you belong in that role now anyway.


            People

            • Assignee: yoderme Mike Yoder
            • Reporter: yoderme Mike Yoder