Hadoop Common / HADOOP-3733

"s3:" URLs break when Secret Key contains a slash, even if encoded

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.17.1, 2.0.2-alpha
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: fs/s3
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
      Allows userinfo component of URI authority to contain a slash (escaped as %2F). Especially useful for accessing AWS S3 with distcp or hadoop fs.

      Description

      When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, distcp fails if the SECRET contains a slash, even when the slash is URL-encoded as %2F.

      Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
      And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
      And your bucket is called "mybucket"

      You can URL-encode the Secret Key as Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv

      But this doesn't work:

      $ bin/hadoop distcp file:///source  s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
      08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
      08/07/09 15:05:22 INFO util.CopyFiles: destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
      08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: mybucket
      org.jets3t.service.S3ServiceException: S3 HEAD request failed. ResponseCode=403, ResponseMessage=Forbidden
              at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
      ...
      With failures, global counters are inaccurate; consider running with -i
      Copy failed: org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
              at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
      ...
      
      1. HADOOP-3733-branch-2-007.patch
        44 kB
        Steve Loughran
      2. HADOOP-3733-branch-2-006.patch
        43 kB
        Steve Loughran
      3. HADOOP-3733-branch-2-005.patch
        43 kB
        Steve Loughran
      4. HADOOP-3733-branch-2-004.patch
        40 kB
        Steve Loughran
      5. HADOOP-3733-branch-2-003.patch
        40 kB
        Steve Loughran
      6. HADOOP-3733-branch-2-002.patch
        32 kB
        Steve Loughran
      7. HADOOP-3733-branch-2-001.patch
        31 kB
        Steve Loughran
      8. HADOOP-3733-20130223T011025Z.patch
        7 kB
        David Chaiken
      9. HADOOP-3733.patch
        7 kB
        David Chaiken
      10. hadoop-3733.patch
        2 kB
        Paul Butler

        Issue Links

          Activity

          szetszwo Tsz Wo Nicholas Sze added a comment -

          Linked to HADOOP-2066, which is another path-related issue.

          tomwhite Tom White added a comment -

          Judging by the discussion in HADOOP-2066, there is no easy fix here. As a workaround you can set the fs.s3.awsAccessKeyId and fs.s3.awsSecretAccessKey properties; the URI then becomes simply s3://mybucket/dest.
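          In core-site.xml terms, this workaround looks something like the following fragment, using the sample credentials from the description above (note that a plaintext secret in a config file has its own security trade-offs):

```xml
<!-- core-site.xml fragment; credentials are the sample values from the
     issue description, not real keys. -->
<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>RYWX12N9WCY42XVOL8WH</value>
</property>
<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv</value>
</property>
```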

          paulbutler Paul Butler added a comment -

          I've looked into this and found a simple fix (see attached patch). It is definitely not the ideal way to do it, because scheme-specific logic should be kept out of Path.java. But Path.java will always have to do some URL-decoding for this to work, and I wanted to avoid breaking other schemes by decoding the authority component for all of them. I hope this is at least a step in the right direction.

          chaiken David Chaiken added a comment -

          patch for HADOOP-3733 on 2.0.2-alpha

          chaiken David Chaiken added a comment -

          Added newly-required timeouts to the patch unit tests. (See HADOOP-9112.)

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12570593/HADOOP-3733-20130223T011025Z.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 tests included appear to have a timeout.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2222//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2222//console

          This message is automatically generated.

          joekiller Joseph Lawson added a comment -

          I would like to point out that this bug will bite anyone using AWS IAM credentials more often than one may think. An IAM secret key is 40 characters drawn from a 64-character alphabet that includes '/', so the chance that at least one '/' appears is 1 - (63/64)^40, roughly 47%. So there is roughly an even chance that Hadoop will fail on AWS for any person using this method of access. Seems a bit more than a low-priority bug.
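          Under the assumption that each of the 40 key characters is drawn uniformly and independently from a 64-symbol alphabet containing '/', the probability of at least one slash can be computed exactly; a minimal sketch:

```java
public class SlashProbability {
    public static void main(String[] args) {
        // P(at least one '/') = 1 - P(no '/' in any of 40 positions)
        //                     = 1 - (63/64)^40
        double p = 1.0 - Math.pow(63.0 / 64.0, 40);
        System.out.printf("%.1f%%%n", p * 100.0);  // prints 46.7%
    }
}
```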

          Developer22 Ulrich Cech added a comment -

          I can confirm that this bug bites us very often. We do a lot in AWS, have many accounts, and the admins must always remember that credentials with special characters cannot be used. I see that there is already a patch available. Is there some problem with it, or can it be committed and released in the next minor release?
          I/we would be very happy if that were the case. Thanks in advance!

          cmenguy Charles Menguy added a comment -

          I can confirm that I have been hitting this issue too, and other people at my company have hit it as well.
          It would be great to see this patch in an upcoming release.
          Thanks!

          jerryye Jerry Ye added a comment -

          Aside from setting the credentials in /etc/hadoop/core-site.xml or regenerating your secret access keys, passing in the credentials through the command line also works using: -Dfs.s3n.awsAccessKeyId=<your-key> -Dfs.s3n.awsSecretAccessKey=<your-secret-key>.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12570593/HADOOP-3733-20130223T011025Z.patch
          against trunk revision .

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3754//console

          This message is automatically generated.

          aw Allen Wittenauer added a comment - - edited

          It would probably be better if secrets could be read from an environment variable. Putting things on the command line is not very secure.

          aw Allen Wittenauer added a comment -

          Cancelling patch since it no longer applies.

          peng Peng Cheng added a comment -

          Hey, any follow-up on this issue? It's been a huge hurdle to using Hadoop on S3.
          The plus sign seems to cause a similar problem; should I submit another ticket for it?

          stevel@apache.org Steve Loughran added a comment -

          The focus on S3 client work has shifted to S3A, which will be a switchable replacement for s3n once we're happy with it. Are you using s3:// directly?

          I think nobody has been that concerned due to the workaround of regenerating the keys, but that isn't perfect. It does need fixing, but I don't see it happening this week.

          terry.siu Terry Siu added a comment -

          Howdy! Given the lack of activity/comments for almost a year, can I assume this issue is dead and that regenerating the key is the accepted solution? One of my buckets has a '/' in the secret key, and I just hit this issue using s3a. I tried setting the fs.s3a.access.key and fs.s3a.secret.key configs in the CLI with no luck. Has anyone got a workaround other than key regeneration when using s3a?

          terry.siu Terry Siu added a comment -

          Just to clarify my comment above: I'm using Hive to create a table overlay on an existing S3 folder, and when I specify the location s3a://<access key>:<secret key>@<bucket>/<folder> where <secret key> has a '/', I get:

          FAILED: IllegalArgumentException The bucketName parameter must be specified

          so I know it's the '/' in the <secret key> that is confusing Hive.

          darabos Daniel Darabos added a comment -

          We're using s3n and we set fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey programmatically via hadoop.conf.Configuration.set() when we want to access a file. I think the same should work with s3a.

          terry.siu Terry Siu added a comment -

          Thanks, Daniel, but I had tried setting fs.s3a.access.key and fs.s3a.secret.key in both the core-site.xml as well as in the Hive CLI, but none of it "stuck". I ended up regenerating the key until I got one that only had alphanumeric characters and everything worked like a charm afterwards.

          raviprak Ravi Prakash added a comment -

          We should see if we can get this in for the 3.0.0 release.

          stevel@apache.org Steve Loughran added a comment -

          May I observe that one of my secret keys does contain a '/' in it, and I've not seen this issue recently.

          raviprak Ravi Prakash added a comment -

          Hi Steve! I seem to still be observing the issue. How are you running your command?

          e.g. for me on trunk

           hadoop fs -ls s3a://<aws_access_key>:<aws_secret_key>@mybucket/mydirectory 

          shows me the contents of mydirectory when aws_secret_key doesn't contain a / or %2F. When the secret key contains a /, even after encoding it as %2F, I get this:

          ls: s3a://<aws_access_key:<aws_secret_key_with_encoded_slash>@mybucket/mydirectory: getFileStatus on s3a://<aws_access_key:<aws_secret_key_with_encoded_slash>@mybucket/mydirectory: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: <someAmazonID>), S3 Extended Request ID: <anotherAmazonID>
          

          Here's my hadoop version

          Hadoop 3.0.0-alpha1-SNAPSHOT
          Source code repository https://git-wip-us.apache.org/repos/asf/hadoop.git -r 28b66ae919e348123f4c05a4787c9ec56c087c25
          Compiled by raviprak on 2016-06-13T16:36Z
          Compiled with protoc 2.5.0
          From source with checksum 7dcbe718e8724ad01f916f2eaf705b14
          This command was run using /home/raviprak/Code/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-alpha1-SNAPSHOT/share/hadoop/common/hadoop-common-3.0.0-alpha1-SNAPSHOT.jar
          
          stevel@apache.org Steve Loughran added a comment -

          OK, I think I see the issue: you are putting the secret in the URL, not behind the scenes (config, more recently env var).

          I suspect what is happening is some parsing of the URI is getting confused about where to split up the auth info and the URL itself.

          It could be in the initialize() method, where the URI is built:

          uri = URI.create(name.getScheme() + "://" + name.getAuthority());
          

          Maybe it should use name.getRawAuthority(), to skip expansion of encoded characters. Alternatively, that authority info should be broken up and used to set up the auth credentials. I'd prefer that as otherwise there's a risk of the URI details being printed. Actually, that's something we should be looking for anyway; making sure that the full URI never gets printed.

          Ravi, do you want to look at this? See if using the raw authority works? If not, try parsing it directly and using it as the credentials.
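          The decoding behaviour in question is easy to see in isolation; a standalone sketch with made-up credentials (not Hadoop code):

```java
import java.net.URI;

public class RawAuthorityDemo {
    public static void main(String[] args) {
        // Hypothetical credentials; the encoded slash is the interesting part.
        URI u = URI.create("s3a://AKIAEXAMPLE:Xqj1%2Fsecret@mybucket/dest");
        // getAuthority() decodes %2F back into a bare '/', so anything that
        // later rebuilds or re-parses a URI from it gets confused.
        System.out.println(u.getAuthority());     // AKIAEXAMPLE:Xqj1/secret@mybucket
        // getRawAuthority() keeps the percent-encoding intact.
        System.out.println(u.getRawAuthority());  // AKIAEXAMPLE:Xqj1%2Fsecret@mybucket
    }
}
```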

          stevel@apache.org Steve Loughran added a comment -

          ...ok, the root cause is that the S3 filesystems are using URI.getAuthority() to build the URL S3 wants, not URI.getHost(). This is compounded by the fact that URI.getUserInfo() doesn't seem up to handling a "/" in the string, so the password isn't extracted correctly.

          The fix is to implement our own user:pass extraction code for the authority and write tests for the parsing; then add a functional test that dynamically builds a URI with the auth credentials, clears any in the configuration file, and tries to log on. This test MUST NOT log the URI, meaning the assertions will fail to meet my usual criterion of "provide meaningful diagnostics on a failure".

          stevel@apache.org Steve Loughran added a comment -

          Fix for this

          1. pull out all URL user/pass extraction into a new class org.apache.hadoop.fs.s3native.S3xLoginHelper. It's in s3n as it is needed there too, I want it to outlive s3: removal, and reserve the option to backport.
          2. this performs its own parsing of the user:pass from the authority info; handles / in passwords.
          3. FS URI construction strips out the authority info, includes only the host in its URIs.
          4. S3, S3N, S3A all use this codepath
          5. S3A has had its code related to credentials modified to work with this too; it shares the same Login structure and is in a static S3AUtils method for easier testing.
          6. Lots of unit tests to verify parsing works
          7. There's an S3A functional test which verifies that passwords stuck in the FS URL are picked up.
          8. I have tested that suite with a password with / in it

          If you do want to use a / in a password in a URL, do encode it with %2F; this will now be handled.
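          The parsing approach described in points 1-3 can be sketched roughly as follows. This is illustrative only, with hypothetical names, not the actual S3xLoginHelper source: split user:pass out of the raw (still percent-encoded) authority, and decode each piece only after the split.

```java
import java.io.UnsupportedEncodingException;
import java.net.URI;
import java.net.URLDecoder;

public class LoginSketch {
    /** Returns {user, password}, or {"", ""} if the URI carries no userinfo. */
    public static String[] extractLogin(URI fsUri)
            throws UnsupportedEncodingException {
        String authority = fsUri.getRawAuthority();
        int at = authority.lastIndexOf('@');
        if (at < 0) {
            return new String[] {"", ""};  // no credentials embedded in the URI
        }
        String userInfo = authority.substring(0, at);
        int colon = userInfo.indexOf(':');
        String user = (colon < 0) ? userInfo : userInfo.substring(0, colon);
        String pass = (colon < 0) ? "" : userInfo.substring(colon + 1);
        // Decode %2F etc. only after the split, so a slash in the password
        // can no longer confuse the parse. Caveat: URLDecoder also maps '+'
        // to a space, the same family of problem reported above for '+'.
        return new String[] {
            URLDecoder.decode(user, "UTF-8"),
            URLDecoder.decode(pass, "UTF-8")
        };
    }

    public static void main(String[] args) throws Exception {
        URI u = URI.create("s3a://AKIAEXAMPLE:Xqj1%2Fsecret@mybucket/dest");
        String[] login = extractLogin(u);
        System.out.println(login[0]);  // AKIAEXAMPLE
        System.out.println(login[1]);  // Xqj1/secret
    }
}
```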

          stevel@apache.org Steve Loughran added a comment -

          Testing: S3 Ireland. Intermittent failure of the S3A root test (see HADOOP-13271), which goes away when rerun.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          +1 mvninstall 7m 35s branch-2 passed
          +1 compile 0m 14s branch-2 passed with JDK v1.8.0_91
          +1 compile 0m 14s branch-2 passed with JDK v1.7.0_101
          +1 checkstyle 0m 16s branch-2 passed
          +1 mvnsite 0m 19s branch-2 passed
          +1 mvneclipse 0m 15s branch-2 passed
          +1 findbugs 0m 30s branch-2 passed
          +1 javadoc 0m 12s branch-2 passed with JDK v1.8.0_91
          +1 javadoc 0m 14s branch-2 passed with JDK v1.7.0_101
          +1 mvninstall 0m 13s the patch passed
          +1 compile 0m 9s the patch passed with JDK v1.8.0_91
          +1 javac 0m 9s the patch passed
          +1 compile 0m 11s the patch passed with JDK v1.7.0_101
          +1 javac 0m 11s the patch passed
          -1 checkstyle 0m 14s hadoop-tools/hadoop-aws: The patch generated 10 new + 100 unchanged - 0 fixed = 110 total (was 100)
          +1 mvnsite 0m 16s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          -1 findbugs 0m 40s hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          -1 javadoc 0m 9s hadoop-tools_hadoop-aws-jdk1.8.0_91 with JDK v1.8.0_91 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 javadoc 0m 12s the patch passed with JDK v1.7.0_101
          +1 unit 0m 11s hadoop-aws in the patch passed with JDK v1.7.0_101.
          +1 asflicense 0m 18s The patch does not generate ASF License warnings.
          14m 3s



          Reason Tests
          FindBugs module:hadoop-tools/hadoop-aws
            org.apache.hadoop.fs.s3native.S3xLoginHelper$Login.EMPTY isn't final but should be At S3xLoginHelper.java:be At S3xLoginHelper.java:[line 80]



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:babe025
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12810415/HADOOP-3733-branch-2-001.patch
          JIRA Issue HADOOP-3733
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 0a50564508ae 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2 / 9c66fff
          Default Java 1.7.0_101
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/9771/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/9771/artifact/patchprocess/new-findbugs-hadoop-tools_hadoop-aws.html
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/9771/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdk1.8.0_91.txt
          JDK v1.7.0_101 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9771/testReport/
          modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9771/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          Patch 002

          • fixes the findbugs bug.
          • fixes the checkstyle warnings, except for the package javadoc.
          • logs at WARN that people shouldn't be putting secrets inside filesystem URIs, and that support for this may be removed at some point.
          stevel@apache.org Steve Loughran added a comment -

          Testing: S3 Ireland. Again, all is well except for that intermittent root test, which is now failing consistently in parallel mode...

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 10m 34s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          +1 mvninstall 8m 34s branch-2 passed
          +1 compile 0m 10s branch-2 passed with JDK v1.8.0_91
          +1 compile 0m 12s branch-2 passed with JDK v1.7.0_101
          +1 checkstyle 0m 18s branch-2 passed
          +1 mvnsite 0m 20s branch-2 passed
          +1 mvneclipse 0m 51s branch-2 passed
          +1 findbugs 0m 35s branch-2 passed
          +1 javadoc 0m 13s branch-2 passed with JDK v1.8.0_91
          +1 javadoc 0m 15s branch-2 passed with JDK v1.7.0_101
          +1 mvninstall 0m 13s the patch passed
          +1 compile 0m 8s the patch passed with JDK v1.8.0_91
          +1 javac 0m 8s the patch passed
          +1 compile 0m 10s the patch passed with JDK v1.7.0_101
          +1 javac 0m 10s the patch passed
          -1 checkstyle 0m 13s hadoop-tools/hadoop-aws: The patch generated 3 new + 99 unchanged - 0 fixed = 102 total (was 99)
          +1 mvnsite 0m 15s the patch passed
          +1 mvneclipse 0m 11s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 0m 40s the patch passed
          +1 javadoc 0m 9s the patch passed with JDK v1.8.0_91
          +1 javadoc 0m 13s the patch passed with JDK v1.7.0_101
          +1 unit 0m 11s hadoop-aws in the patch passed with JDK v1.7.0_101.
          +1 asflicense 0m 17s The patch does not generate ASF License warnings.
          25m 49s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:babe025
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12810582/HADOOP-3733-branch-2-002.patch
          JIRA Issue HADOOP-3733
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 38153d6b368b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2 / 9c66fff
          Default Java 1.7.0_101
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/9774/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
          JDK v1.7.0_101 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9774/testReport/
          modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9774/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          +1 for the patch. I did a successful parallel test run against US-west-2. The Checkstyle warnings look trivial enough to fix on check-in if you'd like. Thank you for the patch.

          stevel@apache.org Steve Loughran added a comment -

          Patch 003

          • fixes canonicalization so that path checking shouldn't raise errors now that auth details are stripped out of the filesystem URI
          • tests this
          • I've not been able to replicate the checkpath/canonicalization problem Ravi reported to me; he'll have to test it himself.
          • special message for the case where getHost() == null but getAuthority() != null; this situation arises when there is an unencoded / in the password:
          -ls: Fatal internal error
          java.lang.NullPointerException: null uri host. This can be caused by unencoded / in the password string
          	at java.util.Objects.requireNonNull(Objects.java:228)
          	at org.apache.hadoop.fs.s3native.S3xLoginHelper.buildFSURI(S3xLoginHelper.java:53)
          	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:199)
          	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793)
          	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:101)
          	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2830)
          	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2812)
          	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
          	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:294)
          	at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
          	at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
          	at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
          	at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
          	at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
          	at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
          	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
          	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
          	at org.apache.hadoop.fs.FsShell.main(FsShell.java:373)
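          That NPE comes from a plain null-host check. A rough standalone sketch of the situation (the class and method shape here are illustrative, not the committed code; only the exception message matches the trace above):

```java
import java.net.URI;
import java.util.Objects;

// Illustrative sketch: with an unencoded "/" in the secret, java.net.URI
// cannot parse a host out of the authority, so getHost() comes back null
// even though getAuthority() is non-null. A fail-fast check with a pointed
// message turns that into a diagnosable error instead of a bare NPE.
public final class HostCheckSketch {

  static URI buildFSURI(URI uri) {
    Objects.requireNonNull(uri, "filesystem URI");
    Objects.requireNonNull(uri.getHost(),
        "null uri host. This can be caused by unencoded / in the password string");
    return URI.create(uri.getScheme() + "://" + uri.getHost());
  }

  public static void main(String[] args) {
    // Encoded %2F keeps the authority parseable: the host is the bucket name.
    System.out.println(buildFSURI(URI.create("s3a://ID:SEC%2FRET@bucket/dest")));
    // An unencoded slash ends the authority at "ID:SEC"; the non-numeric
    // "port" SEC makes server-based parsing fail, leaving getHost() null.
    System.out.println(URI.create("s3a://ID:SEC/RET@bucket/dest").getHost());
  }
}
```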
            
          stevel@apache.org Steve Loughran added a comment -

          patch branch-2-004; fixes checkstyle.

          Chris: thanks for the +1; I'm waiting for Ravi to do another attempt at trying to get this to work.

          FWIW, I don't think people should be trying to use credentials on the CLI. This patch tries to strip them from the URL and path, but they do creep out in error messages and stack traces.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 14s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          +1 mvninstall 6m 33s branch-2 passed
          +1 compile 0m 14s branch-2 passed with JDK v1.8.0_91
          +1 compile 0m 15s branch-2 passed with JDK v1.7.0_101
          +1 checkstyle 0m 16s branch-2 passed
          +1 mvnsite 0m 20s branch-2 passed
          +1 mvneclipse 0m 14s branch-2 passed
          +1 findbugs 0m 33s branch-2 passed
          +1 javadoc 0m 12s branch-2 passed with JDK v1.8.0_91
          +1 javadoc 0m 15s branch-2 passed with JDK v1.7.0_101
          +1 mvninstall 0m 14s the patch passed
          +1 compile 0m 11s the patch passed with JDK v1.8.0_91
          +1 javac 0m 11s the patch passed
          +1 compile 0m 13s the patch passed with JDK v1.7.0_101
          +1 javac 0m 13s the patch passed
          -1 checkstyle 0m 12s hadoop-tools/hadoop-aws: The patch generated 3 new + 99 unchanged - 0 fixed = 102 total (was 99)
          +1 mvnsite 0m 19s the patch passed
          +1 mvneclipse 0m 11s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          -1 findbugs 0m 44s hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)
          -1 javadoc 0m 9s hadoop-tools_hadoop-aws-jdk1.8.0_91 with JDK v1.8.0_91 generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)
          +1 javadoc 0m 12s the patch passed with JDK v1.7.0_101
          +1 unit 0m 13s hadoop-aws in the patch passed with JDK v1.7.0_101.
          +1 asflicense 0m 17s The patch does not generate ASF License warnings.
          13m 13s



          Reason Tests
          FindBugs module:hadoop-tools/hadoop-aws
            Comparison of String objects using == or != in org.apache.hadoop.fs.s3native.S3xLoginHelper.checkPath(Configuration, URI, Path, int) At S3xLoginHelper.java:== or != in org.apache.hadoop.fs.s3native.S3xLoginHelper.checkPath(Configuration, URI, Path, int) At S3xLoginHelper.java:[line 162]
            Null passed for non-null parameter of toString(URI) in org.apache.hadoop.fs.s3native.S3xLoginHelper.checkPath(Configuration, URI, Path, int) Method invoked at S3xLoginHelper.java:of toString(URI) in org.apache.hadoop.fs.s3native.S3xLoginHelper.checkPath(Configuration, URI, Path, int) Method invoked at S3xLoginHelper.java:[line 170]
            Null passed for non-null parameter of toString(URI) in org.apache.hadoop.fs.s3native.S3xLoginHelper.checkPath(Configuration, URI, Path, int) Method invoked at S3xLoginHelper.java:of toString(URI) in org.apache.hadoop.fs.s3native.S3xLoginHelper.checkPath(Configuration, URI, Path, int) Method invoked at S3xLoginHelper.java:[line 170]



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:babe025
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12810848/HADOOP-3733-branch-2-004.patch
          JIRA Issue HADOOP-3733
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 5fc019659668 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2 / 7523514
          Default Java 1.7.0_101
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/9785/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/9785/artifact/patchprocess/whitespace-eol.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/9785/artifact/patchprocess/new-findbugs-hadoop-tools_hadoop-aws.html
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/9785/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdk1.8.0_91.txt
          JDK v1.7.0_101 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9785/testReport/
          modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9785/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          Patch 0005

          • all S3 filesystems warn that you shouldn't put secrets in your URLs
          • and s3n/s3 no longer mention the technique in their error messages
          • javadocs
          • fixes the findbugs warnings, one through a fix, one by commenting the check out (it's essentially the same code copied and pasted from FileSystem; it's disabled there too).
          stevel@apache.org Steve Loughran added a comment -

          patch branch-2-006; adds a couple of lines to the documentation telling people not to put secrets in their URLs.

          stevel@apache.org Steve Loughran added a comment -

          Patch 006; tested against S3 Ireland. I haven't tested inline secrets in s3, s3n, or on the command line; just in a unit test set up to do this (which goes to some effort not to log the details on a failure: the first time I've written a unit test to be deliberately useless when reporting failures).

          raviprak Ravi Prakash added a comment -

          Thanks Steve! With patch v6, I am able to use AWS secrets without slashes. With unencoded slashes I see this:

          java.lang.NullPointerException: null uri host. This can be caused by unencoded / in the password string
          	at java.util.Objects.requireNonNull(Objects.java:228)
          	at org.apache.hadoop.fs.s3native.S3xLoginHelper.buildFSURI(S3xLoginHelper.java:60)
          	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:199)
          	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793)
          	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:101)
          	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2830)
          	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2812)
          	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
          	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:294)
          	at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
          	at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
          	at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
          	at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
          	at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
          	at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
          	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
          	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
          	at org.apache.hadoop.fs.FsShell.main(FsShell.java:373)
          

          But with encoded slashes, I still can't do an ls successfully.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 34s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          +1 mvninstall 7m 36s branch-2 passed
          +1 compile 0m 22s branch-2 passed with JDK v1.8.0_91
          +1 compile 0m 18s branch-2 passed with JDK v1.7.0_101
          +1 checkstyle 0m 17s branch-2 passed
          +1 mvnsite 0m 20s branch-2 passed
          +1 mvneclipse 0m 13s branch-2 passed
          +1 findbugs 0m 36s branch-2 passed
          +1 javadoc 0m 14s branch-2 passed with JDK v1.8.0_91
          +1 javadoc 0m 16s branch-2 passed with JDK v1.7.0_101
          +1 mvninstall 0m 14s the patch passed
          +1 compile 0m 11s the patch passed with JDK v1.8.0_91
          +1 javac 0m 11s the patch passed
          +1 compile 0m 14s the patch passed with JDK v1.7.0_101
          +1 javac 0m 14s the patch passed
          +1 checkstyle 0m 12s the patch passed
          +1 mvnsite 0m 18s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          +1 findbugs 0m 41s the patch passed
          -1 javadoc 0m 10s hadoop-tools_hadoop-aws-jdk1.8.0_91 with JDK v1.8.0_91 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)
          +1 javadoc 0m 14s the patch passed with JDK v1.7.0_101
          +1 unit 0m 14s hadoop-aws in the patch passed with JDK v1.7.0_101.
          +1 asflicense 0m 17s The patch does not generate ASF License warnings.
          15m 3s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:babe025
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12810891/HADOOP-3733-branch-2-006.patch
          JIRA Issue HADOOP-3733
          Optional Tests asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle
          uname Linux 73bbaabdd6b0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2 / 7523514
          Default Java 1.7.0_101
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101
          findbugs v3.0.0
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/9786/artifact/patchprocess/whitespace-eol.txt
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/9786/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdk1.8.0_91.txt
          JDK v1.7.0_101 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9786/testReport/
          modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9786/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Show
          hadoopqa Hadoop QA added a comment - -1 overall Vote Subsystem Runtime Comment 0 reexec 0m 34s Docker mode activated. +1 @author 0m 0s The patch does not contain any @author tags. +1 test4tests 0m 0s The patch appears to include 4 new or modified test files. +1 mvninstall 7m 36s branch-2 passed +1 compile 0m 22s branch-2 passed with JDK v1.8.0_91 +1 compile 0m 18s branch-2 passed with JDK v1.7.0_101 +1 checkstyle 0m 17s branch-2 passed +1 mvnsite 0m 20s branch-2 passed +1 mvneclipse 0m 13s branch-2 passed +1 findbugs 0m 36s branch-2 passed +1 javadoc 0m 14s branch-2 passed with JDK v1.8.0_91 +1 javadoc 0m 16s branch-2 passed with JDK v1.7.0_101 +1 mvninstall 0m 14s the patch passed +1 compile 0m 11s the patch passed with JDK v1.8.0_91 +1 javac 0m 11s the patch passed +1 compile 0m 14s the patch passed with JDK v1.7.0_101 +1 javac 0m 14s the patch passed +1 checkstyle 0m 12s the patch passed +1 mvnsite 0m 18s the patch passed +1 mvneclipse 0m 13s the patch passed -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. +1 xml 0m 0s The patch has no ill-formed XML file. +1 findbugs 0m 41s the patch passed -1 javadoc 0m 10s hadoop-tools_hadoop-aws-jdk1.8.0_91 with JDK v1.8.0_91 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) +1 javadoc 0m 14s the patch passed with JDK v1.7.0_101 +1 unit 0m 14s hadoop-aws in the patch passed with JDK v1.7.0_101. +1 asflicense 0m 17s The patch does not generate ASF License warnings. 
15m 3s Subsystem Report/Notes Docker Image:yetus/hadoop:babe025 JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12810891/HADOOP-3733-branch-2-006.patch JIRA Issue HADOOP-3733 Optional Tests asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle uname Linux 73bbaabdd6b0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Build tool maven Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh git revision branch-2 / 7523514 Default Java 1.7.0_101 Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 findbugs v3.0.0 whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/9786/artifact/patchprocess/whitespace-eol.txt javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/9786/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdk1.8.0_91.txt JDK v1.7.0_101 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9786/testReport/ modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9786/console Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org This message was automatically generated.
          raviprak Ravi Prakash added a comment -

          Found it. In S3xLoginHelper.extractLoginDetails we should just do this:

          password = java.net.URLDecoder.decode(login.substring(loginSplit + 1), "UTF-8");
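          Ravi's one-liner can be sketched in isolation to show the effect. This is a hypothetical standalone helper (the class and method names here are illustrative, not the actual S3xLoginHelper code): splitting the URI userinfo on the first ':' and URL-decoding the remainder restores an escaped "%2F" to "/".

          ```java
          import java.io.UnsupportedEncodingException;
          import java.net.URLDecoder;

          public class LoginDecodeSketch {
              /**
               * Split a userinfo component of the form "id:secret" on the first
               * ':' and URL-decode the secret, so "%2F" becomes "/" again.
               * Hypothetical standalone version of the suggested fix.
               */
              static String[] extractLogin(String userInfo)
                      throws UnsupportedEncodingException {
                  int loginSplit = userInfo.indexOf(':');
                  String user = userInfo.substring(0, loginSplit);
                  String password =
                      URLDecoder.decode(userInfo.substring(loginSplit + 1), "UTF-8");
                  return new String[] { user, password };
              }

              public static void main(String[] args) throws Exception {
                  String[] login = extractLogin(
                      "RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv");
                  // The encoded slash is decoded before the key is used for signing.
                  System.out.println(login[1]); // prints the secret with a literal '/'
              }
          }
          ```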
          raviprak Ravi Prakash added a comment -

          I reviewed the patch. Thanks a lot, Steve. It looks great! Nits:

          1. "authority and scheme are not case sensitive"

          authority is case sensitive, isn't it?

          2. In general, checkPath is a little hard for me to understand. Could you please explain in the javadoc what you are checking?

          After these three issues (the decoding fix and these two nits) are addressed, I'm +1.

          stevel@apache.org Steve Loughran added a comment -

          Thanks for finding the final quirk; I'll address it.

          I think we should also do a password length check: require the secret to be exactly 40 characters long, and fail fast if not. At least the error will be meaningful, not a generic "checksum failure", which is on a par with the Kerberos one.

          2. In the javadocs I'll point at the FileSystem one, which is where I lifted it, moving from getAuth to getHost. That's the only difference.
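          The fail-fast length check Steve proposes could look roughly like this (a minimal sketch with a hypothetical class and method name, assuming AWS secret keys are always 40 characters, as the comment states):

          ```java
          public class SecretLengthCheck {
              /**
               * Reject secrets that cannot possibly be valid AWS secret keys,
               * so the user gets a clear message up front instead of a generic
               * SignatureDoesNotMatch failure from the S3 service later.
               */
              static void validateSecret(String secret) {
                  if (secret == null || secret.length() != 40) {
                      throw new IllegalArgumentException(
                          "AWS secret key must be exactly 40 characters long; got "
                          + (secret == null ? 0 : secret.length()));
                  }
              }

              public static void main(String[] args) {
                  // A decoded 40-character secret passes the check.
                  validateSecret("Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv");
                  System.out.println("ok");
              }
          }
          ```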

          stevel@apache.org Steve Loughran added a comment -

          Patch 007
          Ravi's patch + javadocs.

          Tested against S3 Ireland, and on the command line. It's notable that the secrets end up everywhere, such as in the output of the hadoop fs -ls command. Putting AWS login details in URLs is just wrong.

          raviprak Ravi Prakash added a comment -

          Thanks a lot, Steve! +1. Will commit to trunk and branch-2 shortly!

          raviprak Ravi Prakash added a comment -

          Thanks a lot everyone for your contributions on this long-standing issue. I'm glad we could close it out thanks to Steve!

          cnauroth Chris Nauroth added a comment -

          Ravi Prakash, thank you very much for the thorough code review and testing.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-trunk-Commit #9971 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9971/)
          HADOOP-3733. "s3x:" URLs break when Secret Key contains a slash, even if (raviprak: rev 4aefe119a0203c03cdc893dcb3330fd37f26f0ee)

          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ACredentialsInURL.java
          • hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/S3xLoginHelper.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3/TestS3FileSystem.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AConfiguration.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3Credentials.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/TestS3xLoginHelper.java
          stevel@apache.org Steve Loughran added a comment -

          pulled the fix into Hadoop 2.8 as well; changed the "fixed for" version marker appropriately.

          cnauroth Chris Nauroth added a comment -

          The new test fails if your AWS secret key contains a '+'. I have posted a patch on HADOOP-13287.
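          The '+' failure follows from java.net.URLDecoder's application/x-www-form-urlencoded rules: a literal '+' decodes to a space, so a secret key containing '+' is silently corrupted unless it is escaped as %2B. A tiny standalone snippet illustrates the pitfall:

          ```java
          import java.net.URLDecoder;

          public class PlusPitfall {
              public static void main(String[] args) throws Exception {
                  // URLDecoder follows form-encoding rules, so a literal '+'
                  // in the userinfo decodes to a space...
                  System.out.println(URLDecoder.decode("abc+def", "UTF-8"));
                  // prints: abc def

                  // ...while an escaped "%2B" correctly survives as '+'.
                  System.out.println(URLDecoder.decode("abc%2Bdef", "UTF-8"));
                  // prints: abc+def
              }
          }
          ```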


            People

            • Assignee:
              stevel@apache.org Steve Loughran
            • Reporter:
              stuartsierra Stuart Sierra
            • Votes: 9
            • Watchers: 35