Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 2.7.0
    • Fix Version/s: 2.7.0
    • Component/s: fs/s3
    • Labels:
      None
    • Target Version/s:

      Description

      One big advantage provided by the s3a filesystem is the ability to use an IAM instance profile in order to authenticate when attempting to access an S3 bucket from an EC2 instance. This eliminates the need to deploy AWS account credentials to the instance or to provide them to Hadoop via the fs.s3a.awsAccessKeyId and fs.s3a.awsSecretAccessKey params.
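      For reference, parameter-based authentication of that era was configured with the two keys named above in core-site.xml. A sketch with placeholder values (omit both properties entirely to rely on an IAM instance profile instead):

      ```xml
      <!-- Sketch only: property names as given in this issue's description;
           values are placeholders. -->
      <configuration>
        <property>
          <name>fs.s3a.awsAccessKeyId</name>
          <value>YOUR_ACCESS_KEY_ID</value>
        </property>
        <property>
          <name>fs.s3a.awsSecretAccessKey</name>
          <value>YOUR_SECRET_ACCESS_KEY</value>
        </property>
      </configuration>
      ```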

      The patch submitted to resolve HADOOP-10714 breaks this behavior by using the S3Credentials class to read the value of these two params. The change in question is presented below:

      S3AFileSystem.java, lines 161-170:

          // Try to get our credentials or just connect anonymously
          S3Credentials s3Credentials = new S3Credentials();
          s3Credentials.initialize(name, conf);
      
          AWSCredentialsProviderChain credentials = new AWSCredentialsProviderChain(
              new BasicAWSCredentialsProvider(s3Credentials.getAccessKey(),
                                              s3Credentials.getSecretAccessKey()),
              new InstanceProfileCredentialsProvider(),
              new AnonymousAWSCredentialsProvider()
          );
      

      As you can see, the getAccessKey() and getSecretAccessKey() methods from the S3Credentials class are now used to provide constructor arguments to BasicAWSCredentialsProvider. These methods raise an exception if the fs.s3a.awsAccessKeyId or fs.s3a.awsSecretAccessKey param, respectively, is missing. If a user is relying on an IAM instance profile to authenticate to an S3 bucket and therefore doesn't supply values for these params, they will receive an exception and won't be able to access the bucket.
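      The intended behavior of the provider chain is that a provider which fails is skipped and the next one is tried; the regression is that S3Credentials.initialize() throws while the first provider's constructor arguments are being computed, before the chain is ever consulted. A minimal, self-contained sketch of that fall-through pattern (illustrative names only, no AWS SDK dependency):

      ```java
      import java.util.Arrays;
      import java.util.List;

      class CredentialChainSketch {
          /** A provider either returns credentials or throws. */
          interface Provider {
              String getCredentials();
          }

          /** Mimics a basic key/secret provider: fails when the keys are absent. */
          static Provider basic(final String accessKey, final String secretKey) {
              return () -> {
                  if (accessKey == null || secretKey == null) {
                      throw new IllegalStateException("access/secret key not set");
                  }
                  return "basic:" + accessKey;
              };
          }

          /** Mimics an instance-profile provider: succeeds on EC2 with an IAM role. */
          static Provider instanceProfile(final boolean onEc2WithRole) {
              return () -> {
                  if (!onEc2WithRole) {
                      throw new IllegalStateException("no instance profile");
                  }
                  return "iam-role";
              };
          }

          /** The chain tries each provider in order, swallowing per-provider failures. */
          static String resolve(List<Provider> chain) {
              for (Provider p : chain) {
                  try {
                      return p.getCredentials();
                  } catch (RuntimeException e) {
                      // fall through to the next provider in the chain
                  }
              }
              throw new IllegalStateException(
                  "Unable to load credentials from any provider");
          }

          public static void main(String[] args) {
              // No config keys set, but an IAM instance profile is available:
              // the chain falls through to the instance-profile provider.
              System.out.println(resolve(Arrays.asList(
                  basic(null, null), instanceProfile(true))));
          }
      }
      ```

      The bug described above defeats this design: the exception fires outside the chain, so the fall-through never gets a chance to run.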

      1. HADOOP-11670.002.patch
        2 kB
        Adam Budde
      2. HADOOP-11670-001.patch
        1 kB
        Thomas Demoor
      3. HADOOP-11670-003.patch
        4 kB
        Steve Loughran

        Issue Links

          Activity

          stevel@apache.org Steve Loughran added a comment -

          looks more like HADOOP-10714 was the change that did this

          budde Adam Budde added a comment -

          My mistake-- looks like you're correct. I've updated the description.

          thodemoor Thomas Demoor added a comment -

          Really quick fix. Tested that adding credentials to core-site still works. DID NOT TEST IAM.

          stevel@apache.org Steve Loughran added a comment -

          -1.

          That's too ugly. I don't even approve of using string checks in tests on account of brittleness; doing it in production is way out.

          I propose we just back out that bit of the HADOOP-10714 patch which changed the credential init; low cost, less risk

          budde Adam Budde added a comment -

          I have a patch ready that reverts the relevant portion of HADOOP-10714. Waiting for my employer to sign off on contributing it.

          stevel@apache.org Steve Loughran added a comment -

          Adam, that'd be good. I presume you've tested it?

          budde Adam Budde added a comment -

          Revert portion of HADOOP-10714 that breaks IAM auth

          budde Adam Budde added a comment -

          Steve-- I've tested that both the IAM instance profile auth as well as the standard parameter auth both work as expected. All of the tests for hadoop-tools pass as well. I'm getting Javadoc errors when I run the test-patch.sh script, but these are coming from the hadoop-yarn-api module and have been popping up for the past few weeks when I try to do a Hadoop build (I've been using the "-Dmaven.javadoc.skip=true" flag because of this). I haven't seen an issue opened for this, so I'll look into it and see if perhaps something isn't configured right on my end.

          Since this patch simply rolls back the relevant portion of the code to its state prior to HADOOP-10714, there could be some simple improvements we might want to roll into this patch, such as using conf.getTrimmed() instead of conf.get().
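          The getTrimmed() suggestion guards against whitespace accidentally pasted into config values (a trailing newline in a secret key silently breaks auth). A plain-Java sketch of the difference, not Hadoop's actual Configuration class:

          ```java
          import java.util.HashMap;
          import java.util.Map;

          class TrimSketch {
              /** Raw lookup, as conf.get() would return the stored value. */
              static String get(Map<String, String> conf, String key) {
                  return conf.get(key);
              }

              /** Trimmed lookup, as conf.getTrimmed() strips surrounding whitespace. */
              static String getTrimmed(Map<String, String> conf, String key) {
                  String v = conf.get(key);
                  return v == null ? null : v.trim();
              }

              public static void main(String[] args) {
                  Map<String, String> conf = new HashMap<>();
                  // A trailing newline copied into core-site.xml corrupts the key:
                  conf.put("fs.s3a.awsSecretAccessKey", "SECRET\n");
                  System.out.println("raw length: "
                      + get(conf, "fs.s3a.awsSecretAccessKey").length());      // 7
                  System.out.println("trimmed length: "
                      + getTrimmed(conf, "fs.s3a.awsSecretAccessKey").length()); // 6
              }
          }
          ```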

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12703087/HADOOP-11670.002.patch
          against trunk revision 95bfd08.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-tools/hadoop-aws.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/5871//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/5871//console

          This message is automatically generated.

          budde Adam Budde added a comment -

          No new tests are needed for this patch as it does not introduce new features or functionality.

          In order to manually verify this patch, I copied the patched distro to an AWS EC2 instance configured with an IAM instance profile that authorizes access to several S3 buckets. I used the 'hdfs dfs' command in order to verify that I could access these buckets without providing credentials. I then set the fs.s3a.awsAccessKeyId and fs.s3a.awsSecretAccessKey parameters to the credentials for an AWS account with permission to access additional S3 buckets not included in the IAM instance profile. I then used 'hdfs dfs' to verify that the standard parameter-based authentication was functioning properly and correctly allowed access to these additional buckets.

          stevel@apache.org Steve Loughran added a comment -

          All the S3a tests are now failing for me

          
          testOutputStreamClosedTwice(org.apache.hadoop.fs.s3a.TestS3AFileSystemContract)  Time elapsed: 0.01 sec  <<< ERROR!
          com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
          	at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
          	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
          	at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
          	at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
          	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
          	at org.apache.hadoop.fs.s3a.S3ATestUtils.createTestFileSystem(S3ATestUtils.java:51)
          	at org.apache.hadoop.fs.s3a.TestS3AFileSystemContract.setUp(TestS3AFileSystemContract.java:46)
          
          
          budde Adam Budde added a comment -

          Didn't see that you need to enable the tests-on profile to actually run the hadoop-aws tests. I'm seeing the failures too now. Taking a look.

          budde Adam Budde added a comment -

          What is the proper way to execute the hadoop-aws tests? I'm trying to execute 'mvn test -Ptests-on' in the hadoop-tools/hadoop-aws dir and every test fails with the following exception:

          Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
          Running org.apache.hadoop.fs.s3native.TestS3NInMemoryFileSystem
          Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.239 sec <<< FAILURE! - in org.apache.hadoop.fs.s3native.TestS3NInMemoryFileSystem
          testBasicReadWriteIO(org.apache.hadoop.fs.s3native.TestS3NInMemoryFileSystem)  Time elapsed: 0.209 sec  <<< ERROR!
          java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/mnt/md0/build/hadoop/hadoop-tools/hadoop-aws/target/test-classes/core-site.xml; lineNumber: 47; columnNumber: 36; An include with href 'auth-keys.xml' failed, and no fallback element was found.
                  at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:257)
                  at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:348)
                  at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
                  at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2499)
                  at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2487)
                  at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2558)
                  at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2511)
                  at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2424)
                  at org.apache.hadoop.conf.Configuration.get(Configuration.java:998)
                  at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1048)
                  at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1449)
                  at org.apache.hadoop.fs.FileSystem.initialize(FileSystem.java:204)
                  at org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:322)
                  at org.apache.hadoop.fs.s3native.TestS3NInMemoryFileSystem.setUp(TestS3NInMemoryFileSystem.java:44)
          
          stevel@apache.org Steve Loughran added a comment -

          looking more, the HADOOP-10714 patch didn't just disable IAM, it changed the config names for non-IAM binding. As such it became an incompatible change across both auth methods. The patch -002 fixes it in the source (why my test runs failed); just needs the docs to catch up, which I can do

          stevel@apache.org Steve Loughran added a comment -

          Patch -003; same as patch -002 with the docs in sync

          budde Adam Budde added a comment -

          Ok, I found the comment included in hadoop-tools/hadoop-aws/test/resources/core-site.xml about creating an auth-keys.xml file, so you can disregard my previous comment.

          How would you feel about changing the parameter names in Constants.java to match the changes made in HADOOP-10714? While this would make it so this patch isn't a pure revert, the changed names in HADOOP-10714 are consistent with those used for s3 and s3n. Could save some frustration for users who are on a snapshot build and have already adopted this change.
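          For anyone else hitting the same test-setup error: the test core-site.xml pulls per-user credentials in via an XInclude, roughly like this (a sketch only; file names per the SAXParseException and the comment above):

          ```xml
          <!-- Sketch of the XInclude in the hadoop-aws test core-site.xml. -->
          <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
            <xi:include href="auth-keys.xml"/>
          </configuration>
          ```

          The auth-keys.xml file (not checked in) then supplies the fs.s3a.awsAccessKeyId and fs.s3a.awsSecretAccessKey properties for the test run.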

          stevel@apache.org Steve Loughran added a comment -
          1. HADOOP-11901 did actually add support for both key names, I don't remember why they changed.
          2. If someone adopts a snapshot build they are going to have to change their code again. I'm reluctant to make the source tree overcomplex to handle a situation that only existed in a SNAPSHOT between releases; there are no guarantees of compatibility there. (i.e. while we care about 2.6 -> 2.7, we don't care about 2.7.0-SNAPSHOT -> 2.7.0, as long as the 2.6 -> 2.7 compatibility relationship holds)

          Submitting patch -003 to jenkins. If jenkins is happy, my local test runs are passing, so I'll put it in. It is a return to the 2.6 code path, after all

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12703143/HADOOP-11670-003.patch
          against trunk revision 608ebd5.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-tools/hadoop-aws.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/5877//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/5877//console

          This message is automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          +1 jenkins happy, steve happy: committing. Thanks for finding this and for fixing it!

          stevel@apache.org Steve Loughran added a comment -

          As a quick postmortem on this regression:

          1. I think we put too much stuff into HADOOP-10714; it could have been better split into tests, a fix for the single JIRA-titled issue and, independently, any other "optimisations" & doc changes.
          2. we don't have any jenkins servers testing all auth mechanisms on a regular basis

          At the very least we need someone to have a jenkins setup that runs regularly, looking for regressions in s3 & swift support. I know this is done for azure.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #7279 (See https://builds.apache.org/job/Hadoop-trunk-Commit/7279/)
          HADOOP-11670. Regression: s3a auth setup broken. (Adam Budde via stevel) (stevel: rev 64443490d7f189e8e42d284615f3814ef751395a)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #127 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/127/)
          HADOOP-11670. Regression: s3a auth setup broken. (Adam Budde via stevel) (stevel: rev 64443490d7f189e8e42d284615f3814ef751395a)

          • hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #861 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/861/)
          HADOOP-11670. Regression: s3a auth setup broken. (Adam Budde via stevel) (stevel: rev 64443490d7f189e8e42d284615f3814ef751395a)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2059 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2059/)
          HADOOP-11670. Regression: s3a auth setup broken. (Adam Budde via stevel) (stevel: rev 64443490d7f189e8e42d284615f3814ef751395a)

          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
          • hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #118 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/118/)
          HADOOP-11670. Regression: s3a auth setup broken. (Adam Budde via stevel) (stevel: rev 64443490d7f189e8e42d284615f3814ef751395a)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
          • hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #127 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/127/)
          HADOOP-11670. Regression: s3a auth setup broken. (Adam Budde via stevel) (stevel: rev 64443490d7f189e8e42d284615f3814ef751395a)

          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2077 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2077/)
          HADOOP-11670. Regression: s3a auth setup broken. (Adam Budde via stevel) (stevel: rev 64443490d7f189e8e42d284615f3814ef751395a)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
          • hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md

            People

            • Assignee:
              budde Adam Budde
              Reporter:
              budde Adam Budde
            • Votes:
              0
              Watchers:
              6

              Dates

              • Created:
                Updated:
                Resolved:

                Development