Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.8.0
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: fs/s3
    • Labels:
      None
    • Release Note:
      S3A now supports read access to a public S3 bucket even if the client does not configure any AWS credentials. See the documentation of configuration property fs.s3a.aws.credentials.provider for further details.

Description

If an S3 bucket is public, anyone should be able to read from it.

However, you cannot create an S3A client bound to a public bucket unless you have some credentials; the doesBucketExist() check rejects the call.
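
A minimal reproduction, sketched as a hypothetical driver class (the class name is invented for illustration). It assumes no AWS credentials are configured anywhere; per the stack trace in the first comment below, the failure surfaces inside FileSystem.get() during S3A initialization.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PublicBucketRepro {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // landsat-pds is a public bucket, so a read should not need credentials...
        FileSystem fs = FileSystem.get(URI.create("s3a://landsat-pds/"), conf);
        // ...but this line is never reached: S3AFileSystem.initialize() calls
        // verifyBucketExists() -> doesBucketExist(), which fails with
        // "Unable to load AWS credentials from any provider in the chain".
        fs.getFileStatus(new Path("/run_info.json"));
      }
    }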

Attachments

1. HADOOP-13237-branch-2.002.patch (10 kB, Chris Nauroth)
2. HADOOP-13237.001.patch (4 kB, Chris Nauroth)

        Activity

        stevel@apache.org Steve Loughran added a comment -

Stack:

        16/06/03 21:40:37 INFO BlockManagerMasterEndpoint: Registering block manager localhost:60011 with 511.1 MB RAM, BlockManagerId(driver, localhost, 60011)
        16/06/03 21:40:37 INFO BlockManagerMaster: Registered BlockManager
        16/06/03 21:40:39 ERROR S3ALineCount: Failed to execute line count
        org.apache.hadoop.fs.s3a.AWSClientIOException: doesBucketExist on landsat-pds: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain: Unable to load AWS credentials from any provider in the chain
        	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:82)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:300)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:267)
        	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793)
        	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:101)
        	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2830)
        	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2812)
        	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
        	at org.apache.spark.cloud.s3.examples.S3ALineCount$.innerMain(S3ALineCount.scala:75)
        	at org.apache.spark.cloud.s3.examples.S3ALineCount$.main(S3ALineCount.scala:50)
        	at org.apache.spark.cloud.s3.examples.S3ALineCount.main(S3ALineCount.scala)
        	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        	at java.lang.reflect.Method.invoke(Method.java:498)
        	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
        Caused by: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
        	at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
        	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3779)
        	at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1107)
        	at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1070)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:288)
        	... 18 more
        
        stevel@apache.org Steve Loughran added a comment -

        Should we maybe be more relaxed about failures of verifying a bucket exists on startup?

        I'll try and experiment with downgrading to a warn and seeing what happens to a test run.

        Irony: we never see this problem in hadoop-aws test runs, because they only run if you have credentials.

        cnauroth Chris Nauroth added a comment -

This looks to me like AnonymousAWSCredentials is fundamentally unusable in an AWSCredentialsProviderChain.

AnonymousAWSCredentials is hard-coded to return a null key and secret.

        https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/java/com/amazonaws/auth/AnonymousAWSCredentials.java#L26-L38

        However, the chain is coded to throw an exception if it walks the whole chain and can't find a non-null key and secret.

        https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/java/com/amazonaws/auth/AWSCredentialsProviderChain.java#L108-L132

        I'd be curious if it works when you swap out the credentials = new AWSCredentialsProviderChain(...) line for a straight call to credentials = new AnonymousAWSCredentialsProvider(). If it does, then I think this could be interpreted as a bug in the AWS SDK, and we might consider filing a patch to that project.

        In the absence of AWS SDK changes, we could have a configuration property like fs.s3a.anonymous.access, which if true would skip the chain and just create the anonymous provider. Actually, it might be good for anonymous access to be opt-in via configuration anyway, because I expect most deployments wouldn't want anonymous access and would prefer to fail fast so they know to lock down their bucket.
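
To make the mismatch concrete, here is a simplified sketch of what the chain's lookup amounts to (paraphrased from the linked SDK sources; not the SDK's actual code):

    import java.util.List;
    import com.amazonaws.AmazonClientException;
    import com.amazonaws.auth.AWSCredentials;
    import com.amazonaws.auth.AWSCredentialsProvider;

    class ChainLookupSketch {
      static AWSCredentials resolve(List<AWSCredentialsProvider> providers) {
        for (AWSCredentialsProvider provider : providers) {
          AWSCredentials credentials = provider.getCredentials();
          // AnonymousAWSCredentials returns null for both key and secret, so
          // this check skips it even though anonymous access would have worked.
          if (credentials != null
              && credentials.getAWSAccessKeyId() != null
              && credentials.getAWSSecretKey() != null) {
            return credentials;
          }
        }
        throw new AmazonClientException(
            "Unable to load AWS credentials from any provider in the chain");
      }
    }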

        stevel@apache.org Steve Loughran added a comment -

I don't see us being able to fix this; I've tried to bypass auth or insert fake credentials, and the S3 reads of the public landsat dataset fail at the verifyBucketExists() call. Comment that out and it fails on the first read. That's even though the datasets are visible over HTTP.

Assumption: you really need credentials to use the AWS library, even if you are accessing other people's public data. The client is presumably setting up auth without negotiating requirements with the far end, and bailing out early if there aren't any. And if you make up credentials, they get rejected on the S3 side for being invalid.

        cnauroth Chris Nauroth added a comment -

        Hello Steve Loughran. I got curious about this, and I think I have a solution, so I'm reopening and attaching a patch. This is an incomplete patch just to communicate the idea, so I won't click Submit Patch yet.

        I mentioned before that I think anonymous access should be opt-in only through explicit configuration, so users don't mistakenly set up an insecure deployment. Instead of adding a new property, I now think the existing fs.s3a.aws.credentials.provider should be fine for this. By setting it equal to AnonymousAWSCredentialsProvider, it should bypass the credentials chain (which insists on finding non-null credentials) and instead use anonymous credentials directly.

        Unfortunately, there is a bug with that. The reflection-based credential provider initialization logic demands that the class have a constructor that accepts a URI and a Configuration. That wouldn't make sense for an AnonymousAWSCredentialsProvider, so I've added a fallback path to the initialization to support calling a default constructor.
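
A rough sketch of that fallback (not the exact patch code; the class and method names here are placeholders):

    import java.net.URI;
    import com.amazonaws.auth.AWSCredentialsProvider;
    import org.apache.hadoop.conf.Configuration;

    class CredentialProviderFactorySketch {
      static AWSCredentialsProvider create(Class<?> credClass, URI uri,
          Configuration conf) throws Exception {
        try {
          // Preferred path: the (URI, Configuration) constructor that the
          // existing reflection logic already demands.
          return (AWSCredentialsProvider) credClass
              .getDeclaredConstructor(URI.class, Configuration.class)
              .newInstance(uri, conf);
        } catch (NoSuchMethodException e) {
          // Fallback path: a default constructor, which is all a stateless
          // provider like AnonymousAWSCredentialsProvider needs.
          return (AWSCredentialsProvider) credClass
              .getDeclaredConstructor()
              .newInstance();
        }
      }
    }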

        I tested this by removing my S3A credentials from configuration and trying to access the public landsat-pds bucket. I was able to repro the bug you reported. Then, I applied my patch, retried, and it worked fine.

        > hadoop fs -cat s3a://landsat-pds/run_info.json
        cat: doesBucketExist on landsat-pds: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain: Unable to load AWS credentials from any provider in the chain
        
        > hadoop fs -Dfs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider -cat s3a://landsat-pds/run_info.json
        {"active_run": "unknown on ip-10-144-75-61 started at 2016-06-06 18:09:24.791372 (landsat_ingestor_exec.py)", "last_run": 4215}
        

        Is this what you had in mind? If so, let me know, and I'll finish off the remaining work for this patch:

        1. Add a unit test for anonymous access.
        2. Update documentation of fs.s3a.aws.credentials.provider in core-default.xml.
        3. Update hadoop-aws site documentation with more discussion of fs.s3a.aws.credentials.provider.
        4. Any other feedback from you or other code reviewers.
        stevel@apache.org Steve Loughran added a comment -

        wow, good find.

1. we could have an anon provider subclass which has the constructor; that would eliminate the need to have a handler (see the sketch after this list).
        2. maybe also: log @ Info?
        3. this should be straightforward to test
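
One reading of suggestion 1, sketched under the assumption that the (URI, Configuration) constructor would simply ignore its arguments; note that the later comments explain the patch kept a plain default constructor plus a reflection fallback instead.

    import java.net.URI;
    import com.amazonaws.auth.AWSCredentials;
    import com.amazonaws.auth.AWSCredentialsProvider;
    import com.amazonaws.auth.AnonymousAWSCredentials;
    import org.apache.hadoop.conf.Configuration;

    public class AnonymousAWSCredentialsProvider implements AWSCredentialsProvider {
      // Matches the constructor signature the reflection logic looks for;
      // the arguments are intentionally unused, since anonymous access
      // carries no per-filesystem state.
      public AnonymousAWSCredentialsProvider(URI uri, Configuration conf) {
      }

      @Override
      public AWSCredentials getCredentials() {
        return new AnonymousAWSCredentials();
      }

      @Override
      public void refresh() {
      }
    }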
        cnauroth Chris Nauroth added a comment -

        Here is patch 002 for branch-2. I'm currently doing a full test run against an S3 bucket in US-west-2.

        • Documentation updated.
        • Tests added.

        we could have an anon provider subclass which has the constructor; that would eliminate the need to have a handler.

        I'm not sure I understood this comment. AnonymousAWSCredentialsProvider is our own code in S3A, so we have control over the constructors we want it to provide. I considered providing a constructor that accepts and ignores a URI and Configuration, but I thought it would cause confusion to see a constructor with unused arguments. Instead, I expanded the reflection logic to support calling the default constructor. I haven't yet made any changes related to this in this revision of the patch, so if you still want to request changes, please let me know.

        maybe also: log @ Info?

        I looked into this. Unfortunately, info-level logging would propagate out to stderr in the shell example I gave earlier, and this would be undesirable output. Maybe the existing debug-level logging is sufficient?

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 28s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        0 mvndep 0m 31s Maven dependency ordering for branch
        +1 mvninstall 6m 34s branch-2 passed
        +1 compile 5m 42s branch-2 passed with JDK v1.8.0_91
        +1 compile 6m 26s branch-2 passed with JDK v1.7.0_101
        +1 checkstyle 1m 27s branch-2 passed
        +1 mvnsite 1m 15s branch-2 passed
        +1 mvneclipse 0m 27s branch-2 passed
        +1 findbugs 2m 7s branch-2 passed
        +1 javadoc 1m 13s branch-2 passed with JDK v1.8.0_91
        +1 javadoc 1m 21s branch-2 passed with JDK v1.7.0_101
        0 mvndep 0m 14s Maven dependency ordering for patch
        +1 mvninstall 0m 56s the patch passed
        +1 compile 5m 39s the patch passed with JDK v1.8.0_91
        +1 javac 5m 39s the patch passed
        +1 compile 6m 25s the patch passed with JDK v1.7.0_101
        +1 javac 6m 25s the patch passed
        +1 checkstyle 1m 22s root: The patch generated 0 new + 9 unchanged - 2 fixed = 9 total (was 11)
        +1 mvnsite 1m 17s the patch passed
        +1 mvneclipse 0m 26s the patch passed
        -1 whitespace 0m 0s The patch has 49 line(s) that end in whitespace. Use git apply --whitespace=fix.
        +1 xml 0m 1s The patch has no ill-formed XML file.
        +1 findbugs 2m 36s the patch passed
        +1 javadoc 1m 6s the patch passed with JDK v1.8.0_91
        +1 javadoc 1m 24s the patch passed with JDK v1.7.0_101
        -1 unit 8m 13s hadoop-common in the patch failed with JDK v1.7.0_101.
        +1 unit 0m 13s hadoop-aws in the patch passed with JDK v1.7.0_101.
        +1 asflicense 0m 21s The patch does not generate ASF License warnings.
        76m 48s



        Reason Tests
        JDK v1.8.0_91 Timed out junit tests org.apache.hadoop.http.TestHttpServerLifecycle
        JDK v1.7.0_101 Failed junit tests hadoop.metrics2.impl.TestGangliaMetrics



        Subsystem Report/Notes
Docker Image: yetus/hadoop:babe025
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12808793/HADOOP-13237-branch-2.002.patch
        JIRA Issue HADOOP-13237
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
        uname Linux d7b23e98125f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2 / 154c7c3
        Default Java 1.7.0_101
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101
        findbugs v3.0.0
        whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/9682/artifact/patchprocess/whitespace-eol.txt
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9682/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_101.txt
        JDK v1.7.0_101 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9682/testReport/
        modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9682/console
        Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        cnauroth Chris Nauroth added a comment -

        I have completed a successful parallel test run against US-west-2. The test failures in the last pre-commit run are unrelated.

        stevel@apache.org Steve Loughran added a comment -

        LGTM...doing a test run

        stevel@apache.org Steve Loughran added a comment -

        +1, works for me.

        stevel@apache.org Steve Loughran added a comment -

        (commit in progress, just being strict and testing on every branch before pushing up results)

        hudson Hudson added a comment -

        SUCCESS: Integrated in Hadoop-trunk-Commit #9936 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9936/)
        HADOOP-13237: s3a initialization against public bucket fails if caller (stevel: rev 656c460c0e79ee144d6ef48d85cec04a1af3b2cc)

        • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
        • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BasicAWSCredentialsProvider.java
        • hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
        • hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
        • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
        • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/AnonymousAWSCredentialsProvider.java

          People

• Assignee:
  cnauroth Chris Nauroth
• Reporter:
  stevel@apache.org Steve Loughran
• Votes:
  0
• Watchers:
  5
