Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.8.0
    • Fix Version/s: 3.0.0-beta1
    • Component/s: fs/s3
    • Labels:
      None
    • Release Note:
      S3A now defaults to using the "v2" S3 list API, which speeds up large-scale path listings. Non-AWS S3 implementations may not support this API: consult the S3A documentation on how to revert to the v1 API.

      Description

      Unlike version 1 of the S3 List Objects API, version 2 by default does not fetch object owner information, which S3A doesn't need anyway. By switching to v2, there will be less data to transfer/process. Also, it should be more robust when listing a versioned bucket with "a large number of delete markers" (according to AWS).

      Methods in S3AFileSystem that use this API include:

      • getFileStatus(Path)
      • innerDelete(Path, boolean)
      • innerListStatus(Path)
      • innerRename(Path, Path)

      Requires AWS SDK 1.10.75 or later.
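
       For illustration, a minimal sketch of a v2 list call made directly against the AWS SDK for Java (not S3A code; the bucket and prefix are hypothetical, and the client builder shown here needs a 1.11.x SDK):

       import com.amazonaws.services.s3.AmazonS3;
       import com.amazonaws.services.s3.AmazonS3ClientBuilder;
       import com.amazonaws.services.s3.model.ListObjectsV2Request;
       import com.amazonaws.services.s3.model.ListObjectsV2Result;
       import com.amazonaws.services.s3.model.S3ObjectSummary;

       public class ListV2Example {
         public static void main(String[] args) {
           AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
           ListObjectsV2Request req = new ListObjectsV2Request()
               .withBucketName("example-bucket")   // hypothetical bucket
               .withPrefix("data/")                // hypothetical prefix
               .withMaxKeys(1000);
           // Unlike v1, object owner information is omitted unless explicitly requested:
           // req.setFetchOwner(true);
           ListObjectsV2Result result = s3.listObjectsV2(req);
           for (S3ObjectSummary summary : result.getObjectSummaries()) {
             System.out.println(summary.getKey() + " (" + summary.getSize() + " bytes)");
           }
         }
       }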

      1. HADOOP-13421-HADOOP-13345.001.patch
        27 kB
        Aaron Fabbri
      2. HADOOP-13421.002.patch
        35 kB
        Aaron Fabbri
      3. HADOOP-13421.003.patch
        36 kB
        Aaron Fabbri
      4. HADOOP-13421.004.patch
        37 kB
        Aaron Fabbri

        Issue Links

          Activity

          stevel@apache.org Steve Loughran added a comment -

           This could be useful. How tangible are the speedups in your experience?

          slider Steven K. Wong added a comment -

           Listing 100 keys, the time taken is 15% shorter when tested with the Python-based AWS CLI, and the XML response size on the wire is 25% smaller. YMMV. In particular, the size difference can vary significantly depending on whether the returned keys (not the objects they reference) are long or short. Each of my keys in this case is about 160 bytes long.

          stevel@apache.org Steve Loughran added a comment -

           We may need to make this optional, to support other S3 endpoints. Pieter Reuse, what would you suggest?

          stevel@apache.org Steve Loughran added a comment -

          Listing v2 will also improve LIST availability for object stores with lots of delete markers.

          PieterReuse Pieter Reuse added a comment -

          Thank you for putting this on my radar, Steve Loughran. I will look deeper into it and come back on this in the course of next week.

          Thomas Demoor Thomas Demoor added a comment -

           Thanks for reaching out, Steve Loughran (Pieter Reuse discussed this with me).

           This API call is quite recent (May 4th, 2016), so there are a lot of legacy systems which will not support it. Our latest release does, but I think that if we don't make this optional, the majority of other S3 endpoints will break.

           Making this optional seems relatively low cost. I quickly checked, and we don't seem to use the marker functionality in the v1 implementation, which is replaced by start-after in v2.
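
           For reference, a minimal standalone sketch of the two positioning parameters in the AWS SDK for Java (illustrative only, not S3A code; bucket and key names are hypothetical):

           import com.amazonaws.services.s3.model.ListObjectsRequest;
           import com.amazonaws.services.s3.model.ListObjectsV2Request;

           public class ListPositioningExample {
             public static void main(String[] args) {
               // v1: resume a listing after a given key via the "marker" parameter.
               ListObjectsRequest v1Req = new ListObjectsRequest()
                   .withBucketName("example-bucket")   // hypothetical bucket
                   .withMarker("data/part-00099");     // hypothetical key

               // v2: the equivalent positioning parameter is "start-after";
               // paging across result pages uses continuation tokens instead.
               ListObjectsV2Request v2Req = new ListObjectsV2Request()
                   .withBucketName("example-bucket")
                   .withStartAfter("data/part-00099");

               System.out.println(v1Req.getMarker() + " / " + v2Req.getStartAfter());
             }
           }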

          stevel@apache.org Steve Loughran added a comment -

          Anyone got a patch for this we could get done in time for 3.0 beta-1?

          fabbri Aaron Fabbri added a comment -

           I'd be happy to work on this. If anyone else has a patch in the works, though, let me know.

          fabbri Aaron Fabbri added a comment -

           Got this working, with a v1 compatibility config knob (off by default). Now I'm working on InconsistentAmazonS3Client; I need to instrument the v2 APIs with the failure injection code.

          stevel@apache.org Steve Loughran added a comment -

           OK, let's get the HADOOP-13345 merge rounded off and in, and then do this.

          fabbri Aaron Fabbri added a comment -

          Agreed.

          fabbri Aaron Fabbri added a comment -

          Attaching v1 patch, which is based on the HADOOP-13345 feature branch.

           Still needs some new test cases. The v2 list API is now the default, so I'm thinking I'll just add a couple of sanity-check integration tests that create an S3AFileSystem with v1 mode enabled and do some operations that depend on list. Open to input here.

           Tested in us-west-2 with and without DynamoDB. I'm still having issues getting 100% clean parallel test runs.

          fabbri Aaron Fabbri added a comment -

           Whoever added the forced list response paging to ITestS3AContractGetFileStatus, thank you. I was going to add that, then saw it is already there.

           It also explains why that test was timing out with the v2 list, and not just my slow home internet. I needed to change this bit:

          diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          index e8b739432d1..eb80d37a12f 100644
          --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          @@ -1113,7 +1113,7 @@ protected ListObjectsV2Result continueListObjects(ListObjectsV2Request req,
                 ListObjectsV2Result objects) {
               incrementStatistic(OBJECT_CONTINUE_LIST_REQUESTS);
               incrementReadOperations();
          -    req.setContinuationToken(objects.getContinuationToken());
          +    req.setContinuationToken(objects.getNextContinuationToken());
               return s3.listObjectsV2(req);
             }
          

           So, the v2 response has two continuation token fields, ContinuationToken and NextContinuationToken. It turns out I was using the former and retrieving the same two results over and over. Gave me a giggle, had to share. V2 patch coming soon.
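
           For anyone who hits the same thing, a minimal sketch of the intended paging loop against the SDK directly (not the S3A code; the bucket name is hypothetical). The key point is that NextContinuationToken from each response feeds the next request:

           import com.amazonaws.services.s3.AmazonS3;
           import com.amazonaws.services.s3.AmazonS3ClientBuilder;
           import com.amazonaws.services.s3.model.ListObjectsV2Request;
           import com.amazonaws.services.s3.model.ListObjectsV2Result;
           import com.amazonaws.services.s3.model.S3ObjectSummary;

           public class ListV2PagingExample {
             public static void main(String[] args) {
               AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
               ListObjectsV2Request req = new ListObjectsV2Request()
                   .withBucketName("example-bucket")   // hypothetical bucket
                   .withMaxKeys(2);                    // tiny page size to force paging
               ListObjectsV2Result result;
               do {
                 result = s3.listObjectsV2(req);
                 for (S3ObjectSummary summary : result.getObjectSummaries()) {
                   System.out.println(summary.getKey());
                 }
                 // NextContinuationToken (not ContinuationToken) points at the next page.
                 req.setContinuationToken(result.getNextContinuationToken());
               } while (result.isTruncated());
             }
           }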

          stevel@apache.org Steve Loughran added a comment -

          Nothing wrong with getting things utterly wrong...it's what the tests are there to catch.

           The specific test you mention went in with all the listing rework: too much code to test. See also HADOOP-10714. If you play with some of the scale test options, you can create larger directory sets too.

          stevel@apache.org Steve Loughran added a comment -

           W.r.t. regression testing, you could subclass some listing test and just change the config at creation time to use the old settings. That way: consistent tests, no need to maintain a new set, easier to compare regressions between the two options, and so on.
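
           A minimal sketch of what such a subclass could look like, assuming the usual contract-test createConfiguration() override point and the option name settled on later in this thread (fs.s3a.list.version); the test that was eventually committed may differ:

           import org.apache.hadoop.conf.Configuration;

           // Hypothetical subclass (same package as the parent test assumed):
           // rerun the existing getFileStatus contract tests against the v1 list API
           // by changing only the configuration used to create the filesystem.
           public class ITestS3AContractGetFileStatusV1List
               extends ITestS3AContractGetFileStatus {

             @Override
             protected Configuration createConfiguration() {
               Configuration conf = super.createConfiguration();
               // Force the older list API; everything else stays identical.
               conf.setInt("fs.s3a.list.version", 1);
               return conf;
             }
           }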

          fabbri Aaron Fabbri added a comment -

          Steve Loughran thanks, I was thinking the same thing for regression testing. I used ITestS3AContractGetFileStatus as it seems to exercise a lot of different cases for the list objects API.

           fabbri Aaron Fabbri added a comment - edited

          Attaching v2 patch.

           I ended up using a separate class that can hold either version of the list objects request and response. Always translating to the SDK's v2 objects would have saved a little garbage, but ended up being error-prone.

           All integration tests passed in us-west-2. Rerunning right now with DynamoDB.
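
           A minimal sketch of the wrapper idea (illustrative only; the S3ListRequest class in the patch may differ):

           import com.amazonaws.services.s3.model.ListObjectsRequest;
           import com.amazonaws.services.s3.model.ListObjectsV2Request;

           // Holds either a v1 or a v2 list request; exactly one field is non-null.
           public final class S3ListRequest {
             private final ListObjectsRequest v1Request;
             private final ListObjectsV2Request v2Request;

             private S3ListRequest(ListObjectsRequest v1, ListObjectsV2Request v2) {
               this.v1Request = v1;
               this.v2Request = v2;
             }

             public static S3ListRequest v1(ListObjectsRequest request) {
               return new S3ListRequest(request, null);
             }

             public static S3ListRequest v2(ListObjectsV2Request request) {
               return new S3ListRequest(null, request);
             }

             public boolean isV1() {
               return v1Request != null;
             }

             public ListObjectsRequest getV1() {
               return v1Request;
             }

             public ListObjectsV2Request getV2() {
               return v2Request;
             }
           }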

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 18s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 46s Maven dependency ordering for branch
          +1 mvninstall 15m 26s trunk passed
          +1 compile 16m 36s trunk passed
          +1 checkstyle 2m 7s trunk passed
          +1 mvnsite 1m 44s trunk passed
          +1 findbugs 2m 24s trunk passed
          +1 javadoc 1m 22s trunk passed
                Patch Compile Tests
          0 mvndep 0m 19s Maven dependency ordering for patch
          +1 mvninstall 1m 7s the patch passed
          +1 compile 11m 13s the patch passed
          +1 javac 11m 13s the patch passed
          -0 checkstyle 2m 7s root: The patch generated 8 new + 8 unchanged - 0 fixed = 16 total (was 8)
          +1 mvnsite 1m 37s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 2s The patch has no ill-formed XML file.
          +1 findbugs 2m 29s the patch passed
          +1 javadoc 1m 23s the patch passed
                Other Tests
          -1 unit 8m 20s hadoop-common in the patch failed.
          +1 unit 0m 52s hadoop-aws in the patch passed.
          +1 asflicense 0m 35s The patch does not generate ASF License warnings.
          92m 47s



          Reason Tests
          Failed junit tests hadoop.security.TestKDiag



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:71bbb86
          JIRA Issue HADOOP-13421
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12885499/HADOOP-13421.002.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux d20e05fdc706 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / d4035d4
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/13174/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/13174/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/13174/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/13174/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -
           fs.s3a.use.list.v1

           Better:

           fs.s3a.list.version = "1"

           That lines us up for a v3 algorithm at some point in the future. This will propagate into the s3a FS code too.

           • You can just call the class ListRequest or, if you really want an s3 prefix, S3ListRequest.

           One thing to consider: what if someone who only has the v1 API wants to run the tests? I know Thomas and Ewan are happy, but I don't know about others. Maybe: if the list API version is explicitly set to v1 for a bucket, then skip the v2 tests?

           Looking at the code though, I think that's a moot issue. If someone forces the list API to v1 in their settings, that will be implicitly picked up by everything except the v1 test, which just becomes a duplicate of the superclass in terms of test coverage... it's not going to fail.

          fabbri Aaron Fabbri added a comment -

               better
               fs.s3a.list.version="1"
               line us up for a v3 algorithm in some time in the future.

           OK, I thought about that. What do you think the behavior should be if the value is out of range? 1. Fall back to the default (v2), or 2. fail to initialize the S3A FS?

           I'll roll another patch with that change and the checkstyle fixes.

          fabbri Aaron Fabbri added a comment -

          Attaching v3 patch. Changes from v2:

           • Use an integer instead of a boolean for the config, as suggested by Steve Loughran.
          • Skip testInconsistentS3ClientDeletes() test case when v1 list is configured.
          • Checkstyle cleanups.

          Testing (in us-west-2):

          • All integration tests w/ v2
          • All integration tests w/ v2 + s3guard
          • All integration tests w/ v1 configured.

          No failures.

          stevel@apache.org Steve Loughran added a comment -

          I'd say "fall back to the default version". Why? it would allow a later version to support a new v3 version, and not have a config enabling to fail if used for some older code

          stevel@apache.org Steve Loughran added a comment -

          LGTM

          +1

          stevel@apache.org Steve Loughran added a comment -

          oh, one thing: docs?

          fabbri Aaron Fabbri added a comment -

           v4 patch. Same as v3, but adds the new config option to index.md's configuration section.

          stevel@apache.org Steve Loughran added a comment -

          committed to branch 3.0 & trunk. Thanks!

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12822 (See https://builds.apache.org/job/Hadoop-trunk-Commit/12822/)
          HADOOP-13421. Switch to v2 of the S3 List Objects API in S3A. (stevel: rev 5bbca80428ffbe776650652de86a3bba885edb31)

          • (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
          • (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • (add) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ListResult.java
          • (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
          • (add) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ListRequest.java
          • (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AGetFileStatus.java
          • (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
          • (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java
          • (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Listing.java
          • (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AContractGetFileStatusV1List.java

            People

            • Assignee:
              fabbri Aaron Fabbri
            • Reporter:
              slider Steven K. Wong
            • Votes:
              3
            • Watchers:
              9
