HADOOP-12891 (Hadoop Common)

S3AFileSystem should configure Multipart Copy threshold and chunk size

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.7.2
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: fs/s3
    • Labels:
      None
    • Target Version/s:

      Description

      In the AWS S3 Java SDK the defaults for the multipart copy threshold and chunk size are very high [1]:

          /** Default size threshold for Amazon S3 object after which multi-part copy is initiated. */
          private static final long DEFAULT_MULTIPART_COPY_THRESHOLD = 5 * GB;
      
          /** Default minimum size of each part for multi-part copy. */
          private static final long DEFAULT_MINIMUM_COPY_PART_SIZE = 100 * MB;
      

      In internal testing we have found that a lower but still reasonable threshold and chunk size can be extremely beneficial. In our case we set both the threshold and size to 25 MB with good results.

      Amazon enforces a minimum of 5 MB [2].
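The tuned values described above can be expressed through the existing S3A properties; a sketch of the core-site.xml settings used in our internal testing (26214400 bytes = 25 MB):

```xml
<property>
  <name>fs.s3a.multipart.threshold</name>
  <value>26214400</value> <!-- 25 MB -->
</property>
<property>
  <name>fs.s3a.multipart.size</name>
  <value>26214400</value> <!-- 25 MB -->
</property>
```

Note that, as of the code referenced in [6], these properties only take effect for uploads, which is the subject of this issue.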

      For the S3A filesystem, file renames are actually implemented via a remote copy request, which is already quite slow compared to a rename on HDFS. This very high threshold for using the multipart functionality can make the performance considerably worse, particularly for files in the 100 MB to 5 GB range, which is fairly typical of MapReduce job outputs.

      Two apparent options are:

      1) Use the same configuration properties (fs.s3a.multipart.threshold, fs.s3a.multipart.size) for both. This seems preferable, as the accompanying documentation [3] for these properties already says that they apply to either "uploads or copies". We just need to add the missing TransferManagerConfiguration#setMultipartCopyThreshold [4] and TransferManagerConfiguration#setMultipartCopyPartSize [5] calls at [6], like:

          /* Handle copies in the same way as uploads. */
          transferConfiguration.setMultipartCopyPartSize(partSize);
          transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
      

      2) Add two new configuration properties so that the copy threshold and part size can be configured independently, possibly with defaults lower than Amazon's, and set them into TransferManagerConfiguration in the same way.
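Under option 2, the new settings might look something like the following. The property names here are purely illustrative (they do not exist in any Hadoop release); they are shown only to make the shape of the option concrete:

```xml
<!-- Hypothetical property names; illustration of option 2 only. -->
<property>
  <name>fs.s3a.multipart.copy.threshold</name>
  <value>26214400</value> <!-- 25 MB -->
</property>
<property>
  <name>fs.s3a.multipart.copy.size</name>
  <value>26214400</value> <!-- 25 MB -->
</property>
```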

      In any case, if neither of the above options is acceptable, the configuration documentation should at a minimum be adjusted to match the code, noting that fs.s3a.multipart.threshold and fs.s3a.multipart.size apply only to uploads of new objects, not to copies (i.e. renaming objects).

      [1] https://github.com/aws/aws-sdk-java/blob/1.10.58/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.java#L36-L40
      [2] http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
      [3] https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A
      [4] http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyThreshold(long)
      [5] http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyPartSize(long)
      [6] https://github.com/apache/hadoop/blob/release-2.7.2-RC2/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L286

      1. HADOOP-12891-002.patch (2 kB, Steve Loughran)
      2. HADOOP-12891-001.patch (0.9 kB, Steve Loughran)

        Issue Links

          Activity

          stevel@apache.org Steve Loughran added a comment -

          One thing to consider is block size for bulk operations.

          If, at some point in the future, AWS were to provide a way to determine the block sizes, then to make the best use of it you'd want "reasonably" sized partitions, where 'reasonable' includes the setup cost of work. Of course, since there's no locality cost, small partitions could perhaps be merged to create the illusion of bigger blocks; it'd only be a hint to the amount of parallelism that can be applied to S3 reads.

          stevel@apache.org Steve Loughran added a comment -

          Reviewing this, option #1, same block size, is the one to use. Bear in mind that —apparently— each partition is stored as a separate file, so you get better IO perf if you read different partitions in parallel, than the same one. Having the published block size matching the partition size delivers this —but as you don't know the partition size, you need to set things up per object store to be consistent.

          1. some output committers do rename() to commit; using the same partition size should be the optimal structure
          2. and it will preserve the same block size.
          stevel@apache.org Steve Loughran added a comment -

          This is the code snippet from the JIRA description converted into a patch.

          I haven't written any of this, just pasted it in to place and created the diff. Which means I don't (yet) consider myself tainted enough to not be able to review it.

          1. Given this doesn't add any new config options, it shouldn't need much in the way of doc changes, just mention in the config options that this affects copy too.
          2. testability? None, not unless someone knows how to get partition info from S3. All that can be done is regression testing
          noslowerdna Andrew Olson added a comment -

          +1 for this patch.

          > just mention in the config options that this affects copy too

          That doc is actually already there:
          https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md#other-properties-1
          https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml#L861-L871

          Perhaps it should be updated to mention that rename=copy for S3.

          > testability?
          For what it's worth I had previously informally tested this same code change and confirmed that it improved performance significantly.

          On a side note although this change is certainly beneficial, avoiding renames altogether is really the ideal solution (e.g. https://issues.cloudera.org/browse/KITE-1118).

          stevel@apache.org Steve Loughran added a comment -

          That's a real problem ... in fact Spark just backed out its DirectOutputCommitter, which was zero-rename, due to the problems it caused on failed executions. Even with rename-on-commit there's still a window of trouble, enough for speculative execution to break (more precisely: to generate invalid output if both tasks rename)

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 6m 37s trunk passed
          +1 compile 0m 13s trunk passed with JDK v1.8.0_77
          +1 compile 0m 13s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 14s trunk passed
          +1 mvnsite 0m 19s trunk passed
          +1 mvneclipse 0m 12s trunk passed
          +1 findbugs 0m 32s trunk passed
          +1 javadoc 0m 12s trunk passed with JDK v1.8.0_77
          +1 javadoc 0m 15s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 13s the patch passed
          +1 compile 0m 10s the patch passed with JDK v1.8.0_77
          +1 javac 0m 10s the patch passed
          +1 compile 0m 12s the patch passed with JDK v1.7.0_95
          +1 javac 0m 12s the patch passed
          +1 checkstyle 0m 11s the patch passed
          +1 mvnsite 0m 17s the patch passed
          +1 mvneclipse 0m 10s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 0m 39s the patch passed
          +1 javadoc 0m 10s the patch passed with JDK v1.8.0_77
          +1 javadoc 0m 12s the patch passed with JDK v1.7.0_95
          +1 unit 0m 10s hadoop-aws in the patch passed with JDK v1.8.0_77.
          +1 unit 0m 12s hadoop-aws in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 16s Patch does not generate ASF License warnings.
          12m 47s



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799759/HADOOP-12891-001.patch
          JIRA Issue HADOOP-12891
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 016306a69a3f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / af9bdbe
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9130/testReport/
          modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9130/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          yes, a para on copy in rename would be good, along with the same text in hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md

          Would you be able to do that?

          stevel@apache.org Steve Loughran added a comment -

          Patch 002: Adds the documentation.

          Excluding the docs, this patch is Andrew's work, just converted into a .patch file. Therefore I still consider myself in a position to be a reviewer. However, I'd still like others with S3 access to test this, just to make sure there aren't surprises.

          My tests were against Amazon S3 Ireland, BTW

          noslowerdna Andrew Olson added a comment -

          Thanks Steve, looks good from here.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          0 mvndep 0m 13s Maven dependency ordering for branch
          +1 mvninstall 6m 34s trunk passed
          +1 compile 5m 43s trunk passed with JDK v1.8.0_77
          +1 compile 6m 46s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 5s trunk passed
          +1 mvnsite 1m 14s trunk passed
          +1 mvneclipse 0m 26s trunk passed
          +1 findbugs 2m 6s trunk passed
          +1 javadoc 1m 5s trunk passed with JDK v1.8.0_77
          +1 javadoc 1m 17s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 14s Maven dependency ordering for patch
          +1 mvninstall 0m 56s the patch passed
          +1 compile 5m 44s the patch passed with JDK v1.8.0_77
          +1 javac 5m 44s the patch passed
          +1 compile 6m 44s the patch passed with JDK v1.7.0_95
          +1 javac 6m 44s the patch passed
          +1 checkstyle 1m 2s the patch passed
          +1 mvnsite 1m 15s the patch passed
          +1 mvneclipse 0m 26s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          +1 findbugs 2m 31s the patch passed
          +1 javadoc 1m 7s the patch passed with JDK v1.8.0_77
          +1 javadoc 1m 21s the patch passed with JDK v1.7.0_95
          +1 unit 8m 0s hadoop-common in the patch passed with JDK v1.8.0_77.
          +1 unit 0m 12s hadoop-aws in the patch passed with JDK v1.8.0_77.
          +1 unit 8m 17s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 0m 15s hadoop-aws in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 21s Patch does not generate ASF License warnings.
          66m 19s



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799994/HADOOP-12891-002.patch
          JIRA Issue HADOOP-12891
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux 56ea5dbd8dde 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 4838b73
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9144/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9144/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          +1 committed —thanks for your contribution here Andrew.

          Please check out and build the 2.8 branch and make sure all works well; we've been doing some other changes too —finding problems early would be invaluable to us

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9653 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9653/)
          HADOOP-12891. S3AFileSystem should configure Multipart Copy threshold (stevel: rev 19f0f9608e31203523943f008ac701b6f3d7973c)

          • hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

            People

            • Assignee: noslowerdna Andrew Olson
            • Reporter: noslowerdna Andrew Olson
            • Votes: 0
            • Watchers: 6
