Hadoop Common / HADOOP-10714

AmazonS3Client.deleteObjects() needs to be limited to 1000 entries per call

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 2.5.0
    • Fix Version/s: 2.7.0
    • Component/s: fs/s3
    • Labels:
    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() must be limited to 1000 entries per call. Otherwise S3 rejects the request with a MalformedXML error similar to:

      com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: MalformedXML, AWS Error Message: The XML you provided was not well-formed or did not validate against our published schema, S3 Extended Request ID: DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
      at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
      at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
      at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
      at com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
      at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
      at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
      at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
      at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)

      Note that this is mentioned in the AWS documentation:
      http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html

      "The Multi-Object Delete request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3…"

      Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the problem.
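      The fix boils down to splitting the key list into batches of at most 1000 before each deleteObjects() call. A stdlib-only sketch of just that batching step (DeleteBatcher, partition, and MAX_KEYS_PER_REQUEST are illustrative names, not taken from the patch; the SDK call itself is elided to a comment):

```java
import java.util.ArrayList;
import java.util.List;

public class DeleteBatcher {
    // S3's Multi-Object Delete API accepts at most 1000 keys per request.
    static final int MAX_KEYS_PER_REQUEST = 1000;

    /** Split keys into sublists of at most MAX_KEYS_PER_REQUEST entries. */
    static List<List<String>> partition(List<String> keys) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += MAX_KEYS_PER_REQUEST) {
            int end = Math.min(i + MAX_KEYS_PER_REQUEST, keys.size());
            batches.add(new ArrayList<>(keys.subList(i, end)));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            keys.add("bucket/key-" + i);
        }
        List<List<String>> batches = partition(keys);
        // 2500 keys split as 1000 + 1000 + 500
        System.out.println(batches.size());        // 3
        System.out.println(batches.get(2).size()); // 500
        // each batch would then be passed to AmazonS3Client.deleteObjects()
    }
}
```

      Each sublist would then back one DeleteObjectsRequest, which keeps every request under the documented limit.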

      1. HADOOP-10714-1.patch
        3 kB
        David S. Wang
      2. HADOOP-10714.001.patch
        49 kB
        Juan Yu
      3. HADOOP-10714.002.patch
        49 kB
        Juan Yu
      4. HADOOP-10714.003.patch
        48 kB
        Juan Yu
      5. HADOOP-10714.004.patch
        47 kB
        Juan Yu
      6. HADOOP-10714.005.patch
        49 kB
        Juan Yu
      7. HADOOP-10714.006.patch
        49 kB
        Juan Yu
      8. HADOOP-10714-007.patch
        75 kB
        Steve Loughran
      9. HADOOP-10714.008.patch
        78 kB
        Juan Yu
      10. HADOOP-10714-009.patch
        90 kB
        Steve Loughran
      11. HADOOP-10714.010.patch
        85 kB
        Juan Yu

        Issue Links

          Activity

          mbertozzi Matteo Bertozzi added a comment -

          the patch looks good to me.
          The simple test that we have run to reproduce the problem is to create more than 1000 files and then rename the folder.

          stevel@apache.org Steve Loughran added a comment -
          • Can this just be merged into a new HADOOP-10400 patch? As that isn't checked in yet, it should just be updated.
          • Scale tests are good; for the swift stuff we have some scalable ones which you can tune via the test config file. This lets you run smaller tests over slower links. File size can be kept low for better performance.

          Tests for a large set of files should

          1. verify that the results of a directory listing are complete
          2. try a rename() (as this has a delete inside)
          3. do the delete()
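          Those three steps can be sketched against the local filesystem as a stand-in (java.nio here, purely illustrative; a real S3A version would drive the Hadoop FileSystem API against a bucket):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class ManyFilesRenameSketch {
    /** Create n files, rename the directory, return the post-rename count. */
    static long createRenameCount(int n) throws IOException {
        Path base = Files.createTempDirectory("scale");
        Path src = Files.createDirectory(base.resolve("src"));
        for (int i = 0; i < n; i++) {
            Files.createFile(src.resolve("file-" + i));
        }
        // 1. verify that the directory listing is complete
        try (Stream<Path> listing = Files.list(src)) {
            if (listing.count() != n) throw new AssertionError("listing incomplete");
        }
        // 2. rename (on S3A this is copy + bulk delete, the path that broke)
        Path dst = Files.move(src, base.resolve("dst"));
        long count;
        try (Stream<Path> listing = Files.list(dst)) {
            count = listing.count();
        }
        // 3. recursive delete (another bulk-delete code path)
        try (Stream<Path> walk = Files.walk(base)) {
            walk.sorted(Comparator.reverseOrder()).forEach(p -> p.toFile().delete());
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        // more files than the 1000-keys-per-request delete limit
        System.out.println(createRenameCount(1200)); // 1200
    }
}
```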
          jyu@cloudera.com Juan Yu added a comment -

          This patch contains the following changes:

          • Added scale tests for s3a
          • Modified rename() to make its behavior closer to s3n and other FileSystem implementations:
            1. Fails if src and dst are the same directory.
            2. Returns true as a no-op if src and dst are the same file.
            3. Allows the rename if src is a file and dst is a directory.
            4. Fails if dst is a child of the source directory.
          • Modified some base contract tests due to S3's limitations.
          • Increased the surefire plugin's forkedProcessTimeoutInSeconds to 30 minutes because renaming thousands of files in S3 takes time.

          I ran all S3A unit tests against S3 and they passed.
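          The four rename rules above can be paraphrased as a small decision function (RenameSemantics, decide, and Outcome are invented for illustration, not the patch's actual code, which lives in S3AFileSystem.rename()):

```java
public class RenameSemantics {
    enum Outcome { FAIL, NO_OP_TRUE, PROCEED }

    /** src/dst are paths plus a flag for whether each is a directory. */
    static Outcome decide(String src, boolean srcIsDir,
                          String dst, boolean dstIsDir) {
        if (src.equals(dst)) {
            // same directory -> fail; same file -> return true without copying
            return srcIsDir ? Outcome.FAIL : Outcome.NO_OP_TRUE;
        }
        if (srcIsDir && (dst + "/").startsWith(src + "/")) {
            return Outcome.FAIL;  // dst is a child of the source directory
        }
        return Outcome.PROCEED;   // includes file -> existing directory
    }

    public static void main(String[] args) {
        System.out.println(decide("/a", true, "/a", true));       // FAIL
        System.out.println(decide("/a/f", false, "/a/f", false)); // NO_OP_TRUE
        System.out.println(decide("/a", true, "/a/b", true));     // FAIL
        System.out.println(decide("/a/f", false, "/d", true));    // PROCEED
    }
}
```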

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12670036/HADOOP-10714.001.patch
          against trunk revision 25fd69a.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 10 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4774//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4774//console

          This message is automatically generated.

          clamb Charles Lamb added a comment -

          General:

          • Several lines bust the 80 char limit.
          • There are several places where there are extra newlines. There should only ever be one blank line.
          • Remove unused imports in your new files.

          S3AFileSystem

          • // deleteUnnecessaryFakeDirectories

          This looks like the name of a method rather than a comment.

          TestS3AContractRename

          • s/s3a don't/s3a doesn't/

          core-site.xml

          <!-- Values used when running unit tests. This is mostly empty, to -->
          <!-- use of the default values, overriding the potentially -->
          <!-- user-editted core-site.xml in the conf/ directory. -->

          can be changed to:

          <!-- Values used when running unit tests. Specify any values in here that
          should override the default values. -->

          • You can remove the extra newline after <configuration>.

          S3AFileSystemBaseTest

          • s/If you keys/If your keys/, s/as passed/as passed./

          S3AScaleTestBase

          • remove the extra newline after the class comment.

          S3ATestUtils

          • You can move "implements S3ATestConstants" to the line above.
          • testReceivedData decl has incorrect indentation. Extra newline at the end of this method.
          • generateTestData has the same indent problem in the decl.
          • I'd prefer that you do static imports of the various org.junit.Assert methods in TestS3AFileSystemBasicOps rather than extending S3AFileSystemBaseTest with it.

          TestS3AFileSystemBasicOps

          • remove the extra newline after the class comment.

          I'm unclear about what the test renaming is about. Could you please comment on that in the Jira?

          TestS3AFileSystemContract

          if (!renameSupported()) return;

          should be changed to:

          if (!renameSupported()) {
          return;
          }

          • remove the blank line before the closing } of testRenameDirectoryAsExistingDirectory(), ditto blank line before last } in the file.
          atm Aaron T. Myers added a comment -

          The patch looks pretty good to me. Charlie's nits don't seem unreasonable. I'll add one comment of my own, but it's really not a big deal:

          +        if (LOG.isDebugEnabled()) {
          +          LOG.debug(
          +              "cannot rename a directory to a subdirectory of self");
          +        }
          

          The reason that we have "LOG.isDebugEnabled" in the code at all is to prevent the overhead of string concatenation when the message isn't just a string constant. In this case there's no string concatenation at all, so you can just call "LOG.debug" directly.

          I'll be +1 once all these little nits are addressed.

          Thanks, Juan.
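          The point about the guard can be illustrated in a few lines (java.util.logging is used here so the snippet runs standalone; Hadoop's own logging API differs, and DebugGuard is an invented name):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DebugGuard {
    private static final Logger LOG = Logger.getLogger("s3a");

    static void rename(String src, String dst) {
        // Constant message: no concatenation, so no guard needed.
        LOG.fine("cannot rename a directory to a subdirectory of self");

        // Built message: the guard avoids paying for concatenation
        // when debug-level logging is disabled.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("rename " + src + " -> " + dst + " rejected");
        }
    }

    public static void main(String[] args) {
        rename("/a", "/a/b"); // no output unless FINE logging is enabled
    }
}
```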

          clamb Charles Lamb added a comment -

          BTW Juan, I apologize for my terse comments above. I had written them up in an editor buffer and then in a rush I just cut and pasted them into the comment frame without putting a greeting or any preamble in there.

          In general, it looks like the patch does what it is supposed to do so nice work on this.

          I also realized that my 2nd to last comment about the if (!renameSupported()) got badly formatted. The idea I was trying to convey is that there should be braces around the return; for those two if's.

          Thanks Juan.

          jyu@cloudera.com Juan Yu added a comment -

          Thanks Charles Lamb and Aaron T. Myers for the code review. New patch to address all comments. Where can I find a guideline for code formatting?

          The reason we need the rename test is that S3 doesn't support rename; we have to copy src to dest on S3 and then delete src to mimic a rename operation.
          Because of the copy step, rename() on S3 can take a long time if the source folder contains lots of files.

          Hi Steve Loughran, do you have time to review this patch as well?
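          The copy-then-delete mimicry can be sketched against a toy in-memory store (FakeS3Rename and the Map-backed store are purely illustrative; the real code drives the AWS SDK and, per this JIRA, must batch the deletes to 1000 keys per request):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class FakeS3Rename {
    /** Rename srcPrefix/ to dstPrefix/ the way an object store must:
     *  copy every key, then delete the originals. */
    static void rename(Map<String, byte[]> store, String srcPrefix, String dstPrefix) {
        List<String> toDelete = new ArrayList<>();
        // iterate over a snapshot so we can mutate the store as we go
        for (Map.Entry<String, byte[]> e : new TreeMap<>(store).entrySet()) {
            if (e.getKey().startsWith(srcPrefix + "/")) {
                String newKey = dstPrefix + e.getKey().substring(srcPrefix.length());
                store.put(newKey, e.getValue()); // copy step (slow on real S3)
                toDelete.add(e.getKey());
            }
        }
        // delete step: the real implementation must chunk this list
        // into batches of at most 1000 keys per deleteObjects() call
        for (String key : toDelete) {
            store.remove(key);
        }
    }

    public static void main(String[] args) {
        Map<String, byte[]> store = new TreeMap<>();
        store.put("src/a", new byte[0]);
        store.put("src/b", new byte[0]);
        rename(store, "src", "dst");
        System.out.println(store.keySet()); // [dst/a, dst/b]
    }
}
```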

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12670180/HADOOP-10714.002.patch
          against trunk revision f85cc14.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 10 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The following test timeouts occurred in hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws:

          org.apache.hadoop.http.TestHttpServer

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4779//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4779//console

          This message is automatically generated.

          clamb Charles Lamb added a comment -

          Hi Juan,

          The Hadoop Coding Standards are here: https://wiki.apache.org/hadoop/CodeReviewChecklist

          As mentioned at the top of that file, it's basically the Sun Java coding standards with an indentation of 2, not 4. But to be clear (I learned this lesson the hard way), it's 2 for new lines and 4 for continuations of lines. BTW (sorry, I can't help myself), the LOG.debug line that ATM mentioned probably does not need to be broken after the '('.
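          For example, under that convention (an illustrative snippet, not from the patch):

```java
public class IndentExample {
  // 2-space indent for each new block level
  static String join(String first,
      String second) { // 4-space indent for a continuation line
    return first + second;
  }

  public static void main(String[] args) {
    System.out.println(join("a", "b")); // ab
  }
}
```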

          stevel@apache.org Steve Loughran added a comment -

          Style-guide wise, I have been [writing one during test runs].

          new tests in general

          If there are extra basic things to test, e.g. TestS3AFileSystemBasicOps, there's no reason why they can't be pulled up to the core.

          scale

          testOpenCreate() relies on the tester having good upload bandwidth. It needs to be configurable or optional. We already do some of this for seek() testing, with ContractOptions having an option TEST_RANDOM_SEEK_COUNT for test performance (that one because the cost of seeking is so high on remote object stores over HTTPS). Ideally I'd like to see some abstract filesystem scalability contract test which could then be implemented by S3a and others (file & hdfs at the very least) so that it's a definition of what big files should do.

          Looking at the openstack tests, SwiftScaleTestBase is a basis for scaled FS tests, with one specific test, TestWriteManySmallFiles, looking at the cost of creating and deleting lots of files. If we had something like that in the abstract tests, we'd have caught the problem you've seen: scale problems with many thousands of small files.

          test helpers

          You should look at ContractTestUtils for test helpers, replacing things like assertTrue(fs.exists(path("/tests3a/c/a/"))) with ContractTestUtils.assertPathExists(), which will produce a meaningful exception and a listing of the parent dir on a failure. You could even put testReceivedData in there too, once renamed to something like verifyReceivedData. There are already methods there that can do exactly what you've rewritten, though they won't scale to multi-GB files as they save to byte arrays first. Your new verification code can be the start of some more scalability tests.

          Same for the other methods, but fix createAndReadFileTest to always close its streams.

          rename()

          Finally, the actual source code change: the rename behaviour. The correct behaviour of FileSystem.rename() is something I'm not confident I understand, and I'm not confident that others do either. More precisely: POSIX rename is fundamentally misunderstood, and DFS.rename() diverges from it anyway.

          If rename() problems weren't picked up in the previous tests, it means that not only does S3A need the tests you've added, AbstractContractRenameTest needs more tests too.

          Summary

          Looking at my comments, my key point is that you've done some really good scale tests here, as well as tests to validate S3A behaviour. The scale tests can at some point be applied to other filesystems, so maybe design them that way from the outset. Add under the S3A /test tree a test case based on the abstract contract classes, something like:

          TestS3aContractScale extends AbstractFSContractTestBase {
          
              // scale tests, using/extending ContractTestUtils
          
          }
          

          that way there's no need for you to add new tests to all the other filesystems here and spend the time getting them to work. For now we'll just trust HDFS to handle files >5GB and more than 1000 entries in a directory.

          The rename tests are a different matter. If there's something up with S3a.rename(), and it wasn't found, then the root tests are lacking. Please add them to AbstractContractRenameTest and see what breaks. If it is HDFS, then the new tests are wrong. If they work on HDFS but fail on other things (swift:// s3n://) then that's something we need to know about.

          jyu@cloudera.com Juan Yu added a comment -

          Thanks Charles and Steve.
          Here is a new patch to address all of Steve Loughran's comments except the "abstract out the scale tests" request.
          I'd like to file another JIRA for that.
          Most of the tests in my previous patch are tests from the original s3a patch.
          I compared them with the contract tests; most are duplicates, so I removed them. A few are worth keeping, and I added those to the abstract contract tests and verified they work on HDFS.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12670455/HADOOP-10714.003.patch
          against trunk revision 0a64149.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 10 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws:

          org.apache.hadoop.fs.contract.localfs.TestLocalFSContractRename

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4786//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4786//console

          This message is automatically generated.

          jyu@cloudera.com Juan Yu added a comment -

          I modified the new test testRenameWithNonEmptySubDir to honor HDFS's behavior, but it seems HDFS doesn't honor POSIX rename behavior:
          "If the link named by the new argument exists, it shall be removed and old renamed to new. "
          http://pubs.opengroup.org/onlinepubs/007904875/functions/rename.html

          HDFS rename creates a subfolder under the dest dir; this is the same as Linux, though.

          • srcdir/file
          • emptydestdir
          After the rename:
          • emptydestdir/srcdir/file

          Should HDFS honor POSIX rename behavior?

          stevel@apache.org Steve Loughran added a comment -

          You are right, HDFS doesn't quite follow POSIX. There's actually a difference between the command-line mv operation and the internal rename API; I think the behaviour of bits of HDFS matches the CLI.

          HDFS cannot/will not change its behavior; the file system specification says "HDFS is the definition of the FS API". And like I also said "nobody really understands rename"...even the POSIX API isn't that great.

          stevel@apache.org Steve Loughran added a comment -

          Patch-wise, it looks good; abstracting out the scale tests is something that can be done later. At least now it is configurable for S3a.

          The test is failing on jenkins as you've added a test/resources/core-site.xml, which triggers the test run. Rather than do something complicated there, why not

          1. change the -aws POM to trigger the test off `resources/contract-test-options.xml` ... which is of course the file needed for all the contract tests.
          2. add to the `core-site.xml` the config options for the scale tests, along with the default values. This makes it easier to see what options to change.
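
          Suggestion 1 corresponds to Maven's file-based profile activation. A hedged sketch of what such a trigger could look like in the -aws POM (the profile id and path here are illustrative, not the actual pom.xml contents):

```xml
<profile>
  <id>tests-on</id>
  <activation>
    <file>
      <!-- only enable the live tests when the contract options file exists -->
      <exists>src/test/resources/contract-test-options.xml</exists>
    </file>
  </activation>
</profile>
```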
          jyu@cloudera.com Juan Yu added a comment -

          The failed test is the new rename test TestLocalFSContractRename I added to the abstract contract. It fails because RawLocalFileSystem enforces POSIX rename behavior, which is different from HDFS:

          public boolean rename(Path src, Path dst) throws IOException {
              // Attempt rename using Java API.
              File srcFile = pathToFile(src);
              File dstFile = pathToFile(dst);
              if (srcFile.renameTo(dstFile)) {
                return true;
              }
          
              // Enforce POSIX rename behavior that a source directory replaces an existing
              // destination if the destination is an empty directory.  On most platforms,
              // this is already handled by the Java API call above.  Some platforms
              // (notably Windows) do not provide this behavior, so the Java API call above
              // fails.  Delete destination and attempt rename again.
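
          The fallback described in that (truncated) comment can be sketched standalone as follows. This is an illustrative reconstruction of the documented behaviour, not the actual RawLocalFileSystem source beyond the fragment quoted above:

```java
import java.io.File;
import java.io.IOException;

// Illustrative sketch of the delete-empty-destination-and-retry fallback
// described in the quoted RawLocalFileSystem comment. NOT the actual
// Hadoop implementation.
public class PosixRenameFallback {
    public static boolean rename(File src, File dst) throws IOException {
        if (src.renameTo(dst)) {
            return true; // plain rename worked
        }
        // POSIX allows a source directory to replace an *empty* destination
        // directory. Some platforms (notably Windows) refuse this in the
        // Java API, so delete the empty destination and retry.
        File[] children = dst.listFiles();
        if (src.isDirectory() && dst.isDirectory()
                && children != null && children.length == 0) {
            if (!dst.delete()) {
                return false;
            }
            return src.renameTo(dst);
        }
        return false;
    }
}
```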
          
          jyu@cloudera.com Juan Yu added a comment -

          Since we cannot change HDFS's rename behavior, I guess the same goes for RawLocalFileSystem.
          I modified the new rename test to accept both POSIX rename behavior and CLI rename behavior.

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12670744/HADOOP-10714.004.patch
          against trunk revision a1fd804.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 10 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4793//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4793//console

          This message is automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          OK, in this situation we're going to need to make the test handle both outcomes.

          1. ContractOptions already lists a set of rename flags; are there any that match this operation?
          2. If so, check it in the test and make the relevant assertions for whichever outcome has been declared as supported. That is, if a filesystem says it renames like POSIX, it had better.
          3. If you have to add a new option and set it in the localfs contract file, then so be it; at least we've expanded the declarative list of different behaviours.

          Thanks for getting involved in the depths of cross-FS rename semantics, BTW.
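
          The pattern in step 2 (branch the test's assertions on whichever capability the contract declares) might look like the sketch below. The option key "rename-posix-semantics" and the helper are hypothetical, not actual ContractOptions names:

```java
import java.util.Properties;

// Sketch of branching a contract test's expectations on a declared
// capability flag. "rename-posix-semantics" is a made-up key; the real
// ContractOptions constants differ.
public class RenameContractCheck {
    static final String RENAME_POSIX = "rename-posix-semantics";

    // Given the declared contract, where should srcdir/file land after
    // renaming srcdir onto an existing empty emptydestdir?
    static String expectedPathAfterRename(Properties contract) {
        boolean posix = Boolean.parseBoolean(
            contract.getProperty(RENAME_POSIX, "false"));
        // POSIX: the source's contents replace the destination directly;
        // CLI/HDFS style: the source directory is moved *under* it.
        return posix ? "emptydestdir/file" : "emptydestdir/srcdir/file";
    }
}
```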

          jyu@cloudera.com Juan Yu added a comment -

          Thanks Steve Loughran. I added a new ContractOption for the rename behavior. This contract-driven FS test suite is very flexible and able to handle various cases. Great job on that.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12670819/HADOOP-10714.005.patch
          against trunk revision b93d960.

          -1 patch. Trunk compilation may be broken.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4795//console

          This message is automatically generated.

          jyu@cloudera.com Juan Yu added a comment -

          Same patch as previous to trigger Jenkins build.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12670888/HADOOP-10714.006.patch
          against trunk revision ef784a2.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 13 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws:

          org.apache.hadoop.crypto.random.TestOsSecureRandom

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4796//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/4796//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4796//console

          This message is automatically generated.

          jyu@cloudera.com Juan Yu added a comment -

          I don't think the failed test is related to this patch.

          jyu@cloudera.com Juan Yu added a comment -

          Steve Loughran, any comments on the latest patch? Thanks.

          stevel@apache.org Steve Loughran added a comment -

          I don't have time to play with this for the next few days; going offline and travelling, sorry.

          1. From what I've looked at, it seems good. What I'd like to do is test the swift client against the new tests, but that can wait; if they fail, that's something to file against fs/swift.
          2. The new tests seem good.

          If someone (Charles?) can apply and run the s3a tests, I'd take that as a sign that this patch is good to go.

          jyu@cloudera.com Juan Yu added a comment -

          Thanks Steve Loughran. I've run the contract test with swift client, all passed.

          Charles Lamb and Aaron T. Myers, could one of you apply and run the s3a tests and let me know if you see any issue?

          stevel@apache.org Steve Loughran added a comment -

          Running the tests, seeing some problems, will post updated patch

          stevel@apache.org Steve Loughran added a comment -

          Updated patch. I haven't run the scale tests themselves; got into issues with the core tests and some code review work.

          1. {{testRenameWithNonEmptySubDir()}} sticks its data straight into the hadoop-common dir
             hadoop-common-project/hadoop-common/testRenameWithNonEmptySubDir/. The base class path() method
             sets up directories properly. (fixed)
          2. BasicAWSCredentialsProvider and credential setup gets trimmed config option, checks for "" as well as null.
          3. downgraded logs of FS operations from INFO to DEBUG
          4. fixed "livetest" probes to compare scheme in test URI; existing comparator wouldn't have worked.
          5. renamed TestDeleteManyFiles to TestS3ADeleteManyFiles
          6. saw intermittent failures of TestS3AContractRename, after which the test wouldn't rerun cleanly:
          testRenameWithNonEmptySubDir(org.apache.hadoop.fs.contract.s3a.TestS3AContractRename)  Time elapsed: 17.563 sec  <<< FAILURE!
          java.lang.AssertionError: not deleted: unexpectedly found testRenameWithNonEmptySubDir/src1/source.txt as  S3AFileStatus{path=s3a://tests3neu/user/stevel/testRenameWithNonEmptySubDir/src1/source.txt; isDirectory=false; length=27; replication=1; blocksize=0; modification_time=1411936074000; access_time=0; owner=; group=; permission=rw-rw-rw-; isSymlink=false}
          	at org.junit.Assert.fail(Assert.java:88)
          	at org.apache.hadoop.fs.contract.ContractTestUtils.assertPathDoesNotExist(ContractTestUtils.java:702)
          	at org.apache.hadoop.fs.contract.AbstractContractRenameTest.testRenameWithNonEmptySubDir(AbstractContractRenameTest.java:223)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:606)
          	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
          	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
          	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
          	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
          	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
          	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
          	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
          

          The next run failed as the source dirs were still there:

          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
          Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 48.168 sec <<< FAILURE! - in org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
          testRenameWithNonEmptySubDir(org.apache.hadoop.fs.contract.s3a.TestS3AContractRename)  Time elapsed: 6.49 sec  <<< ERROR!
          org.apache.hadoop.fs.FileAlreadyExistsException: testRenameWithNonEmptySubDir/src1/source.txt already exists
          	at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:252)
          	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
          	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
          	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
          	at org.apache.hadoop.fs.contract.ContractTestUtils.createFile(ContractTestUtils.java:507)
          	at org.apache.hadoop.fs.contract.ContractTestUtils.writeTextFile(ContractTestUtils.java:491)
          	at org.apache.hadoop.fs.contract.AbstractContractRenameTest.testRenameWithNonEmptySubDir(AbstractContractRenameTest.java:198)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:606)
          	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
          	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
          	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
          	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
          	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
          	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
          	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
          
          

          Fix: make sure dirs are cleaned up before each test runs. Applied the same change to the scale test.
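
          The fix (clean up any leftover test directory before the test runs, so a previous failed run can't poison the next one) can be sketched with plain java.nio as follows; the real change operates through the Hadoop FileSystem API, so this is only illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Illustrative sketch of pre-test cleanup: recursively delete any
// leftover test directory, then recreate it empty. Uses local java.nio;
// the actual fix goes through the Hadoop FileSystem API.
public class TestDirCleaner {
    public static void cleanBeforeTest(Path testDir) throws IOException {
        if (Files.exists(testDir)) {
            try (Stream<Path> walk = Files.walk(testDir)) {
                // delete children before their parents
                walk.sorted(Comparator.reverseOrder())
                    .forEach(p -> p.toFile().delete());
            }
        }
        Files.createDirectories(testDir);
    }
}
```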

          jyu@cloudera.com Juan Yu added a comment -

          New patch looks good. +1. Thanks Steve Loughran.
          I noticed the new rename test left its test directory behind and planned to send a new patch; you beat me to fixing it.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12671719/HADOOP-10714-007.patch
          against trunk revision 400e1bb.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 15 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws:

          org.apache.hadoop.crypto.random.TestOsSecureRandom

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4822//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4822//console

          This message is automatically generated.

          clamb Charles Lamb added a comment -

          Juan Yu,

          I'm having trouble running the tests with the latest patch. I'll contact you offline to see if we can figure out what's going on.

          clamb Charles Lamb added a comment -

          Juan Yu,

          The tests all worked like a champ:

          -------------------------------------------------------
           T E S T S
          -------------------------------------------------------
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractRootDir
          Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.68 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractRootDir
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractRename
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.28 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractRename
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractMkdir
          Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.416 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractMkdir
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractSeek
          Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.451 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractSeek
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractOpen
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.289 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractOpen
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractCreate
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 35.512 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractCreate
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractDelete
          Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.683 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractDelete
          
          Results :
          
          Tests run: 46, Failures: 0, Errors: 0, Skipped: 3
          
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.871 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir
          Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.332 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractCreate
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 47.507 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractCreate
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractDelete
          Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.011 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractDelete
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractSeek
          Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.172 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractSeek
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.234 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
          Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.172 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
          
          Results :
          
          Tests run: 46, Failures: 0, Errors: 0, Skipped: 3
          
          Show
          clamb Charles Lamb added a comment -

          Juan Yu, the tests all worked like a champ:

          -------------------------------------------------------
           T E S T S
          -------------------------------------------------------
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractRootDir
          Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.68 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractRootDir
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractRename
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.28 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractRename
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractMkdir
          Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.416 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractMkdir
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractSeek
          Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.451 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractSeek
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractOpen
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.289 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractOpen
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractCreate
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 35.512 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractCreate
          Running org.apache.hadoop.fs.contract.s3n.TestS3NContractDelete
          Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.683 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractDelete

          Results :

          Tests run: 46, Failures: 0, Errors: 0, Skipped: 3

          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.871 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir
          Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.332 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractCreate
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 47.507 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractCreate
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractDelete
          Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.011 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractDelete
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractSeek
          Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.172 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractSeek
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.234 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
          Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.172 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir

          Results :

          Tests run: 46, Failures: 0, Errors: 0, Skipped: 3
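          For context, the fix under test caps each multi-object delete at S3's documented limit of 1000 keys per request, splitting larger deletes into successive batches. A minimal, self-contained sketch of that chunking — the class and method names here are illustrative, not the committed S3AFileSystem code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Illustrative sketch: split a key list into batches S3 will accept. */
public class DeleteBatcher {
    // S3's multi-object delete API rejects requests with more than 1000 keys
    // (it returns a 400 MalformedXML error, as seen in this issue's stack trace).
    static final int MAX_ENTRIES_PER_DELETE = 1000;

    /** Split keys into consecutive sublists of at most batchSize entries. */
    static List<List<String>> partition(List<String> keys, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += batchSize) {
            batches.add(keys.subList(i, Math.min(i + batchSize, keys.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> keys = Collections.nCopies(2500, "object-key");
        List<List<String>> batches = partition(keys, MAX_ENTRIES_PER_DELETE);
        // 2500 keys -> 3 batches of 1000, 1000, and 500
        System.out.println(batches.size());
    }
}
```

          In the real client, each batch would then be passed to a separate AmazonS3Client.deleteObjects() call instead of sending all keys at once.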
          stevel@apache.org Steve Loughran added a comment -

          Charles — are you happy with this? If so, I'm +1 for it.

          There's one thing I'd like to do, based on the experience of setting up the tests, that we could merge in with this patch.

          It is: make the authentication property names in s3a match those of s3n. They do the same thing, but have different names. Making them the same except for s/s3a/s3n/ will aid migration and documentation.

          That change is easy enough to do here, and something we need to do before releasing this in 2.6.

          There's another extension, "convert more HTTP error codes into standard exceptions", which can be done later.

          clamb Charles Lamb added a comment -

          charles —you happy with this?

          Yes, +1 (non-binding) from me.

          jyu@cloudera.com Juan Yu added a comment -

          Why don't we just use one set of authentication properties for all three (s3/s3n/s3a), like the following?
          fs.aws.access.key and fs.aws.secret.key
          There is really no need for one pair per connector.
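          Juan's suggestion — a single shared credential pair — could coexist with the existing per-connector names via a fallback lookup. An illustrative sketch, using java.util.Properties as a stand-in for Hadoop's Configuration (the shared fs.aws.* names are the hypothetical ones proposed above, not properties that exist in the committed code):

```java
import java.util.Properties;

/** Illustrative only: connector-specific property with a shared fallback. */
public class CredentialLookup {
    /** Prefer the connector-specific name (e.g. fs.s3a.access.key),
     *  then fall back to the proposed shared fs.aws.access.key. */
    static String accessKey(Properties conf, String scheme) {
        String specific = conf.getProperty("fs." + scheme + ".access.key");
        return specific != null ? specific : conf.getProperty("fs.aws.access.key");
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("fs.aws.access.key", "SHARED");
        conf.setProperty("fs.s3n.access.key", "S3N-ONLY");
        // s3n keeps its specific key; s3a falls back to the shared one
        System.out.println(accessKey(conf, "s3n"));
        System.out.println(accessKey(conf, "s3a"));
    }
}
```

          A fallback chain like this would let existing per-connector configurations keep working while new deployments set only the shared pair.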

          jyu@cloudera.com Juan Yu added a comment -

          Let s3a use S3Credentials to get authentication info, the same as s3 and s3n.
          If we decide to use the same property names for all three in the future, we just need to change them in S3Credentials.

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12672248/HADOOP-10714.008.patch
          against trunk revision 17d1202.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 15 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4844//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4844//console

          This message is automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          This is the patch I'm going to commit; I just need people with better bandwidth than a Bay Area hotel to run the tests that upload the big files. That experience shows we need to make those tests optional.

          I've not changed the code, just:

          1. finished off the docs
          2. renamed the S3/S3N FS subclasses of FileSystemContractTestBase to Test* so they run automatically
          3. made them fail meaningfully if their args aren't set
          4. made sure it builds and runs on branch-2. To do that I had to remove the @Override tag from TestS3AFileSystemContract.testMoveDirUnderParent()
          5. doubled the test timeouts in the POM — which still aren't enough for some networks.

          So: some minor test tuning; I do want to finish the full upload test (tomorrow) before committing as is.

          stevel@apache.org Steve Loughran added a comment -

          TestJets3tNativeFileSystemStore is timing out for me, even in places with better bandwidth. Can others try?

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12674458/HADOOP-10714-009.patch
          against trunk revision 793dbf2.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 19 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 2 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4911//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/4911//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4911//console

          This message is automatically generated.

          jyu@cloudera.com Juan Yu added a comment -

          I tried it. With a 1 GB file upload, the test takes ~20 minutes, which means the 5 GB one will take more than 1.5 hours.

          jyu@cloudera.com Juan Yu added a comment -

          I also ran all other s3a, s3n tests and they passed.

          atm Aaron T. Myers added a comment -

          Steve Loughran - is Juan's confirmation of the tests sufficient for you? If yes, are you OK with me going ahead and committing this? Seems like you were satisfied with it based on your comment on October 12.

          Thanks a lot.

          stevel@apache.org Steve Loughran added a comment -

          Sorry, I've not had a chance to do another run, but it's been making me feel guilty.

          +1 for the patch if you revert that rename I did of the slow test. My own fault for interfering.

          jyu@cloudera.com Juan Yu added a comment -

          Reverted the rename change, and commented out the S3N extra-large file upload test.
          Will add a test config flag for extra-large file uploads in HADOOP-11128.

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12679481/HADOOP-10714.010.patch
          against trunk revision 73068f6.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 18 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/5027//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/5027//console

          This message is automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          LGTM, +1

          atm Aaron T. Myers added a comment -

          Great, thanks Steve and Juan.

          I'm going to commit this momentarily based on Steve's +1.

          atm Aaron T. Myers added a comment -

          I've just committed this to trunk and branch-2.

          Thanks a lot for the contribution, Juan, and thanks also to Steve and Charlie for the reviews.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-trunk-Commit #6460 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6460/)
          HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call. Contributed by Juan Yu. (atm: rev 6ba52d88ec11444cbac946ffadbc645acd0657de)

          • hadoop-common-project/hadoop-common/src/test/resources/contract/localfs.xml
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOutputStream.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractRename.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/S3AScaleTestBase.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
          • .gitignore
          • hadoop-tools/hadoop-aws/src/test/resources/core-site.xml
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AFileSystemContract.java
          • hadoop-tools/hadoop-aws/src/main/site/markdown/tools/hadoop-aws/index.md
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractMkdirTest.java
          • hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/TestJets3tNativeFileSystemStore.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractRenameTest.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java
          • hadoop-tools/hadoop-aws/pom.xml
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
          • hadoop-tools/hadoop-aws/src/test/resources/contract/s3a.xml
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3Credentials.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BasicAWSCredentialsProvider.java
          jyu@cloudera.com Juan Yu added a comment -

          Thanks David S. Wang for the initial patch, Steve Loughran for review and documentation, Charles Lamb for review and testing, Aaron T. Myers for committing this.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #735 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/735/)
          HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call. Contributed by Juan Yu. (atm: rev 6ba52d88ec11444cbac946ffadbc645acd0657de)

          • hadoop-tools/hadoop-aws/src/main/site/markdown/tools/hadoop-aws/index.md
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
          • hadoop-common-project/hadoop-common/src/test/resources/contract/localfs.xml
          • hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOutputStream.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
          • hadoop-tools/hadoop-aws/pom.xml
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractRenameTest.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/TestJets3tNativeFileSystemStore.java
          • hadoop-tools/hadoop-aws/src/test/resources/core-site.xml
          • hadoop-tools/hadoop-aws/src/test/resources/contract/s3a.xml
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/S3AScaleTestBase.java
          • .gitignore
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3Credentials.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractMkdirTest.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractRename.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AFileSystemContract.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BasicAWSCredentialsProvider.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Hdfs-trunk #1925 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1925/)
          HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call. Contributed by Juan Yu. (atm: rev 6ba52d88ec11444cbac946ffadbc645acd0657de)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/TestJets3tNativeFileSystemStore.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
          • hadoop-tools/hadoop-aws/src/test/resources/contract/s3a.xml
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java
          • hadoop-common-project/hadoop-common/src/test/resources/contract/localfs.xml
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BasicAWSCredentialsProvider.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-tools/hadoop-aws/pom.xml
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3Credentials.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractMkdirTest.java
          • hadoop-tools/hadoop-aws/src/main/site/markdown/tools/hadoop-aws/index.md
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
          • .gitignore
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AFileSystemContract.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOutputStream.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractRename.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/S3AScaleTestBase.java
          • hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractRenameTest.java
          • hadoop-tools/hadoop-aws/src/test/resources/core-site.xml
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1949 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1949/)
          HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call. Contributed by Juan Yu. (atm: rev 6ba52d88ec11444cbac946ffadbc645acd0657de)

          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractMkdirTest.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
          • hadoop-tools/hadoop-aws/pom.xml
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOutputStream.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
          • hadoop-tools/hadoop-aws/src/test/resources/contract/s3a.xml
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractRename.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractRenameTest.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AFileSystemContract.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3Credentials.java
          • hadoop-common-project/hadoop-common/src/test/resources/contract/localfs.xml
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
          • .gitignore
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-tools/hadoop-aws/src/main/site/markdown/tools/hadoop-aws/index.md
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BasicAWSCredentialsProvider.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
          • hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/TestJets3tNativeFileSystemStore.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/S3AScaleTestBase.java
          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
          • hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java
          • hadoop-tools/hadoop-aws/src/test/resources/core-site.xml
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
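As the issue title says, each multi-object delete request sent to S3 may carry at most 1000 keys, so larger deletes have to be split into batches before calling AmazonS3Client.deleteObjects(). A minimal sketch of that chunking in plain Java (DeleteBatcher and partition are illustrative names, not the actual S3AFileSystem code; the AWS SDK call itself is elided so the sketch stays self-contained):

```java
import java.util.ArrayList;
import java.util.List;

public class DeleteBatcher {
    // S3's multi-object delete API rejects requests with more than 1000 keys
    // (it returns a 400 MalformedXML error, as seen in this issue's stack trace).
    static final int MAX_ENTRIES_PER_DELETE = 1000;

    /** Split a list of keys into sublists no larger than maxEntries each. */
    static List<List<String>> partition(List<String> keys, int maxEntries) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += maxEntries) {
            batches.add(keys.subList(i, Math.min(i + maxEntries, keys.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            keys.add("prefix/key-" + i);
        }
        // In real code, each batch would become one DeleteObjectsRequest.
        for (List<String> batch : partition(keys, MAX_ENTRIES_PER_DELETE)) {
            System.out.println(batch.size());
        }
    }
}
```

With 2500 keys this yields three requests of 1000, 1000, and 500 entries, each within the documented limit.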
          cnauroth Chris Nauroth added a comment -

          After this patch, the hadoop-aws site documentation is no longer built. The build expects the documentation source files in src/site instead of src/main/site. I filed a patch on HADOOP-11394 to fix this.


            People

            • Assignee: jyu@cloudera.com Juan Yu
            • Reporter: dsw David S. Wang
            • Votes: 0
            • Watchers: 12