  Hadoop Common / HADOOP-12815

TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and TestS3ContractRootDir#testRmRootRecursive fail on branch-2.

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Release Note: for branch-2 only

      Description

      TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and TestS3ContractRootDir#testRmRootRecursive fail on branch-2. The tests pass on trunk.


          Activity

          cnauroth Chris Nauroth added a comment -

          This patch was superseded by HADOOP-12801, which annotated the tests with @Ignore, and HADOOP-13239, which deprecated s3://.
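
          For reference, disabling the two cases with @Ignore in the contract test subclass looks roughly like the fragment below. This is an illustrative sketch, not the exact change committed in HADOOP-12801, and the rest of the class (the contract binding) is omitted here.

          import org.junit.Ignore;
          import org.junit.Test;

          // Fragment of a TestS3ContractRootDir subclass body: override the two
          // failing cases and mark them ignored so the rest of the suite still runs.
          @Override
          @Test
          @Ignore("HADOOP-12815: fails against s3:// on branch-2")
          public void testRmEmptyRootDirNonRecursive() throws Throwable {
            super.testRmEmptyRootDirNonRecursive();
          }

          @Override
          @Test
          @Ignore("HADOOP-12815: fails against s3:// on branch-2")
          public void testRmRootRecursive() throws Throwable {
            super.testRmRootRecursive();
          }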

          raviprak Ravi Prakash added a comment -

          Thanks for your review, Chris! I'm sorry I haven't had a chance to look at the patch yet, but will do soon.

          cnauroth Chris Nauroth added a comment -

          Hello Matthew Paduano. Thank you for your diligence in tracking down the history of these test failures. Unfortunately, the attached patch is a larger change to S3 than I'd like to commit, considering the poor recent track record of the S3 code and the plan to deprecate it in HADOOP-12709. Unless Steve Loughran has a different opinion or some further background on why HADOOP-9258 wasn't committed to branch-2, I'd instead propose we simply delete the failing tests (essentially declaring defeat) as a small step towards the deprecation/removal.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 22s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 7m 48s branch-2 passed
          +1 compile 0m 14s branch-2 passed with JDK v1.8.0_72
          +1 compile 0m 13s branch-2 passed with JDK v1.7.0_95
          +1 checkstyle 0m 16s branch-2 passed
          +1 mvnsite 0m 20s branch-2 passed
          +1 mvneclipse 0m 59s branch-2 passed
          +1 findbugs 0m 33s branch-2 passed
          +1 javadoc 0m 15s branch-2 passed with JDK v1.8.0_72
          +1 javadoc 0m 14s branch-2 passed with JDK v1.7.0_95
          +1 mvninstall 0m 13s the patch passed
          +1 compile 0m 10s the patch passed with JDK v1.8.0_72
          +1 javac 0m 10s the patch passed
          +1 compile 0m 11s the patch passed with JDK v1.7.0_95
          +1 javac 0m 11s the patch passed
          +1 checkstyle 0m 11s the patch passed
          +1 mvnsite 0m 16s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 0m 40s the patch passed
          +1 javadoc 0m 14s the patch passed with JDK v1.8.0_72
          +1 javadoc 0m 14s the patch passed with JDK v1.7.0_95
          +1 unit 0m 12s hadoop-aws in the patch passed with JDK v1.8.0_72.
          +1 unit 0m 12s hadoop-aws in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 19s Patch does not generate ASF License warnings.
          15m 21s



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:babe025
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12790548/HADOOP-12815.branch-2.01.patch
          JIRA Issue HADOOP-12815
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 3d6a971f8e01 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2 / bd0f508
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_72 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8747/testReport/
          modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8747/console
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          -1 patch 0m 5s HADOOP-12815 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



          Subsystem Report/Notes
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12789617/HADOOP-12815.01.patch
          JIRA Issue HADOOP-12815
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8706/console
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          mattpaduano Matthew Paduano added a comment -

          I tried copying the files from commit 'a10055cf' on trunk into the appropriate places in branch-2.
          The native S3 tests/mods cause a number of test failures (I did not even look at them all).

          But the changes to S3FileSystem and Jets3tFileSystemStore work and fix the issue
          in branch-2. The patch I attached passes the S3 tests when applied to branch-2:

          -------------------------------------------------------
           T E S T S
          -------------------------------------------------------
          Running org.apache.hadoop.fs.contract.s3.TestS3ContractDelete
          Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.396 sec - in org.apache.hadoop.fs.contract.s3.TestS3ContractDelete
          Running org.apache.hadoop.fs.contract.s3.TestS3ContractCreate
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 17.72 sec - in org.apache.hadoop.fs.contract.s3.TestS3ContractCreate
          Running org.apache.hadoop.fs.contract.s3.TestS3ContractRename
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.866 sec - in org.apache.hadoop.fs.contract.s3.TestS3ContractRename
          Running org.apache.hadoop.fs.contract.s3.TestS3ContractRootDir
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.763 sec - in org.apache.hadoop.fs.contract.s3.TestS3ContractRootDir
          Running org.apache.hadoop.fs.contract.s3.TestS3ContractMkdir
          Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.448 sec - in org.apache.hadoop.fs.contract.s3.TestS3ContractMkdir
          Running org.apache.hadoop.fs.contract.s3.TestS3ContractSeek
          Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.839 sec - in org.apache.hadoop.fs.contract.s3.TestS3ContractSeek
          Running org.apache.hadoop.fs.contract.s3.TestS3ContractOpen
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.038 sec - in org.apache.hadoop.fs.contract.s3.TestS3ContractOpen
          
          Results :
          
          Tests run: 47, Failures: 0, Errors: 0, Skipped: 1
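
          For context, these suites are the generic filesystem contract tests bound to the s3:// connector. A binding has roughly the shape below; the contract class name is an assumption for illustration and may not match the branch-2 sources exactly.

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest;
          import org.apache.hadoop.fs.contract.AbstractFSContract;

          public class TestS3ContractRootDir extends AbstractContractRootDirectoryTest {
            @Override
            protected AbstractFSContract createContract(Configuration conf) {
              // Bind the generic root-directory contract cases to the s3:// filesystem;
              // bucket and credential settings come from the module's test configuration.
              return new S3Contract(conf); // contract class name assumed for illustration
            }
          }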
          
          mattpaduano Matthew Paduano added a comment -

          I tracked the issue to the following commit:

          git show a10055cf

          commit a10055cf6de058d10dec54705a6de746ecca111f
          Author: Steve Loughran <stevel@apache.org>
          Date:   Mon Mar 25 13:12:27 2013 +0000
          
              HADOOP-9258 Add stricter tests to FileSystemContractTestBase (includes fixes
              for production code HADOOP-9261 & HADOOP-9265 and test enhancements
              HADOOP-9228, HADOOP-9227 & HADOOP-9259)
          

          This is a fairly big patch... I'm not sure whether we want to apply the entire thing to resolve the failing
          tests (or whether applying the entire thing even works!). Maybe @SteveL can comment on why
          a10055cf wasn't pushed to branch-2.

          I did track the failing tests down to the following patch/diff and tested it on its own.
          This patch allows the tests identified in this ticket to pass:

          diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/Jets3tFileSystemStore.java b/had
          index e5387f3..a186c14 100644
          --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/Jets3tFileSystemStore.java
          +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/Jets3tFileSystemStore.java
          @@ -138,9 +138,15 @@ public void deleteBlock(Block block) throws IOException {
           
             @Override
             public boolean inodeExists(Path path) throws IOException {
          -    InputStream in = get(pathToKey(path), true);
          +    String key = pathToKey(path);
          +    InputStream in = get(key, true);
               if (in == null) {
          -      return false;
          +      if (isRoot(key)) {
          +        storeINode(path, INode.DIRECTORY_INODE);
          +        return true;
          +      } else {
          +        return false;
          +      }
               }
               in.close();
               return true;
          @@ -218,7 +224,13 @@ private void checkMetadata(S3Object object) throws S3FileSystemException,
           
             @Override
             public INode retrieveINode(Path path) throws IOException {
          -    return INode.deserialize(get(pathToKey(path), true));
          +    String key = pathToKey(path);
          +    InputStream in = get(key, true);
          +    if (in == null && isRoot(key)) {
          +      storeINode(path, INode.DIRECTORY_INODE);
          +      return INode.DIRECTORY_INODE;
          +    }
          +    return INode.deserialize(in);
             }
           
             @Override
          @@ -377,6 +389,10 @@ private String blockToKey(Block block) {
               return blockToKey(block.getId());
             }
           
          +  private boolean isRoot(String key) {
          +    return key.isEmpty() || key.equals("/");
          +  }
          +
             @Override
             public void purge() throws IOException {
               try {
          

          A lighter-touch alternative would be to submit the above as the patch for this ticket.
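
          For readability, this is roughly what the two patched methods and the new helper look like once the diff above is applied (reconstructed directly from the diff; the attached patch is authoritative):

          @Override
          public boolean inodeExists(Path path) throws IOException {
            String key = pathToKey(path);
            InputStream in = get(key, true);
            if (in == null) {
              if (isRoot(key)) {
                // The root directory always logically exists; lazily materialize its inode.
                storeINode(path, INode.DIRECTORY_INODE);
                return true;
              } else {
                return false;
              }
            }
            in.close();
            return true;
          }

          @Override
          public INode retrieveINode(Path path) throws IOException {
            String key = pathToKey(path);
            InputStream in = get(key, true);
            if (in == null && isRoot(key)) {
              // Same lazy materialization when the root inode is requested directly.
              storeINode(path, INode.DIRECTORY_INODE);
              return INode.DIRECTORY_INODE;
            }
            return INode.deserialize(in);
          }

          private boolean isRoot(String key) {
            return key.isEmpty() || key.equals("/");
          }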

          cnauroth Chris Nauroth added a comment -

          Matthew Paduano, thank you.

          mattpaduano Matthew Paduano added a comment -

          I am able to reproduce these errors/stacktraces.
          I will investigate and update this ticket.

          cnauroth Chris Nauroth added a comment -
          Tests run: 6, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 6.288 sec <<< FAILURE! - in org.apache.hadoop.fs.contract.s3.TestS3ContractRootDir
          testRmEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3.TestS3ContractRootDir)  Time elapsed: 1.984 sec  <<< ERROR!
          java.io.FileNotFoundException: /: No such file or directory.
          	at org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:356)
          	at org.apache.hadoop.fs.contract.ContractTestUtils.assertIsDirectory(ContractTestUtils.java:464)
          	at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive(AbstractContractRootDirectoryTest.java:67)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:606)
          	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
          	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
          	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
          	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
          	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
          	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
          	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
          
          testRmRootRecursive(org.apache.hadoop.fs.contract.s3.TestS3ContractRootDir)  Time elapsed: 1.029 sec  <<< ERROR!
          java.io.FileNotFoundException: /: No such file or directory.
          	at org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:356)
          	at org.apache.hadoop.fs.contract.ContractTestUtils.assertIsDirectory(ContractTestUtils.java:464)
          	at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmRootRecursive(AbstractContractRootDirectoryTest.java:101)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:606)
          	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
          	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
          	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
          	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
          	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
          	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
          	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
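
          The stack traces are consistent with getFileStatus finding no stored inode for the root path. A rough sketch of the kind of logic involved (illustrative only, not necessarily the exact branch-2 source of S3FileSystem.getFileStatus):

          @Override
          public FileStatus getFileStatus(Path f) throws IOException {
            INode inode = store.retrieveINode(makeAbsolute(f));
            if (inode == null) {
              // With no inode ever written for "/", this branch is reached even for
              // the root directory, producing the FileNotFoundException seen above.
              throw new FileNotFoundException(f + ": No such file or directory.");
            }
            return new S3FileStatus(f.makeQualified(this), inode); // result construction shown schematically
          }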
          

            People

            • Assignee: mattpaduano Matthew Paduano
            • Reporter: cnauroth Chris Nauroth
            • Votes: 0
            • Watchers: 3
