Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.9.0
    • Fix Version/s: 2.8.0, 3.0.0-alpha2
    • Component/s: fs/s3
    • Labels:
      None
    • Target Version/s:

      Description

      If you try to delete the root directory on s3a, you get politely but firmly told you can't:

      2016-03-30 12:01:44,924 INFO  s3a.S3AFileSystem (S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory
      

      The semantics of rm -rf "/" are defined: "delete everything underneath, while preserving the root dir itself".

      1. s3a needs to support this.
      2. This slipped through the FS contract tests in AbstractContractRootDirectoryTest; whether deleting / works or not should be made configurable.
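
      For what it's worth, the two candidate semantics can be sketched as a toy in-memory model (the class and method names below are mine, not anything in Hadoop or S3A):

      ```java
      import java.util.Set;
      import java.util.TreeSet;

      // Toy model of the two delete("/", recursive=true) behaviours under
      // discussion. The "filesystem" is just a sorted set of path strings.
      public class RootDeleteModel {

          // POSIX-style: delete everything underneath, preserve "/" itself,
          // and report success.
          public static boolean posixDelete(Set<String> fs) {
              fs.removeIf(p -> !p.equals("/"));
              return true;
          }

          // HDFS-style: refuse to delete the root; the filesystem is
          // unchanged and the call reports failure.
          public static boolean hdfsDelete(Set<String> fs) {
              return false;
          }

          public static void main(String[] args) {
              Set<String> fs = new TreeSet<>();
              fs.add("/");
              fs.add("/a");
              fs.add("/a/b");
              posixDelete(fs);
              System.out.println(fs); // only "/" remains
          }
      }
      ```

      The point of the model is only that the root entry survives either way; what differs is whether the children go and what the call returns.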
      1. HADOOP-12977-001.patch
        7 kB
        Steve Loughran
      2. HADOOP-12977-002.patch
        18 kB
        Steve Loughran
      3. HADOOP-12977-branch-2-002.patch
        18 kB
        Steve Loughran
      4. HADOOP-12977-branch-2-002.patch
        18 kB
        Steve Loughran
      5. HADOOP-12977-branch-2-003.patch
        18 kB
        Steve Loughran
      6. HADOOP-12977-branch-2-004.patch
        18 kB
        Steve Loughran
      7. HADOOP-12977-branch-2-005.patch
        18 kB
        Steve Loughran
      8. HADOOP-12977-branch-2-006.patch
        18 kB
        Chris Nauroth


          Activity

          stevel@apache.org Steve Loughran added a comment -

          I know I filed this, but I can see that the s3a root dir tests do try to delete root.

          stevel@apache.org Steve Loughran added a comment -

          ... The root directory test testRmRootRecursive() allows the operation to return false, meaning "the root dir wasn't deleted".

          Looking at the FS spec, it says:

          The POSIX model assumes that if the user has the correct permissions to delete everything, they are free to do so (resulting in an empty filesystem).

              if isDir(FS, p) and isRoot(p) and recursive :
                  FS' = ({["/"]}, {}, {}, {})
                  result = True
          

          In contrast, HDFS never permits the deletion of the root of a filesystem; the filesystem can be taken offline and reformatted if an empty filesystem is desired.

              if isDir(FS, p) and isRoot(p) and recursive :
                  FS' = FS
                  result = False
          

          So: the s3a logic follows that of HDFS: you can't do rm -rf /. Yet unlike HDFS, you can't take the fs offline and do a delete.

          1. We need to decide what to do.
          2. The contract tests should require each filesystem to declare which behavior it follows, HDFS or POSIX.
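
          One way the "declare what they do" idea could look in a filesystem's contract definition; the property name below is purely illustrative, not an existing Hadoop contract key:

          ```xml
          <configuration>
            <property>
              <!-- Hypothetical contract option: does delete("/", true) empty
                   the filesystem (POSIX) or refuse and return false (HDFS)? -->
              <name>fs.contract.root-dir-delete-supported</name>
              <value>false</value>
            </property>
          </configuration>
          ```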
          stevel@apache.org Steve Loughran added a comment -

          Reviewing HDFS: HDFS-8983 changed its behavior; it takes a list of protected directories and raises an AccessControlException on a delete, even if you have permissions. I don't know what happens on the root dir, though.

          I propose that S3A is changed to let you clean up the root dir.

          stevel@apache.org Steve Loughran added a comment -

          Patch 001

          - s3a changes to throw `PathAlreadyExistsException` if the path is there; HDFS does that.
          - Noticed the root delete test was saying "nonrecursive", but it was being recursive. Duplicated the test to have recursive and nonrecursive variants.
          - S3A's root tests fail.
          - As does HDFS: it looks like HDFS doesn't permit "rm /" even if "/" is empty.

          stevel@apache.org Steve Loughran added a comment -

          HDFS test failure: need to understand whether this is intentional and, if so, whether it is the correct error message.

          testRmEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory)  Time elapsed: 0.081 sec  <<< ERROR!
          org.apache.hadoop.ipc.RemoteException: `/ is non empty': Directory is not empty
          	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:105)
          	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2784)
          	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1061)
          	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:626)
          	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
          	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:644)
          	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
          	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2339)
          	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2335)
          	at java.security.AccessController.doPrivileged(Native Method)
          	at javax.security.auth.Subject.doAs(Subject.java:422)
          	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1743)
          	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2335)
          
          	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
          	at org.apache.hadoop.ipc.Client.call(Client.java:1443)
          	at org.apache.hadoop.ipc.Client.call(Client.java:1353)
          	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
          	at com.sun.proxy.$Proxy24.delete(Unknown Source)
          	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:559)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:498)
          	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:396)
          	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
          	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
          	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
          	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
          	at com.sun.proxy.$Proxy25.delete(Unknown Source)
          	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1642)
          	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:794)
          	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:791)
          	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
          	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:801)
          	at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive(AbstractContractRootDirectoryTest.java:80)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:498)
          	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
          	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
          	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
          	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
          	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
          	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
          	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
          	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
          
          stevel@apache.org Steve Loughran added a comment -

          S3A:

          
          testRmEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir)  Time elapsed: 0.42 sec  <<< ERROR!
          org.apache.hadoop.fs.PathIsNotEmptyDirectoryException: `Path is a folder: s3a://steve-something/ and it is not an empty directory': Directory is not empty
          	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerDelete(S3AFileSystem.java:1126)
          	at org.apache.hadoop.fs.s3a.S3AFileSystem.delete(S3AFileSystem.java:1094)
          	at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive(AbstractContractRootDirectoryTest.java:80)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:498)
          	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
          	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
          	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
          	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
          	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
          	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
          	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
          	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
          
          
          stevel@apache.org Steve Loughran added a comment -

          While looking at this again, I managed to delete the bucket entirely. Worth knowing that it is possible. For the curious, here is the stack trace:

          testRmNonEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)  Time elapsed: 0.295 sec  <<< ERROR!
          java.io.FileNotFoundException: innerMkdirs on /test: com.amazonaws.services.s3.model.AmazonS3Exception: The specified bucket does not exist (Service: Amazon S3; Status Code: 404; Error Code: NoSuchBucket; Request ID: 090FF7B0739884CD), S3 Extended Request ID: D7uOVeMMQqJ/Xtmz9CHHJGvSj27MSXMLU7sRc+KqAq0uXWr06U5WBKLo2tzUiFvadg1iCeaAV6E=
          	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:130)
          	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:85)
          	at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1180)
          	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1916)
          	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
          	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
          	at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.setup(AbstractContractRootDirectoryTest.java:49)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:498)
          	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
          	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
          	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
          	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
          	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
          	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
          	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
          Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The specified bucket does not exist (Service: Amazon S3; Status Code: 404; Error Code: NoSuchBucket; Request ID: 090FF7B0739884CD)
          	at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
          	at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
          	at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
          	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
          	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
          	at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1472)
          	at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:131)
          	at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:123)
          	at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:139)
          	at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:47)
          	at org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
          	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
          	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
          	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
          	at java.lang.Thread.run(Thread.java:745)
          

          An S3AFileSystem instance will not start up if the bucket is missing. This is the stack trace you see if the bucket is deleted during the lifespan of the FS instance.

          stevel@apache.org Steve Loughran added a comment -

          Patch 002

          1. FS spec docs cover what POSIX does (deletes / if you have permissions), what HDFS does (deletes it if you don't have any protected dirs), and what other implementations MAY do.
          2. Add root tests for the various outcomes.

          S3A still ignores delete / and I think it should remain so, because after having accidentally deleted a bucket while working on this, I know how hard it is to recreate one.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          -1 patch 0m 7s HADOOP-12977 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



          Subsystem Report/Notes
          JIRA Issue HADOOP-12977
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12830707/HADOOP-12977-002.patch
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10628/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          Patch for branch-2; it tweaks the S3 test, which no longer exists.

          liuml07 Mingliang Liu added a comment -

          I like this patch, especially the FS spec improvements. +1 after the following minor comments are considered/addressed. Thanks.

          AbstractContractRootDirectoryTest.java
          88	    assertEquals("/ not empty", 0, children.length);
          89	    if (children.length > 0) {
          90	      StringBuilder error = new StringBuilder();
          91	      error.append("Deletion of child entries failed, still have")
          92	          .append(children.length)
          93	          .append('\n');
          94	      for (FileStatus child : children) {
          95	        error.append("  ").append(child.getPath()).append('\n');
          96	      }
          97	      fail(error.toString());
          98	    }
          

          If children.length > 0 (non-empty), the assertEquals on line 88 will simply throw, and the following code will be suppressed?
          Another possible nit: when appending a new line, System.lineSeparator() is preferred to '\n'.

          Is it possible to delete a path like "//"? I'm not sure about this.
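
          The fix being suggested could look like this sketch; buildChildListingError is a hypothetical helper, and plain path strings stand in for the FileStatus[] listing. The key change is dropping the assert so the detailed failure message is actually built:

          ```java
          // Sketch only, not the patch code: remove the assertEquals that
          // fires first, and let the diagnostic dump report the leftover
          // children using the platform line separator.
          public class ChildListingCheck {

              static String buildChildListingError(String[] childPaths) {
                  StringBuilder error = new StringBuilder();
                  error.append("Deletion of child entries failed, still have ")
                      .append(childPaths.length)
                      .append(System.lineSeparator());
                  for (String child : childPaths) {
                      error.append("  ").append(child)
                          .append(System.lineSeparator());
                  }
                  return error.toString();
              }

              public static void main(String[] args) {
                  String[] leftover = {"/test/a", "/test/b"};
                  if (leftover.length > 0) {
                      // In the contract test this message would go to fail(...).
                      System.out.println(buildChildListingError(leftover));
                  }
              }
          }
          ```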

          stevel@apache.org Steve Loughran added a comment -

          Ooh, good point about "//"; I'll add a test.
          Regarding the assert, yes, I'll cut it. I added the second bit in to dump what's going on, but clearly never hit that codepath again (I think it was an S3 list inconsistency).

          cnauroth Chris Nauroth added a comment -

          I'm not sure a test for "//" adds much value, considering that when the path string gets passed through Path / URI, the consecutive '/' characters will get normalized down to just one '/'. There is no harm in testing it, but I think it just degenerates to the same thing as testing "/". This would really be more of a unit test of Path behavior than FileSystem behavior. Actually, I just checked and that's covered by a unit test in TestPath#testNormalize.

          stevel@apache.org Steve Loughran added a comment -

          Patch 003: rebased onto branch-2; addresses Mingliang Liu's comments.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          -1 patch 0m 7s HADOOP-12977 does not apply to branch-2. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



          Subsystem Report/Notes
          JIRA Issue HADOOP-12977
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12831746/HADOOP-12977-branch-2-003.patch
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10669/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          Patch 004: retrying the rebased patch.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          0 mvndep 1m 5s Maven dependency ordering for branch
          +1 mvninstall 6m 38s branch-2 passed
          +1 compile 5m 31s branch-2 passed with JDK v1.8.0_101
          +1 compile 6m 31s branch-2 passed with JDK v1.7.0_111
          +1 checkstyle 1m 24s branch-2 passed
          +1 mvnsite 1m 24s branch-2 passed
          +1 mvneclipse 0m 32s branch-2 passed
          +1 findbugs 2m 16s branch-2 passed
          +1 javadoc 1m 2s branch-2 passed with JDK v1.8.0_101
          +1 javadoc 1m 17s branch-2 passed with JDK v1.7.0_111
          0 mvndep 0m 16s Maven dependency ordering for patch
          +1 mvninstall 1m 5s the patch passed
          +1 compile 5m 28s the patch passed with JDK v1.8.0_101
          +1 javac 5m 28s the patch passed
          +1 compile 6m 34s the patch passed with JDK v1.7.0_111
          +1 javac 6m 34s the patch passed
          -0 checkstyle 1m 28s root: The patch generated 1 new + 52 unchanged - 0 fixed = 53 total (was 52)
          +1 mvnsite 1m 28s the patch passed
          +1 mvneclipse 0m 39s the patch passed
          -1 whitespace 0m 0s The patch has 48 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 findbugs 2m 50s the patch passed
          -1 javadoc 0m 19s hadoop-tools_hadoop-aws-jdk1.8.0_101 with JDK v1.8.0_101 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 javadoc 1m 24s the patch passed with JDK v1.7.0_111
          +1 unit 8m 24s hadoop-common in the patch passed with JDK v1.7.0_111.
          +1 unit 0m 29s hadoop-aws in the patch passed with JDK v1.7.0_111.
          +1 asflicense 0m 28s The patch does not generate ASF License warnings.
          93m 41s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:b59b8b7
          JIRA Issue HADOOP-12977
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12831938/HADOOP-12977-branch-2-004.patch
          Optional Tests asflicense mvnsite compile javac javadoc mvninstall unit findbugs checkstyle
          uname Linux 70b300555c84 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2 / 5f1432d
          Default Java 1.7.0_111
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_101 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/10690/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/10690/artifact/patchprocess/whitespace-eol.txt
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/10690/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdk1.8.0_101.txt
          JDK v1.7.0_111 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10690/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10690/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          Patch 005:

          Addresses the Yetus warnings, and has deleteUnnecessaryFakeDirectories() log and swallow the InvalidRequestException: the condition shouldn't arise, and if it does, it does not interfere with the goal of deleting the directories.
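          The "log & swallow" pattern described here, sketched with hypothetical names (an IllegalArgumentException and java.util.logging stand in for the AWS InvalidRequestException and the S3A logger; this is not the actual S3AFileSystem code): a best-effort cleanup step whose failure is recorded but never allowed to abort the surrounding delete.

          ```java
          import java.util.logging.Level;
          import java.util.logging.Logger;

          public class BestEffortCleanup {
              private static final Logger LOG =
                  Logger.getLogger(BestEffortCleanup.class.getName());

              // Run the cleanup step; if the unexpected exception surfaces,
              // log it and continue, since it does not affect the goal of
              // the delete operation itself.
              static void deleteFakeDirectories(Runnable cleanup) {
                  try {
                      cleanup.run();
                  } catch (IllegalArgumentException e) {
                      LOG.log(Level.FINE,
                              "ignoring failure to clean up fake directories", e);
                  }
              }

              public static void main(String[] args) {
                  deleteFakeDirectories(() -> {
                      throw new IllegalArgumentException("invalid request");
                  });
                  System.out.println("delete completed despite cleanup failure");
              }
          }
          ```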

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 18s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          0 mvndep 1m 2s Maven dependency ordering for branch
          +1 mvninstall 6m 39s branch-2 passed
          +1 compile 5m 46s branch-2 passed with JDK v1.8.0_101
          +1 compile 6m 37s branch-2 passed with JDK v1.7.0_111
          +1 checkstyle 1m 29s branch-2 passed
          +1 mvnsite 1m 23s branch-2 passed
          +1 mvneclipse 0m 31s branch-2 passed
          +1 findbugs 2m 20s branch-2 passed
          +1 javadoc 1m 2s branch-2 passed with JDK v1.8.0_101
          +1 javadoc 1m 17s branch-2 passed with JDK v1.7.0_111
          0 mvndep 0m 16s Maven dependency ordering for patch
          +1 mvninstall 1m 6s the patch passed
          +1 compile 6m 10s the patch passed with JDK v1.8.0_101
          +1 javac 6m 10s the patch passed
          +1 compile 7m 2s the patch passed with JDK v1.7.0_111
          +1 javac 7m 2s the patch passed
          +1 checkstyle 1m 32s the patch passed
          +1 mvnsite 1m 29s the patch passed
          +1 mvneclipse 0m 38s the patch passed
          -1 whitespace 0m 0s The patch has 48 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 findbugs 2m 52s the patch passed
          +1 javadoc 1m 9s the patch passed with JDK v1.8.0_101
          +1 javadoc 1m 22s the patch passed with JDK v1.7.0_111
          +1 unit 7m 56s hadoop-common in the patch passed with JDK v1.7.0_111.
          +1 unit 0m 28s hadoop-aws in the patch passed with JDK v1.7.0_111.
          +1 asflicense 0m 29s The patch does not generate ASF License warnings.
          94m 16s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:b59b8b7
          JIRA Issue HADOOP-12977
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12831964/HADOOP-12977-branch-2-005.patch
          Optional Tests asflicense mvnsite compile javac javadoc mvninstall unit findbugs checkstyle
          uname Linux 9a8957496898 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2 / 69c1ab4
          Default Java 1.7.0_111
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_101 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111
          findbugs v3.0.0
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/10692/artifact/patchprocess/whitespace-eol.txt
          JDK v1.7.0_111 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10692/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10692/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          Steve, thank you for patch 005. This looks good to me. I verified that all subclasses of AbstractContractRootDirectoryTest are passing and all tests in hadoop-aws are passing.

          I'm attaching revision 006 with trivial changes to implement Mingliang's suggestion of using System.lineSeparator() instead of '\n' and cleaning up a whitespace warning.

          I am +1 for revision 006 on branch-2. This will not apply to trunk or branch-2.8 though, so we'd need separate patch files for those.

          I did see TestSwiftContractRootDir#testRmNonEmptyRootDirNonRecursive fail one time on the new assertIsFile call. (See below.) I re-ran the test many times and it did not reproduce. I'm going to write this off as me doing something wrong, or possibly eventual consistency.

          Running org.apache.hadoop.fs.swift.contract.TestSwiftContractRootDir
          Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 55.525 sec <<< FAILURE! - in org.apache.hadoop.fs.swift.contract.TestSwiftContractRootDir
          testRmNonEmptyRootDirNonRecursive(org.apache.hadoop.fs.swift.contract.TestSwiftContractRootDir)  Time elapsed: 5.685 sec  <<< FAILURE!
          java.lang.AssertionError: File claims to be a directory /testRmNonEmptyRootDirNonRecursive  SwiftFileStatus{ path=swift://cnauroth-test-swift.rackspace/testRmNonEmptyRootDirNonRecursive; isDirectory=true; length=0; blocksize=33554432; modification_time=1475779099000}
          	at org.junit.Assert.fail(Assert.java:88)
          	at org.junit.Assert.assertTrue(Assert.java:41)
          	at org.junit.Assert.assertFalse(Assert.java:64)
          	at org.apache.hadoop.fs.contract.ContractTestUtils.assertIsFile(ContractTestUtils.java:699)
          	at org.apache.hadoop.fs.contract.ContractTestUtils.assertIsFile(ContractTestUtils.java:688)
          	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertIsFile(AbstractFSContractTestBase.java:316)
          	at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmNonEmptyRootDirNonRecursive(AbstractContractRootDirectoryTest.java:121)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:606)
          	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
          	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
          	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
          	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
          	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
          	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
          	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
          	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
          
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 18s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 6m 31s branch-2 passed
          +1 compile 5m 30s branch-2 passed with JDK v1.8.0_101
          +1 compile 6m 31s branch-2 passed with JDK v1.7.0_111
          +1 checkstyle 1m 25s branch-2 passed
          +1 mvnsite 1m 23s branch-2 passed
          +1 mvneclipse 0m 31s branch-2 passed
          +1 findbugs 2m 16s branch-2 passed
          +1 javadoc 1m 1s branch-2 passed with JDK v1.8.0_101
          +1 javadoc 1m 17s branch-2 passed with JDK v1.7.0_111
          0 mvndep 0m 16s Maven dependency ordering for patch
          +1 mvninstall 1m 5s the patch passed
          +1 compile 5m 28s the patch passed with JDK v1.8.0_101
          +1 javac 5m 28s the patch passed
          +1 compile 6m 34s the patch passed with JDK v1.7.0_111
          +1 javac 6m 34s the patch passed
          +1 checkstyle 1m 27s the patch passed
          +1 mvnsite 1m 29s the patch passed
          +1 mvneclipse 0m 39s the patch passed
          -1 whitespace 0m 0s The patch has 47 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 findbugs 2m 49s the patch passed
          +1 javadoc 1m 9s the patch passed with JDK v1.8.0_101
          +1 javadoc 1m 22s the patch passed with JDK v1.7.0_111
          +1 unit 9m 5s hadoop-common in the patch passed with JDK v1.7.0_111.
          +1 unit 0m 33s hadoop-aws in the patch passed with JDK v1.7.0_111.
          +1 asflicense 0m 27s The patch does not generate ASF License warnings.
          93m 27s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:b59b8b7
          JIRA Issue HADOOP-12977
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12832020/HADOOP-12977-branch-2-006.patch
          Optional Tests asflicense mvnsite compile javac javadoc mvninstall unit findbugs checkstyle
          uname Linux 9030876642dc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2 / ecccb11
          Default Java 1.7.0_111
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_101 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111
          findbugs v3.0.0
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/10694/artifact/patchprocess/whitespace-eol.txt
          JDK v1.7.0_111 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10694/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10694/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          The remaining whitespace warnings are not relevant to this patch.

          stevel@apache.org Steve Loughran added a comment -

          Patch applied to trunk and branch-2; the trunk patch was simply done by skipping the s3 test; git lets you do this with ease

          git apply -3 --verbose --whitespace=fix --exclude hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3/ITestS3ContractRootDir.java  HADOOP-12977-branch-2-006.patch 
          

          Branch-2.8 is another matter; I will do that quickly and attach under here again

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10563 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10563/)
          HADOOP-12977 s3a to handle delete("/", true) robustly. Contributed by (stevel: rev ebd4f39a393e5fa9a810c6a36b749549229a53df)

          • (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractRootDirectoryTest.java
          • (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java
          • (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextURIBase.java
          • (edit) hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
          stevel@apache.org Steve Loughran added a comment -

          branch-2.8 was addressed by cherry-picking the HADOOP-13164 "faster delete of fake directories" patch; this delivers the speedup there and eliminates the diff between the two branches.


            People

            • Assignee: stevel@apache.org Steve Loughran
            • Reporter: stevel@apache.org Steve Loughran
            • Votes: 0
            • Watchers: 8

            Dates

            • Created:
            • Updated:
            • Resolved:

            Development