Hadoop Common / HADOOP-771

Namenode should return error when trying to delete non-empty directory

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.8.0
    • Fix Version/s: 0.17.0
    • Component/s: None
    • Labels:
      None
    • Environment:

      all

    • Hadoop Flags:
      Incompatible change
    • Release Note:
      This patch adds a new API to the file system, delete(path, boolean), deprecating the previous delete(path). The new API deletes recursively only if the boolean is set to true. If path is a file, the boolean value does not matter; if path is a directory and the directory is non-empty, delete(path, false) throws an exception while delete(path, true) deletes all files recursively.
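The delete(path, boolean) contract above can be sketched as follows. This is an illustrative sketch against the local filesystem with java.nio.file, not Hadoop's actual FileSystem implementation; the class name DeleteSketch is hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DeleteSketch {
    // Deletes 'path'. For a file, 'recursive' is ignored. For a directory,
    // recursive=false throws if the directory is non-empty; recursive=true
    // removes the whole subtree, children before parents.
    public static boolean delete(Path path, boolean recursive) throws IOException {
        if (!Files.exists(path)) {
            return false; // nothing to do
        }
        if (Files.isDirectory(path)) {
            List<Path> children;
            try (Stream<Path> s = Files.list(path)) {
                children = s.collect(Collectors.toList());
            }
            if (!children.isEmpty() && !recursive) {
                throw new IOException(path + " is non-empty; call delete(path, true) to remove it");
            }
            for (Path child : children) {
                delete(child, true); // recurse before removing the parent
            }
        }
        Files.delete(path); // delete the file or the now-empty directory
        return true;
    }
}
```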

      Description

      Currently, the namenode.delete() method allows recursive deletion of a directory. That is, even a non-empty directory can be deleted using namenode.delete(). To avoid costly programmer errors, the namenode should not remove non-empty directories in this method. Recursively deleting a directory should either be performed with listPaths() followed by a delete() for every path, or with a specific namenode method such as deleteRecursive().
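The first alternative the description mentions, listPaths() followed by a delete() for every path, amounts to the client walking the tree itself. A minimal sketch, again using java.nio.file against the local filesystem rather than the Hadoop client (the helper name deleteTree is hypothetical); each Files.delete here stands in for one non-recursive delete call to the namenode:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class ClientSideDelete {
    // Client-driven recursive deletion: list the children, delete each one
    // (recursing into subdirectories), then delete the now-empty directory
    // with a plain non-recursive delete.
    public static void deleteTree(Path path) throws IOException {
        if (Files.isDirectory(path)) {
            Path[] children;
            try (Stream<Path> s = Files.list(path)) {
                children = s.toArray(Path[]::new);
            }
            for (Path child : children) {
                deleteTree(child);
            }
        }
        Files.delete(path); // one delete per entry, children before parents
    }
}
```

Note the cost of this approach: one delete call per entry, which is exactly the namenode RPC-traffic concern raised in the comment thread.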

      1. patch_771_1.patch
        12 kB
        Mahadev konar
      2. Hadoop-771_2.patch
        14 kB
        Mahadev konar
      3. Hadoop-771_3.patch
        14 kB
        Mahadev konar
      4. Hadoop-771_4.patch
        15 kB
        Mahadev konar
      5. Hadoop-771_5.patch
        93 kB
        Mahadev konar
      6. Hadoop-771_6.patch
        95 kB
        Mahadev konar
      7. Hadoop-771_7.patch
        95 kB
        Mahadev konar
      8. Hadoop-771_8.patch
        95 kB
        Mahadev konar

        Issue Links

          Activity

          Robert Chansler added a comment -

          Noted as incompatible in changes.txt

          Tsz Wo Nicholas Sze added a comment -

          Created HADOOP-3202 for deprecating FileUtil.fullyDelete(FileSystem fs, Path dir).

          Doug Cutting added a comment -

          > Should we deprecate FileUtil.fullyDelete(FileSystem fs, Path dir)?

          +1. Not only is it redundant, but it still uses the pre-URI style of passing both a FileSystem and a Path, making it doubly-redundant too, in addition, as well and so on.

          Tsz Wo Nicholas Sze added a comment -

          Should we deprecate FileUtil.fullyDelete(FileSystem fs, Path dir)? Is it the same as FileSystem.delete(path, recursive=true)?

          Hudson added a comment -

          Integrated in Hadoop-trunk #426 (See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/426/ )

          dhruba borthakur added a comment -

          I just committed this. Thanks Mahadev.

          Mahadev konar added a comment -

          the javac warnings from the patch are due only to the declaration of the deprecated api, not its use.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12377489/Hadoop-771_8.patch
          against trunk revision 619744.

          @author +1. The patch does not contain any @author tags.

          tests included +1. The patch appears to include 216 new or modified tests.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac -1. The applied patch generated 598 javac compiler warnings (more than the trunk's current 589 warnings).

          release audit +1. The applied patch does not generate any new release audit warnings.

          findbugs +1. The patch does not introduce any new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1923/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1923/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1923/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1923/console

          This message is automatically generated.

          Mahadev konar added a comment -

          the last patch had a conflict... resolving and uploading a new one.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12377487/Hadoop-771_7.patch
          against trunk revision 619744.

          @author +1. The patch does not contain any @author tags.

          tests included +1. The patch appears to include 216 new or modified tests.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new javac compiler warnings.

          release audit +1. The applied patch does not generate any new release audit warnings.

          findbugs -1. The patch appears to cause Findbugs to fail.

          core tests -1. The patch failed core unit tests.

          contrib tests -1. The patch failed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1922/testReport/
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1922/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1922/console

          This message is automatically generated.

          Mahadev konar added a comment -

          uploading the patch that applies to trunk.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12377386/Hadoop-771_6.patch
          against trunk revision 619744.

          @author +1. The patch does not contain any @author tags.

          tests included +1. The patch appears to include 216 new or modified tests.

          patch -1. The patch command could not apply the patch.

          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1921/console

          This message is automatically generated.

          Mahadev konar added a comment -

          trying hudson again....

          Mahadev konar added a comment -

          attaching a new patch removing some more javac warnings.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12377313/Hadoop-771_5.patch
          against trunk revision 619744.

          @author +1. The patch does not contain any @author tags.

          tests included +1. The patch appears to include 210 new or modified tests.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac -1. The applied patch generated 605 javac compiler warnings (more than the trunk's current 592 warnings).

          release audit +1. The applied patch does not generate any new release audit warnings.

          findbugs +1. The patch does not introduce any new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1912/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1912/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1912/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1912/console

          This message is automatically generated.

          Mahadev konar added a comment -

          trying hudson!!

          Mahadev konar added a comment -

          updated patch getting rid of the javac warnings – replacing all the uses of delete(path) with delete(path, true).

          Mahadev konar added a comment -

          forgot to remove delete(path) uses!

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12377190/Hadoop-771_4.patch
          against trunk revision 619744.

          @author +1. The patch does not contain any @author tags.

          tests included +1. The patch appears to include 3 new or modified tests.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac -1. The applied patch generated 808 javac compiler warnings (more than the trunk's current 590 warnings).

          release audit +1. The applied patch does not generate any new release audit warnings.

          findbugs +1. The patch does not introduce any new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1906/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1906/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1906/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1906/console

          This message is automatically generated.

          Mahadev konar added a comment -

          retrying hudson.

          Mahadev konar added a comment -

          uploading a new patch with dhruba's comments incorporated.

          dhruba borthakur added a comment -

          I think we should deprecate NameNode.delete(path) and DFSClient.delete(Path) as well. The callers have to use the new API that explicitly sets the "recursive" flag. We should remove FSNamesystem.delete(Path) altogether. Otherwise code looks good. +1.

          Mahadev konar added a comment -

          this patch fixes dhruba's comments.

          dhruba borthakur added a comment -

          We can deprecate all the 'delete(name)' APIs.

          The description of the new public API could better describe what happens when the recursive flag is set and the pathname is a file.

          Mahadev konar added a comment -

          here is a more recent patch (applies to trunk) which fixes recursive deletes for s3 and cleans up some debug statements.

          Mahadev konar added a comment -

          i meant delete(path, boolean) (not delte(path, boolean)).

          Mahadev konar added a comment -

          this patch creates a new filesystem api delte(path, boolean) which throws an exception in case path is a directory, is non-empty and the boolean is false. If the boolean is set to true it behaves the same as the previous delete. I have also deprecated delete(path); applications should use delete(path, boolean) instead. I have made changes to all the other file systems to implement them correctly.

          Mahadev konar added a comment -

          so is implementing a deletion with recursive flag acceptable to all? we still keep the api with just delete for backwards compatibility.

          Sameer Paranjpye added a comment -

          > If we get rid of recursive deletion then we'll still have a method somewhere that implements recursive deletion and folks will still call it and things will still get unintentionally
          > deleted. I don't think that's the solution here.

          I agree, even if we prohibit recursive deletion at the Namenode it'll get implemented in a library and called nonetheless. One might be able to abort a deletion in flight, but I don't believe that a user could react quickly enough to a mistaken deletion. The increase in the number of Namenode RPCs is an additional problem.

          Hairong's suggestion for a boolean flag specifying recursion would be a good way to go, it would require programmers to be explicit about recursion and avoid additional RPC traffic.

          Milind Bhandarkar added a comment -

          >> On the other hand, won't an RPC per file cause a lot of namenode traffic?

          yes, it would. But only when user intentionally deletes all his/her data, which I believe happens rarely.

          I think, DFS is optimized for a small number of large files, rather than large number of small files, so in practice, this erasure of entire tree should not happen frequently.

          Doug Cutting added a comment -

          > A single RPC to delete an entire directory structure is the main problem

          On the other hand, won't an RPC per file cause a lot of namenode traffic?

          Hairong Kuang added a comment -

          Currently FileSystem APIs only support "rmr". To avoid programmatic error, should we change the FileSystem interface so that it distinguishes between rm and rmr? Instead of having "void delete(Path)", we could have "void delete(Path, boolean)". The boolean flag indicates if the deletion is recursive or not.

          Milind Bhandarkar added a comment -

          I agree. The first three fixes should be given a higher priority. But eventually allowing recursive deletion with a single RPC should be prevented. It will also allow us to move the INode inner class outside of FSDirectory, and save on precious memory.

          Doug Cutting added a comment -

          There were a number of things that could have prevented this. First, I'm told this was triggered by a buggy script parser that parsed 'rm<space><space>foo' as 'rm(""); foo();'. Fixing that would avoid this. Second, the script execution engine should have used Hadoop's Trash feature, so that deletions go through an intermediate directory. Third, rm("") shouldn't delete one's home directory but instead throw an exception. And, finally, fourth, if we distinguished between 'rm' and 'rmr', this might not have happened, but there's a good chance that the script would have instead contained 'rmr<space><space>foo', and it still would have happened. So I think distinguishing between recursive and non-recursive file deletion is the weakest defense against this. It might not be a bad thing. Hadoop's own FsShell already makes this distinction. Perhaps Pig's scripting language should too, and perhaps Hadoop's APIs should even make this distinction. But I'd give the first three fixes much higher priority.

          Milind Bhandarkar added a comment -

          >> think the problem is that 'FileSystem#remove(new Path(""))' removes one's home directory.

          I think that's just one part of the problem.

          A single RPC to delete an entire directory structure is the main problem, I think. If deleting a directory were to take multiple roundtrips between client and namenode, then it would be possible to ^C the client to stop wiping out the entire tree. No ?

          Jim Kellerman added a comment -

          Doug Cutting wrote:
          > I think the problem is that 'FileSystem#remove(new Path(""))' removes one's home directory. To address
          > this we could simply prohibit the empty string as a Path. One should be forced to use at least "." to refer
          > to the connected/default directory. Would that work?

          +1

          Doug Cutting added a comment -

          If we get rid of recursive deletion then we'll still have a method somewhere that implements recursive deletion and folks will still call it and things will still get unintentionally deleted. I don't think that's the solution here.

          I think the problem is that 'FileSystem#remove(new Path(""))' removes one's home directory. To address this we could simply prohibit the empty string as a Path. One should be forced to use at least "." to refer to the connected/default directory. Would that work?
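The guard suggested here, prohibiting the empty string as a Path, can be sketched in a few lines. This is a hypothetical helper (the name PathGuard and method checkPath are not from Hadoop), shown only to make the proposed behavior concrete:

```java
// Hypothetical validation along the lines suggested above: an empty path
// is rejected outright instead of silently resolving to the home directory;
// "." must be used to refer to the connected/default directory.
public class PathGuard {
    public static String checkPath(String p) {
        if (p == null || p.isEmpty()) {
            throw new IllegalArgumentException(
                "empty path not allowed; use \".\" for the working directory");
        }
        return p;
    }
}
```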

          Milind Bhandarkar added a comment -

          Can this be fixed ?
          A programmer error can cost a lot here.
          No other file system allows this, IMHO for a reason.
          Having been burnt once too many times, I would like to disallow recursive delete completely.


            People

            • Assignee: Mahadev konar
            • Reporter: Milind Bhandarkar
            • Votes: 0
            • Watchers: 1

              Dates

              • Created:
                Updated:
                Resolved:

                Development