Hadoop Common / HADOOP-12358

Add -safely flag to rm to prompt when deleting many files

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: fs

      Description

      We have seen many cases of customers deleting data inadvertently with -skipTrash. The FsShell should prompt the user if the size of the data or the number of files being deleted exceeds a threshold, even when -skipTrash is used.
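
      The proposed check can be modeled as a simple predicate. This is an illustrative sketch only; the class and parameter names below are assumptions, not the actual FsShell implementation:

      ```java
      // Minimal model of the proposed -safely behavior (illustrative names,
      // not actual FsShell code): prompt when the delete exceeds a
      // file-count or byte-size threshold, even when -skipTrash is given.
      public class SafeDeleteModel {
          static boolean needsPrompt(long fileCount, long totalBytes,
                                     long fileThreshold, long byteThreshold) {
              return fileCount > fileThreshold || totalBytes > byteThreshold;
          }

          public static void main(String[] args) {
              // 50 files / 1 MB: below both thresholds, no prompt.
              System.out.println(needsPrompt(50, 1L << 20, 100, 1L << 30));          // false
              // A million files: over the file-count threshold, prompt first.
              System.out.println(needsPrompt(1_000_000, 1L << 20, 100, 1L << 30));   // true
          }
      }
      ```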

      1. HADOOP-12358.00.patch
        6 kB
        Xiaoyu Yao
      2. HADOOP-12358.01.patch
        9 kB
        Xiaoyu Yao
      3. HADOOP-12358.02.patch
        20 kB
        Xiaoyu Yao
      4. HADOOP-12358.03.patch
        28 kB
        Xiaoyu Yao
      5. HADOOP-12358.04.patch
        25 kB
        Xiaoyu Yao
      6. HADOOP-12358.05.patch
        28 kB
        Xiaoyu Yao
      7. HADOOP-12358.06.patch
        28 kB
        Xiaoyu Yao
      8. HADOOP-12358.07.patch
        28 kB
        Xiaoyu Yao

        Activity

        aw Allen Wittenauer added a comment -

        This feature can't be on by default in 2.x as it is an incompatible change and will break existing automation.

        xyao Xiaoyu Yao added a comment -

        Attaching an initial patch without unit tests. I will add unit tests later today.

        andrew.wang Andrew Wang added a comment -

        Allen Wittenauer, it's off by default in the patch, so compat looks satisfied.

        Hi Xiaoyu Yao, if I understand the patch, it calls getContentSummary beforehand, which is a recursive operation. That means doing rm with this option on now costs one RPC per directory in the deleted tree, which is much more expensive.

        I can understand a safety mechanism for not deleting / (seen that before in Unix), but this is a novel one and comes at a pretty high cost. If users want more safety, they shouldn't use -skipTrash. -skipTrash is like -f, do we really need to nanny users even when they've already explicitly opted out of our existing safety mechanism?

        There's also some danger of client OOMs when trying to delete a large directory, since getContentSummary is not using the iterator-based listing. That's an issue we can fix in a different JIRA though.

        xyao Xiaoyu Yao added a comment -

        Andrew Wang, thanks for the feedback. Good point about the getContentSummary() performance issue. However, compare that cost with the irrecoverable data loss caused by various user mistakes made without confirmation. Some Unix best practices even suggest

         alias rm="rm -i" 

        to avoid dangerous deletions without confirmation. I think there is value in warning HDFS users about the consequences and requiring a confirmation for dangerous deletions when the feature is enabled. When they see how much data will be gone, most people will be very cautious about saying 'Yes'. This will save a lot of effort later recovering deletions, which is extremely hard with today's -skipTrash option.

        they shouldn't use -skipTrash. -skipTrash is like -f,

        Correct me if I'm wrong, but my understanding of -skipTrash (don't make the delete recoverable) differs from -f (skip confirmation of an unrecoverable delete).

        xyao Xiaoyu Yao added a comment -

        Changed to run the getContentSummary bulk-deletion confirmation check only when bulkDeleteWarning is enabled and (trash is disabled or skipTrash is specified), to mitigate the performance concerns. When trash is enabled and skipTrash is not specified, this is a no-op.
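
        The mitigation reduces to a predicate over three flags. The class below is an illustrative model, not the patch itself; only the flag names mirror the patch:

        ```java
        // Illustrative model of the mitigated check: the extra
        // getContentSummary call happens only when the warning is enabled
        // AND trash will not catch the delete (trash disabled, or
        // -skipTrash specified).
        public class BulkDeleteGate {
            static boolean shouldCheckSize(boolean bulkDeleteWarning,
                                           boolean trashEnabled,
                                           boolean skipTrash) {
                return bulkDeleteWarning && (!trashEnabled || skipTrash);
            }

            public static void main(String[] args) {
                // Trash enabled and not skipped: no-op, no extra RPC.
                System.out.println(shouldCheckSize(true, true, false));  // false
                // -skipTrash given: the delete is unrecoverable, check first.
                System.out.println(shouldCheckSize(true, true, true));   // true
            }
        }
        ```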

        aw Allen Wittenauer added a comment -

        it's off by default in the patch, so compat looks satisfied

        There wasn't a patch posted when I made the comment. After the data loss caused by namenode -finalize changing in an incompatible way, I'm starting to proactively remind people that ops folks build tools around our shell commands. Changing the default behaviors of these commands is an act of aggression against the user base, regardless of intent.

        That said: IMO, I think this falls into the 'enough rope to hang yourself' area of the world. Between snapshots and trash, there are plenty of ways to protect against this situation. You can't protect the user from making every single mistake without turning the system into a 'nanny system' that makes you verify every single thing. All this will do is cause people to set -Dblahblah=false or build their own delete command and continue on their way.

        Never mind that this is at the cost of three extra config vars! Complexity, complexity, complexity...

        xyao Xiaoyu Yao added a comment -

        There's also some danger of client OOMs when trying to delete a large directory, since getContentSummary is not using the iterator-based listing. That's an issue we can fix in a different JIRA though.

        If the client OOMs while deleting a large directory, making it OOM on getContentSummary can actually help avoid an inconsistent (half-completed) deletion state.

        xyao Xiaoyu Yao added a comment -

        Thanks Allen Wittenauer for the feedback. I really appreciate it. Some of the earlier concerns regarding compatibility have been addressed in my first patch, as Andrew Wang mentioned.

        Between snapshots and trash, there are plenty of ways to protect against this situation.

        What if these are not configured (or not configured correctly) by ops, or trash is deliberately skipped by users (-skipTrash)?

        You can't protect the user from making every single mistake without turning the system into a 'nanny system' that makes you verify every single thing.

        We don't want to build a 'nanny system'. The purpose of this JIRA is to protect/warn against bulk deletions that are hard to recover, only on a misconfigured cluster (no trash/snapshot) or upon misuse (trash is skipped).

        xyao Xiaoyu Yao added a comment -

        Summary of the delta for patch v1: changed to check the deletion size for the warning only when trash is not enabled or is skipped.

             if (bulkDeleteWarning &&
                 (!Trash.isTrashEnabled(item.fs, item.path, getConf()) ||
                  skipTrash)) {
        
        aw Allen Wittenauer added a comment -

        What if these are not configured (or not configured correctly) by ops, or trash is deliberately skipped by users (-skipTrash)?

        Then it sucks to be them. Should we also fail to bring up the namenode if only one namenode dir is configured?

        Again, it's impossible to protect against every possible failure scenario. Education and better (human) processes go a long way towards making Hadoop usable.

        The purpose of this JIRA is to protect/warn against bulk deletions that are hard to recover, only on a misconfigured cluster (no trash/snapshot) or upon misuse (trash is skipped).

        Who are we to judge whether my cluster is misconfigured in this case? Do you understand the use cases?

        xyao Xiaoyu Yao added a comment -

        Adding unit tests. Also fixing an issue in the CLI-related tests where the shell has a null configuration (all defaults), which makes any customized settings applied before the test ineffective.

        xyao Xiaoyu Yao added a comment -

        Adding the missing new test files.

        hadoopqa Hadoop QA added a comment -



        -1 overall



        Vote Subsystem Runtime Comment
        -1 pre-patch 23m 35s Findbugs (version 3.0.0) appears to be broken on trunk.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 tests included 0m 0s The patch appears to include 14 new or modified test files.
        +1 javac 10m 4s There were no new javac warning messages.
        +1 javadoc 12m 24s There were no new javadoc warning messages.
        +1 release audit 0m 30s The applied patch does not increase the total number of release audit warnings.
        -1 checkstyle 2m 24s The applied patch generated 6 new checkstyle issues (total was 197, now 203).
        -1 whitespace 0m 1s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
        +1 install 2m 7s mvn install still works.
        +1 eclipse:eclipse 0m 52s The patch built with eclipse:eclipse.
        -1 findbugs 6m 40s The patch appears to introduce 4 new Findbugs (version 3.0.0) warnings.
        -1 common tests 23m 6s Tests failed in hadoop-common.
        -1 mapreduce tests 0m 16s Tests failed in hadoop-mapreduce-client-jobclient.
        -1 hdfs tests 0m 42s Tests failed in hadoop-hdfs.
            83m 43s  



        Reason Tests
        FindBugs module:hadoop-hdfs
        Failed unit tests hadoop.fs.contract.localfs.TestLocalFSContractRename
        Timed out tests org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractAppend
        Failed build hadoop-mapreduce-client-jobclient
          hadoop-hdfs



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12752593/HADOOP-12358.03.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / f44b599
        checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/7534/artifact/patchprocess/diffcheckstylehadoop-common.txt
        whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/7534/artifact/patchprocess/whitespace.txt
        Findbugs warnings https://builds.apache.org/job/PreCommit-HADOOP-Build/7534/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
        hadoop-common test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7534/artifact/patchprocess/testrun_hadoop-common.txt
        hadoop-mapreduce-client-jobclient test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7534/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt
        hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7534/artifact/patchprocess/testrun_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/7534/testReport/
        Java 1.7.0_55
        uname Linux asf908.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/7534/console

        This message was automatically generated.

        andrew.wang Andrew Wang added a comment -

        If the client OOMs while deleting a large directory, making it OOM on getContentSummary can actually help avoid an inconsistent (half-completed) deletion state.

        This leads into one of my favorite topics, which is how and why HDFS APIs differ from POSIX. POSIX gives you unlink and rmdir, so rm has to crawl the directory tree, doing O(n) operations. HDFS, however, implements recursive delete as a single RPC: O(1). This is for performance; we want to avoid recursing when doing a big delete since RPCs are expensive. Deletes are also intentional most of the time. So this patch greatly slows down the common case, when we already have safety mechanisms like trash and snapshots in place, and is counter to the intent of the recursive delete RPC.

        The other API difference I like is how HDFS combines readdir and stat into listStatus, again to avoid extra RPCs.

        Finally, to tie it back to your comment: right now there is no OOM (or partial delete) since the client just makes the single RPC and does not need to enumerate the directory. With this patch, it would. This would be a regression where a client with a small heap can no longer delete a large directory.
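
        The operation-count difference can be sketched with a toy tree model. This is not Hadoop code; it is an illustrative comparison of the two deletion styles described above:

        ```java
        // Illustrative model: counts the client-visible operations needed to
        // remove a directory tree the POSIX way (one unlink/rmdir per entry)
        // versus HDFS's single recursive-delete RPC.
        import java.util.ArrayList;
        import java.util.List;

        public class DeleteCostModel {
            static class Node {
                final List<Node> children = new ArrayList<>();
                Node add(Node c) { children.add(c); return this; }
            }

            // POSIX-style rm -r: one operation per file/dir in the tree.
            static int posixOps(Node n) {
                int ops = 1; // unlink or rmdir for this node
                for (Node c : n.children) ops += posixOps(c);
                return ops;
            }

            // HDFS-style delete(path, recursive=true): one RPC, any size.
            static int hdfsOps(Node n) { return 1; }

            public static void main(String[] args) {
                Node root = new Node()
                    .add(new Node().add(new Node()).add(new Node()))
                    .add(new Node());
                System.out.println("POSIX ops: " + posixOps(root)); // 5
                System.out.println("HDFS ops:  " + hdfsOps(root));  // 1
            }
        }
        ```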

        hadoopqa Hadoop QA added a comment -



        -1 overall



        Vote Subsystem Runtime Comment
        -1 pre-patch 25m 48s Pre-patch trunk has 4 extant Findbugs (version 3.0.0) warnings.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 tests included 0m 0s The patch appears to include 14 new or modified test files.
        +1 javac 11m 53s There were no new javac warning messages.
        +1 javadoc 11m 3s There were no new javadoc warning messages.
        +1 release audit 0m 24s The applied patch does not increase the total number of release audit warnings.
        -1 checkstyle 2m 49s The applied patch generated 6 new checkstyle issues (total was 197, now 203).
        -1 whitespace 0m 2s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
        +1 install 1m 33s mvn install still works.
        +1 eclipse:eclipse 0m 32s The patch built with eclipse:eclipse.
        +1 findbugs 5m 20s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        +1 common tests 26m 59s Tests passed in hadoop-common.
        -1 mapreduce tests 75m 43s Tests failed in hadoop-mapreduce-client-jobclient.
        -1 hdfs tests 161m 21s Tests failed in hadoop-hdfs.
            324m 8s  



        Reason Tests
        Failed unit tests hadoop.mapreduce.security.ssl.TestEncryptedShuffle
          hadoop.mapred.TestMiniMRChildTask
          hadoop.conf.TestNoDefaultsJobConf
          hadoop.mapred.TestJobSysDirWithDFS
          hadoop.fs.TestDFSIO
          hadoop.mapreduce.security.TestBinaryTokenFile
          hadoop.mapred.TestReduceFetch
          hadoop.mapreduce.v2.TestNonExistentJob
          hadoop.ipc.TestMRCJCSocketFactory
          hadoop.mapred.TestClusterMapReduceTestCase
          hadoop.mapred.TestMRIntermediateDataEncryption
          hadoop.mapred.TestReduceFetchFromPartialMem
          hadoop.fs.TestFileSystem
          hadoop.mapred.TestJobName
          hadoop.mapreduce.TestMapReduceLazyOutput
          hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers
          hadoop.mapred.TestSpecialCharactersInOutputPath
          hadoop.mapreduce.security.TestMRCredentials
          hadoop.mapred.TestMRCJCFileInputFormat
          hadoop.mapred.TestMerge
          hadoop.mapred.lib.TestDelegatingInputFormat
          hadoop.mapred.TestMiniMRClasspath
          hadoop.mapred.join.TestDatamerge
          hadoop.mapreduce.v2.TestMRJobs
          hadoop.cli.TestXAttrCLI
          hadoop.cli.TestHDFSCLI
          hadoop.cli.TestAclCLI
          hadoop.cli.TestDeleteCLI
          hadoop.cli.TestCryptoAdminCLI
          hadoop.hdfs.TestLocalDFS
          hadoop.cli.TestCacheAdminCLI
        Timed out tests org.apache.hadoop.mapred.TestLazyOutput
          org.apache.hadoop.mapreduce.TestChild
          org.apache.hadoop.mapreduce.TestMRJobClient



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12752593/HADOOP-12358.03.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / c992bcf
        Pre-patch Findbugs warnings https://builds.apache.org/job/PreCommit-HADOOP-Build/7533/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
        checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/7533/artifact/patchprocess/diffcheckstylehadoop-common.txt
        whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/7533/artifact/patchprocess/whitespace.txt
        hadoop-common test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7533/artifact/patchprocess/testrun_hadoop-common.txt
        hadoop-mapreduce-client-jobclient test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7533/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt
        hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7533/artifact/patchprocess/testrun_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/7533/testReport/
        Java 1.7.0_55
        uname Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/7533/console

        This message was automatically generated.

        xyao Xiaoyu Yao added a comment -

        Finally, to tie it back to your comment, right now there is no OOM (or partial delete) since the client just calls the single RPC and does not need to enumerate the directory. With this patch, it would. This would be a regression where a client with a small heap now cannot delete a large directory.

        Andrew Wang, this is not the case for HDFS. The default FileSystem#getContentSummary does the recursion on the client side, but the HDFS implementation in DistributedFileSystem#getContentSummary does not. It is a single RPC, just like DistributedFileSystem#delete, with the recursion on the NN side.

        xyao Xiaoyu Yao added a comment -

        If the concern is about non-HDFS filesystems, we could do the check only when the underlying filesystem is a DistributedFileSystem.

        arpitagarwal Arpit Agarwal added a comment - edited

        The following concerns were raised on this Jira:

        1. Compatibility. The checks are off by default.
        2. getContentSummary requires too many RPCs for filesystems other than DFS.
        3. Configuration complexity.
          • We can get rid of the boolean setting, e.g. just disable the check if the thresholds are zero or negative. If we also get rid of the size-based threshold, we only need one new setting for the inode count threshold.
        4. getContentSummary is expensive. This is a valid concern.

        Does it make sense to move this check to the NN? NN already does a recursive permissions check for every delete call (FsPermissionChecker#checkSubAccess). A suggested approach:

        1. Add a FileSystem#delete overload that takes a threshold.
        2. Extend the recursive permissions check to compute the number of descendant inodes. It is a little ugly but avoids recursing twice. We can skip the file size check.
        3. If the computed inode count is below the threshold the dir is deleted, else the call fails.
        4. If the call fails the shell command shows your prompt. If the user chooses Y, invoke the regular delete call.
        5. If the underlying filesystem does not support checking the threshold then it just performs a regular delete. This takes care of the second concern above.

        This still has the potential to break automation when the feature is enabled so we can make the default behavior to simply fail the delete call. An additional parameter can allow prompting to override the checks.
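        The five steps above can be sketched in plain Java; the Fs interface and its methods are illustrative stand-ins for the NameNode-side logic, not real Hadoop APIs:

```java
import java.util.function.BooleanSupplier;

// Illustrative sketch of the proposed flow; Fs and its methods are
// hypothetical placeholders, not real Hadoop APIs.
public class ThresholdDelete {
    interface Fs {
        long countDescendants(String path); // piggybacks on the permission walk
        void delete(String path);           // the regular recursive delete
    }

    /** Returns true if the delete succeeded without prompting. */
    static boolean deleteWithThreshold(Fs fs, String path, long threshold,
                                       BooleanSupplier promptYes) {
        // Steps 2-3: the threshold-aware call succeeds only below the limit
        // (a non-positive threshold means the check is disabled).
        if (threshold <= 0 || fs.countDescendants(path) <= threshold) {
            fs.delete(path);
            return true;
        }
        // Step 4: the call failed, so the shell prompts; on Y it falls
        // back to the regular delete.
        if (promptYes.getAsBoolean()) {
            fs.delete(path);
        }
        return false;
    }
}
```

        A filesystem that cannot count descendants would report a count of 0 (or skip the check), degenerating to a regular delete, which covers step 5.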

        aw Allen Wittenauer added a comment -

        If automation really wants to delete that many blocks, then what? How does it override if the only option to do so forces a prompt?

        xyao Xiaoyu Yao added a comment -

        We could add "-y, --yes, or --assume-yes" to automatically answer yes to prompts so that delete runs non-interactively.

        arpitagarwal Arpit Agarwal added a comment -

        If automation really wants to delete that many blocks, then what? How does it override if the only option to do so forces a prompt?

        The administrator should consider this potential breakage before enabling it. We should make it explicit in the documentation.

        We could add "-y, --yes, or --assume-yes" to automatically answer yes to prompts so that delete runs non-interactively.

        We have seen some administrators routinely pass -skipTrash, which leads to these situations. IMO if we add --yes or similar they will just start including that as well.

        aw Allen Wittenauer added a comment -

        Actually, now that I think about it, doing this will break large jobs that remove their working directories prior to execution.

        At this point, I'm leaning towards just a flat out -1. I can't think of a single other file system that has this limitation across the multitude of operating systems I've worked on. It will definitely surprise users. Given that there are plenty of other ways to protect against users making a mistake (snapshots, trash) and the countless ways to work around it even when it is turned on, the risk/reward isn't really there.

        xyao Xiaoyu Yao added a comment -

        Thanks Arpit Agarwal for the feedback.

        We have seen some administrators routinely pass -skipTrash, which leads to these situations. IMO if we add --yes or similar they will just start including that as well.

        Agreed, adding "-y" is not a good idea, as we should not leave a workaround when the feature is enabled. As different use cases have different expectations for rm, we will also document that it should be enabled only for appropriate use cases and left disabled by default.

        xyao Xiaoyu Yao added a comment -

        Thanks Allen Wittenauer for the feedback.

        doing this will break large jobs that remove their working directories prior to execution.

        Only admins who want to use this feature would consider this aspect and by default it is disabled. Also, the feature is exposed only via FSShell, so MR jobs using delete API will not be impacted.

        I can't think of a single other file system that has this limitation across the multitude of operating systems I've worked on. It will definitely surprise users. Given that there are plenty of other ways to protect against users making a mistake (snapshots, trash) and the countless ways to work around it even when it is turned on, the risk/reward isn't really there.

        There is no risk because of the above two reasons. It is a useful feature because it will reduce the occurrences of cases where admins delete large amounts of data inadvertently. It doesn't prevent every mistake users can make, but it will prevent some of them, and that itself is worth the reward.

        aw Allen Wittenauer added a comment -

        Only admins who want to use this feature would consider this aspect and by default it is disabled.

        If I have to enable this feature, why wouldn't I just enable trash and/or snapshots which are a) significantly lower risk and b) almost certainly won't break my existing workflows in a surprising way?

        Also, the feature is exposed only via FSShell, so MR jobs using delete API will not be impacted.

        There are a TON of workflows that look like:

        hadoop fs -rm -r /workdir
        yarn jar job.jar
        

        where job.jar then writes multiple TB to /workdir. Enabling this will break a large number of those jobs.

        It is a useful feature because it will reduce the occurrences of cases where admins deleted large amount of data inadvertently.

        No, it won't. We'll just write scripts that bombard the NN and delete everything over multiple RPCs or we'll write our own code to bypass the FsShell completely.

        But here, I'll give you an out. I'll remove my -1 in one of two ways:

        • This limitation is tied to a flag on the rm command. Then we can write some shell code to utilize .hadooprc to build subcommand aliasing (e.g., alias "hadoop fs -rm"="hadoop fs -rm -safely" or whatever). Just be aware that this will only work in trunk.

        and/or

        • This limitation is tied to a new fs command.

        Making it system wide is not an option and will cause widespread destruction.

        owen.omalley Owen O'Malley added a comment -

        I agree with Allen. This is a bad feature that will break lots of users.

        The trash feature already does this better and because it has been used for many years, is expected behavior.

        xyao Xiaoyu Yao added a comment -

        Thanks Allen Wittenauer for the suggestions to improve the usability and compatibility of this feature.

        This limitation is tied to a flag on the rm command. Then we can write some shell code to utilize .hadooprc to build subcommand aliasing (e.g., alias "hadoop fs -rm"="hadoop fs -rm -safely" or whatever). Just be aware that this will only work in trunk.

        I like this idea as it offers something close to GNU "rm -i", to which "rm" is usually aliased in .bashrc for data safety.

        hadoopqa Hadoop QA added a comment -



        -1 overall



        Vote Subsystem Runtime Comment
        0 pre-patch 20m 12s Pre-patch trunk compilation is healthy.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 tests included 0m 0s The patch appears to include 14 new or modified test files.
        +1 javac 7m 39s There were no new javac warning messages.
        +1 javadoc 9m 46s There were no new javadoc warning messages.
        +1 release audit 0m 24s The applied patch does not increase the total number of release audit warnings.
        -1 checkstyle 1m 59s The applied patch generated 3 new checkstyle issues (total was 197, now 199).
        +1 whitespace 0m 1s The patch has no lines that end in whitespace.
        +1 install 1m 32s mvn install still works.
        +1 eclipse:eclipse 0m 34s The patch built with eclipse:eclipse.
        +1 findbugs 5m 8s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        +1 common tests 22m 27s Tests passed in hadoop-common.
        +1 mapreduce tests 104m 53s Tests passed in hadoop-mapreduce-client-jobclient.
        +1 hdfs tests 163m 21s Tests passed in hadoop-hdfs.
            338m 34s  



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12753118/HADOOP-12358.04.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / e2c9b28
        checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/7552/artifact/patchprocess/diffcheckstylehadoop-common.txt
        hadoop-common test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7552/artifact/patchprocess/testrun_hadoop-common.txt
        hadoop-mapreduce-client-jobclient test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7552/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt
        hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7552/artifact/patchprocess/testrun_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/7552/testReport/
        Java 1.7.0_55
        uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/7552/console

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -



        -1 overall



        Vote Subsystem Runtime Comment
        -1 pre-patch 17m 35s Findbugs (version ) appears to be broken on trunk.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 tests included 0m 0s The patch appears to include 14 new or modified test files.
        +1 javac 7m 40s There were no new javac warning messages.
        +1 javadoc 9m 52s There were no new javadoc warning messages.
        +1 release audit 0m 22s The applied patch does not increase the total number of release audit warnings.
        +1 checkstyle 2m 34s There were no new checkstyle issues.
        +1 whitespace 0m 1s The patch has no lines that end in whitespace.
        +1 install 1m 28s mvn install still works.
        +1 eclipse:eclipse 0m 32s The patch built with eclipse:eclipse.
        +1 findbugs 5m 6s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        +1 common tests 22m 32s Tests passed in hadoop-common.
        -1 mapreduce tests 101m 15s Tests failed in hadoop-mapreduce-client-jobclient.
        -1 hdfs tests 0m 50s Tests failed in hadoop-hdfs.
            169m 51s  



        Reason Tests
        Failed unit tests hadoop.mapreduce.lib.chain.TestChainErrors
          hadoop.mapreduce.security.TestMRCredentials
          hadoop.mapreduce.TestMapCollection
          hadoop.mapreduce.TestLocalRunner
          hadoop.mapreduce.v2.TestNonExistentJob
          hadoop.mapreduce.lib.input.TestMRCJCFileInputFormat
          hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator
          hadoop.mapreduce.security.TestBinaryTokenFile
          hadoop.mapreduce.security.ssl.TestEncryptedShuffle
          hadoop.mapreduce.lib.input.TestFixedLengthInputFormat
          hadoop.mapreduce.TestValueIterReset
        Failed build hadoop-hdfs



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12753140/HADOOP-12358.05.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / e2c9b28
        hadoop-common test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7553/artifact/patchprocess/testrun_hadoop-common.txt
        hadoop-mapreduce-client-jobclient test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7553/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt
        hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7553/artifact/patchprocess/testrun_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/7553/testReport/
        Java 1.7.0_55
        uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/7553/console

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -



        -1 overall



        Vote Subsystem Runtime Comment
        0 pre-patch 19m 39s Pre-patch trunk compilation is healthy.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 tests included 0m 0s The patch appears to include 14 new or modified test files.
        +1 javac 7m 40s There were no new javac warning messages.
        +1 javadoc 9m 58s There were no new javadoc warning messages.
        +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
        +1 checkstyle 2m 36s There were no new checkstyle issues.
        +1 whitespace 0m 1s The patch has no lines that end in whitespace.
        +1 install 1m 34s mvn install still works.
        +1 eclipse:eclipse 0m 32s The patch built with eclipse:eclipse.
        +1 findbugs 5m 8s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        -1 common tests 22m 0s Tests failed in hadoop-common.
        +1 mapreduce tests 102m 53s Tests passed in hadoop-mapreduce-client-jobclient.
        -1 hdfs tests 162m 30s Tests failed in hadoop-hdfs.
            334m 57s  



        Reason Tests
        Failed unit tests hadoop.fs.TestSymlinkLocalFSFileContext
          hadoop.hdfs.web.TestWebHDFSOAuth2



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12753169/HADOOP-12358.06.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / 837fb75
        hadoop-common test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7556/artifact/patchprocess/testrun_hadoop-common.txt
        hadoop-mapreduce-client-jobclient test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7556/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt
        hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7556/artifact/patchprocess/testrun_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/7556/testReport/
        Java 1.7.0_55
        uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/7556/console

        This message was automatically generated.

        xyao Xiaoyu Yao added a comment -

        Thanks all for the review and feedback. Updated to patch v6 with the following summary of changes based on the feedback:
        1. add "-safely" option to -rm command as Allen Wittenauer suggested.
        2. Reduce the configuration complexity as Arpit Agarwal suggested.
        Only one key "hadoop.shell.delete.limit.num.files" is used.
        3. Document 'hadoop.shell.delete.limit.num.files' in core-default.xml.
        4. Make this feature optional and off by default to avoid breaking any existing automation. It is enabled only if all three criteria are met:

        • Trash is not enabled or unable to protect the directory to be deleted
        • and -safely is used in the rm command
        • and hadoop.shell.delete.limit.num.files > 0
          This way, the admin can choose whether the feature is useful for certain use cases. Especially with HADOOP-11353, admins can alias 'hadoop -rm' to 'hadoop -rm -safely' in .hadooprc when necessary, just as 'rm' is commonly aliased to 'rm -i' in Linux deployments.
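        The three enablement criteria reduce to a single predicate; an illustrative sketch in plain Java (method and parameter names are hypothetical, not the patch's actual identifiers):

```java
// Illustrative predicate for the enablement criteria; method and
// parameter names are hypothetical, not the patch's identifiers.
public class SafelyRmCheck {
    /**
     * The confirmation check applies only when trash cannot protect the
     * data, -safely was passed on the command line, and
     * hadoop.shell.delete.limit.num.files is set to a positive value.
     */
    static boolean promptRequired(boolean trashProtects, boolean safelyFlag,
                                  long limitNumFiles) {
        return !trashProtects && safelyFlag && limitNumFiles > 0;
    }
}
```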

        Arpit Agarwal: Given HDFS-4995 and HDFS-8046 have improved the NN locking issue of getContentSummary, is it OK to investigate performance improvements in a separate JIRA?

        andrew.wang Andrew Wang added a comment -

        Patch looks good overall, thanks for the discussion everyone. Good point about the # of RPCs; I was looking at FileSystem rather than DFS. Patch-wise I have a few nits, otherwise LGTM:

        • Unrelated whitespace change in Trash.java
        • Can we rename checkDeleteLimit to be canBeSafelyDeleted or something? I think that's more descriptive.
        • Maybe add comment to help text about potential performance impact of the -safely flag

        Since this is opt-in via the new flag, I'm okay putting it in even if getContentSummary is a bit expensive.

        xyao Xiaoyu Yao added a comment -

        Thanks Andrew Wang for the review. Attach patch v7 to address your feedback.

        Delta of change from v6:

        • Revert unrelated white space only change in Trash.java
        • Remove unused variable DELETE_PROMPT in Delete.java
        • Rename checkDeleteLimit() to canBeSafelyDeleted() in Delete.java
        • Rename config key to HADOOP_SHELL_SAFELY_DELETE_LIMIT_NUM_FILES =
          "hadoop.shell.safely.delete.limit.num.files"
        • Add comment to -rm help text as suggested.
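Putting the v7 pieces together, enabling the guard might look like the following. The property value, path, and .hadooprc alias are illustrative examples only, not shipped defaults:

```shell
# Illustrative setup for the renamed key. The value 100 is an example;
# a value of 0 disables the check entirely.
#
# core-site.xml:
#   <property>
#     <name>hadoop.shell.safely.delete.limit.num.files</name>
#     <value>100</value>
#   </property>

# The check only fires when -safely is passed; without the flag, rm behaves
# exactly as before, so existing automation is unaffected.
hadoop fs -rm -r -safely /user/alice/large-dataset

# With HADOOP-11353's shell rework, an admin who wants this everywhere could
# add a hypothetical alias in ~/.hadooprc, much like 'rm' -> 'rm -i' on Linux.
```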
        hadoopqa Hadoop QA added a comment -



        -1 overall



        Vote Subsystem Runtime Comment
        0 pre-patch 20m 5s Pre-patch trunk compilation is healthy.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 tests included 0m 0s The patch appears to include 14 new or modified test files.
        +1 javac 7m 38s There were no new javac warning messages.
        +1 javadoc 9m 55s There were no new javadoc warning messages.
        +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
        +1 checkstyle 2m 38s There were no new checkstyle issues.
        +1 whitespace 0m 1s The patch has no lines that end in whitespace.
        +1 install 1m 35s mvn install still works.
        +1 eclipse:eclipse 0m 39s The patch built with eclipse:eclipse.
        +1 findbugs 5m 14s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        +1 common tests 23m 10s Tests passed in hadoop-common.
        +1 mapreduce tests 104m 15s Tests passed in hadoop-mapreduce-client-jobclient.
        -1 hdfs tests 165m 42s Tests failed in hadoop-hdfs.
            341m 20s  



        Reason Tests
        Failed unit tests hadoop.hdfs.server.blockmanagement.TestBlockManager
          hadoop.hdfs.web.TestWebHDFSOAuth2



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12753624/HADOOP-12358.07.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / 4620767
        hadoop-common test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7576/artifact/patchprocess/testrun_hadoop-common.txt
        hadoop-mapreduce-client-jobclient test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7576/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt
        hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HADOOP-Build/7576/artifact/patchprocess/testrun_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/7576/testReport/
        Java 1.7.0_55
        uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/7576/console

        This message was automatically generated.

        xyao Xiaoyu Yao added a comment -

        Allen Wittenauer, can you help review patch v7 to see if it addresses your concerns? Thanks!

        xyao Xiaoyu Yao added a comment -

        Andrew Wang, can you help review changes based on your comments in patch v7? Thanks!

        andrew.wang Andrew Wang added a comment -

        +1 LGTM, I'll commit shortly

        andrew.wang Andrew Wang added a comment -

        Committed to trunk and branch-2. CHANGES.txt pick to branch-2 was unclean since we have two BUGFIX sections, I think we had some JIRA go in the wrong section and then some cherrypicks pulled excess things from trunk's CHANGES.txt. Sigh.

        andrew.wang Andrew Wang added a comment -

        Thanks of course to Xiaoyu Yao, Allen Wittenauer, and all for working on this and helping review!

        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-trunk-Commit #8406 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8406/)
        HADOOP-12358. Add -safely flag to rm to prompt when deleting many files. Contributed by Xiaoyu Yao. (wang: rev e1feaf6db03451068c660a863926032b35a569f8)

        • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/util/CLICommand.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestHDFSCLI.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testDeleteConf.xml
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCacheAdminCLI.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
        • hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
        • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/util/CLITestCmd.java
        • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/TestCLI.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestDeleteCLI.java
        • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestXAttrCLI.java
        • hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/cli/CLITestCmdMR.java
        • hadoop-common-project/hadoop-common/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCryptoAdminCLI.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/CLITestCmdDFS.java
        • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
        • hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
        xyao Xiaoyu Yao added a comment -

        Thanks Andrew Wang for reviewing/committing the patch and Allen Wittenauer, Arpit Agarwal and all for the discussions to improve this feature.

        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #348 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/348/)
        HADOOP-12358. Add -safely flag to rm to prompt when deleting many files. Contributed by Xiaoyu Yao. (wang: rev e1feaf6db03451068c660a863926032b35a569f8)

        (Changed-file list identical to the Hadoop-trunk-Commit notification above.)
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #354 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/354/)
        HADOOP-12358. Add -safely flag to rm to prompt when deleting many files. Contributed by Xiaoyu Yao. (wang: rev e1feaf6db03451068c660a863926032b35a569f8)

        (Changed-file list identical to the Hadoop-trunk-Commit notification above.)
        hudson Hudson added a comment -

        SUCCESS: Integrated in Hadoop-Yarn-trunk #1086 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/1086/)
        HADOOP-12358. Add -safely flag to rm to prompt when deleting many files. Contributed by Xiaoyu Yao. (wang: rev e1feaf6db03451068c660a863926032b35a569f8)

        (Changed-file list identical to the Hadoop-trunk-Commit notification above.)
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Mapreduce-trunk #2297 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2297/)
        HADOOP-12358. Add -safely flag to rm to prompt when deleting many files. Contributed by Xiaoyu Yao. (wang: rev e1feaf6db03451068c660a863926032b35a569f8)

        (Changed-file list identical to the Hadoop-trunk-Commit notification above.)
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Hdfs-trunk #2275 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2275/)
        HADOOP-12358. Add -safely flag to rm to prompt when deleting many files. Contributed by Xiaoyu Yao. (wang: rev e1feaf6db03451068c660a863926032b35a569f8)

        (Changed-file list identical to the Hadoop-trunk-Commit notification above.)
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #337 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/337/)
        HADOOP-12358. Add -safely flag to rm to prompt when deleting many files. Contributed by Xiaoyu Yao. (wang: rev e1feaf6db03451068c660a863926032b35a569f8)

        • hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testDeleteConf.xml
        • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/TestCLI.java
        • hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestHDFSCLI.java
        • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
        • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/util/CLICommand.java
        • hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCryptoAdminCLI.java
        • hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/cli/CLITestCmdMR.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java
        • hadoop-common-project/hadoop-common/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCacheAdminCLI.java
        • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/util/CLITestCmd.java
        • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestXAttrCLI.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestDeleteCLI.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/CLITestCmdDFS.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
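The commit above adds the -safely option to the FsShell rm command (Delete.java) along with a new threshold key in core-default.xml. As a rough sketch of how a deployment might tune that threshold — the exact key name and default shown here are taken from memory and should be verified against the core-default.xml changed by this patch:

```xml
<!-- Hypothetical core-site.xml override; check core-default.xml in the
     committed patch for the authoritative key name and default value. -->
<property>
  <name>hadoop.shell.safely.delete.limit.num.files</name>
  <value>100</value>
  <description>When "hadoop fs -rm -safely" would delete more than this
  many files, the shell prompts the user for confirmation before
  proceeding, even if -skipTrash is specified.</description>
</property>
```

With such a setting in place, a command like `hadoop fs -rm -r -skipTrash -safely /big/dir` would prompt before deleting if the directory's file count exceeds the configured limit.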

          People

          • Assignee: Xiaoyu Yao
          • Reporter: Xiaoyu Yao
          • Votes: 0
          • Watchers: 12
