Hadoop Common / HADOOP-15620 Über-jira: S3A phase VI: Hadoop 3.3 features / HADOOP-13585

shell rm command to not rename to ~/.Trash in object stores


    Details

    • Type: Sub-task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.8.0
    • Fix Version/s: None
    • Component/s: util
    • Labels: None
    • Target Version/s:

      Description

      When you do a hadoop fs -rm s3a://bucket/large-file there's a long delay, and then you are told that the file has been moved to s3a://Users/stevel/.Trash/current/large-file, where it still incurs storage costs. You then need to delete that file with -rm -skipTrash, because the fs -expunge command only works on the default filesystem: you can't point it at an object store unless that store is the default FS.
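
      For reference, the current workaround relies on -rm -skipTrash to bypass the rename into .Trash; -expunge only empties the trash of the default filesystem. The bucket and path below are illustrative:

          # delete without the rename into .Trash, so no copy is left in the bucket
          hadoop fs -rm -skipTrash s3a://bucket/large-file

          # only empties the trash of the default filesystem;
          # it cannot be pointed at the object store
          hadoop fs -expunge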

      I'd like an option to tell the shell to bypass the rename-to-trash on an FS-by-FS basis, and for fs -expunge to take a filesystem as an optional argument.
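      A sketch of what the proposed usage might look like. Neither a per-filesystem trash-bypass property nor a filesystem argument to -expunge exists today; the property name fs.s3a.trash.disabled and the -fs flag below are purely hypothetical illustrations of the idea:

          # hypothetical: skip the trash rename for s3a URIs only
          hadoop fs -D fs.s3a.trash.disabled=true -rm s3a://bucket/large-file

          # hypothetical: empty the trash of a named filesystem instead of the default FS
          hadoop fs -expunge -fs s3a://bucket/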

    People

    • Assignee: Unassigned
    • Reporter: stevel@apache.org (Steve Loughran)
    • Votes: 0
    • Watchers: 2
