HADOOP-13585 · Hadoop Common
Sub-task of HADOOP-18477 Über-jira: S3A Hadoop 3.3.9 features

shell rm command to not rename to ~/.Trash in object stores


Details

    • Type: Sub-task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.8.0
    • Fix Version/s: None
    • Component/s: util
    • Labels: None

    Description

      When you do a hadoop fs -rm s3a://bucket/large-file there's a long delay and then you are told that it's been moved to s3a://Users/stevel/.Trash/current/large-file, where it still incurs costs. You then need to delete that file with -rm -skipTrash, because the fs -expunge command only works on the local fs: you can't point it at an object store unless that is the default FS.

      I'd like an option to tell the shell that it should bypass the rename into trash on an FS-by-FS basis, and for fs -expunge to take a filesystem as an optional argument.
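
      A minimal sketch of the behaviour described above and of the workaround available today; the bucket and file names are placeholders:

        # Current behaviour: -rm renames the object into the per-user trash
        # directory, which on S3A is a slow copy+delete and leaves data that
        # still incurs storage costs.
        hadoop fs -rm s3a://bucket/large-file

        # Workaround today: bypass the trash rename entirely.
        hadoop fs -rm -skipTrash s3a://bucket/large-file

        # As described above, -expunge empties the trash of the default
        # filesystem only, so it cannot be pointed at the object store unless
        # that store is the default FS.
        hadoop fs -expunge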


    People

      Assignee: Unassigned
      Reporter: Steve Loughran (stevel@apache.org)
      Votes: 0
      Watchers: 6
