  Hadoop Common / HADOOP-15191
  Parent: HADOOP-15220 Über-jira: S3a phase V: Hadoop 3.2 features

Add Private/Unstable BulkDelete operations to supporting object stores for DistCP


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 2.9.0
    • Fix Version/s: None
    • Component/s: fs/s3, tools/distcp
    • Labels: None

    Description

      Large-scale DistCp with the -delete option doesn't finish in a viable time, because the final CopyCommitter deletes every missing file one by one. The delete list isn't randomized (it is sorted), and the individual requests are throttled by AWS.

      If bulk deletion of files were exposed as an API, DistCp would issue roughly 1/1000th of the REST calls (S3's multi-object delete accepts up to 1000 keys per request) and so avoid throttling.

      Proposed: add an initially Private/Unstable interface for object stores, BulkDelete, which declares a page size and offers a bulkDelete(List<Path>) operation for bulk deletion.
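
      As a sketch only: the description names the interface (BulkDelete), the page-size concept, and the bulkDelete(List<Path>) operation; everything else below (method names, javadoc, semantics) is assumed for illustration and is not taken from the attached patches.

{code:java}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.Path;

/**
 * Hypothetical sketch of the proposed Private/Unstable interface.
 * Only the type name, the page-size concept and the
 * bulkDelete(List<Path>) signature come from the description.
 */
public interface BulkDelete {

  /**
   * Maximum number of paths accepted by a single bulkDelete() call;
   * for S3 this would map to the 1000-key limit of the multi-object
   * delete request. (Method name is an assumption.)
   */
  int getBulkDeletePageSize();

  /**
   * Delete a batch of files in one store operation.
   * @param paths paths to delete; size must not exceed the page size
   * @throws IOException on failure
   */
  void bulkDelete(List<Path> paths) throws IOException;
}
{code}

      The CopyCommitter could then split its sorted list of missing files into pages and issue one store call per page, along these lines (again hypothetical; the helper and its arguments stand in for whatever the committer actually holds):

{code:java}
// Hypothetical helper showing how a committer might page its deletes.
static void deleteInPages(BulkDelete store, List<Path> toDelete)
    throws IOException {
  final int pageSize = store.getBulkDeletePageSize();
  for (int start = 0; start < toDelete.size(); start += pageSize) {
    // One REST call per page instead of one call per file.
    store.bulkDelete(
        toDelete.subList(start,
            Math.min(start + pageSize, toDelete.size())));
  }
}
{code}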

      Attachments

        1. HADOOP-15191-001.patch (9 kB, Steve Loughran)
        2. HADOOP-15191-002.patch (56 kB, Steve Loughran)
        3. HADOOP-15191-003.patch (60 kB, Steve Loughran)
        4. HADOOP-15191-004.patch (66 kB, Steve Loughran)


            People

              Assignee: Steve Loughran (stevel@apache.org)
              Reporter: Steve Loughran (stevel@apache.org)
              Votes: 0
              Watchers: 6
