Apache Hudi / HUDI-993

Use hoodie.delete.shuffle.parallelism for Delete API



    Description

      While HUDI-328 introduced the Delete API, I noticed that its deduplicateKeys method doesn't apply any parallelism to its RDD operation, whereas deduplicateRecords on the upsert path does.

      Also, "hoodie.delete.shuffle.parallelism" doesn't seem to be used anywhere.
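
      To illustrate the asymmetry, here is a minimal sketch of the two paths (paraphrased, not the exact Hudi source; the simplified signatures and the keep-first merge are assumptions):

      {code:java}
import org.apache.hudi.common.model.HoodieKey;
import org.apache.hudi.common.model.HoodieRecord;
import org.apache.hudi.common.model.HoodieRecordPayload;
import org.apache.spark.api.java.JavaRDD;
import scala.Tuple2;

class DedupeSketch<T extends HoodieRecordPayload> {

  // Delete path: distinct() gets no parallelism hint, so the shuffle
  // inherits the input RDD's partition count, however small it is.
  JavaRDD<HoodieKey> deduplicateKeys(JavaRDD<HoodieKey> keys) {
    return keys.distinct();
  }

  // Upsert path: reduceByKey receives an explicit parallelism
  // (hoodie.upsert.shuffle.parallelism), so the shuffle fans out.
  JavaRDD<HoodieRecord<T>> deduplicateRecords(JavaRDD<HoodieRecord<T>> records, int parallelism) {
    return records
        .mapToPair(record -> new Tuple2<>(record.getKey(), record))
        // keep one record per key (the real code merges payloads via preCombine)
        .reduceByKey((rec1, rec2) -> rec1, parallelism)
        .map(Tuple2::_2);
  }
}
      {code}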

       

      I found certain cases, e.g. when the input RDD has low parallelism but the target table has large files, where the Spark job's performance suffers from that low parallelism. In such cases, an upsert with "EmptyHoodieRecordPayload" is actually faster than the Delete API.

      This is only because "hoodie.combine.before.upsert" is true by default, so the upsert path reshuffles the records during deduplication; if it were disabled, upsert would hit the same issue.
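
      For reference, the faster workaround looks roughly like this (a sketch; it assumes a configured HoodieWriteClient, an already started commit, and a no-arg EmptyHoodieRecordPayload constructor):

      {code:java}
// Assumed in scope: HoodieWriteClient<EmptyHoodieRecordPayload> writeClient,
// String commitTime, and JavaRDD<HoodieKey> keysToDelete.
JavaRDD<HoodieRecord<EmptyHoodieRecordPayload>> deletes =
    keysToDelete.map(key -> new HoodieRecord<>(key, new EmptyHoodieRecordPayload()));

// Same effect as delete(keysToDelete, commitTime), but routed through the
// upsert path, which shuffles with hoodie.upsert.shuffle.parallelism while
// hoodie.combine.before.upsert is true (the default).
JavaRDD<WriteStatus> statuses = writeClient.upsert(deletes, commitTime);
      {code}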

      So I wonder whether the input RDD should be repartitioned to "hoodie.delete.shuffle.parallelism" when "hoodie.combine.before.delete" is false, so that the Delete API performs well regardless of the "hoodie.combine.before.delete" setting.
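
      Concretely, the delete path could do something like the following (a sketch; getDeleteShuffleParallelism and shouldCombineBeforeDelete are the accessor names I'd expect on HoodieWriteConfig, not confirmed):

      {code:java}
// Honor hoodie.delete.shuffle.parallelism whether or not
// hoodie.combine.before.delete is enabled.
int parallelism = config.getDeleteShuffleParallelism();
JavaRDD<HoodieKey> dedupedKeys = config.shouldCombineBeforeDelete()
    // pass the parallelism into the dedupe shuffle, e.g. distinct(parallelism)
    ? deduplicateKeys(keys, parallelism)
    // no combine requested: still repartition so downstream stages fan out
    : keys.repartition(parallelism);
      {code}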

       

       


People

    Assignee: Unassigned
    Reporter: Dongwook Kwon
    Votes: 0
    Watchers: 1
