SPARK-48378: Limit the maximum number of dynamic partitions


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 4.0.0
    • Fix Version/s: 4.0.0
    • Component/s: SQL
    • Labels: None

    Description

      This issue follows up on https://issues.apache.org/jira/browse/SPARK-37217, which noted:

      ‘Assuming that 1001 partitions are written, the data of the 1001 partitions will be deleted first. But because hive.exec.max.dynamic.partitions is 1000 by default, loadDynamicPartitions will then fail, even though the data of the 1001 partitions has already been deleted.

      So we can check whether the number of dynamic partitions is greater than hive.exec.max.dynamic.partitions before deleting; it should fail fast at that point.’
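
      A minimal sketch of that driver-side fail-fast check, in Scala. The helper name checkDynamicPartitionLimit is illustrative and not Spark's actual Hive write path; only the config key and its Hive default of 1000 come from the description above:

        // Sketch only: validate the dynamic-partition count on the driver
        // before any existing partition data is deleted, so an over-limit
        // write fails fast instead of after the delete.
        import org.apache.hadoop.conf.Configuration
        import org.apache.spark.SparkException

        def checkDynamicPartitionLimit(
            numDynamicPartitions: Int,
            hadoopConf: Configuration): Unit = {
          // hive.exec.max.dynamic.partitions defaults to 1000 in Hive.
          val maxDynamicPartitions =
            hadoopConf.getInt("hive.exec.max.dynamic.partitions", 1000)
          if (numDynamicPartitions > maxDynamicPartitions) {
            throw new SparkException(
              s"The number of dynamic partitions created is $numDynamicPartitions, " +
                s"which is more than hive.exec.max.dynamic.partitions ($maxDynamicPartitions). " +
                "Failing before any existing partition data is deleted.")
          }
        }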

       

      Could this be made more comprehensive?

      If a task on an executor has already generated more partitions than hive.exec.max.dynamic.partitions, the task should be failed on the executor itself, because the cost of generating the data is high.

      If no task on an executor exceeds hive.exec.max.dynamic.partitions on its own, the total number of dynamic partitions is still checked on the driver.
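
      A rough sketch of the executor-side idea, with hypothetical names (DynamicPartitionLimiter and track are not existing Spark classes): each task counts the distinct partition values it has written and fails as soon as the limit is exceeded, instead of finishing the expensive write and only failing later on the driver.

        // Sketch only: a per-task tracker that a dynamic-partition writer
        // could call for every row's partition directory.
        import scala.collection.mutable
        import org.apache.spark.SparkException

        class DynamicPartitionLimiter(maxDynamicPartitions: Int) {
          private val seenPartitions = mutable.HashSet.empty[String]

          // partitionPath is the row's partition directory, e.g. "dt=2024-05-21/hr=10".
          def track(partitionPath: String): Unit = {
            if (seenPartitions.add(partitionPath) &&
                seenPartitions.size > maxDynamicPartitions) {
              throw new SparkException(
                s"This task has produced ${seenPartitions.size} dynamic partitions, " +
                  s"exceeding hive.exec.max.dynamic.partitions ($maxDynamicPartitions); " +
                  "failing the task early to avoid wasted write work.")
            }
          }
        }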


          People

            Assignee: Unassigned
            Reporter: guihuawen
            Votes: 0
            Watchers: 1
