Spark / SPARK-25411

Implement range partition in Spark


Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 2.3.0
    • Fix Version/s: None
    • Component/s: SQL

    Description

      In our production environment there are some partitioned fact tables, all of which are quite large. To accelerate join execution, we need to make them bucketed as well. Then comes the problem: if the bucket number is large, there may be too many files (file count = bucket number * partition count), which puts pressure on HDFS; and if the bucket number is small, Spark will launch only that same small number of tasks to read/write the table.
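
      As a minimal sketch of the layout that triggers this, using Spark's DataFrameWriter API (the table name, column names, path, and counts below are all hypothetical):

          import org.apache.spark.sql.SparkSession

          val spark = SparkSession.builder().appName("bucketed-fact").getOrCreate()

          // A fact table partitioned by date and bucketed by customer_id.
          // With ~1000 date partitions and 256 buckets, this materializes on
          // the order of 1000 * 256 = 256,000 files, straining the HDFS
          // NameNode; with only 8 buckets, read parallelism is capped at 8.
          spark.read.parquet("/staging/fact_sales")   // hypothetical input path
            .write
            .partitionBy("sale_date")                 // one directory per date
            .bucketBy(256, "customer_id")             // 256 files per directory
            .sortBy("customer_id")
            .saveAsTable("fact_sales_bucketed")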


      So, can we implement a new partition type that supports range values, just like range partitioning in Oracle/MySQL (https://docs.oracle.com/cd/E17952_01/mysql-5.7-en/partitioning-range.html)? Say, we could partition by a date column and make every two months a partition, or partition by an integer column and make each interval of 10000 a partition.
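
      To make the intended semantics concrete, here is a rough sketch of how the same effect can be approximated today by deriving a coarse range key and partitioning on it; this is a workaround, not the proposed feature, and all table and column names are hypothetical:

          import org.apache.spark.sql.functions._

          // Reuses the `spark` session from the sketch above. Each derived key
          // maps a value to the start of its range, so a partition covers a
          // range of values instead of a single distinct value.
          val byRange = spark.table("fact_sales")
            // date column, two months per partition: 2018-03-15 -> "2018-03"
            .withColumn("date_range",
              concat(year(col("order_date")).cast("string"), lit("-"),
                     format_string("%02d",
                       floor((month(col("order_date")) - 1) / 2) * 2 + 1)))
            // integer column, interval of 10000 per partition: 123456 -> 120000
            .withColumn("id_range", floor(col("order_id") / 10000) * 10000)

          byRange.write.partitionBy("date_range").saveAsTable("fact_sales_by_range")

      With a derived key like this, the file count depends on the number of ranges rather than the number of distinct values, so partition granularity can be tuned independently of the bucket count. Native support could additionally let Spark prune partitions from predicates on the original column, which the manual key does not do automatically.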


      Ideally, a feature like range partitioning would be implemented in Hive. However, it has always been hard to upgrade the Hive version in a production environment, and implementing it in Spark is much more lightweight and flexible.

      Attachments

        1. range partition design doc.pdf (173 kB, Wang, Gang)


          People

            Assignee: Unassigned
            Reporter: Wang, Gang (gwang3)
            Votes: 1
            Watchers: 8
