Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
The first version of the ClusteredWriter in Hive-Iceberg is lenient for bucketed tables: records do not need to arrive ordered by bucket value; the writer simply closes its current file and opens a new one whenever an out-of-order record arrives.
This is suboptimal in the long term because it creates many small files. Spark uses a UDF to compute the bucket value for each record, so it can order the records by bucket value and achieve optimal clustering.
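The small-files problem can be illustrated with a toy model of such a writer. This is a from-scratch sketch, not Hive code, and the bucket function here (`hash % N`) is a stand-in for Iceberg's murmur3-based transform:

```python
def files_written(records, num_buckets=4):
    """Toy clustered writer: keeps one open file and must roll to a
    new one whenever the incoming record's bucket value changes."""
    files = 0
    current_bucket = None
    for rec in records:
        b = hash(rec) % num_buckets  # stand-in bucket function
        if b != current_bucket:      # bucket changed: close file, open a new one
            files += 1
            current_bucket = b
    return files

records = list(range(20))
unsorted_files = files_written(records)
# Pre-sorting by bucket value groups each bucket together,
# so the writer opens at most num_buckets files in total.
sorted_files = files_written(sorted(records, key=lambda r: hash(r) % 4))
```

With the unsorted input every consecutive record lands in a different bucket, so the lenient writer produces one file per record, while the sorted input produces only one file per bucket.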
The proposed change adds a new UDF that uses Iceberg's bucket transformation function to produce bucket values from constants or any column input. All types supported by Iceberg's bucket transform are supported by this UDF too, except UUID.
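As a sketch of what such a UDF computes for int/long inputs, the Iceberg table spec defines the bucket transform as `bucket(N, v) = (murmur3_x86_32(serialize(v)) & Integer.MAX_VALUE) % N`, with int/long values serialized as 8-byte little-endian longs. The following is a from-scratch Python illustration of that spec, not the actual Hive UDF code:

```python
import struct

def murmur3_x86_32(data: bytes, seed: int = 0) -> int:
    """Pure-Python 32-bit Murmur3 (x86 variant), unsigned result."""
    c1, c2 = 0xCC9E2D51, 0x1B873593
    h = seed
    n = len(data)
    # Mix each full 4-byte little-endian block into the hash state.
    for i in range(0, n - (n % 4), 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
        h = ((h << 13) | (h >> 19)) & 0xFFFFFFFF
        h = (h * 5 + 0xE6546B64) & 0xFFFFFFFF
    # Mix the remaining 1-3 tail bytes, if any.
    tail = data[n - (n % 4):]
    if tail:
        k = int.from_bytes(tail, "little")
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
    # Finalization: mix in the length and avalanche the bits.
    h ^= n
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & 0xFFFFFFFF
    h ^= h >> 16
    return h

def bucket(num_buckets: int, value: int) -> int:
    """Bucket id for an int/long value, per the Iceberg spec."""
    hashed = murmur3_x86_32(struct.pack("<q", value))
    return (hashed & 0x7FFFFFFF) % num_buckets
```

Per the spec's published test vector, hashing the long value 34 yields 2017239379, so `bucket(16, 34)` places it in bucket 3.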
This UDF is then used in SortedDynPartitionOptimizer to sort data during write if the target Iceberg table is partitioned by a bucket transform.
To enable this, Hive has been extended so that storage handlers can define custom sort expressions, which are passed to the FileSink operator's DynPartContext during dynamic partitioning writes.
The lenient version of ClusteredWriter in patched-iceberg-core has been removed, as it is no longer needed with this feature in place.