Details
- Type: Improvement
- Status: Resolved
- Priority: P2
- Resolution: Won't Do
- Affects Version/s: 2.29.0
- Fix Version/s: None
Description
When the number of shards is explicitly specified, the default sharding function is `RandomShardingFunction`. `WriteFiles` does have an option to pass in a custom sharding function, but that option is not surfaced in the user-facing API at `FileIO`.
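For context, the hook that `WriteFiles` accepts is, approximately, a serializable function from destination, element, and shard count to a sharded key. The sketch below paraphrases `org.apache.beam.sdk.io.ShardingFunction` from the Beam Java SDK; see the SDK source for the authoritative definition:

```java
import java.io.Serializable;
import org.apache.beam.sdk.values.ShardedKey;

// Paraphrased sketch of the hook WriteFiles already accepts; the shipped
// interface lives at org.apache.beam.sdk.io.ShardingFunction.
public interface ShardingFunction<UserT, DestinationT> extends Serializable {
  ShardedKey<Integer> assignShardKey(DestinationT destination, UserT element, int shardCount)
      throws Exception;
}
```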
This is limiting in two use cases:
- I need to generate shards that are compatible with Hive bucketing, and therefore need to decide shard assignment based on data fields of the element being sharded (see the sketch after this list).
- When the job runs on Spark, for example, and encounters a failure that causes loss of some data from previous stages, Spark recomputes the necessary tasks in those stages. Because shard assignment is random, some data ends up in different shards after the recompute and causes duplicates in the final dataset.
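A minimal sketch of a deterministic sharding function covering both cases, assuming `String` elements, a single (`Void`) destination, and a hypothetical class name `HashShardingFunction`:

```java
import org.apache.beam.sdk.io.ShardingFunction;
import org.apache.beam.sdk.values.ShardedKey;

// Hive-style bucketing sketch: shard = non-negative hash of the element's
// bucketing key modulo the shard count. Deterministic, so a Spark recompute
// of the same element always lands it in the same shard.
class HashShardingFunction implements ShardingFunction<String, Void> {
  @Override
  public ShardedKey<Integer> assignShardKey(Void destination, String element, int shardCount) {
    // String.hashCode() is specified by the JLS, so it is stable across JVMs;
    // Math.floorMod keeps the result non-negative.
    int shard = Math.floorMod(element.hashCode(), shardCount);
    // Single destination here, so a constant grouping key is enough.
    return ShardedKey.of(0, shard);
  }
}
```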
I propose surfacing `.withShardingFunction()` at the `FileIO` level so that users can choose a custom sharding strategy when desired.
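A hypothetical usage, assuming the proposed method is added to `FileIO.Write` and simply forwards to `WriteFiles` (the method does not exist in the current `FileIO` API):

```java
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.io.TextIO;

// Hypothetical: withShardingFunction() is the proposed addition, not current
// FileIO API. `lines` is assumed to be a PCollection<String>.
lines.apply(
    FileIO.<String>write()
        .via(TextIO.sink())
        .to("/path/to/output")
        .withNumShards(8)
        .withShardingFunction(new HashShardingFunction()));
```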
Attachments
Issue Links
- split to: BEAM-12654 FileIO can produce duplicates in output files (Open)
- links to