Details
- Type: New Feature
- Status: Open
- Priority: P3
- Resolution: Unresolved
Description
The current sink API writes all data to a single destination, but there are many use cases where different pieces of data need to be routed to different destinations, with the set of destinations being data-dependent (so this cannot be implemented with a Partition transform).
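For context, the existing Partition transform fixes the number of outputs at pipeline construction time, which is why it cannot express a data-dependent set of destinations. A minimal sketch using the Beam Java SDK's Partition (the input collection and the hash-based routing are purely illustrative):

import org.apache.beam.sdk.transforms.Partition;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionList;

// Partition requires the output count (here 3) up front, so every possible
// destination must already be known when the pipeline is built; the set of
// destinations cannot depend on the data itself.
PCollection<String> lines = ...;  // illustrative input
PCollectionList<String> shards = lines.apply(
    Partition.of(3, new Partition.PartitionFn<String>() {
      @Override
      public int partitionFor(String line, int numPartitions) {
        return Math.floorMod(line.hashCode(), numPartitions);
      }
    }));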
One internally discussed proposal was an API of the form:
PCollection<Void> PCollection<T>.apply(
    Write.using(DoFn<T, SinkT> where,
                MapFn<SinkT, WriteOperation<WriteResultT, T>> how))
so an item T gets written to a destination (or multiple destinations) determined by "where", and the writing strategy is determined by "how", which produces a WriteOperation (the current API's global init / write / global finalize hooks) for any given destination.
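As a rough illustration of the proposal from a caller's perspective, here is a sketch against the not-yet-existing API; the LogRecord element type, the String-path SinkT, WriteResult, and TextWriteOperation are placeholders invented for this example:

// Hypothetical usage of the proposed Write.using(where, how); none of
// these classes exist in the SDK yet, and the DoFn is shown schematically.
PCollection<LogRecord> records = ...;
PCollection<Void> done = records.apply(
    Write.using(
        // "where": compute one or more destinations per element.
        new DoFn<LogRecord, String>() {
          @Override
          public void processElement(ProcessContext c) {
            c.output("gs://logs/" + c.element().getCustomerId());
          }
        },
        // "how": produce a WriteOperation (the init/write/finalize hooks of
        // the current API) for whatever destination "where" emitted.
        new MapFn<String, WriteOperation<WriteResult, LogRecord>>() {
          @Override
          public WriteOperation<WriteResult, LogRecord> apply(String path) {
            return new TextWriteOperation(path);  // placeholder
          }
        }));

Because the destination is computed inside "where", it can depend on the element itself, which is exactly the data-dependent routing that the current Write.to() cannot express.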
This API also has other benefits:
- allows the SinkT to be computed dynamically (in "where") rather than being specified at pipeline construction time
- removes the need for a Sink class entirely
- is sequenceable with respect to downstream transforms (further transforms can be attached to the returned PCollection<Void>, whereas the current Write.to() returns a PDone)
Issue Links
- is duplicated by: BEAM-159 Support fixed number of shards in sinks (Resolved)
- links to