Details
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 1.18.1
- Fix Version/s: None
Description
There are some use cases in which data sources are already pre-partitioned:
- The data in a Kafka topic is already partitioned with respect to some key(s).
- Multiple [Flink] jobs materialize their outputs and subsequently read them back as inputs.
One of the main benefits of exploiting such pre-partitioning is that we can avoid unnecessary shuffles.
The DataStream API already provides an experimental feature that covers a subset of these cases [1].
We should support this in the Table/SQL API as well.
[1] https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/experimental/
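For context, below is a minimal sketch of the existing DataStream-level feature referenced in [1], DataStreamUtils.reinterpretAsKeyedStream. The in-memory source, the Tuple2 schema, and the choice of key field are illustrative assumptions only; the call merely tells Flink to trust that the incoming data is already partitioned exactly as keyBy on that key would partition it, so no network shuffle is inserted.

{code:java}
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamUtils;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PrePartitionedSourceExample {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for a source whose records are already partitioned by the
        // first tuple field (e.g. a Kafka topic keyed by user id). The in-memory
        // source is only here to keep the sketch self-contained.
        DataStream<Tuple2<String, Long>> prePartitioned =
                env.fromElements(Tuple2.of("user-1", 1L), Tuple2.of("user-2", 2L));

        // Reinterpret the stream as keyed without inserting a shuffle.
        // Caveat from the experimental feature: this is only correct if the data
        // really is distributed across subtasks exactly as keyBy(f0) would do it.
        KeyedStream<Tuple2<String, Long>, String> keyed =
                DataStreamUtils.reinterpretAsKeyedStream(
                        prePartitioned,
                        new KeySelector<Tuple2<String, Long>, String>() {
                            @Override
                            public String getKey(Tuple2<String, Long> value) {
                                return value.f0;
                            }
                        });

        // Keyed aggregation that now runs without a preceding network exchange.
        keyed.sum(1).print();

        env.execute("Reinterpret pre-partitioned stream as keyed");
    }
}
{code}

The proposal is to expose an equivalent capability for Table/SQL sources, so that the planner can skip the exchange when a connector declares its data as pre-partitioned.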