Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 1.14.0
Description
As one of the most commonly used connectors, the Kafka sink should be ported to the new sink interfaces (FLIP-143) as quickly as possible so that Flink can support batch queries on Kafka.
The implementation should probably follow the current implementation as closely as possible.
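For context, a minimal sketch of how an application would use the resulting unified KafkaSink with at-least-once delivery (class and builder method names reflect the KafkaSink API released in Flink 1.14; the bootstrap servers and topic name are placeholder values, not taken from this issue):

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSinkExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Any DataStream<String> would do; this is just a stand-in source.
        DataStream<String> stream = env.fromElements("a", "b", "c");

        // Build the new unified KafkaSink with at-least-once delivery.
        // "localhost:9092" and "my-topic" are placeholders.
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic("my-topic")
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                .build();

        // sinkTo attaches a FLIP-143 Sink, as opposed to addSink for legacy SinkFunctions.
        stream.sinkTo(sink);
        env.execute("KafkaSink example");
    }
}
```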
Issue Links
- relates to: FLINK-28370 Add close method for KafkaRecordSerializationSchema (Open)
Sub-Tasks
1. Implement at-least-once Kafka Sink | Resolved | Fabian Paul
2. Implement exactly-once Kafka Sink (see the sketch after this list) | Resolved | Fabian Paul
3. Migrate Table API to new KafkaSink | Resolved | Fabian Paul
4. Create a KafkaRecordSerializationSchemas valueOnly helper | Resolved | Fabian Paul
5. Write documentation for new KafkaSink | Resolved | Fabian Paul
6. Move sink to org.apache.flink.connector.kafka.sink package | Resolved | Fabian Paul
7. Migrate BufferedUpsertSinkFunction to FLIP-143 | Resolved | Fabian Paul
8. Test FLIP-143 KafkaSink | Closed | Ruan Hang
9. Add FLIP-33 metrics to new KafkaSink | Resolved | Fabian Paul
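Sub-tasks 1 and 2 cover the two delivery guarantees of the new sink. As a hedged illustration of what the exactly-once variant looks like from the user side (builder method names reflect the released KafkaSink API; the transactional id prefix, bootstrap servers, and topic are placeholders, not taken from this issue):

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

public class ExactlyOnceKafkaSinkExample {

    // Builds an exactly-once KafkaSink. Compared to at-least-once, the caller must
    // supply a transactional id prefix so each parallel writer can derive a unique
    // Kafka transactional.id; the Kafka producer's transaction.timeout.ms must also
    // be large enough to cover the checkpoint interval.
    public static KafkaSink<String> buildSink(String bootstrapServers, String topic) {
        return KafkaSink.<String>builder()
                .setBootstrapServers(bootstrapServers)
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic(topic)
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                .setTransactionalIdPrefix("my-app") // placeholder prefix
                .build();
    }
}
```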