Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Versions: 1.16.2, 1.17.1, 1.18.0
Description
CREATE TABLE kafkaTest (
    a STRING NOT NULL,
    config ARRAY<ROW<notificationTypeOptInReference STRING NOT NULL, cso STRING NOT NULL, autocreate BOOLEAN NOT NULL> NOT NULL> NOT NULL,
    ingestionTime TIMESTAMP(3) METADATA FROM 'timestamp',
    PRIMARY KEY (a) NOT ENFORCED  -- the original report keyed on a column (businessEvent) that is not declared; keying on `a` makes the DDL valid
) WITH (
    'connector' = 'kafka',
    'topic' = 'test',
    'properties.group.id' = 'testGroup',
    'scan.startup.mode' = 'earliest-offset',
    'properties.bootstrap.servers' = '',
    'properties.security.protocol' = 'SASL_SSL',
    'properties.sasl.mechanism' = 'PLAIN',
    'properties.sasl.jaas.config' = ';',
    'value.format' = 'json',
    'sink.partitioner' = 'fixed'
);
If we run the following INSERT, we see that the last item in the array is written to the topic three times and the first two are ignored.
INSERT INTO kafkaTest VALUES (
    'Transaction',
    ARRAY[ROW('G', 'IT', true), ROW('H', 'FR', true), ROW('I', 'IT', false)],
    TIMESTAMP '2023-08-30 14:01:00'
);
The result (see attachment) shows only the last array element, repeated three times. If I use the 'print' connector as the sink instead, I get the correct result, so I believe this is a bug in the 'kafka' connector.
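The symptom (every array element serialized as a copy of the last one) is characteristic of an object-reuse bug: a converter that reuses one mutable row buffer for all array elements and stores references instead of copies. This is a hypothetical, hedged sketch of that failure mode in plain Python, not Flink's actual converter code; the field names are taken from the table definition above.

```python
# Sketch of the suspected failure mode: a single mutable buffer is reused
# while converting each ROW in the ARRAY, and a *reference* to it is stored.
# After the loop, every slot points at the same object, which holds the
# last row's values -- matching the "last item written three times" symptom.

def convert_array_buggy(rows):
    shared = {}          # one reused buffer (an object-reuse optimization)
    out = []
    for row in rows:
        shared.update(row)   # overwrite the shared buffer in place
        out.append(shared)   # BUG: stores a reference, not a copy
    return out

def convert_array_fixed(rows):
    # Correct behavior: materialize an independent copy per element.
    return [dict(row) for row in rows]

rows = [
    {"notificationTypeOptInReference": "G", "cso": "IT", "autocreate": True},
    {"notificationTypeOptInReference": "H", "cso": "FR", "autocreate": True},
    {"notificationTypeOptInReference": "I", "cso": "IT", "autocreate": False},
]

buggy = convert_array_buggy(rows)
fixed = convert_array_fixed(rows)
# buggy: all three entries equal the last row ('I', 'IT', false)
# fixed: the three distinct rows survive conversion
```

This also explains why the 'print' sink looks correct: a sink that consumes each element eagerly (before the buffer is overwritten) never observes the aliasing, while a sink that serializes the whole array afterwards does.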
Attachments
Issue Links
- duplicates: FLINK-32296 Flink SQL handle array of row incorrectly (Resolved)