Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 1.11.1
- Fix Version/s: None
- Component/s: None
Description
Take the following test:
org.apache.flink.table.planner.runtime.stream.sql.SplitAggregateITCase#testMinMaxWithRetraction
val t1 = tEnv.sqlQuery(
  s"""
     |SELECT
     |  c, MIN(b), MAX(b), COUNT(DISTINCT a)
     |FROM (
     |  SELECT
     |    a, COUNT(DISTINCT b) as b, MAX(b) as c
     |  FROM T
     |  GROUP BY a
     |) GROUP BY c
     """.stripMargin)

val sink = new TestingRetractSink
t1.toRetractStream[Row].addSink(sink)
env.execute()
println(sink.getRawResults)
The query schema is:

root
 |-- c: INT
 |-- EXPR$1: BIGINT NOT NULL
 |-- EXPR$2: BIGINT NOT NULL
 |-- EXPR$3: BIGINT NOT NULL
This schema should be correct: COUNT(DISTINCT b) is never null, and therefore MIN(b) and MAX(b) over it are never null either. However, the sink can still receive null values:
List((true,1,null,null,1), (true,2,2,2,1), (false,1,null,null,1), (true,6,2,2,1), (true,5,1,1,0), (false,5,1,1,0), (true,5,1,1,2), (true,4,2,2,0), (false,5,1,1,2), (true,5,1,3,2), (false,4,2,2,0), (false,5,1,3,2), (true,5,1,4,2))
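As a quick sanity check, the raw results above can be replayed outside of Flink: the first element of each tuple is the accumulate/retract flag, and folding over it shows which rows survive. The sketch below is my own illustration (the `ReplayRetractions` object and its fold logic are not part of the Flink test); it confirms that the rows containing null are eventually retracted, yet they were still emitted to a sink whose schema declares those columns NOT NULL.

```scala
// Standalone sketch: replay the retract stream's raw results from the
// description and keep only rows whose net accumulate count is positive.
object ReplayRetractions {
  // (isAccumulate, row) pairs copied from the sink output above,
  // where each row is (c, EXPR$1, EXPR$2, EXPR$3).
  val raw: List[(Boolean, List[Any])] = List(
    (true,  List(1, null, null, 1L)),
    (true,  List(2, 2L, 2L, 1L)),
    (false, List(1, null, null, 1L)),
    (true,  List(6, 2L, 2L, 1L)),
    (true,  List(5, 1L, 1L, 0L)),
    (false, List(5, 1L, 1L, 0L)),
    (true,  List(5, 1L, 1L, 2L)),
    (true,  List(4, 2L, 2L, 0L)),
    (false, List(5, 1L, 1L, 2L)),
    (true,  List(5, 1L, 3L, 2L)),
    (false, List(4, 2L, 2L, 0L)),
    (false, List(5, 1L, 3L, 2L)),
    (true,  List(5, 1L, 4L, 2L))
  )

  // Accumulates +1 for each insert and -1 for each retraction,
  // returning the rows that remain after all retractions are applied.
  def finalRows(events: List[(Boolean, List[Any])]): Map[List[Any], Int] =
    events.foldLeft(Map.empty[List[Any], Int]) {
      case (acc, (true, row))  => acc.updated(row, acc.getOrElse(row, 0) + 1)
      case (acc, (false, row)) => acc.updated(row, acc.getOrElse(row, 0) - 1)
    }.filter { case (_, count) => count > 0 }

  def main(args: Array[String]): Unit = {
    // The null rows are retracted and do not appear in the final result,
    // but they were nonetheless observed by the sink.
    println(finalRows(raw).keys)
  }
}
```

Replaying the events leaves only (2,2,2,1), (6,2,2,1), and (5,1,4,2), so the final materialized result respects the NOT NULL schema; the problem is that intermediate records with null in NOT NULL columns pass through the sink at all.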
Issue Links
- blocks: FLINK-18703 Use new data structure converters when legacy types are not present (Closed)