Details
- Type: Improvement
- Status: Open
- Priority: Not a Priority
- Resolution: Unresolved
- Affects Version/s: None
- Fix Version/s: None
- Environment: Flink 1.12.0, hive-exec 2.3.5
Description
When using Flink SQL to insert into Hive from Kafka, a heap OutOfMemoryError occurs randomly.
The Hive table uses year/month/day/hour as its partition keys. The maximum heap space needed appears to track the number of active partitions, which grows when Kafka messages arrive out of order or delayed, presumably because each open partition keeps its own writer buffers. As the number of active partitions increases, so does the heap required, which can eventually exhaust the heap.
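For concreteness, below is a minimal sketch of the job shape described above; the catalog, database, table, topic, and schema names are placeholders, not taken from this report:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KafkaToHiveJob {
    public static void main(String[] args) {
        // Streaming Table API job (Flink 1.12 style).
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Kafka source table (placeholder topic and schema).
        tEnv.executeSql(
                "CREATE TABLE kafka_events (id STRING, payload STRING, ts TIMESTAMP(3)) WITH ("
                        + " 'connector' = 'kafka',"
                        + " 'topic' = 'events',"
                        + " 'properties.bootstrap.servers' = 'localhost:9092',"
                        + " 'format' = 'json')");

        // Insert into a Hive table partitioned by year/month/day/hour (assumes a
        // HiveCatalog named 'hive_catalog' is registered). Late or out-of-order
        // Kafka records reopen older partitions, so the number of partitions being
        // written at once grows with the amount of disorder in the stream.
        tEnv.executeSql(
                "INSERT INTO hive_catalog.mydb.events_sink"
                        + " SELECT id, payload,"
                        + " DATE_FORMAT(ts, 'yyyy'), DATE_FORMAT(ts, 'MM'),"
                        + " DATE_FORMAT(ts, 'dd'), DATE_FORMAT(ts, 'HH')"
                        + " FROM kafka_events");
    }
}
```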
When writing a record, would it be possible to take overall heap usage into account in checkBlockSizeReached, or to use some other mechanism to avoid the OOM?
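As a rough illustration of that suggestion (a hypothetical policy class, not the actual checkBlockSizeReached code in hive-exec), a writer could consult overall JVM heap usage in addition to its own buffered size when deciding whether to flush:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

/** Hypothetical sketch: a flush decision that also considers global heap usage. */
public class HeapAwareFlushPolicy {
    private final long rowGroupSizeBytes; // existing per-writer size threshold
    private final double maxHeapRatio;    // e.g. 0.8: flush once 80% of heap is used

    public HeapAwareFlushPolicy(long rowGroupSizeBytes, double maxHeapRatio) {
        this.rowGroupSizeBytes = rowGroupSizeBytes;
        this.maxHeapRatio = maxHeapRatio;
    }

    /** Returns true if this writer should flush its buffered data now. */
    public boolean shouldFlush(long bufferedBytes) {
        // The check writers already do: flush when this writer's own buffer is full.
        if (bufferedBytes >= rowGroupSizeBytes) {
            return true;
        }
        // The proposed addition: flush early under global heap pressure, so that
        // many concurrently open partition writers cannot exhaust the heap together.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getMax() > 0 && heap.getUsed() >= (long) (heap.getMax() * maxHeapRatio);
    }
}
```

Flushing early trades smaller blocks on disk for bounded memory; an alternative would be to cap the number of simultaneously open partition writers.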
Attachments
Issue Links
- relates to: FLINK-31092 Hive ITCases fail with OutOfMemoryError (Closed)