Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 1.12.2, 1.13.0
Description
Currently, if we start a native K8s session cluster with K8s HA enabled, we cannot run more than 20 streaming jobs.
The latest job stays in the initializing state, and the previous one is created and waiting to be assigned. It seems that some internal resource has been exhausted, e.g. the okhttp thread pool, TCP connections, or something else.
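The symptom above is consistent with each job holding its own long-lived watch connection (and its own thread) against the Kubernetes API server, so resource usage grows linearly with the number of jobs until a pool limit is hit. The sketch below is purely illustrative Python, not Flink's actual implementation: the class names, the thread-per-watcher model, and the job count are assumptions used to contrast a per-job watcher with a shared one.

```python
import threading

class PerJobWatcher:
    """Illustrative: each job starts its own watcher thread, standing in
    for one dedicated HTTP watch connection per leader-election ConfigMap."""
    def __init__(self, stop_event):
        self.thread = threading.Thread(target=self._run, args=(stop_event,),
                                       daemon=True)
        self.thread.start()

    def _run(self, stop_event):
        stop_event.wait()  # stands in for a long-lived blocking watch call

class SharedWatcher:
    """Illustrative alternative: a single watcher thread fans events out
    to any number of registered jobs, so resource usage stays constant."""
    def __init__(self, stop_event):
        self.listeners = []
        self.thread = threading.Thread(target=self._run, args=(stop_event,),
                                       daemon=True)
        self.thread.start()

    def register(self, callback):
        self.listeners.append(callback)

    def _run(self, stop_event):
        stop_event.wait()  # one connection serves all listeners

stop = threading.Event()
baseline = threading.active_count()

# 20 jobs with per-job watchers -> 20 extra threads/connections.
per_job = [PerJobWatcher(stop) for _ in range(20)]
grew_by = threading.active_count() - baseline
print(f"per-job watchers added {grew_by} threads")  # 20

# 20 jobs sharing one watcher -> 1 extra thread/connection.
shared = SharedWatcher(stop)
for i in range(20):
    shared.register(lambda event, job=i: None)
print(f"shared watcher serves {len(shared.listeners)} jobs with 1 thread")

stop.set()  # release all watcher threads
```

This mirrors why the linked FLINK-22054 ("Using a shared watcher for ConfigMap watching") addresses the problem: with a shared watcher, adding a job registers a callback instead of opening another connection.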
Attachments
Issue Links
- causes
  - FLINK-22047 Could not find FLINSHED Flink job and can't submit job (Closed)
- is related to
  - FLINK-22054 Using a shared watcher for ConfigMap watching (Closed)
- relates to
  - FLINK-21942 KubernetesLeaderRetrievalDriver not closed after terminated which lead to connection leak (Closed)