Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Fix Version: kubernetes-operator-1.6.0
Description
Since https://issues.apache.org/jira/browse/FLINK-32589, the operator no longer relies on the Flink configuration to store the parallelism overrides. Instead, it stores them internally in the autoscaler config map. When scaling without the rescaling API, the spec is changed on the fly during reconciliation and the parallelism overrides are added (see the sketch below).
Unfortunately, this leads to the cluster getting stuck with the job in FINISHED state after taking a savepoint for the upgrade. The operator assumes that the new cluster was deployed successfully and goes into DEPLOYED state again.
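For illustration, a minimal sketch of what such an on-the-fly override application could look like. The class and method names are hypothetical, not the operator's actual API; only the config key pipeline.jobvertex-parallelism-overrides is a real Flink option.

```java
import java.util.Map;
import java.util.stream.Collectors;

public class ParallelismOverrideSketch {

    /**
     * Hypothetical sketch: folds overrides read from the autoscaler config map
     * into the effective spec's flinkConfiguration during reconciliation.
     */
    static void applyOverrides(Map<String, String> flinkConfiguration,
                               Map<String, Integer> overridesByVertexId) {
        if (overridesByVertexId.isEmpty()) {
            return;
        }
        // Flink's map-typed options serialize as "key1:value1,key2:value2,...".
        String encoded = overridesByVertexId.entrySet().stream()
                .map(e -> e.getKey() + ":" + e.getValue())
                .collect(Collectors.joining(","));
        flinkConfiguration.put("pipeline.jobvertex-parallelism-overrides", encoded);
        // Note: only the in-memory effective spec is mutated here; the resource
        // stored in Kubernetes is untouched, so metadata.generation stays the same.
    }
}
```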
Log flow (from oldest to newest):
1. Rescheduling new reconciliation immediately to execute scaling operation.
2. Upgrading/Restarting running job, suspending first...
3. Job is in running state, ready for upgrade with SAVEPOINT
4. Suspending existing deployment.
5. Suspending job with savepoint.
6. Job successfully suspended with savepoint
7. The resource is being upgraded
8. Pending upgrade is already deployed, updating status.
9. Observing JobManager deployment. Previous status: DEPLOYING
10. JobManager deployment port is ready, waiting for the Flink REST API...
11. DEPLOYED The resource is deployed/submitted to Kubernetes, but it’s not yet considered to be stable and might be rolled back in the future
It appears the issue might be in step (8): https://github.com/apache/flink-kubernetes-operator/blob/c09671c5c51277c266b8c45d493317d3be1324c0/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/deployment/AbstractFlinkDeploymentObserver.java#L260, because the generation id is not changed by the mere parallelism override, so the operator wrongly concludes that the pending upgrade is already deployed.
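A simplified sketch of the kind of check at the linked line, to make the suspected failure mode concrete. Method and parameter names are illustrative, not the operator's actual code.

```java
public class GenerationCheckSketch {

    /**
     * The operator compares the spec generation recorded on the running
     * JobManager deployment with the generation targeted by the pending
     * upgrade. If they match, it assumes the upgrade is already in place
     * and logs "Pending upgrade is already deployed, updating status."
     */
    static boolean pendingUpgradeAlreadyDeployed(long deployedGeneration,
                                                 long upgradeTargetGeneration) {
        // A parallelism-override-only change is applied in memory during
        // reconciliation, without a write through the Kubernetes API server,
        // so metadata.generation is not bumped. Both values then compare
        // equal even though the suspended (FINISHED) job was never redeployed.
        return deployedGeneration == upgradeTargetGeneration;
    }
}
```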
Issue Links
- is caused by: FLINK-32589 Carry over parallelism overrides to prevent users from clearing them on updates (Closed)
- links to