Details
- Type: Bug
- Status: Closed
- Priority: Minor
- Resolution: Not A Bug
- Affects Version: 1.14.4
- Fix Version: None
Description
I deploy a job in Kubernetes application mode by constructing a KubernetesClusterDescriptor and a Fabric8FlinkKubeClient. The code is shown below.
// Initialize flinkConfiguration and set options, including TOTAL_PROCESS_MEMORY
Configuration flinkConfiguration = GlobalConfiguration.loadConfiguration();
flinkConfiguration
        .set(DeploymentOptions.TARGET, KubernetesDeploymentTarget.APPLICATION.getName())
        .set(PipelineOptions.JARS, Collections.singletonList(flinkDistJar))
        .set(KubernetesConfigOptions.CLUSTER_ID, "APPLICATION1")
        .set(KubernetesConfigOptions.CONTAINER_IMAGE, "img_url")
        .set(KubernetesConfigOptions.CONTAINER_IMAGE_PULL_POLICY,
                KubernetesConfigOptions.ImagePullPolicy.Always)
        .set(JobManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("1024M"))
        .set...;

// Construct the KubernetesClusterDescriptor and Fabric8FlinkKubeClient
KubernetesClusterDescriptor kubernetesClusterDescriptor =
        new KubernetesClusterDescriptor(
                flinkConfiguration,
                new Fabric8FlinkKubeClient(
                        flinkConfiguration,
                        new DefaultKubernetesClient(),
                        Executors.newFixedThreadPool(2)));

ApplicationConfiguration applicationConfiguration =
        new ApplicationConfiguration(execArgs, null);

// Deploy the job in Kubernetes application mode
ClusterClient<String> clusterClient =
        kubernetesClusterDescriptor
                .deployApplicationCluster(
                        new ClusterSpecification.ClusterSpecificationBuilder()
                                .createClusterSpecification(),
                        applicationConfiguration)
                .getClusterClient();
String clusterId = clusterClient.getClusterId();
As shown above, I set TOTAL_PROCESS_MEMORY to 1024M. The Flink UI then displays a memory configuration that is clearly correct (448 + 128 + 256 + 192 = 1024).
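For reference, these four numbers match what I would compute by hand from the 1.14 default JobManager memory model. This breakdown is my own reconstruction of the UI figures, assuming the default off-heap, metaspace, and JVM-overhead settings:

// Assumed Flink 1.14 defaults:
//   jobmanager.memory.off-heap.size      = 128m
//   jobmanager.memory.jvm-metaspace.size = 256m
//   jobmanager.memory.jvm-overhead       = fraction 0.1, min 192m, max 1g
long totalMb = 1024;
long offHeapMb = 128;
long metaspaceMb = 256;
// 0.1 * 1024 = 102.4 is below the 192m minimum, so the minimum applies
long overheadMb = Math.min(Math.max(Math.round(totalMb * 0.1), 192L), 1024L); // = 192
long heapMb = totalMb - offHeapMb - metaspaceMb - overheadMb; // 1024 - 128 - 256 - 192 = 448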
But when I inspect the JobManager deployment with kubectl describe deployment, I find that the JobManager pod memory is always 768M, even though it should equal TOTAL_PROCESS_MEMORY (1024M), and no matter how I adjust the TOTAL_PROCESS_MEMORY parameter, the pod memory does not change.
As a result, the pod is OOMKilled as soon as the JobManager's memory usage exceeds 768M.
I expect the JobManager pod memory to equal TOTAL_PROCESS_MEMORY, so that I can size the memory to suit my needs.
Is there something wrong with my configuration, or should the JobManager pod request the same amount of memory as TOTAL_PROCESS_MEMORY?
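For what it's worth, my own guess (unconfirmed, based on reading the client code, so the exact API below is an assumption): 768M looks suspiciously like the default masterMemoryMB of ClusterSpecification.ClusterSpecificationBuilder, which my code passes to deployApplicationCluster unchanged, so the configured TOTAL_PROCESS_MEMORY may never reach the pod spec. If so, deriving the specification from the configuration instead of using the builder defaults might be the intended usage; a minimal sketch:

import org.apache.flink.client.deployment.ClusterSpecification;
import org.apache.flink.kubernetes.KubernetesClusterClientFactory;

// Build the ClusterSpecification from flinkConfiguration instead of the
// ClusterSpecificationBuilder defaults, so that jobmanager.memory.process.size
// (TOTAL_PROCESS_MEMORY) is reflected in the JobManager pod resources.
ClusterSpecification clusterSpecification =
        new KubernetesClusterClientFactory().getClusterSpecification(flinkConfiguration);

ClusterClient<String> clusterClient =
        kubernetesClusterDescriptor
                .deployApplicationCluster(clusterSpecification, applicationConfiguration)
                .getClusterClient();

If that reading is right, the pod request would then follow TOTAL_PROCESS_MEMORY rather than the 768M default.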