I have been testing Spark 2.4.0 RC4 on Kubernetes by running Python Spark applications, and I am running into an issue where the app ID labels on the driver and executors do not match. I am using https://github.com/GoogleCloudPlatform/spark-on-k8s-operator to run these applications.
I see a spark.app.id of the form spark-* as the "spark-app-selector" label on the driver pod, as well as in the Kubernetes ConfigMap that spark-submit creates for the driver. My guess is this comes from https://github.com/apache/spark/blob/f6cc354d83c2c9a757f9b507aadd4dbdc5825cca/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L211
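If I am reading that code right, the submission client generates its own ID independently of the driver, along these lines (a sketch, not the exact source):

```scala
import java.util.UUID

// Sketch of how the submission client appears to derive the value it puts in
// the "spark-app-selector" label: a fresh UUID with the dashes stripped,
// prefixed with "spark-".
val kubernetesAppId = s"spark-${UUID.randomUUID().toString.replaceAll("-", "")}"
// e.g. "spark-9f1c2ab34d5e4f6a8b7c0d1e2f3a4b5c"
```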
But when the driver actually comes up and launches executors, the "spark-app-selector" label on the executor pods, as well as the spark.app.id config visible to user code on the driver, is of the form spark-application-* (probably from https://github.com/apache/spark/blob/b19a28dea098c7d6188f8540429c50f42952d678/core/src/main/scala/org/apache/spark/SparkContext.scala#L511 and https://github.com/apache/spark/blob/bfb74394a5513134ea1da9fcf4a1783b77dd64e4/core/src/main/scala/org/apache/spark/scheduler/SchedulerBackend.scala#L26).
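That second link suggests the base SchedulerBackend derives its default applicationId from a millisecond timestamp, so it can never equal the submission-side UUID above. Roughly (again a sketch of what I believe happens):

```scala
// Sketch of the default applicationId generated on the driver side by
// SchedulerBackend: a timestamp-based string, not the "spark-<uuid>" value
// that spark-submit stamped onto the driver pod.
val appId = "spark-application-" + System.currentTimeMillis
// e.g. "spark-application-1541529217594"
```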
We were consuming the "spark-app-selector" label on the driver pod to get the application ID and using it to look up the app in the Spark History Server, among other use cases (a sketch of that lookup is below), but due to this mismatch the logic no longer works. This worked fine in the Spark 2.2 Kubernetes fork I was using earlier. Is this expected behavior, and if so, what is the correct way to fetch the applicationId from outside the application?
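For reference, here is a minimal sketch of the lookup we were doing, using the fabric8 Kubernetes client; the namespace, pod name, and History Server host are placeholders for illustration:

```scala
import io.fabric8.kubernetes.client.DefaultKubernetesClient

// Read the "spark-app-selector" label off the driver pod and use it as the
// application ID when building the History Server URL.
val client = new DefaultKubernetesClient()
try {
  val driverPod = client.pods()
    .inNamespace("my-namespace")   // placeholder
    .withName("my-app-driver")     // placeholder
    .get()
  val appId = driverPod.getMetadata.getLabels.get("spark-app-selector")
  // With 2.4.0 RC4 this yields the "spark-<uuid>" value from spark-submit,
  // while the History Server knows the app under "spark-application-<ts>".
  println(s"http://history-server:18080/history/$appId/jobs/")
} finally {
  client.close()
}
```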
Let me know if I can provide any more details or if I am doing something wrong. Here is an example run showing different spark-app-selector labels on the driver and executor: