Details
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 3.0.1
- Fix Version/s: None
- Component/s: None
Description
Currently, Spark 3 on Kubernetes only supports loading driver and executor pod templates from the local file system: https://github.com/apache/spark/blob/master/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala#L87. This is inconvenient, because the user must either bake the pod templates into the client pod image or manually mount the files as a ConfigMap. It would be nice if Spark supported loading pod templates from Hadoop-compatible file systems (such as S3A), so that users could update the pod template files directly in S3 without changing the underlying Kubernetes job definition (e.g. updating the Docker image or updating the ConfigMap).
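With this improvement, usage could look like the sketch below: the existing `spark.kubernetes.driver.podTemplateFile` and `spark.kubernetes.executor.podTemplateFile` settings would accept an `s3a://` URI instead of a local path. The bucket name, template paths, API server address, and image below are hypothetical placeholders, and the `s3a://` support itself is the proposed behavior, not something Spark currently provides.

```shell
# Hypothetical sketch: pod templates resolved from S3 via the s3a:// scheme,
# so the templates can be updated in place without rebuilding the client image
# or touching a ConfigMap. All names below are placeholders.
spark-submit \
  --master k8s://https://my-apiserver.example.com:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=my-registry/spark:3.0.1 \
  --conf spark.kubernetes.driver.podTemplateFile=s3a://my-bucket/templates/driver-pod.yaml \
  --conf spark.kubernetes.executor.podTemplateFile=s3a://my-bucket/templates/executor-pod.yaml \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples_2.12-3.0.1.jar
```

Internally, this would amount to resolving the template path through Hadoop's `FileSystem` API (which dispatches on the URI scheme) rather than opening it directly as a local file.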