Spark / SPARK-31726

Make spark.files available in driver with cluster deploy mode on kubernetes



    • Type: Improvement
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0
    • Fix Version/s: None
    • Component/s: Kubernetes, Spark Core
    • Labels:


      Currently, on YARN with cluster deploy mode, --files makes the files available to the driver and executors and also puts them on the classpath for both the driver and executors.

      On k8s with cluster deploy mode, --files makes the files available on the executors, but they are not on the executor classpath. It does not make the files available on the driver at all, so they are not on the driver classpath either.

      It would be nice if the k8s behavior were consistent with YARN, or at least made the files available on the driver. Once the files are available, there is a simple workaround to get them on the classpath using spark.driver.extraClassPath="./".
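As a sketch of the setup described above, this is roughly what the submit command would look like if --files shipped the configs to the driver on k8s as it does on YARN, combined with the extraClassPath="./" workaround. The master URL, container image, class name, and jar path are placeholder assumptions, not values from this issue:

```shell
# Ship classpath-based config files (application.conf, log4j.properties)
# alongside the application, then add the pod's working directory to the
# driver classpath so resource lookups resolve, mirroring YARN behavior.
spark-submit \
  --master k8s://https://<api-server>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=<spark-image> \
  --files application.conf,log4j.properties \
  --conf spark.driver.extraClassPath="./" \
  --class com.example.Main \
  local:///opt/app/app.jar
```

Note that, per this issue, the --files step currently works this way on YARN but not on k8s, where the files never reach the driver; the extraClassPath="./" setting only helps once they do.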


      We recently started testing Kubernetes for Spark. Our main platform is YARN, on which we use client deploy mode. Our first experience was that client deploy mode was difficult to use on k8s (we don't launch from inside a pod), so we switched to cluster deploy mode, which seems to behave well on k8s. But then we realized that our programs rely on reading files from the classpath (application.conf, log4j.properties, etc.) that are present on the client but are no longer on the driver (since the driver no longer runs on the client). An easy fix seemed to be shipping the files with --files to make them available on the driver, but we could not get this to work.






              • Assignee:
                koert kuipers
              • Votes:
                0
              • Watchers:
                1


                • Created: