Spark / SPARK-3561

Allow for pluggable execution contexts in Spark


    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 1.1.0
    • Fix Version/s: None
    • Component/s: Spark Core


      Currently Spark provides integration with external resource managers such as Apache Hadoop YARN and Mesos. Specifically in the context of YARN, the current architecture of Spark-on-YARN can be enhanced to provide significantly better utilization of cluster resources for large-scale batch and/or ETL applications when run alongside other applications (Spark and others) and services in YARN.

      The proposed approach would introduce a pluggable JobExecutionContext (trait) - a gateway and a delegate to the Hadoop execution environment - as a non-public API (@Experimental) not exposed to end users of Spark.
      The trait will define 6 operations:

      • hadoopFile
      • newAPIHadoopFile
      • broadcast
      • runJob
      • persist
      • unpersist
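
      The six operations above could be sketched as a trait like the following. This is a simplified illustration, not the signatures from the attached design doc: the real methods would mirror the corresponding SparkContext signatures (Hadoop InputFormat types, RDDs, storage levels), which are elided here for brevity.

      ```scala
      // Simplified sketch of the proposed trait; signatures are assumptions.
      trait JobExecutionContext {
        def hadoopFile(path: String): Seq[String]
        def newAPIHadoopFile(path: String): Seq[String]
        def broadcast[T](value: T): T
        def runJob[T, U](data: Seq[T], func: T => U): Seq[U]
        def persist(id: String): Unit
        def unpersist(id: String): Unit
      }

      // The default implementation would carry the code that currently
      // lives in SparkContext; here it is stubbed out for illustration.
      class DefaultExecutionContext extends JobExecutionContext {
        def hadoopFile(path: String): Seq[String] = Seq(s"read:$path")
        def newAPIHadoopFile(path: String): Seq[String] = Seq(s"read-new:$path")
        def broadcast[T](value: T): T = value
        def runJob[T, U](data: Seq[T], func: T => U): Seq[U] = data.map(func)
        def persist(id: String): Unit = ()
        def unpersist(id: String): Unit = ()
      }
      ```

      An integrator would override only the operations whose behavior differs in their execution environment, inheriting the rest from the default.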

      Each method directly maps to the corresponding method in the current version of SparkContext. The JobExecutionContext implementation will be selected by SparkContext via a master URL of the form "execution-context:foo.bar.MyJobExecutionContext", with the default implementation containing the existing code from SparkContext, thus allowing the current (corresponding) methods of SparkContext to delegate to such an implementation. An integrator will now have an option to provide a custom implementation of JobExecutionContext by either implementing the trait from scratch or extending from DefaultExecutionContext.
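
      The master-URL dispatch described above could be resolved roughly as follows. This is a hypothetical sketch, assuming only the "execution-context:" prefix from the description; the helper names and reflective loading strategy are illustrative, not the committed design.

      ```scala
      // Sketch of resolving a pluggable context from the master URL.
      object ExecutionContextResolver {
        private val Prefix = "execution-context:"

        // Returns the implementation class name, or None for standard
        // masters (local, yarn, mesos, ...), which keep the existing path.
        def pluggableClass(master: String): Option[String] =
          if (master.startsWith(Prefix)) Some(master.stripPrefix(Prefix))
          else None

        // Reflectively instantiate the integrator-supplied class, which
        // must be on the classpath and have a no-arg constructor.
        def instantiate(className: String): AnyRef =
          Class.forName(className)
            .getDeclaredConstructor()
            .newInstance()
            .asInstanceOf[AnyRef]
      }
      ```

      Because the selection happens through the master URL, no new public API surface is needed: existing applications keep their behavior unless they opt in with an "execution-context:" master.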

      Please see the attached design doc for more details.


        Attachments: SPARK-3561.pdf (106 kB, Oleg Zhurakousky)

              • Assignee: Oleg Zhurakousky (ozhurakousky)
              • Votes: 4
              • Watchers: 83