SPARK-24615

Accelerator-aware task scheduling for Spark


    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.4.0
    • Fix Version/s: None
    • Component/s: Spark Core
    • Labels:


      In the machine learning area, accelerator cards (GPU, FPGA, TPU) are predominant compared to CPUs. To make the current Spark architecture work with accelerator cards, Spark itself should understand the existence of accelerators and know how to schedule tasks onto executors that are equipped with them.

      Spark's current scheduler assigns tasks based on data locality plus the availability of CPU cores. This introduces several problems when scheduling tasks that require accelerators:

      1. A node usually has more CPU cores than accelerators, so scheduling accelerator-required tasks by CPU cores alone introduces a mismatch: a task can be placed on a node whose accelerators are already exhausted.
      2. In a cluster we can always assume that every node has CPUs, but the same is not true of accelerator cards.
      3. The existence of heterogeneous tasks (some requiring accelerators, some not) requires the scheduler to place tasks in a smarter way.
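      To illustrate problem 1, here is a minimal, hypothetical sketch (not Spark code; the `ExecutorOffer`/`Task` model and the greedy loop are invented for illustration) of a scheduler that checks both CPU and accelerator availability before placing a task:

```python
from dataclasses import dataclass

@dataclass
class ExecutorOffer:
    # Free CPU cores and free GPUs on one executor (hypothetical model).
    host: str
    free_cpus: int
    free_gpus: int

@dataclass
class Task:
    # A task may or may not require an accelerator (heterogeneous workload).
    task_id: int
    cpus: int = 1
    gpus: int = 0

def schedule(tasks, offers):
    """Greedy assignment: a task fits on an executor only if BOTH its CPU
    and accelerator demands are satisfied. Scheduling by CPU count alone
    would over-commit the scarcer GPUs."""
    assignments = {}
    for task in tasks:
        for offer in offers:
            if offer.free_cpus >= task.cpus and offer.free_gpus >= task.gpus:
                offer.free_cpus -= task.cpus
                offer.free_gpus -= task.gpus
                assignments[task.task_id] = offer.host
                break
    return assignments

# node-a has 8 cores but only 1 GPU; node-b has no GPU at all (problem 2).
offers = [ExecutorOffer("node-a", 8, 1), ExecutorOffer("node-b", 8, 0)]
tasks = [Task(1, gpus=1), Task(2, gpus=1), Task(3)]
print(schedule(tasks, offers))  # {1: 'node-a', 3: 'node-a'} -- task 2 starves
```

      Note that task 2 cannot be placed even though plenty of CPU cores remain free, which is exactly the mismatch a CPU-only scheduler would miss.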

      We therefore propose to improve the current scheduler to support heterogeneous tasks (accelerator-required or not). This can be part of the work of Project Hydrogen.
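      One possible user-facing shape for such a proposal is to declare accelerator demand per executor and per task via configuration. The property names below are illustrative assumptions, not part of Spark 2.4; the point is that the scarcer resource, not the CPU count, bounds per-executor concurrency:

```python
# Hypothetical configuration sketch: the property names are illustrative of
# the proposal's direction, not existing Spark 2.4 settings.
conf = {
    "spark.executor.resource.gpu.amount": "2",  # GPUs advertised per executor
    "spark.task.resource.gpu.amount": "1",      # GPUs demanded per task
    "spark.executor.cores": "16",               # CPU cores per executor
    "spark.task.cpus": "1",                     # CPU cores demanded per task
}

def max_concurrent_tasks(conf):
    # The binding constraint is the scarcer resource: 16 cores would allow
    # 16 concurrent tasks, but 2 GPUs cap GPU tasks at 2 per executor.
    by_cpu = int(conf["spark.executor.cores"]) // int(conf["spark.task.cpus"])
    by_gpu = (int(conf["spark.executor.resource.gpu.amount"])
              // int(conf["spark.task.resource.gpu.amount"]))
    return min(by_cpu, by_gpu)

print(max_concurrent_tasks(conf))  # 2
```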

      Details are attached in a Google doc. It does not cover all the implementation details; it just highlights the parts that should be changed.


      CC Yanbo Liang Mingjie Tang



              • Assignee:
                jiangxb1987 Xingbo Jiang
                jerryshao Saisai Shao
                Xiangrui Meng
              • Votes: 7
              • Watchers: 42

