SPARK-30417: SPARK-29976 calculation of slots is wrong for Standalone Mode


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0
    • Fix Version/s: 3.0.0
    • Component/s: Spark Core
    • Labels: None

    Description

      In SPARK-29976 we added a config to determine whether we should allow speculation when the number of tasks is less than the number of slots on a single executor. The problem is that for standalone mode (and Mesos coarse-grained mode) the EXECUTOR_CORES config is not set properly by default: in those modes an executor gets all the cores of the Worker, but the default of EXECUTOR_CORES is 1.
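      For reference, a minimal sketch of how that default comes into play (hypothetical values; the real code reads the typed EXECUTOR_CORES config entry rather than calling getInt directly):

      import org.apache.spark.SparkConf

      val conf = new SparkConf()
      // In standalone mode nothing sets spark.executor.cores explicitly, so the
      // lookup falls back to the declared default of 1, even though the executor
      // was actually given all of the Worker's cores (e.g. 8).
      val executorCores = conf.getInt("spark.executor.cores", 1)  // == 1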

      The calculation:

      val speculationTasksLessEqToSlots = numTasks <= (conf.get(EXECUTOR_CORES) / sched.CPUS_PER_TASK)

      If someone sets cpus per task > 1, the integer division makes the computed slot count 0, so the comparison is false even with a single task. In the default case (cpus per task = 1 and executor cores = 1) it works out OK, but the check then only catches the single-task case rather than comparing against the real number of slots on the executor.
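      To make the failure concrete, here is a self-contained sketch with plain Ints standing in for the conf and scheduler lookups (the values are hypothetical):

      val executorCores = 1  // default of EXECUTOR_CORES, not the Worker's real core count
      val cpusPerTask   = 2  // user sets spark.task.cpus = 2
      val numTasks      = 1
      val slots = executorCores / cpusPerTask                // 1 / 2 == 0 (integer division)
      val speculationTasksLessEqToSlots = numTasks <= slots  // 1 <= 0 is false

      So the check fails even though there is only one task.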

      Here we really don't know the number of executor cores for standalone mode or Mesos, so I think a decent solution is to just use 1 in those cases and document the difference.

      Something like max(conf.get(EXECUTOR_CORES) / sched.CPUS_PER_TASK, 1)
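      Continuing the sketch above, the clamped calculation would look roughly like this (math.max shown for illustration; the real operands would still be conf.get(EXECUTOR_CORES) and sched.CPUS_PER_TASK):

      val clampedSlots = math.max(executorCores / cpusPerTask, 1)   // never fewer than 1 slot
      val speculationTasksLessEqToSlots = numTasks <= clampedSlots  // 1 <= 1 is true

      With the clamp in place, the single-task case behaves correctly regardless of the spark.task.cpus setting.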



    People

      Assignee: Yuchen Huo (yuchen.huo)
      Reporter: Thomas Graves (tgraves)
      Votes: 0
      Watchers: 3

    Dates

      Created:
      Updated:
      Resolved: