Spark / SPARK-20662

Block jobs that have greater than a configured number of tasks


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 1.6.0, 2.0.0
    • Fix Version/s: None
    • Component/s: Spark Core
    • Labels: None

    Description

      In a shared cluster, it is desirable for an admin to be able to block overly large Spark jobs. While there may be no single metric that defines the size of a job, the number of tasks is usually a good indicator. It would therefore be useful for the Spark scheduler to reject a job whose number of tasks reaches a configured limit. By default the limit would be unlimited, retaining the existing behavior.

      MapReduce already offers mapreduce.job.max.map and mapreduce.job.max.reduce, which block an MR job at job-submission time if it exceeds either limit.

      The proposed configuration is spark.job.max.tasks, with a default value of -1 (unlimited).
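
      Since the issue was resolved as Won't Fix, no such guard exists in Spark. The sketch below only illustrates one way the proposed spark.job.max.tasks limit could have been enforced at job-submission time; JobSizeGuard is a hypothetical helper, not a real Spark class.

      // Hypothetical sketch only; JobSizeGuard is not part of Spark.
      import org.apache.spark.{SparkConf, SparkException}

      class JobSizeGuard(conf: SparkConf) {
        // -1, the proposed default, means no limit and keeps today's behavior.
        private val maxTasks: Int = conf.getInt("spark.job.max.tasks", -1)

        // Would be called once per job, before any stage is scheduled.
        def check(numTasks: Int): Unit = {
          if (maxTasks >= 0 && numTasks > maxTasks) {
            throw new SparkException(
              s"Job rejected: $numTasks tasks exceeds spark.job.max.tasks=$maxTasks")
          }
        }
      }

      Had it been adopted, an admin could have set the limit cluster-wide in spark-defaults.conf (for example, spark.job.max.tasks 200000), while leaving the -1 default for clusters that want no cap.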

    Attachments

    Activity

    People

    Assignee: Unassigned
    Reporter: Xuefu Zhang (xuefuz)
    Votes: 0
    Watchers: 4

    Dates

    Created:
    Updated:
    Resolved: