Spark / SPARK-48210

Modify the description of whether dynamic allocation is enabled in the “Stage Level Scheduling Overview”



    Description

The “Stage Level Scheduling Overview” section in running-on-yarn and running-on-kubernetes describes the dynamic allocation behavior in a way that is inconsistent with the code implementation.

running-on-yarn says:

• When dynamic allocation is disabled: It allows users to specify different task resource requirements at the stage level and will use the same executors requested at startup.
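For reference, here is a minimal Scala sketch of that case (my own illustrative example, not taken from the documentation): only task-level requirements are requested, which in recent Spark versions is expected to produce a TaskResourceProfile, and the profile is attached to an RDD while dynamic allocation stays disabled. The object name is made up; the API calls (TaskResourceRequests, ResourceProfileBuilder, RDD.withResources) are the public stage-level scheduling API.

  import org.apache.spark.resource.{ResourceProfileBuilder, TaskResourceRequests}
  import org.apache.spark.sql.SparkSession

  object TaskProfileWithoutDynamicAllocation {
    def main(args: Array[String]): Unit = {
      // Dynamic allocation is left disabled (the default), matching the passage quoted above.
      val spark = SparkSession.builder()
        .appName("task-resource-profile-sketch")
        .getOrCreate()
      val sc = spark.sparkContext

      // Request only task-level resources; with no executor requests, the builder
      // is expected to produce a TaskResourceProfile.
      val taskReqs = new TaskResourceRequests().cpus(2)
      val rp = new ResourceProfileBuilder().require(taskReqs).build()

      // Attach the profile at the RDD (stage) level; the stage then runs on the
      // same executors that were requested at startup.
      val doubled = sc.parallelize(1 to 100, numSlices = 4)
        .withResources(rp)
        .map(_ * 2)
        .collect()

      println(s"Computed ${doubled.length} elements")
      spark.stop()
    }
  }

Submitting something like this with --master yarn and spark.dynamicAllocation.enabled=false is the situation the check below applies to.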

But the implementation is:

Class: ResourceProfileManager
Function: isSupported

  private[spark] def isSupported(rp: ResourceProfile): Boolean = {
    assert(master != null)
    // Applies only to TaskResourceProfile when dynamic allocation is disabled.
    if (rp.isInstanceOf[TaskResourceProfile] && !dynamicEnabled) {
      if ((notRunningUnitTests || testExceptionThrown) &&
          !(isStandaloneOrLocalCluster || isYarn || isK8s)) {
        throw new SparkException("TaskResourceProfiles are only supported for Standalone, " +
          "Yarn and Kubernetes cluster for now when dynamic allocation is disabled.")
      }
    }
    // ... rest of the method omitted ...
  }

       

The conclusion from this check is that TaskResourceProfile is not supported on YARN and Kubernetes when dynamic allocation is disabled.

       

The description in the documentation does not match the implementation, so the documentation needs to be updated.

People

  Assignee: Unassigned
  Reporter: guihuawen
  Votes: 0
  Watchers: 1
