SPARK-26269

YarnAllocator should have the same blacklist behaviour as YARN to maximize use of cluster resources


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.3.1, 2.3.2, 2.4.0
    • Fix Version/s: 2.4.1, 3.0.0
    • Component/s: Spark Core, YARN
    • Labels: None

      Description

      Currently, YarnAllocator may add a node to its blacklist whenever a container on that node completes with an exit status other than SUCCESS, PREEMPTED, KILLED_EXCEEDED_VMEM, or KILLED_EXCEEDED_PMEM. However, for several other exit statuses, e.g. KILLED_BY_RESOURCEMANAGER, YARN itself does not consider the node to be at fault and does not count the failure towards node blacklisting (see YARN's explanation for details: https://github.com/apache/hadoop/blob/228156cfd1b474988bc4fedfbf7edddc87db41e3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/Apps.java#L273). Relaxing the current blacklist rule so that it matches YARN's behaviour would therefore maximize use of cluster resources.
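
      For illustration, a minimal Scala sketch of the relaxed rule, assuming Hadoop's ContainerExitStatus constants and mirroring the Apps.shouldCountTowardsNodeBlacklisting logic linked above (BlacklistRule and shouldBlacklistNode are hypothetical names chosen here, not YarnAllocator's actual code):

      import org.apache.hadoop.yarn.api.records.ContainerExitStatus

      object BlacklistRule {
        // Only exit statuses that indicate a problem with the node itself
        // should count towards blacklisting that node.
        def shouldBlacklistNode(exitStatus: Int): Boolean = exitStatus match {
          // Neither the application's nor the node's fault: do not blacklist.
          case ContainerExitStatus.PREEMPTED |
               ContainerExitStatus.KILLED_BY_RESOURCEMANAGER |
               ContainerExitStatus.KILLED_BY_APPMASTER |
               ContainerExitStatus.KILLED_AFTER_APP_COMPLETION |
               ContainerExitStatus.ABORTED => false
          // The application exceeded its own memory limits: the app's fault,
          // not the node's.
          case ContainerExitStatus.KILLED_EXCEEDED_VMEM |
               ContainerExitStatus.KILLED_EXCEEDED_PMEM => false
          // A successful container never counts against its node.
          case ContainerExitStatus.SUCCESS => false
          // Everything else (e.g. DISKS_FAILED) points at the node itself.
          case _ => true
        }
      }

      Under such a rule, a container killed with KILLED_BY_RESOURCEMANAGER would no longer put its node on the blacklist, so the allocator could keep requesting executors there instead of shrinking the usable cluster.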


            People

            • Assignee: Ngone51 wuyi
            • Reporter: Ngone51 wuyi
            • Votes: 0
            • Watchers: 5
