Hadoop YARN / YARN-4499

Bad config values of "yarn.scheduler.maximum-allocation-vcores"


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 2.7.1, 2.6.2
    • Fix Version/s: None
    • Component/s: scheduler
    • Labels: None

    Description

      Currently, the default value of yarn.scheduler.maximum-allocation-vcores is 32, according to yarn-default.xml.

      However, in YarnConfiguration.java, we specify the default to be 4.

        public static final String RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES =
            YARN_PREFIX + "scheduler.maximum-allocation-vcores";
        public static final int DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES = 4;
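
      For what it's worth, the value that actually takes effect is the one in yarn-default.xml: YarnConfiguration loads it as a default resource, so the constant above is only a fallback for when the key is missing from every loaded resource. A minimal sketch (assuming the stock yarn-default.xml is on the classpath):

        import org.apache.hadoop.yarn.conf.YarnConfiguration;

        public class MaxVcoresDefaultCheck {
          public static void main(String[] args) {
            // YarnConfiguration pulls in yarn-default.xml (and yarn-site.xml, if present)
            // as default resources, so a value set there shadows the code constant.
            YarnConfiguration conf = new YarnConfiguration();

            // DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES (4) applies only if the key
            // is absent from all loaded resources; with the shipped yarn-default.xml this
            // prints 32, which is exactly the discrepancy described above.
            int maxVcores = conf.getInt(
                YarnConfiguration.RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES,
                YarnConfiguration.DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES);
            System.out.println("effective maximum-allocation-vcores = " + maxVcores);
          }
        }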
      

      The default in the code looks correct to me. Actually, I feel the default should match yarn.nodemanager.resource.cpu-vcores (whose default is 8): if we have 8 cores available for scheduling, there is little reason to cap a single allocation at 4...
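
      For comparison (if I am reading YarnConfiguration right), the node-manager side defines yarn.nodemanager.resource.cpu-vcores with a code default of 8 (DEFAULT_NM_VCORES). A tiny check that prints the two code-level defaults side by side, just to illustrate the asymmetry:

        import org.apache.hadoop.yarn.conf.YarnConfiguration;

        public class VcoreDefaultComparison {
          public static void main(String[] args) {
            // Code-level defaults only, no config files consulted: the NM advertises
            // 8 vcores by default, while the scheduler caps a single allocation at 4.
            System.out.println("DEFAULT_NM_VCORES = "
                + YarnConfiguration.DEFAULT_NM_VCORES);
            System.out.println("DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES = "
                + YarnConfiguration.DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES);
          }
        }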

      Cloudera's article on Tuning the Cluster for MapReduce v2 (YARN) also suggests that "the maximum value (yarn.scheduler.maximum-allocation-vcores) is usually equal to yarn.nodemanager.resource.cpu-vcores..."

      At the very least, we should fix yarn-default.xml. The error is pretty bad: a quick web search shows that people are confused by it, for example,
      https://community.cloudera.com/t5/Cloudera-Manager-Installation/yarn-nodemanager-resource-cpu-vcores-and-yarn-scheduler-maximum/td-p/31098

      (But seriously, I think we should have automatic defaults, with the minimum as 1 and the maximum equal to the number of cores on the machine; see the sketch below.)
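
      A rough sketch of that idea (the class and helper are hypothetical, not existing YARN code): clamp the default maximum to at least 1 vcore and at most the number of cores the JVM sees on the machine.

        public class AutoVcoreDefault {
          // Hypothetical helper, not part of YARN: derive a machine-aware default
          // for yarn.scheduler.maximum-allocation-vcores.
          static int autoMaxAllocationVcores() {
            int cores = Runtime.getRuntime().availableProcessors();
            return Math.max(1, cores);  // never below the minimum of 1 vcore
          }

          public static void main(String[] args) {
            System.out.println("auto maximum-allocation-vcores = " + autoMaxAllocationVcores());
          }
        }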

Attachments

Activity

People

    Assignee: Unassigned
    Reporter: Tianyin Xu (tianyin)
    Votes: 0
    Watchers: 7

Dates

    Created:
    Updated:
    Resolved: