Hadoop YARN / YARN-8668

Inconsistency between the capacity and fair schedulers in computing a node's available resources


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem

    Description

      We have observed that with the CapacityScheduler and DefaultResourceCalculator, when a node has a large amount of memory and is running a heavy workload, the node's available vcores can become negative!

      I noticed that CapacityScheduler.java uses the code below to decide whether a node has resources available for allocating containers:

      
      if (calculator.computeAvailableContainers(Resources
          .add(node.getUnallocatedResource(), node.getTotalKillableResources()),
          minimumAllocation) <= 0) {
        if (LOG.isDebugEnabled()) {
          LOG.debug("This node or this node partition doesn't have available or"
              + "killable resource");
        }
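
      To make the observation concrete, here is a minimal, self-contained sketch of why a memory-only check can keep scheduling containers on a node whose vcores are already exhausted. This is not the Hadoop API; the class, method names, and numbers are made up for illustration, and it only mirrors my understanding that DefaultResourceCalculator compares the memory dimension alone:

      // Standalone illustration, NOT Hadoop code: names and numbers are hypothetical.
      public class MemoryOnlyCheckDemo {

        // Mirrors the memory-only behavior attributed to DefaultResourceCalculator:
        // the vcore dimension is never consulted.
        static long availableContainersMemoryOnly(long availMemMb, long minAllocMemMb) {
          return availMemMb / minAllocMemMb;
        }

        public static void main(String[] args) {
          long availMemMb = 200 * 1024; // node still has ~200 GB of unallocated memory
          int availVcores = 0;          // but every vcore is already handed out

          // The memory-only check still reports plenty of room, so allocation
          // continues and the node's vcore accounting can be driven below zero.
          System.out.println("containers by memory only: "
              + availableContainersMemoryOnly(availMemMb, 1024)); // prints 200
          System.out.println("available vcores: " + availVcores); // already 0
        }
      }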
      
      

      Meanwhile, in the fair scheduler's FSAppAttempt.java, similar code was found:

      
      // Can we allocate a container on this node?
      if (Resources.fitsIn(capability, available)) {
        ...
      }
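
      As I understand Resources.fitsIn(smaller, bigger), it only returns true when every dimension of the request fits, so a node that has run out of vcores is rejected no matter how much memory remains. A small fragment to illustrate (the Resource values are made up):

      Resource capability = Resource.newInstance(1024, 1);        // ask for 1 GB, 1 vcore
      Resource available  = Resource.newInstance(200 * 1024, 0);  // lots of memory, 0 vcores

      // false: the memory dimension fits, but the vcore dimension does not
      boolean ok = Resources.fitsIn(capability, available);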
      
      

      Why is there this inconsistency? I think we should use Resources.fitsIn(smaller, bigger) in the CapacityScheduler instead!
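
      Roughly, the substitution suggested above might look like the fragment below; this is only a sketch of the idea, not necessarily what the attached YARN-8668.001.patch does:

      // Sketch only: replace the memory-only container count with a fitsIn check
      Resource available = Resources.add(node.getUnallocatedResource(),
          node.getTotalKillableResources());
      if (!Resources.fitsIn(minimumAllocation, available)) {
        if (LOG.isDebugEnabled()) {
          LOG.debug("This node or this node partition doesn't have available or"
              + "killable resource");
        }
        // ... same skip logic as before
      }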

       

      Attachments

        1. YARN-8668.001.patch (1 kB, Yeliang Cang)


          People

            Assignee: Yeliang Cang
            Reporter: Yeliang Cang
            Votes: 0
            Watchers: 2
