  Hadoop YARN / YARN-9024

ClusterNodeTracker maximum allocation does not respect resource units


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

      Description

      If a custom resource is defined with the default (base) unit and a node reports its total capability in a different unit (e.g. M), then ClusterNodeTracker.getMaxAllowedAllocation returns the max allocation resource in the base unit, so the reported resource unit is not respected.

      The issue is that when the updateMaxResources method is called (i.e. when an NM node registers), the unit of the node's resources is not checked. In this method, we need to convert the reported values to the units defined by the RM for the individual resource types.
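
      Roughly, the conversion I have in mind looks like this (a sketch only, not the attached patch; the NodeResourceNormalizer class and normalizeToRmUnits method are illustrative names, and it assumes ResourceUtils.getResourceTypes() reflects the RM-configured units):

      {code:java}
      import java.util.Map;

      import org.apache.hadoop.yarn.api.records.Resource;
      import org.apache.hadoop.yarn.api.records.ResourceInformation;
      import org.apache.hadoop.yarn.util.UnitsConversionUtil;
      import org.apache.hadoop.yarn.util.resource.ResourceUtils;

      public final class NodeResourceNormalizer {

        // Returns a copy of "reported" in which every resource value is
        // converted from the unit the node reported (e.g. "M") to the unit
        // the RM has configured for that resource type (e.g. the base unit).
        public static Resource normalizeToRmUnits(Resource reported) {
          Map<String, ResourceInformation> rmTypes =
              ResourceUtils.getResourceTypes();
          Resource normalized = Resource.newInstance(0, 0);

          for (ResourceInformation nodeInfo : reported.getResources()) {
            ResourceInformation rmInfo = rmTypes.get(nodeInfo.getName());
            // Fall back to the node's own unit if the RM does not know this type.
            String rmUnit =
                rmInfo != null ? rmInfo.getUnits() : nodeInfo.getUnits();

            long converted = rmUnit.equals(nodeInfo.getUnits())
                ? nodeInfo.getValue()
                : UnitsConversionUtil.convert(
                    nodeInfo.getUnits(), rmUnit, nodeInfo.getValue());

            normalized.setResourceInformation(nodeInfo.getName(),
                ResourceInformation.newInstance(
                    nodeInfo.getName(), rmUnit, converted));
          }
          return normalized;
        }
      }
      {code}

      With something like this applied to the node's reported capability before updateMaxResources uses it, the tracked maximum allocation stays in the RM-configured units regardless of what unit the node reported.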

      I also wanted to add a test case where memory uses G as its unit, but that was not easily possible without hacky code, so I only added a test case that verifies custom resource values.

        Attachments

        1. YARN-9024.001.patch
          9 kB
          Szilard Nemeth


              People

              • Assignee: Szilard Nemeth (snemeth)
              • Reporter: Szilard Nemeth (snemeth)
              • Votes: 0
              • Watchers: 1
