  Hadoop YARN / YARN-10033

TestProportionalCapacityPreemptionPolicy not initializing vcores for effective max resources


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.3.0, 3.2.1, 3.1.3
    • Fix Version/s: 3.3.0, 3.2.2, 3.1.4
    • Component/s: capacity scheduler, test
    • Labels: None

    Description

      TestProportionalCapacityPreemptionPolicy#testPreemptionWithVCoreResource preempts more containers than would be preempted on a real cluster.
      This happens because the code that mocks CS queues in TestProportionalCapacityPreemptionPolicy does not set vcores when it mocks effective max resources.
      As a result, the policy miscalculates how many vcores to preempt when DRF is used in the test:

      TempQueuePerPartition#offer
          Resource absMaxCapIdealAssignedDelta = Resources.componentwiseMax(
              Resources.subtract(getMax(), idealAssigned),
              Resource.newInstance(0, 0));
      

      In the above code, the preemption policy is offering resources to an underserved queue. getMax() returns the effective max resource when one is set, and since this test mocks effective max resources, that mocked value is used. Because the mock leaves vcores at zero, the test treats memory as the dominant resource and awards too many preempted containers to the underserved queue.
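
      The same effect can be reproduced outside the test with the Resources helpers the policy itself uses. The sketch below is illustrative only (the class name and the capacity and usage figures are invented, not taken from the test); it shows how a memory-only mocked effective max resource loses the vcore headroom that TempQueuePerPartition#offer is supposed to respect:

          import org.apache.hadoop.yarn.api.records.Resource;
          import org.apache.hadoop.yarn.util.resource.Resources;

          public class EffectiveMaxVcoresSketch {   // hypothetical class name
            public static void main(String[] args) {
              final int GB = 1024;

              // Resources already assigned to the underserved queue (illustrative figures).
              Resource idealAssigned = Resource.newInstance(10 * GB, 10);

              // What the test mocks today: only memory is set, so vcores default to 0.
              Resource mockedEffMax = Resource.newInstance(100 * GB, 0);

              // What a real cluster would report: vcores initialized alongside memory.
              Resource realEffMax = Resource.newInstance(100 * GB, 50);

              // Same computation as TempQueuePerPartition#offer: headroom under the
              // effective max, clamped at zero in each resource dimension.
              Resource mockedDelta = Resources.componentwiseMax(
                  Resources.subtract(mockedEffMax, idealAssigned),
                  Resource.newInstance(0, 0));
              Resource realDelta = Resources.componentwiseMax(
                  Resources.subtract(realEffMax, idealAssigned),
                  Resource.newInstance(0, 0));

              // With the memory-only mock the delta reports zero vcore headroom, so the
              // DRF comparisons in offer see only memory, which is how the test ends up
              // treating memory as the dominant resource.
              System.out.println("delta with memory-only mock: " + mockedDelta);
              System.out.println("delta with vcores set:       " + realDelta);
            }
          }

      Initializing the vcores component wherever the test mocks effective max resources, as the issue title points at, keeps this clamped delta consistent with what a real cluster would compute.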

      Attachments

        1. YARN-10033.001.patch (3 kB, Eric Payne)
        2. YARN-10033.002.patch (3 kB, Eric Payne)
        3. YARN-10033.003.patch (4 kB, Eric Payne)

          People

            Assignee: Eric Payne (epayne)
            Reporter: Eric Payne (epayne)
            Votes: 0
            Watchers: 3
