Hadoop YARN / YARN-10760

Number of allocated OPPORTUNISTIC containers can dip below 0


Details

    • Reviewed

    Description

      AbstractYarnScheduler.completedContainers can be called from multiple sources, and there appear to be scenarios in which the caller does not hold the appropriate lock, which can cause the OpportunisticSchedulerMetrics.AllocatedOContainers count to fall below 0.
      To prevent double counting when releasing allocated O containers, a simple fix might be to check whether the RMContainer has already been removed before decrementing, though that may not address the underlying race condition.
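
      As an illustration of that guard, here is a minimal sketch (not the actual AbstractYarnScheduler code; the class and method names are hypothetical) in which the metric is only decremented when the container is actually removed from the live-container map, so a duplicate completion event cannot drive the gauge negative:

      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.ConcurrentMap;
      import java.util.concurrent.atomic.AtomicInteger;

      public class OpportunisticContainerBook {

        // Stand-in for OpportunisticSchedulerMetrics.AllocatedOContainers.
        private final AtomicInteger allocatedOContainers = new AtomicInteger();

        // Stand-in for the scheduler's map of live RMContainers, keyed by id.
        private final ConcurrentMap<String, Object> liveContainers =
            new ConcurrentHashMap<>();

        public void allocate(String containerId) {
          if (liveContainers.putIfAbsent(containerId, Boolean.TRUE) == null) {
            allocatedOContainers.incrementAndGet();
          }
        }

        // Safe to call from multiple sources: remove() returns null if the
        // container was already released, so a duplicate completion event is
        // a no-op for the metric.
        public void completed(String containerId) {
          if (liveContainers.remove(containerId) != null) {
            allocatedOContainers.decrementAndGet();
          }
        }

        public int getAllocatedOContainers() {
          return allocatedOContainers.get();
        }

        public static void main(String[] args) {
          OpportunisticContainerBook book = new OpportunisticContainerBook();
          book.allocate("container_1");
          book.completed("container_1");
          book.completed("container_1"); // duplicate completion event
          System.out.println(book.getAllocatedOContainers()); // 0, not -1
        }
      }

      Here ConcurrentMap.remove() doubles as the "has this RMContainer already been removed?" check suggested above; it keeps the metric consistent even if the underlying race itself is not fixed.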

      The following is a capture of OpportunisticSchedulerMetrics.AllocatedOContainers falling below 0, taken via a JMX query:

      {
          "name" : "Hadoop:service=ResourceManager,name=OpportunisticSchedulerMetrics",
          "modelerType" : "OpportunisticSchedulerMetrics",
          "tag.OpportunisticSchedulerMetrics" : "ResourceManager",
          "tag.Context" : "yarn",
          "tag.Hostname" : "",
          "AllocatedOContainers" : -2716,
          "AggregateOContainersAllocated" : 306020,
          "AggregateOContainersReleased" : 308736,
          "AggregateNodeLocalOContainersAllocated" : 0,
          "AggregateRackLocalOContainersAllocated" : 0,
          "AggregateOffSwitchOContainersAllocated" : 306020,
          "AllocateLatencyOQuantilesNumOps" : 0,
          "AllocateLatencyOQuantiles50thPercentileTime" : 0,
          "AllocateLatencyOQuantiles75thPercentileTime" : 0,
          "AllocateLatencyOQuantiles90thPercentileTime" : 0,
          "AllocateLatencyOQuantiles95thPercentileTime" : 0,
          "AllocateLatencyOQuantiles99thPercentileTime" : 0
        }
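
      For reference, the capture above can be reproduced against the RM's JMX JSON servlet. A minimal sketch, assuming the default RM webapp port 8088 and unauthenticated HTTP access (adjust the address for your cluster):

      import java.io.BufferedReader;
      import java.io.InputStreamReader;
      import java.net.URL;
      import java.nio.charset.StandardCharsets;

      public class FetchOpportunisticMetrics {
        public static void main(String[] args) throws Exception {
          // RM web address is an assumption; pass a different one as args[0].
          String rm = args.length > 0 ? args[0] : "http://localhost:8088";
          URL url = new URL(rm + "/jmx?qry=Hadoop:service=ResourceManager,"
              + "name=OpportunisticSchedulerMetrics");
          try (BufferedReader in = new BufferedReader(
              new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
              System.out.println(line); // prints the JSON bean shown above
            }
          }
        }
      }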
      

      UPDATE: Upon further investigation, the culprit appears to be that AllocatedOContainers is not incremented when the RM restarts: containers recovered after the restart are never counted as allocated, yet their eventual deallocation still decrements the gauge. We have an initial fix for this and are waiting to verify it.
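
      To make the recovery gap concrete, the following is a minimal sketch (not the actual scheduler code; RecoveredContainer, recoverContainers and release are hypothetical names) of counting recovered OPPORTUNISTIC containers as allocated while NM container reports are replayed after an RM restart, so that their later release cancels out instead of underflowing the gauge:

      import java.util.Arrays;
      import java.util.List;
      import java.util.concurrent.atomic.AtomicInteger;

      public class OpportunisticRecoverySketch {

        enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

        // Hypothetical stand-in for a container report an NM sends when it
        // re-registers with the restarted RM.
        static class RecoveredContainer {
          final String id;
          final ExecutionType executionType;
          RecoveredContainer(String id, ExecutionType executionType) {
            this.id = id;
            this.executionType = executionType;
          }
        }

        // Stand-in for OpportunisticSchedulerMetrics.AllocatedOContainers.
        private final AtomicInteger allocatedOContainers = new AtomicInteger();

        // Replays NM container reports after an RM restart.
        public void recoverContainers(List<RecoveredContainer> reports) {
          for (RecoveredContainer c : reports) {
            if (c.executionType == ExecutionType.OPPORTUNISTIC) {
              // The missing step: count the recovered O container as allocated
              // so its eventual release brings the gauge back to zero instead
              // of below it.
              allocatedOContainers.incrementAndGet();
            }
            // ... rebuild the RMContainer and attach it to its app attempt ...
          }
        }

        // Release path: decrements once per opportunistic container.
        public void release(RecoveredContainer c) {
          if (c.executionType == ExecutionType.OPPORTUNISTIC) {
            allocatedOContainers.decrementAndGet();
          }
        }

        public int getAllocatedOContainers() {
          return allocatedOContainers.get();
        }

        public static void main(String[] args) {
          OpportunisticRecoverySketch s = new OpportunisticRecoverySketch();
          RecoveredContainer c = new RecoveredContainer("container_1",
              ExecutionType.OPPORTUNISTIC);
          s.recoverContainers(Arrays.asList(c));
          s.release(c);
          System.out.println(s.getAllocatedOContainers()); // 0, not -1
        }
      }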

            People

              Assignee: afchung90 Andrew Chung
              Reporter: afchung90 Andrew Chung
              Votes: 0
              Watchers: 3


                Time Tracking

                  Estimated: Not Specified
                  Remaining: 0h
                  Logged: 1.5h