IMPALA-2182

MemTracker hierarchy may not correctly handle large RM limits

    Description

      The MemTracker RM limit may not be expanded correctly even when an expansion request returns successfully, which can cause the query to fail.

      Consider the following contrived example: the process mem limit is 10G and the child query tracker currently has an RM limit of 9.5G. If TryConsume() is called, e.g. for a 2M buffer, we send an expansion request for 2M, but Llama "normalizes" the request and, if it can, grants the configured normalized size (rounding up), say 1G. We should now be able to use at least another 500M within this query, since that would still be below the 10G process limit, but the code today sees that the current RM limit (9.5G) plus the granted size (1G) exceeds 10G and simply fails the consume. At that point the query may spill or may fail; if it does not fail, it will likely ask for more memory again later (it was already at its limit) and repeat the same process. Not only do we repeatedly request more memory from Llama than we can use or are using, but eventually one of these TryConsume() calls will likely cause the query to fail. (It depends on who called TryConsume() and whether that memory was truly needed to continue executing.)
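
      The sketch below is a minimal, standalone model of the behavior described above, not the actual Impala MemTracker or Llama code; the type RmTrackerSketch, its method names, the 1G normalization granularity, and the "capped" variant are all illustrative assumptions. It reproduces the 10G / 9.5G / 2M scenario and shows how rejecting a grant that would push the RM limit past the process limit fails a consume that could have succeeded if the limit were instead capped at the process limit.

      // Hypothetical, simplified model only; RmTrackerSketch, RequestExpansion(),
      // and the constants below are illustrative and do not mirror the real
      // MemTracker/Llama interfaces.
      #include <algorithm>
      #include <cstdint>
      #include <iostream>

      constexpr int64_t GB = 1LL << 30;
      constexpr int64_t MB = 1LL << 20;

      struct RmTrackerSketch {
        int64_t process_limit;  // hard process mem limit (10G in the example)
        int64_t rm_limit;       // RM limit currently granted to the query (9.5G)
        int64_t consumption;    // bytes consumed against rm_limit (already at the limit)

        // Stand-in for the Llama round trip: the 2M request is "normalized"
        // (rounded up) to a configured granularity, assumed here to be 1G.
        int64_t RequestExpansion(int64_t /*bytes_needed*/) { return 1 * GB; }

        // Behavior described above: if rm_limit plus the grant would exceed the
        // process limit, the consume fails outright, even though part of the
        // grant (500M here) could still be used under the process limit.
        bool TryConsumeBuggy(int64_t bytes) {
          if (consumption + bytes <= rm_limit) { consumption += bytes; return true; }
          int64_t granted = RequestExpansion(bytes);
          if (rm_limit + granted > process_limit) return false;  // rejects usable memory
          rm_limit += granted;
          consumption += bytes;
          return true;
        }

        // One possible remedy (an assumption, not a committed fix): cap the
        // effective RM limit at the process limit instead of rejecting the grant.
        bool TryConsumeCapped(int64_t bytes) {
          if (consumption + bytes <= rm_limit) { consumption += bytes; return true; }
          rm_limit = std::min(rm_limit + RequestExpansion(bytes), process_limit);
          if (consumption + bytes > rm_limit) return false;
          consumption += bytes;
          return true;
        }
      };

      int main() {
        RmTrackerSketch buggy{10 * GB, 9 * GB + 512 * MB, 9 * GB + 512 * MB};
        RmTrackerSketch capped = buggy;
        std::cout << "buggy:  " << buggy.TryConsumeBuggy(2 * MB) << "\n";   // 0 (fails)
        std::cout << "capped: " << capped.TryConsumeCapped(2 * MB) << "\n"; // 1 (succeeds)
        return 0;
      }

      Capping at the process limit is only one way this could be addressed; the point of the sketch is that the rejected grant still left roughly 500M of usable headroom below the process limit.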


          People

            Assignee: Matthew Jacobs (mjacobs)
            Reporter: Matthew Jacobs (mjacobs)
