IMPALA / IMPALA-7486

Admit less memory on dedicated coordinator for admission control purposes


Details

    Description

      Following on from IMPALA-7349, we should consider handling dedicated coordinators specially rather than admitting a uniform amount of memory on all backends.

      The specific scenario I'm interested in targeting is the case where we have a coordinator that is executing many "lightweight" coordinator fragments, e.g. just an ExchangeNode and PlanRootSink, plus maybe other lightweight operators like UnionNode that don't use much memory or CPU. With the current behaviour it's possible for a coordinator to reach capacity from the point of view of admission control while at runtime it is actually very lightly loaded.

      This is particularly true if coordinators and executors have different process mem limits, which will be somewhat common since they're often deployed on different hardware or the coordinator has more memory dedicated to its embedded JVM for the catalog cache.
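
      To make the problem concrete, here is a minimal, hypothetical sketch of the current uniform accounting (not Impala's actual admission-control code; the struct, function names and numbers are invented for illustration): every backend in the schedule, the coordinator included, is charged the same per-backend estimate, so a dedicated coordinator with a smaller mem limit can look saturated on paper even though its fragments are trivial.

      {code:cpp}
      #include <cstdint>
      #include <iostream>
      #include <vector>

      // Hypothetical per-backend admission state: memory already admitted on the
      // backend and the backend's memory limit for admission purposes.
      struct BackendState {
        const char* name;
        int64_t mem_admitted;
        int64_t mem_limit;
      };

      // Current (uniform) behaviour: the same per-backend estimate is charged on
      // every backend in the schedule, coordinator included.
      bool CanAdmitUniform(const std::vector<BackendState>& backends,
                           int64_t per_backend_estimate) {
        for (const BackendState& be : backends) {
          if (be.mem_admitted + per_backend_estimate > be.mem_limit) {
            std::cout << "rejected: not enough memory on " << be.name << "\n";
            return false;
          }
        }
        return true;
      }

      int main() {
        const int64_t GB = 1LL << 30;
        // A dedicated coordinator with a smaller admission mem limit (e.g. because
        // much of its RAM is reserved for the embedded JVM), plus two executors.
        std::vector<BackendState> backends = {
            {"coordinator", 7 * GB, 8 * GB},
            {"executor-1", 20 * GB, 100 * GB},
            {"executor-2", 20 * GB, 100 * GB},
        };
        // A query estimated at 2GB per backend is rejected because the coordinator
        // looks full, even though its fragment (an ExchangeNode feeding a
        // PlanRootSink) would use very little memory at runtime.
        std::cout << (CanAdmitUniform(backends, 2 * GB) ? "admitted" : "queued") << "\n";
        return 0;
      }
      {code}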

      More generally we could admit different amounts per backend depending on how many fragments are running, but I think this incremental step would address the most important cases and be a little easier to understand.
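
      A correspondingly hedged sketch of that incremental step: a dedicated coordinator is charged a smaller, coordinator-specific estimate reflecting its lightweight fragment, while executors are still charged the full per-backend estimate. Again, the names and numbers are hypothetical, not an existing Impala API.

      {code:cpp}
      #include <cstdint>
      #include <iostream>
      #include <vector>

      // Same hypothetical per-backend state as in the previous sketch.
      struct BackendState {
        const char* name;
        int64_t mem_admitted;
        int64_t mem_limit;
      };

      // Proposed refinement (illustrative only): charge the dedicated coordinator a
      // smaller estimate that reflects its lightweight fragment (ExchangeNode +
      // PlanRootSink), and the full estimate on executors.
      bool CanAdmitWithDedicatedCoordinator(const std::vector<BackendState>& backends,
                                            size_t coordinator_idx,
                                            int64_t executor_estimate,
                                            int64_t coordinator_estimate) {
        for (size_t i = 0; i < backends.size(); ++i) {
          int64_t estimate =
              (i == coordinator_idx) ? coordinator_estimate : executor_estimate;
          if (backends[i].mem_admitted + estimate > backends[i].mem_limit) return false;
        }
        return true;
      }

      int main() {
        const int64_t MB = 1LL << 20;
        const int64_t GB = 1LL << 30;
        std::vector<BackendState> backends = {
            {"coordinator", 7 * GB, 8 * GB},
            {"executor-1", 20 * GB, 100 * GB},
            {"executor-2", 20 * GB, 100 * GB},
        };
        // The same query is now admitted: executors are charged 2GB each, but the
        // coordinator is only charged ~100MB for its exchange and root sink.
        bool ok = CanAdmitWithDedicatedCoordinator(backends, /*coordinator_idx=*/0,
                                                   2 * GB, 100 * MB);
        std::cout << (ok ? "admitted" : "queued") << "\n";
        return 0;
      }
      {code}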

      We may want to defer this work until we've implemented distributed runtime filter aggregation, which will significantly reduce coordinator memory pressure, and until we've improved distributed overadmission (since the coordinator behaviour may help throttle overadmission).

            People

              Assignee: Bikramjeet Vig (bikramjeet.vig)
              Reporter: Tim Armstrong (tarmstrong)
              Votes: 0
              Watchers: 3

            Dates

              Created:
              Updated:
              Resolved:
