Apache AsterixDB · ASTERIXDB-2715

Dynamic Memory Component Architecture


    Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: STO - Storage
    • Labels:
      None

      Description

      AsterixDB uses a static memory component management architecture that divides the write memory budget evenly among the active datasets. This leads to low memory utilization and cannot efficiently support a large number of active datasets. To address this problem, we introduce a dynamic memory component architecture with the following design decisions:

      • All write memory pages are managed via a global virtual buffer cache (global VBC). Each memory component simply requests pages from this global VBC upon writes and returns pages upon flushes. Memory allocation is thus fully dynamic and on-demand, and there is no need to pre-allocate write memory.
      • The global VBC keeps track of the primary LSM-trees across all partitions. Whenever the write memory is nearly full, it selects one primary LSM-tree and flushes it, along with its secondary indexes, to disk. Currently we only flush one LSM-tree partition at a time. The reclaimed memory can then be used by other components, which in turn increases memory utilization.
      • For datasets with filters, using large memory components may hurt query performance, since a large component covers a wide range of filter values and thus can rarely be pruned. We therefore additionally introduce a parameter to control the maximum memory component size for filtered datasets.
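
      The flush policy described above can be sketched as follows. This is a minimal illustration, not AsterixDB's actual code: class and method names (GlobalVBC, requestPage, flushLargest) and the victim-selection rule (flush the primary LSM-tree holding the most pages) are assumptions for the sketch.

      ```java
      import java.util.*;

      // Hypothetical sketch of the global virtual buffer cache (VBC) policy:
      // components request pages on demand from one shared budget, and when
      // the budget is nearly full, one primary LSM-tree is flushed so its
      // pages can be reused by other components.
      class GlobalVBC {
          private final int budgetPages;
          private final double flushThreshold;            // e.g. 0.8 = "nearly full"
          private final Map<String, Integer> pagesPerTree = new HashMap<>();
          private final List<String> flushed = new ArrayList<>();
          private int usedPages = 0;

          GlobalVBC(int budgetPages, double flushThreshold) {
              this.budgetPages = budgetPages;
              this.flushThreshold = flushThreshold;
          }

          // A memory component requests one page on a write; there is no
          // per-dataset pre-allocation.
          void requestPage(String primaryTree) {
              if (usedPages >= flushThreshold * budgetPages) {
                  flushLargest();
              }
              pagesPerTree.merge(primaryTree, 1, Integer::sum);
              usedPages++;
          }

          // Select one primary LSM-tree as the flush victim (here: the one
          // holding the most pages) and return its pages to the global pool.
          private void flushLargest() {
              String victim = Collections.max(pagesPerTree.entrySet(),
                      Map.Entry.comparingByValue()).getKey();
              usedPages -= pagesPerTree.remove(victim);
              flushed.add(victim);
          }

          List<String> flushedTrees() { return flushed; }
          int usedPages() { return usedPages; }
      }
      ```

      For example, with a 10-page budget and a 0.8 threshold, eight writes against dataset A followed by a write against dataset B would trigger a flush of A, leaving only B's single page in memory.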



            People

            • Assignee: Chen Luo (luochen01)
            • Reporter: Chen Luo (luochen01)
            • Votes: 0
            • Watchers: 2
