Kudu / KUDU-2466

Fault tolerant scanners can over-allocate memory and crash a cluster


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.10.0
    • Component/s: tserver
    • Labels:
      None

      Description

      When testing a Spark job with fault tolerant scanners enabled, reading a large table (~1.5 TB replicated) with many columns used up all of the memory on the tablet servers: roughly 400 GB of total memory was consumed even though the memory limit was configured for 60 GB. This impacted all services on the machines, making the cluster effectively unusable. Killing the job running the scans did not free the memory; however, restarting the tablet servers restored the cluster to health.

       

      Based on a chat with Todd Lipcon, Jean-Daniel Cryans, and Mike Percy, it looks like MergeIterator initialization is not lazy. We could fix this by initializing the merger's sub-iterators lazily based on rowset bounds, limiting the number of concurrently open iterators to O(rowset height).
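      The idea above can be sketched as a lazy k-way merge: each input advertises a lower bound on its keys, and an input's iterator is only opened once the merge frontier reaches that bound, so the number of concurrently open inputs tracks how many actually overlap. This is a minimal illustrative sketch in Python, not Kudu's C++ MergeIterator; the `(lower_bound, make_iter)` shape and all names are assumptions for illustration only.

      ```python
      import heapq

      def lazy_merge(sources):
          """Lazily merge sorted sources.

          `sources` is a list of (lower_bound, make_iter) pairs, where
          lower_bound is <= every row the source yields and make_iter()
          creates the source's iterator. A source is opened only when the
          merge frontier reaches its lower bound (illustrative sketch, not
          Kudu's actual API).
          """
          pending = sorted(sources, key=lambda s: s[0])  # unopened, by lower bound
          heap = []      # entries: (next_row, tie_breaker, iterator)
          counter = 0    # tie-breaker so equal rows never compare iterators

          def open_next_source():
              nonlocal counter
              _, make_iter = pending.pop(0)
              it = make_iter()
              first = next(it, None)
              if first is not None:
                  heapq.heappush(heap, (first, counter, it))
                  counter += 1

          while pending or heap:
              # Open every unopened source whose lower bound could affect
              # the current smallest candidate row.
              while pending and (not heap or pending[0][0] <= heap[0][0]):
                  open_next_source()
              row, tb, it = heapq.heappop(heap)
              yield row
              nxt = next(it, None)
              if nxt is not None:
                  heapq.heappush(heap, (nxt, tb, it))
      ```

      With non-overlapping bounds this degenerates to concatenation with only one source open at a time; in the worst case (all bounds overlapping) every source is open at once, which is the O(rowset height) bound mentioned above.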

       


            People

            • Assignee:
              Adar Dembo (adar)
            • Reporter:
              Grant Henke (granthenke)
            • Votes:
              0
            • Watchers:
              4

              Dates

              • Created:
              • Updated:
              • Resolved: