Kylin / KYLIN-1601

Need not to shrink scan cache when hbase rows can be large


    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: v1.5.2
    • Component/s: None
    • Labels:
      None

      Description

      To control memory usage, we used to shrink the scan cache when HBase rows could be large:

      if (RowValueDecoder.hasMemHungryMeasures(rowValueDecoders)) {
          scan.setCaching(scan.getCaching() / 10);
      }

      However, now that scan.setCaching is always accompanied by scan.setMaxResultSize, shrinking the cache is no longer necessary: the size limit takes effect before the cached-row limit does.

      Quoting from http://www.cloudera.com/documentation/enterprise/5-2-x/topics/admin_hbase_scanning.htm:

      "When you use setCaching and setMaxResultSize together, single server requests are limited by either number of rows or maximum result size, whichever limit comes first."
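      The interaction described above can be sketched with a small, self-contained simulation (this is not HBase client code; ScanBatchSketch, rowsPerRpc, and the 2 MB budget are illustrative assumptions). It models how a single scan RPC is bounded by both the row-count cache (setCaching) and the byte budget (setMaxResultSize), whichever is hit first:

      ```java
      // Illustrative sketch, not HBase internals: one scan RPC returns at most
      // `caching` rows AND at most `maxResultSizeBytes` worth of data.
      public class ScanBatchSketch {

          // Hypothetical helper: rows returned by one server request, given the
          // caching row limit, the max result size in bytes, and an average row size.
          public static long rowsPerRpc(long caching, long maxResultSizeBytes, long avgRowBytes) {
              long byteLimitedRows = Math.max(1, maxResultSizeBytes / avgRowBytes);
              return Math.min(caching, byteLimitedRows);
          }

          public static void main(String[] args) {
              long caching = 1000;                   // scan.setCaching(1000)
              long maxResultSize = 2L * 1024 * 1024; // scan.setMaxResultSize(2 MB), assumed budget

              // Small rows (100 B): the row-count limit wins -> full 1000 rows per RPC.
              System.out.println(rowsPerRpc(caching, maxResultSize, 100));

              // Large rows (10 KB): the byte limit wins -> only ~204 rows per RPC,
              // so manually shrinking the cache (caching / 10) is redundant.
              System.out.println(rowsPerRpc(caching, maxResultSize, 10 * 1024));
          }
      }
      ```

      With memory-hungry measures the rows are large, so the byte budget already caps each request; that is why the old `scan.getCaching() / 10` heuristic can be dropped.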

        Attachments

          Activity

            People

            • Assignee:
              mahongbin Hongbin Ma
            • Reporter:
              mahongbin Hongbin Ma
            • Votes:
              0
            • Watchers:
              2

              Dates

              • Created:
                Updated:
                Resolved: