  Kylin / KYLIN-1601

No need to shrink scan cache when HBase rows can be large


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: v1.5.2
    • Component/s: None
    • Labels: None

    Description

      To control memory usage, we used to shrink the scan cache when HBase rows could be large:

      if (RowValueDecoder.hasMemHungryMeasures(rowValueDecoders)) {
          scan.setCaching(scan.getCaching() / 10);
      }

      However, since scan.setCaching is now accompanied by scan.setMaxResultSize, shrinking the cache is no longer necessary: the result-size limit takes effect before the cached-row limit is reached.

      Quote from http://www.cloudera.com/documentation/enterprise/5-2-x/topics/admin_hbase_scanning.htm:

      "When you use setCaching and setMaxResultSize together, single server requests are limited by either number of rows or maximum result size, whichever limit comes first."


          People

            Assignee: Hongbin Ma (mahongbin)
            Reporter: Hongbin Ma (mahongbin)
