
HBASE-15950: Fix memstore size estimates to be tighter


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.4.0, 2.0.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Incompatible change
    • Release Note: The estimates of heap usage by the memstore objects (KeyValue, object and array header sizes, etc.) have been made more accurate for heap sizes up to 32G (using CompressedOops), with estimates dropping by 10-50% in practice. This also results in fewer flushes and compactions due to "fatter" flushes. YMMV. As a result, the actual heap usage of the memstore before it is flushed may increase by up to 100%. If the configured memory limits for the region server were tuned based on observed usage, this change could result in worse GC behavior or even OutOfMemory errors. Set the environment property (not hbase-site.xml) "hbase.memorylayout.use.unsafe" to false to disable.
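
      As a rough illustration of where the 10-50% drop comes from, the sketch below compares per-cell bookkeeping overhead under CompressedOops-aware constants versus pessimistic raw 64-bit-pointer constants. This is not HBase's actual ClassSize accounting: the header/reference constants are standard HotSpot values, but the class name and the field breakdown inside perCellOverhead() are hypothetical, for illustration only.

      {code:java}
      /**
       * Minimal sketch (not org.apache.hadoop.hbase.util.ClassSize) of why
       * CompressedOops-aware constants shrink the per-cell estimate.
       */
      public class MemstoreOverheadSketch {
        // HotSpot with CompressedOops (the default below 32G heaps):
        static final int OBJ_HEADER = 12;   // 8-byte mark word + 4-byte compressed class pointer
        static final int REF        = 4;    // compressed ordinary object pointer
        static final int ARR_HEADER = 16;   // 12-byte header + 4-byte array length

        // Worst-case constants assuming uncompressed 64-bit pointers:
        static final int OBJ_HEADER_RAW = 16;
        static final int REF_RAW        = 8;
        static final int ARR_HEADER_RAW = 24;  // 16-byte header + 4-byte length, padded to 8

        // HotSpot rounds every object up to an 8-byte boundary.
        static long align(long n) { return (n + 7) & ~7L; }

        /** Hypothetical per-cell bookkeeping: a KeyValue (one reference to its
         *  backing byte[] plus a few primitive fields), the byte[] header
         *  itself, and one skip-list node holding key/value/next references. */
        static long perCellOverhead(int objHeader, int ref, int arrHeader) {
          long keyValue = align(objHeader + ref + 2 * 4 + 8); // byte[] ref, offset, length, one long
          long cslmNode = align(objHeader + 3L * ref);        // key, value, next references
          return keyValue + arrHeader + cslmNode;
        }

        public static void main(String[] args) {
          System.out.println("CompressedOops estimate: "
              + perCellOverhead(OBJ_HEADER, REF, ARR_HEADER) + " bytes/cell");       // 72
          System.out.println("Raw 64-bit estimate:     "
              + perCellOverhead(OBJ_HEADER_RAW, REF_RAW, ARR_HEADER_RAW) + " bytes/cell"); // 104
        }
      }
      {code}

      To fall back to the old, pessimistic layout, the property has to be passed at the JVM level rather than through hbase-site.xml, e.g. (presumably) by adding -Dhbase.memorylayout.use.unsafe=false to HBASE_OPTS in hbase-env.sh.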

    Description

      While testing something else, I was loading a region with a lot of data: writing 30M cells in 1M rows, with 1-byte values.

      The memstore size was estimated at 4.5GB, while JFR profiling shows that we are actually using 2.8GB for all of the objects in the memstore (KV + KV byte[] + CSLM.Node + CSLM.Index).

      This means there is room in the write cache that we are not using effectively.
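
      A quick sanity check on those figures (plain arithmetic on the numbers quoted above; EstimateGap is a throwaway name, not an HBase class): 4.5GB over 30M cells implies roughly 161 bytes of accounted heap per cell, while the 2.8GB measured with JFR is about 100 bytes per cell, i.e. the estimate overcounts by around 60%.

      {code:java}
      // Plain arithmetic on the figures quoted in the description above.
      public class EstimateGap {
        public static void main(String[] args) {
          long cells = 30_000_000L;
          double estimatedBytes = 4.5 * (1L << 30); // 4.5 GB per the memstore accounting
          double measuredBytes  = 2.8 * (1L << 30); // 2.8 GB measured with JFR
          double est = estimatedBytes / cells;      // ~161 bytes/cell
          double act = measuredBytes / cells;       // ~100 bytes/cell
          System.out.printf("estimated %.0f B/cell, measured %.0f B/cell -> %.0f%% overcount%n",
              est, act, 100.0 * (est - act) / act);
        }
      }
      {code}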

      Attachments

        1. hbase-15950-v0.patch
          13 kB
          Enis Soztutar
        2. hbase-15950-v1.patch
          28 kB
          Enis Soztutar
        3. hbase-15950-v2.branch-1.patch
          15 kB
          Enis Soztutar
        4. hbase-15950-v2.branch-1.patch
          15 kB
          Enis Soztutar
        5. hbase-15950-v2.patch
          28 kB
          Enis Soztutar
        6. Screen Shot 2016-06-02 at 8.48.27 PM.png
          339 kB
          Enis Soztutar

            People

              Assignee: enis (Enis Soztutar)
              Reporter: enis (Enis Soztutar)
              Votes: 0
              Watchers: 11
