HBase / HBASE-11544

[Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME


Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.1.0, 2.0.0
    • Component/s: None
    • Hadoop Flags: Reviewed
    • Release Note:

      Results returned from RPC calls may now be returned as partials.

      When is a Result marked as a partial?
      When the server must stop the scan because the max size limit has been reached. This means that the last Result returned within the ScanResult's Result array may be marked as a partial if the scan's max size limit caused the scan to stop in the middle of a row.

      Incompatible Change: the return type of InternalScanner#next and RegionScanner#nextRaw has been changed from boolean to NextState.
      The previous boolean return value can be accessed via NextState#hasMoreValues().
      This provides more context as to what happened inside the scanner.
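
      As a migration sketch, assuming NextState is an enum alongside the scanner interfaces as this note describes (the package location and exact method shape here are assumptions, not part of this note):

        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;
        import org.apache.hadoop.hbase.Cell;
        import org.apache.hadoop.hbase.regionserver.InternalScanner;
        import org.apache.hadoop.hbase.regionserver.NextState;  // assumed location of NextState

        public class NextStateMigration {
          // Drains a scanner, adapting a caller written against the old boolean contract.
          static void drain(InternalScanner scanner) throws IOException {
            List<Cell> cells = new ArrayList<>();
            boolean more = true;
            while (more) {
              // Before this change: boolean more = scanner.next(cells);
              NextState state = scanner.next(cells);
              more = state.hasMoreValues();  // the old boolean, per NextState#hasMoreValues()
              cells.clear();                 // hand cells off before reusing the buffer
            }
          }
        }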

      The scan caching default has been changed to Integer.MAX_VALUE.
      This value works together with the new maxResultSize value from HBASE-12976 (default 2 MB).
      Results are now returned from the server on the basis of size rather than number of rows, which makes better use of the network since row size varies between tables.
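
      For example, on the client side (Scan#setCaching and Scan#setMaxResultSize are existing client APIs; the 2 MB figure mirrors the HBASE-12976 default):

        import org.apache.hadoop.hbase.client.Scan;

        public class SizeBoundScan {
          static Scan sizeBoundScan() {
            Scan scan = new Scan();
            scan.setCaching(Integer.MAX_VALUE);       // the new default: rows per RPC no longer the limiter
            scan.setMaxResultSize(2L * 1024 * 1024);  // bytes per RPC; 2 MB is the HBASE-12976 default
            return scan;
          }
        }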

      Protobuf models have changed for Result, ScanRequest, and ScanResponse to support the new partial Results.

      Partial Results should be invisible to the application layer unless Scan#setAllowPartialResults is set.

      Scan#setAllowPartialResults has been added so that the application can request to see the partial Results returned by the server, rather than have the ClientScanner form the complete Result before returning it to the application.
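
      A minimal usage sketch (the table name and Connection setup are placeholders; Result#isPartial reports whether more cells of the same row follow):

        import java.io.IOException;
        import org.apache.hadoop.hbase.TableName;
        import org.apache.hadoop.hbase.client.Connection;
        import org.apache.hadoop.hbase.client.Result;
        import org.apache.hadoop.hbase.client.ResultScanner;
        import org.apache.hadoop.hbase.client.Scan;
        import org.apache.hadoop.hbase.client.Table;

        public class PartialScanExample {
          static void scanWithPartials(Connection conn) throws IOException {
            Scan scan = new Scan();
            scan.setAllowPartialResults(true);  // opt in to seeing raw partial Results
            try (Table table = conn.getTable(TableName.valueOf("my_table"));  // placeholder table
                 ResultScanner rs = table.getScanner(scan)) {
              for (Result r : rs) {
                if (r.isPartial()) {
                  // more cells of this row will arrive in subsequent Results
                }
              }
            }
          }
        }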

      To disable the use of partial Results on the server, call ScanRequest.Builder#setClientHandlesPartials(false) on the ScanRequest issued to the server.

      Partial Results should allow the server to return large rows in parts rather than accumulating all the cells for that row and running out of memory.
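
      For completeness, a rough sketch of the reassembly the ClientScanner otherwise performs for the application; it assumes partials for a row arrive consecutively, as this note implies, and process() is a hypothetical callback:

        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;
        import org.apache.hadoop.hbase.Cell;
        import org.apache.hadoop.hbase.client.Result;
        import org.apache.hadoop.hbase.client.ResultScanner;

        public class StitchPartials {
          // Combines consecutive partial Results back into whole-row Results.
          static void stitch(ResultScanner scanner) throws IOException {
            List<Cell> rowCells = new ArrayList<>();
            for (Result r = scanner.next(); r != null; r = scanner.next()) {
              rowCells.addAll(r.listCells());
              if (!r.isPartial()) {               // last piece of this row
                process(Result.create(rowCells)); // complete Result for the row
                rowCells = new ArrayList<>();
              }
            }
          }

          static void process(Result wholeRow) { /* hypothetical application callback */ }
        }
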
    • Labels: beginner

    Description

      Running some tests, I set hbase.client.scanner.caching=1000. Dataset has large cells. I kept OOME'ing.

      Server-side, we should measure how much we've accumulated and return to the client whatever we've gathered once we pass a certain size threshold, rather than keep accumulating until we OOME.
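
      A minimal sketch of that idea, assuming an illustrative cell iterator and threshold rather than the actual RegionServer internals:

        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;
        import org.apache.hadoop.hbase.Cell;
        import org.apache.hadoop.hbase.CellUtil;

        public class SizeLimitedBatch {
          // Accumulates cells only until the size threshold is crossed, then returns
          // the batch to the caller instead of growing without bound.
          static List<Cell> nextBatch(Iterator<Cell> cells, long maxResultSize) {
            List<Cell> batch = new ArrayList<>();
            long accumulated = 0;
            while (cells.hasNext() && accumulated < maxResultSize) {
              Cell cell = cells.next();
              batch.add(cell);
              accumulated += CellUtil.estimatedSerializedSizeOf(cell);  // rough wire size
            }
            return batch;  // ship this to the client; resume the scan on the next call
          }
        }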

    Attachments

      1. Allocation_Hot_Spots.html (4.34 MB, Michael Stack)
      2. gc.j.png (23 kB, Michael Stack)
      3. h.png (11 kB, Michael Stack)
      4. HBASE-11544-addendum-v1.patch (84 kB, Jonathan Lawlor)
      5. HBASE-11544-addendum-v2.patch (88 kB, Jonathan Lawlor)
      6. HBASE-11544-branch_1_0-v1.patch (302 kB, Jonathan Lawlor)
      7. HBASE-11544-branch_1_0-v2.patch (301 kB, Jonathan Lawlor)
      8. HBASE-11544-v1.patch (186 kB, Jonathan Lawlor)
      9. HBASE-11544-v2.patch (186 kB, Jonathan Lawlor)
      10. HBASE-11544-v3.patch (188 kB, Jonathan Lawlor)
      11. HBASE-11544-v4.patch (284 kB, Jonathan Lawlor)
      12. HBASE-11544-v5.patch (285 kB, Jonathan Lawlor)
      13. HBASE-11544-v6.patch (294 kB, Jonathan Lawlor)
      14. HBASE-11544-v6.patch (294 kB, Jonathan Lawlor)
      15. HBASE-11544-v6.patch (294 kB, Jonathan Lawlor)
      16. HBASE-11544-v7.patch (294 kB, Jonathan Lawlor)
      17. HBASE-11544-v8.patch (294 kB, Jonathan Lawlor)
      18. HBASE-11544-v8-branch-1.patch (294 kB, Jonathan Lawlor)
      19. hits.j.png (13 kB, Michael Stack)
      20. m.png (11 kB, Michael Stack)
      21. mean.png (25 kB, Michael Stack)
      22. net.j.png (16 kB, Michael Stack)
      23. q (2).png (14 kB, Michael Stack)


    People

      Assignee: Jonathan Lawlor (jonathan.lawlor)
      Reporter: Michael Stack (stack)
      Votes: 0
      Watchers: 29
