HBASE-4888

Not closing ResultScanner causes the cluster to become abnormal (RS memory increase)

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: 0.90.3
    • Fix Version/s: None
    • Component/s: Client
    • Labels:
    • Environment:

      CentOS 5.5 final, hadoop-0.20.2-cdh3u1, hbase-0.20.2-cdh3u1

      Description

      A ResultScanner is created on the client side. If the user doesn't invoke ResultScanner.close(), the memory of the RegionServer increases rapidly and stays high for a long time. Eventually the cluster goes into an abnormal state.
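
      For illustration, a minimal sketch of the pattern that avoids this, assuming the 0.90-era HTable client API; the table name, column family, and configuration are hypothetical, not taken from the report. Closing the ResultScanner in a finally block releases the server-side scanner (and its lease) promptly instead of leaving it to time out:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.client.ResultScanner;
      import org.apache.hadoop.hbase.client.Scan;
      import org.apache.hadoop.hbase.util.Bytes;

      public class ScanCloseExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          HTable table = new HTable(conf, "testtable");   // hypothetical table name
          Scan scan = new Scan();
          scan.addFamily(Bytes.toBytes("cf"));            // hypothetical column family
          ResultScanner scanner = table.getScanner(scan);
          try {
            for (Result result : scanner) {
              System.out.println(result);
            }
          } finally {
            // Always close the scanner, even if the iteration throws,
            // so the RegionServer can free its resources immediately.
            scanner.close();
          }
          table.close();
        }
      }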

        Activity

        stack added a comment -

        Do you have evidence this is so, Yuan? Thank you.

        Todd Lipcon added a comment -

        Something broken about our "reseek" stuff?

        Yuan Kang added a comment -

        I ran into this issue in an online system and found a relevant passage in "HBase: The Definitive Guide".

        It says: "Scanner Leases
        Make sure you release a scanner instance as timely as possible. An open scanner holds quite a few resources on the server side, which could accumulate to a large amount of heap space occupied. When you are done with the current scan call close(), and consider adding this into a try/finally construct to ensure it is called, even if there are exceptions or errors during the iterations.

        Like row locks, scanners are protected against stray clients blocking resources for too long, using the same lease based mechanisms. You need to set the same configuration property to modify the timeout threshold (in milliseconds):
        <property>
        <name>hbase.regionserver.lease.period</name>
        <value>120000</value>
        </property>
        You need to make sure that the property is set to an appropriate value that make sense for locks and the scanner leases.
        "

        But I find that 'hbase.regionserver.lease.period' doesn't behave as expected: our RS holds high memory for much longer than 'hbase.regionserver.lease.period'. After I added 'rs.close()' to my code, the problem disappeared.

        Jesse Yates added a comment -

        I don't think this is an issue; it is rather just how scanner leases work. Reopen if I'm wrong.

          People

          • Assignee: Unassigned
          • Reporter: Yuan Kang
          • Votes: 0
          • Watchers: 4

            Dates

            • Created:
            • Updated:
            • Resolved:

              Time Tracking

              Estimated: 672h
              Remaining: 672h
              Logged: Not Specified
