HBASE-6254: deletes w/ many column qualifiers overwhelm Region Server

    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.94.0
    • Fix Version/s: None
    • Component/s: Performance, regionserver
    • Labels: None
    • Environment:
      5-node CentOS cluster + 1 master, v0.94 on CDH3u3

      Description

      Execution of Deletes constructed with thousands of calls to Delete.deleteColumn(family, qualifier) is very expensive and slow.

      On our (quiet) cluster, a Delete w/ 20k qualifiers took about 13s to complete (as measured by client).
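
      For reference, the pattern looked roughly like this (a minimal sketch; row, family, and qualifiers stand in for our domain data):

        Delete delete = new Delete(row);
        for (byte[] qualifier : qualifiers) {   // thousands of qualifiers
          // No timestamp given, so each entry carries LATEST_TIMESTAMP and the
          // RegionServer has to resolve the actual timestamp itself.
          delete.deleteColumn(family, qualifier);
        }
        table.delete(delete);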

      When 10 such Deletes were sent to the cluster via HTable.delete(List<Delete>), one of the RegionServers ended up w/ 5 of the requests and ran at 100% CPU for about 1 hour.

      This led to the client timing out after 20min (2min x 10 retries). In one case, the client managed to fill the RPC call queue and received the following error:

        Failed all from region=<region>,hostname=<host>, port=<port> java.util.concurrent.ExecutionException: java.io.IOException: Call queue is full, is ipc.server.max.callqueue.size too small?
      

      Based on feedback (http://search-hadoop.com/m/yITsc1WcDWP), I switched to Delete.deleteColumn(family, qual, timestamp), where the timestamp came from the KeyValue retrieved by a scan driven by our domain objects. This version of the delete ran in about 500ms.
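
      A sketch of the faster version, assuming the qualifiers to delete live in a single row and family (a Get is shown here as a simplification of the scan we actually used):

        Get get = new Get(row);
        get.addFamily(family);
        Result result = table.get(get);

        Delete delete = new Delete(row);
        for (KeyValue kv : result.raw()) {
          // An explicit timestamp lets the RegionServer skip the per-qualifier
          // lookup that LATEST_TIMESTAMP would otherwise trigger.
          delete.deleteColumn(kv.getFamily(), kv.getQualifier(), kv.getTimestamp());
        }
        table.delete(delete);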

      The user group thread titled "RS unresponsive after series of deletes" has related logs and stack traces.

      Link to thread: http://search-hadoop.com/m/RmIyr1WcDWP

      Here is the stack dump of the region server: http://pastebin.com/8y5x4xU7

        Activity

        Ted Yu added a comment -

        From HRegion, prepareDeleteTimestamps() performs one get operation per column qualifier:

          for (KeyValue kv : kvs) {
            // Check if time is LATEST, change to time of most recent addition if so
            // This is expensive.
            if (kv.isLatestTimestamp() && kv.isDeleteType()) {
              ...
              List<KeyValue> result = get(get, false);

        We perform get() for each kv whose time is LATEST.
        This explains the unresponsiveness.

        I think we can group a configurable number of qualifiers into each get and classify the results.
        This way we can reduce the number of times HRegion$RegionScannerImpl.next() is called.
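
        Expressed with the client-side API for illustration (the real change would live inside HRegion), the grouping idea might look like:

          // One Get carrying a batch of qualifiers instead of one get() per qualifier.
          Get get = new Get(row);
          get.setMaxVersions(1);                      // latest version only
          for (byte[] qualifier : qualifierBatch) {   // batch size is configurable
            get.addColumn(family, qualifier);
          }
          Result result = table.get(get);             // one read for the whole batch
          // Classification: map each qualifier to the timestamp of its latest addition.
          Map<ByteBuffer, Long> latestTs = new HashMap<ByteBuffer, Long>();
          for (KeyValue kv : result.raw()) {
            latestTs.put(ByteBuffer.wrap(kv.getQualifier()), kv.getTimestamp());
          }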

        Ted Yu added a comment -

        Since KeyValue implements HeapSize, we can keep adding column qualifiers until we reach a configurable threshold.
        After get(get, false) returns, we can parse out the column qualifiers from the result.
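
        For illustration only, the HeapSize-based batching could look like this (batchHeapThreshold and flushBatch are hypothetical names):

          long batchedHeap = 0;
          List<byte[]> batch = new ArrayList<byte[]>();
          for (KeyValue kv : kvs) {
            batch.add(kv.getQualifier());
            batchedHeap += kv.heapSize();             // KeyValue implements HeapSize
            if (batchedHeap >= batchHeapThreshold) {  // hypothetical configurable threshold
              flushBatch(batch);                      // hypothetical helper: issues one get() per batch
              batch.clear();
              batchedHeap = 0;
            }
          }
          if (!batch.isEmpty()) {
            flushBatch(batch);                        // flush the final partial batch
          }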


          People

          • Assignee: Unassigned
          • Reporter: Ted Tuttle
          • Votes: 0
          • Watchers: 7
