HBase / HBASE-3295

Dropping a 1k+ regions table likely ends in a client socket timeout and it's very confusing


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.90.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      I tried truncating a 1.6k-region table from the shell; after the usual disabling timeout, I got a socket timeout on the second invocation while the table was being dropped. It looked like this:

      ERROR: java.net.SocketTimeoutException: Call to sv2borg180/10.20.20.180:61000 failed on socket timeout exception:
       java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch :
       java.nio.channels.SocketChannel[connected local=/10.20.20.180:59153 remote=sv2borg180/10.20.20.180:61000]
      

      At first I thought the error was coming from the master because HDFS was somehow slow, but then I understood that it was my own client socket that timed out, meaning the master was still dropping the table. Calling truncate again, I got:

      ERROR: Unknown table TestTable!
      

      Which suggests that the table had been deleted... but I learned later, after I shut down the cluster, that it wasn't completely deleted. That leaves me in a situation where I have to manually delete the files on the FS and the remaining .META. entries.
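
      For anyone who ends up in the same half-deleted state, here is a minimal, read-only sketch of how the leftover .META. entries could be inspected before cleaning them by hand, assuming the 0.90 client API and the TestTable name from this report (it only lists the surviving region rows, it deletes nothing):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.client.ResultScanner;
      import org.apache.hadoop.hbase.client.Scan;
      import org.apache.hadoop.hbase.util.Bytes;

      public class ListLeftoverMetaRows {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          // Region rows in .META. are keyed starting with the table name, so a scan
          // from "TestTable," finds whatever entries the aborted drop left behind.
          HTable meta = new HTable(conf, ".META.");
          ResultScanner scanner = meta.getScanner(new Scan(Bytes.toBytes("TestTable,")));
          try {
            for (Result r : scanner) {
              String row = Bytes.toString(r.getRow());
              if (!row.startsWith("TestTable,")) break;  // past this table's entries
              System.out.println("leftover .META. row: " + row);
            }
          } finally {
            scanner.close();
            meta.close();
          }
        }
      }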

      Since I expect a few people will hit this issue rather soon, I propose that for 0.90.0 we just set the socket timeout really high in the shell (a client-side sketch of that workaround follows below). For 0.90.1 or 0.92, we should do for drop what we do for disabling.
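
      A rough client-side sketch of the first workaround, assuming the timeout is governed by the hbase.rpc.timeout property; the exact key the shell would need to bump may differ in 0.90, so treat both the property name and the 30-minute value as placeholders:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.HBaseAdmin;

      public class DropBigTable {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          // Assumption: raise the client RPC timeout far above the 60s default so the
          // delete of a 1k+ region table finishes before the client socket gives up.
          conf.setInt("hbase.rpc.timeout", 30 * 60 * 1000);  // 30 minutes

          HBaseAdmin admin = new HBaseAdmin(conf);
          String table = "TestTable";
          if (admin.isTableEnabled(table)) {
            admin.disableTable(table);  // disable already waits for regions to go offline
          }
          admin.deleteTable(table);     // the call that outlives the default 60s timeout
        }
      }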

      Attachments

        1. 3295.txt (0.6 kB, Michael Stack)
        2. 3295-v2.txt (3 kB, Michael Stack)


          People

            Assignee: Michael Stack (stack)
            Reporter: Jean-Daniel Cryans (jdcryans)
            Votes: 0
            Watchers: 1

            Dates

              Created:
              Updated:
              Resolved: