
[HBASE-24] Scaling: Too many open file handles to datanodes


Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Won't Fix
    • Component/s: regionserver

    Description

      We've been here before (HADOOP-2341).

      Today Rapleaf gave me an lsof listing from a regionserver. It showed thousands of open sockets to datanodes, all in ESTABLISHED and CLOSE_WAIT state. On average they seem to have about ten file descriptors/sockets open per region (they have 3 column families, IIRC; each family can have between 1 and 5 or so mapfiles open – 3 is the max, but while compacting we open a new one, etc.).

      They have thousands of regions. 400 regions – roughly 100 GB of data, which is not that much – take about 4k open file handles.

      If they want a regionserver to serve a decent disk's worth – 300-400 GB – then that's maybe 1600 regions... 16k file handles. With more than just 3 column families, we are in danger of blowing out the limit if it is set to 32k.
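
      To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch (illustration only, not project code; the ~10 descriptors per region is the observed average from the lsof listing above):

          // Back-of-the-envelope estimate of descriptor usage; illustration only.
          public class FdEstimate {
              // ~10 fds/region observed: 3 families x ~3 open mapfiles each,
              // plus sockets and index files.
              static final int FDS_PER_REGION = 10;

              static int estimate(int regions) {
                  return regions * FDS_PER_REGION;
              }

              public static void main(String[] args) {
                  System.out.println(estimate(400));   // ~4k handles for ~100 GB served
                  System.out.println(estimate(1600));  // ~16k handles for 300-400 GB served
                  // Against a 32k per-process limit, more column families leave no headroom.
              }
          }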


      A dfsclient that used non-blocking i/o would help applications like hbase. (The datanode doesn't have this problem as badly: the CLOSE_WAIT sockets on the regionserver side, which are the bulk of the open fds in the Rapleaf log, have no corresponding open resource on the datanode end.)
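
      For illustration, here is a minimal sketch of the non-blocking pattern such a dfsclient could build on – one java.nio Selector multiplexing many datanode connections instead of one blocking socket held open per reader. The datanodeAddresses() helper and the address/port are hypothetical; this is not DFSClient code:

          import java.io.IOException;
          import java.net.InetSocketAddress;
          import java.nio.ByteBuffer;
          import java.nio.channels.SelectionKey;
          import java.nio.channels.Selector;
          import java.nio.channels.SocketChannel;
          import java.util.List;

          public class MultiplexedReads {
              // Hypothetical stand-in for the datanodes a regionserver talks to.
              static List<InetSocketAddress> datanodeAddresses() {
                  return List.of(new InetSocketAddress("datanode1", 50010));
              }

              public static void main(String[] args) throws IOException {
                  Selector selector = Selector.open();
                  for (InetSocketAddress addr : datanodeAddresses()) {
                      SocketChannel ch = SocketChannel.open();
                      ch.configureBlocking(false);          // no thread parked per socket
                      ch.connect(addr);
                      ch.register(selector, SelectionKey.OP_CONNECT);
                  }
                  ByteBuffer buf = ByteBuffer.allocate(8192);
                  while (selector.select() > 0) {
                      for (SelectionKey key : selector.selectedKeys()) {
                          SocketChannel ch = (SocketChannel) key.channel();
                          if (key.isConnectable() && ch.finishConnect()) {
                              key.interestOps(SelectionKey.OP_READ);
                          } else if (key.isReadable() && ch.read(buf) < 0) {
                              ch.close();                   // close on EOF: no CLOSE_WAIT pile-up
                          }
                          buf.clear();
                      }
                      selector.selectedKeys().clear();
                  }
              }
          }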

      We could also just open mapfiles as needed, but that would kill our random-read performance, and it's bad enough already.
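
      As a hedged sketch of what "open as needed" might look like (hypothetical Reader/ReaderFactory names, not HBase's actual mapfile classes):

          import java.io.IOException;

          // Hypothetical lazy-open wrapper: no descriptor is held between reads,
          // at the cost of a fresh open() on every random read.
          public class LazyMapFileReader<K, V> {
              interface Reader<K, V> extends AutoCloseable {
                  V get(K key) throws IOException;
                  @Override void close() throws IOException;
              }
              interface ReaderFactory<K, V> {
                  Reader<K, V> open() throws IOException; // opens file + datanode sockets
              }

              private final ReaderFactory<K, V> factory;

              LazyMapFileReader(ReaderFactory<K, V> factory) {
                  this.factory = factory;
              }

              V get(K key) throws IOException {
                  // Open, read, close: descriptors live only for the call's duration.
                  try (Reader<K, V> reader = factory.open()) {
                      return reader.get(key);
                  }
              }
          }

      Every get() here pays an open round-trip, which is exactly the random-read penalty the paragraph above warns about.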

      Attachments

        1. MonitoredReader.java (10 kB) – Luo Ning
        2. HBASE-823.patch (1 kB) – Luo Ning


            People

              Assignee: Unassigned
              Reporter: Michael Stack (stack)
              Votes: 1
              Watchers: 17
