HBASE-19358: Improve the stability of log splitting during failover


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.98.24
    • Fix Version/s: 1.4.1, 2.0.0-beta-1, 2.0.0
    • Component/s: MTTR
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note:
      After HBASE-19358 we introduced a new property, hbase.split.writer.creation.bounded, to limit the number of writers opened by each WALSplitter. If set to true, we won't open any writer for recovered.edits until the entries accumulated in memory reach hbase.regionserver.hlog.splitlog.buffersize (which defaults to 128M), and we will then write and close the file in one go instead of keeping the writer open. It is false by default, and we recommend setting it to true if your cluster has a high region load (more than about 300 regions per RS), especially if you have observed an obvious NN/HDFS slowdown during HBase failover (of a single RS or the whole cluster). A minimal configuration sketch follows this list.
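
      The sketch below shows how the two properties named in the release note could be set programmatically, assuming a client-side Configuration object; in a real deployment they would normally go into hbase-site.xml on each region server. The class name BoundedSplitConfigSketch is hypothetical; the property keys and values are the ones from the release note.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.HBaseConfiguration;

        public class BoundedSplitConfigSketch {
          public static void main(String[] args) {
            // These keys normally belong in hbase-site.xml on each region server;
            // setting them here only illustrates the keys and values discussed above.
            Configuration conf = HBaseConfiguration.create();
            conf.setBoolean("hbase.split.writer.creation.bounded", true);             // false by default
            conf.setLong("hbase.regionserver.hlog.splitlog.buffersize", 128L << 20);  // 128 MB accumulation cap
            System.out.println("bounded = " + conf.get("hbase.split.writer.creation.bounded"));
          }
        }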

    Description

      The way we currently split logs is illustrated in the following figure (the attached split-logic-old.jpg):

      The problem is that the OutputSink writes the recovered edits while the log is being split, which means it creates one WriterAndPath for each region and keeps it open until the end. If the cluster is small and the number of regions per RS is large, too many HDFS streams are created at the same time, and the split becomes prone to failure because each datanode has to handle too many streams.

      Thus I came up with a new way to split the log (illustrated in the attached split-logic-new.jpg).

      We try to cache all the recovered edits in memory, but if the total exceeds the MaxHeapUsage we pick the largest EntryBuffer and write it to a file (closing the writer as soon as it is done). Then, once all entries have been read into memory, we start a writeAndCloseThreadPool with a fixed number of threads that write the remaining buffers to files. This way we never create more HDFS streams than the hbase.regionserver.hlog.splitlog.writer.threads we set.
      The biggest benefit is that we can control the number of streams created during log splitting: it will not exceed hbase.regionserver.wal.max.splitters * hbase.regionserver.hlog.splitlog.writer.threads, whereas previously it was hbase.regionserver.wal.max.splitters * the number of regions the HLog contains.
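
      Below is a minimal, self-contained sketch of the bounded flow described above. The names (BoundedSinkSketch, EntryBuffer, writeAndClose) are illustrative stand-ins for the real WALSplitter output sink classes, and the local file written here is only a placeholder for a recovered.edits file on HDFS.

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Comparator;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        /** Illustrative sketch of bounded writer creation during log splitting. */
        public class BoundedSinkSketch {

          /** Simplified stand-in for the per-region buffer of recovered edits. */
          static class EntryBuffer {
            final String regionName;
            final StringBuilder edits = new StringBuilder();
            EntryBuffer(String regionName) { this.regionName = regionName; }
            long heapSize() { return edits.length(); }
            void append(String edit) { edits.append(edit).append('\n'); }
          }

          private final Map<String, EntryBuffer> buffers = new ConcurrentHashMap<>();
          private final long maxHeapUsage;   // analogous to hbase.regionserver.hlog.splitlog.buffersize
          private final int writerThreads;   // analogous to hbase.regionserver.hlog.splitlog.writer.threads
          private long totalHeap = 0;

          BoundedSinkSketch(long maxHeapUsage, int writerThreads) {
            this.maxHeapUsage = maxHeapUsage;
            this.writerThreads = writerThreads;
          }

          /** Cache an edit in memory; if the cap is exceeded, flush the largest buffer to a file. */
          synchronized void append(String region, String edit) throws IOException {
            EntryBuffer buf = buffers.computeIfAbsent(region, EntryBuffer::new);
            long before = buf.heapSize();
            buf.append(edit);
            totalHeap += buf.heapSize() - before;
            if (totalHeap > maxHeapUsage) {
              EntryBuffer largest = buffers.values().stream()
                  .max(Comparator.comparingLong(EntryBuffer::heapSize))
                  .orElseThrow(IllegalStateException::new);
              buffers.remove(largest.regionName);
              totalHeap -= largest.heapSize();
              writeAndClose(largest);   // open, write and close in one go
            }
          }

          /** After all entries are read, drain the remaining buffers with a bounded thread pool. */
          void finish() throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(writerThreads);
            for (EntryBuffer buf : buffers.values()) {
              pool.submit(() -> { writeAndClose(buf); return null; });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
          }

          /** Placeholder for writing a recovered.edits file; the real code writes to HDFS. */
          private void writeAndClose(EntryBuffer buf) throws IOException {
            Files.write(Paths.get(buf.regionName + ".recovered.edits"),
                buf.edits.toString().getBytes(StandardCharsets.UTF_8));
          }
        }

      The point of the sketch is the bound itself: at any moment at most writerThreads streams are open per splitter (plus the occasional single spill in append), which matches the hbase.regionserver.wal.max.splitters * hbase.regionserver.hlog.splitlog.writer.threads ceiling described above.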

      Attachments

        1. HBASE-18619-branch-2-v2.patch
          29 kB
          Yu Li
        2. HBASE-19358.patch
          22 kB
          Jingyun Tian
        3. HBASE-19358-branch-1.patch
          25 kB
          Jingyun Tian
        4. HBASE-19358-branch-1-v2.patch
          25 kB
          Jingyun Tian
        5. HBASE-19358-branch-1-v3.patch
          26 kB
          Jingyun Tian
        6. HBASE-19358-branch-2-v3.patch
          29 kB
          Jingyun Tian
        7. HBASE-19358-v1.patch
          23 kB
          Jingyun Tian
        8. HBASE-19358-v4.patch
          30 kB
          Jingyun Tian
        9. HBASE-19358-v5.patch
          26 kB
          Jingyun Tian
        10. HBASE-19358-v6.patch
          26 kB
          Jingyun Tian
        11. HBASE-19358-v7.patch
          26 kB
          Jingyun Tian
        12. HBASE-19358-v8.patch
          29 kB
          Jingyun Tian
        13. split_test_result.png
          12 kB
          Jingyun Tian
        14. split-1-log.png
          20 kB
          Jingyun Tian
        15. split-logic-new.jpg
          41 kB
          Jingyun Tian
        16. split-logic-old.jpg
          37 kB
          Jingyun Tian
        17. split-table.png
          24 kB
          Jingyun Tian


            People

              Assignee: Jingyun Tian (tianjingyun)
              Reporter: Jingyun Tian (tianjingyun)
              Votes: 0
              Watchers: 19
