HBASE-8701: distributedLogReplay needs to apply WAL edits in the order in which they were received



    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.98.0, 0.99.0
    • Component/s: MTTR
    • Labels: None
    • Hadoop Flags: Reviewed


      This issue happens in distributedLogReplay mode when recovering multiple puts of the same key + version (timestamp). After replay, the value of the key is nondeterministic.

      The original concern was raised by eclark:

      For all edits the rowkey is the same.
      There's a log with: [ A (ts = 0), B (ts = 0) ]
      Replay the first half of the log.
      A user puts in C (ts = 0)
      Memstore has to flush
      A new HFile will be created with [ C, A ] and MaxSequenceId = C's seqid.
      Replay the rest of the Log.

      The issue will happen in similar situations, e.g. Put(key, t=T) in WAL1 and Put(key, t=T) in WAL2.
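The scenario above can be reproduced with a small standalone simulation (the `Cell` model and `newest` helper are illustrative, not HBase classes): when two edits share the same rowkey and timestamp, the reader effectively picks the one with the larger sequence id, so a replayed edit that receives a fresh seqid shadows a newer user put.

```java
import java.util.*;

// Hypothetical simulation of the ordering bug; names are illustrative.
public class ReplayOrderDemo {
    // A cell is (value, seqId); for the same rowkey + timestamp the
    // reader returns the cell with the largest sequence id.
    public record Cell(String value, long seqId) {}

    public static Cell newest(List<Cell> cells) {
        return cells.stream()
                .max(Comparator.comparingLong(Cell::seqId))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Cell> cells = new ArrayList<>();
        cells.add(new Cell("A", 100)); // replayed first half of the log
        cells.add(new Cell("C", 101)); // concurrent user put, newer seqid
        // memstore flush -> HFile [ C, A ], MaxSequenceId = C's seqid
        cells.add(new Cell("B", 102)); // replaying the rest of the log: B is
                                       // assigned a fresh, larger seqid
        // The stale replayed edit B now shadows the user's put C.
        System.out.println(newest(cells).value()); // prints "B", but C should win
    }
}
```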

      Below is the option (proposed by Ted) that I'd like to use:

      a) During replay, we pass the original WAL sequence number of each edit to the receiving RS
      b) In the receiving RS, we store the negated original sequence number of each WAL edit in the MVCC field of the edit's KVs
      c) Add handling of negative MVCC values in KVScannerComparator and KVComparator
      d) In the receiving RS, write the original sequence number into an optional field of the WAL file, to handle chained RS failure situations
      e) When opening a region, we add a safety bumper (a large number) so that the new sequence numbers of a newly opened region do not collide with old sequence numbers.
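Steps b), c) and e) can be sketched as follows. This is a simplified standalone sketch, not the real KeyValue/KVComparator code: a negative MVCC value carries the negated original WAL sequence number of a replayed edit, and thanks to the safety bumper a live edit's (bumped) seqid is always larger, so it wins over any replayed edit with the same key and timestamp.

```java
// Sketch of negative-MVCC handling; method names are illustrative.
public class MvccCompareSketch {
    // Effective ordering value of a cell's mvcc field: live cells use
    // their mvcc as-is; replayed cells (negative mvcc) are ranked by
    // their original WAL sequence number.
    public static long effectiveMvcc(long mvcc) {
        return mvcc < 0 ? -mvcc : mvcc;
    }

    // For two cells with identical rowkey + timestamp, order them so the
    // cell with the larger effective mvcc (the newer one) sorts first.
    public static int compareSameKeyAndTs(long mvccA, long mvccB) {
        return Long.compare(effectiveMvcc(mvccB), effectiveMvcc(mvccA));
    }
}
```

With the safety bumper of step e), a user put made during recovery gets a seqid above every old WAL seqid, so `effectiveMvcc` ranks it ahead of all replayed edits.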

      In the future, when we store sequence numbers along with KVs, we can adjust the above solution slightly to avoid overloading the MVCC field.

      The other alternative options are listed below for reference:

      Option one
      a) disallow writes during recovery
      b) during replay, we pass original wal sequence ids
      c) hold flushes until all WALs of a recovering region are replayed. The memstore should hold, because we only recover unflushed WAL edits. For edits with the same key + version, whichever has the larger sequence id wins.

      Option two
      a) During replay, we pass the original WAL sequence ids
      b) for each WAL edit, we store the edit's original sequence id along with its key.
      c) during scanning, we use the original sequence id if it's present, otherwise the store file's sequence id
      d) compaction can then keep only the put with the max sequence id
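Option two's scan-time rule can be sketched as follows (hypothetical names, not the real HBase scanner API): prefer an edit's stored original WAL sequence id, and fall back to the containing store file's sequence id when none was recorded.

```java
// Sketch of option two, steps c) and d); names are illustrative.
public class EffectiveSeqIdSketch {
    // Sentinel meaning "no original sequence id was stored with this edit".
    public static final long NO_ORIGINAL_SEQ_ID = -1;

    // Step c): the sequence id used during scanning.
    public static long effectiveSeqId(long originalSeqId, long storeFileSeqId) {
        return originalSeqId != NO_ORIGINAL_SEQ_ID ? originalSeqId : storeFileSeqId;
    }

    // Step d): of two puts with the same key + version, compaction keeps
    // the one whose effective sequence id is larger.
    public static boolean firstWins(long effSeqIdA, long effSeqIdB) {
        return effSeqIdA > effSeqIdB;
    }
}
```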

      Please let me know if you have better ideas.


        1. 8701-v3.txt
          9 kB
          Ted Yu
        2. hbase-8701-tag.patch
          26 kB
          Jeffrey Zhong
        3. hbase-8701-tag-v1.patch
          27 kB
          Jeffrey Zhong
        4. hbase-8701-tag-v2.patch
          44 kB
          Jeffrey Zhong
        5. hbase-8701-tag-v2-update.patch
          44 kB
          Jeffrey Zhong
        6. hbase-8701-v4.patch
          33 kB
          Jeffrey Zhong
        7. hbase-8701-v5.patch
          33 kB
          Jeffrey Zhong
        8. hbase-8701-v6.patch
          37 kB
          Jeffrey Zhong
        9. hbase-8701-v7.patch
          42 kB
          Jeffrey Zhong
        10. hbase-8701-v8.patch
          47 kB
          Jeffrey Zhong

              Assignee: Jeffrey Zhong
              Reporter: Jeffrey Zhong