Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Labels: None
- Hadoop Flags: Reviewed
Description
It looks like the current write path can cause an inconsistency between the Memstore/HFile and the WAL, which can leave the slave cluster with more data than the master cluster.
The simplified write path looks like:
1. insert record into Memstore
2. write record to WAL
3. sync WAL
4. rollback Memstore if 3 fails
It is possible for the HDFS sync RPC call to fail even though the data has already been (perhaps only partially) transferred to the DataNodes, where it is eventually persisted. As a result, the handler rolls back the Memstore, and the HFile flushed later will also skip this record.
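As a rough illustration, here is a minimal Java sketch of the write path above. The MemStore and WAL interfaces are hypothetical stand-ins (not the real HBase classes); the point is only to show how a failed sync triggers a Memstore rollback even though the WAL bytes may already be persisted on the DataNodes and later shipped by replication.

{code:java}
import java.io.IOException;

/** Minimal stand-ins for the real HBase classes, for illustration only. */
interface MemStore {
  long insert(byte[] edit);
  void rollback(long seqId);
}

interface WAL {
  long append(byte[] edit) throws IOException;
  void sync(long txid) throws IOException;
}

class OldWritePath {
  private final MemStore memstore;
  private final WAL wal;

  OldWritePath(MemStore memstore, WAL wal) {
    this.memstore = memstore;
    this.wal = wal;
  }

  void write(byte[] edit) throws IOException {
    long seqId = memstore.insert(edit);  // 1. insert record into Memstore
    long txid = wal.append(edit);        // 2. write record to WAL
    try {
      wal.sync(txid);                    // 3. sync WAL (hflush to the DataNodes)
    } catch (IOException e) {
      // 4. rollback Memstore because the sync "failed". The sync RPC can fail
      // even though the bytes were (partially) shipped to the DataNodes and
      // eventually persisted, so the WAL may keep an edit that the
      // Memstore/HFile drops, and replication ships it to the slave cluster.
      memstore.rollback(seqId);
      throw e;
    }
  }
}
{code}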
==================================
This is a long-lived issue. The problem above is solved by the write path reordering, since we now sync the WAL before modifying the Memstore. But the problem may still exist, because the replication thread may read the new data before we return from hflush. See this document for more details:
https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
So we need to keep a synced length for each WAL file and tell the replication WAL reader that this length is the limit when reading that WAL file.
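The following is a minimal sketch of that idea, with hypothetical names rather than the actual HBase implementation: the WAL side records the length up to which a successful sync has completed, and the replication WAL reader uses that length as its read limit for a file that is still being written. The linked HBASE-24625 refers to the related real API, AsyncFSWAL.getLogFileSizeIfBeingWritten.

{code:java}
import java.util.OptionalLong;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch only: track the synced length of the WAL file being written and
 * expose it as an upper bound for the replication WAL reader.
 */
class SyncedLengthTracker {
  private final AtomicLong syncedLength = new AtomicLong(0);

  /** Called by the WAL writer after bytes [0, length) have been successfully synced. */
  void onSyncCompleted(long length) {
    syncedLength.accumulateAndGet(length, Math::max);
  }

  /**
   * Called by the replication WAL reader. An empty result means the file is
   * closed, so the reader may read to the physical end of file; otherwise the
   * reader must not read past the returned length, even if more bytes are
   * already visible on HDFS.
   */
  OptionalLong getSyncedLengthIfBeingWritten(boolean beingWritten) {
    return beingWritten ? OptionalLong.of(syncedLength.get()) : OptionalLong.empty();
  }
}
{code}

With a limit like this, the reader stops at the synced length instead of the file length reported by HDFS, so it never ships an edit before the corresponding sync has completed.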
Attachments
Issue Links
- breaks
  - HBASE-21503 Replication normal source can get stuck due potential race conditions between source wal reader and wal provider initialization threads. (Resolved)
  - HBASE-18845 TestReplicationSmallTests fails after HBASE-14004 (Resolved)
- is related to
  - HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length. (Resolved)
  - HBASE-28184 Tailing the WAL is very slow if there are multiple peers. (Resolved)
- relates to
  - HBASE-5954 Allow proper fsync support for HBase (Closed)
  - HBASE-14790 Implement a new DFSOutputStream for logging WAL only (Closed)
- links to