Description
We're having a discussion on the mailing list about replicating, in an offline fashion, the data from a cluster that was shut down. The motivation could be that you don't want to bring HBase back up but still need that data on the slave cluster.
So I have this idea of a tool that would be run on the master cluster while it is down, although it could also run at any time. Basically it would be able to read the replication state of each master region server, finish replicating what's missing to all the slaves, and then clear that state in ZooKeeper.
The code that handles replication does most of that already; see ReplicationSourceManager and ReplicationSource. Basically, when ReplicationSourceManager.init() is called, it will check all the queues in ZK and try to grab those that aren't attached to a live region server. If the whole cluster is down, it will grab all of them.
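To make the queue-claiming step concrete, here is a minimal sketch of the discovery part only: a one-shot ZooKeeper client that lists the recorded replication queues and flags the ones whose owning region server is no longer alive. This is not HBase code; the connection string and the /hbase/rs and /hbase/replication/rs znode paths are assumptions based on the default layout and should be adjusted to your zookeeper.znode.parent setting.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

/**
 * Illustrative sketch of what the proposed tool would do before handing
 * queues to ReplicationSourceManager: list the replication queues in ZK
 * and find the ones whose owner is no longer a live region server.
 */
public class OrphanedQueueFinder {

  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("zkhost:2181", 30000, new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        // No-op watcher; we only do one-shot reads.
      }
    });

    // Region servers that are currently up (empty if the cluster is down).
    Set<String> liveServers =
        new HashSet<String>(zk.getChildren("/hbase/rs", false));

    // Every region server that still has replication queues recorded in ZK.
    List<String> queueOwners = zk.getChildren("/hbase/replication/rs", false);

    for (String owner : queueOwners) {
      if (!liveServers.contains(owner)) {
        // These are the queues the tool would claim, drain to the slave
        // cluster, and then delete. With the whole cluster down, every
        // owner ends up in this branch.
        System.out.println("Orphaned replication queues under: "
            + "/hbase/replication/rs/" + owner);
      }
    }

    zk.close();
  }
}
```

The actual claiming and draining would be left to ReplicationSourceManager/ReplicationSource as described above; the sketch only shows where the "grab the queues that aren't attached to a region server" decision comes from.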
The beautiful thing here is that you could start that tool on all your machines and the load would be spread out, though that might not be a big concern if replication wasn't lagging, since it would only take a few seconds to finish replicating the missing data for each region server.
I'm guessing when starting ReplicationSourceManager you'd give it a fake region server ID, and you'd tell it not to start its own source.
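As a rough illustration of the "fake region server ID" idea, the fragment below builds an identifier in the hostname,port,startcode shape that HBase uses for server names. The class, method, host, and port are hypothetical; the comments describe how the manager would be configured around such an ID, not an existing API.

```java
/**
 * Sketch only: how the tool might identify itself in ZooKeeper. HBase writes
 * server names as "hostname,port,startcode", so a fabricated name that no
 * live region server can collide with lets the tool own the claimed queues
 * while it drains them.
 */
public class FakeServerId {

  /** Build a server-name-shaped ID for the sync-up tool. */
  public static String newFakeId(String host) {
    // Using the current time as the "startcode" keeps repeated runs distinct.
    return host + ",60020," + System.currentTimeMillis();
  }

  public static void main(String[] args) {
    System.out.println("Tool would register as: " + newFakeId("syncup-tool-host"));
    // A ReplicationSourceManager started under this ID would be told not to
    // create a source for its own (non-existent) WAL, and only to adopt the
    // queues of the dead region servers it finds in ZK.
  }
}
```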
FWIW, the main difference in how replication is handled between Apache's HBase and Facebook's is that the latter always does it separately from HBase itself. This jira isn't about doing that.
Attachments
Issue Links
- relates to HBASE-10249: TestReplicationSyncUpTool fails because failover takes too long (Closed)