Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 2.0.0-alpha-1
- Labels: None
- Hadoop Flags: Reviewed
Description
As commented on HBASE-16466:
When the source HBase cluster, the peer HBase cluster, and the YARN cluster are located on three different HDFS clusters, there is a problem.
When restoring the snapshot into the tmp dir, the region is created by the following code (HRegion#createHRegion):
public static HRegion createHRegion(final HRegionInfo info, final Path rootDir,
    final Configuration conf, final TableDescriptor hTableDescriptor, final WAL wal,
    final boolean initialize) throws IOException {
  LOG.info("creating HRegion " + info.getTable().getNameAsString() + " HTD == " + hTableDescriptor
      + " RootDir = " + rootDir + " Table name == " + info.getTable().getNameAsString());
  FileSystem fs = FileSystem.get(conf); // <------- here the code uses the fs.defaultFS configuration to create the region
  Path tableDir = FSUtils.getTableDir(rootDir, info.getTable());
  HRegionFileSystem.createRegionOnFileSystem(conf, fs, tableDir, info);
  HRegion region = HRegion.newHRegion(tableDir, wal, fs, conf, info, hTableDescriptor, null);
  if (initialize) region.initialize(null);
  return region;
}
When the source cluster and the peer cluster are located on two different file systems, their fs.defaultFS values differ, so restoring a snapshot into the tmp dir will fail for at least one of the clusters. After I added the following fix, it works fine for me.
-FileSystem fs = FileSystem.get(conf);
+FileSystem fs = rootDir.getFileSystem(conf);
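To see why the one-line change matters, here is a minimal sketch of the difference between the two calls. The cluster URIs and the restore path below are hypothetical placeholders, and the snippet only assumes the Hadoop HDFS client libraries are on the classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsResolutionSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical: the job runs with the source cluster as its default file system.
    conf.set("fs.defaultFS", "hdfs://source-cluster:8020");

    // Hypothetical tmp dir on the peer cluster where its snapshot is restored.
    Path restoreRootDir = new Path("hdfs://peer-cluster:8020/tmp/verify-rep-restore");

    // FileSystem.get(conf) always resolves against fs.defaultFS, i.e. the source cluster here.
    FileSystem defaultFs = FileSystem.get(conf);
    // Path#getFileSystem resolves against the scheme/authority of the path itself,
    // so it returns a client for the peer cluster even though fs.defaultFS points elsewhere.
    FileSystem rootDirFs = restoreRootDir.getFileSystem(conf);

    System.out.println("FileSystem.get(conf)        -> " + defaultFs.getUri());
    System.out.println("rootDir.getFileSystem(conf) -> " + rootDirFs.getUri());
  }
}

The rootDir passed into HRegion#createHRegion already carries the scheme and authority of the cluster that holds the restored snapshot, so rootDir.getFileSystem(conf) returns the correct FileSystem even when fs.defaultFS points at a different cluster, which is what the fix above relies on.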
Attachments
Issue Links
- is related to
  - HBASE-18452 VerifyReplication by Snapshot should cache HDFS token before submit job for kerberos env. (Resolved)
  - HBASE-16466 HBase snapshots support in VerifyReplication tool to reduce load on live HBase cluster with large tables (Resolved)