Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Fixed
Description
It appears that restoring an incremental backup of a table that no longer exists, using the "-m" parameter to restore it under a new name, results in an error in the restore client.
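For reference, the same restore can presumably also be triggered programmatically through the backup admin API (the CLI-based reproduction steps follow below). This is only a sketch: the RestoreRequest builder method names, including the misspelled withOvewrite, are written from memory of the hbase-backup API and should be double-checked against 2.6.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.backup.BackupAdmin;
import org.apache.hadoop.hbase.backup.RestoreRequest;
import org.apache.hadoop.hbase.backup.impl.BackupAdminImpl;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class IncrementalRestoreRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
        BackupAdmin admin = new BackupAdminImpl(conn)) {
      // Mirrors: bin/hbase restore file:/tmp/hbase-backup <ID> -t "test" -m "test-restored"
      RestoreRequest request = new RestoreRequest.Builder()
        .withBackupRootDir("file:/tmp/hbase-backup")
        .withBackupId(args[0]) // the incremental backup id, e.g. from `hbase backup history`
        .withFromTables(new TableName[] { TableName.valueOf("test") })
        .withToTables(new TableName[] { TableName.valueOf("test-restored") })
        .withOvewrite(false) // (sic) assumed spelling of the builder method
        .build();
      admin.restore(request); // fails in the incremental phase, as in the log below
    }
  }
}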
Reproduction steps:
Build & start HBase:
mvn clean install -Phadoop-3.0 -DskipTests
bin/start-hbase.sh
In the HBase shell, create a table and insert some values:
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
put 'test', 'row2', 'cf:b', 'value2'
put 'test', 'row3', 'cf:c', 'value3'
scan 'test'
Create a full backup:
bin/hbase backup create full file:/tmp/hbase-backup
Adjust some data through the HBase shell:
put 'test', 'row1', 'cf:a', 'value1-new'
scan 'test'
Create an incremental backup:
bin/hbase backup create incremental file:/tmp/hbase-backup
Delete the original table in the HBase shell:
disable 'test'
drop 'test'
Restore the incremental backup under a new table name:
bin/hbase backup history
bin/hbase restore file:/tmp/hbase-backup <ID-of-incremental> -t "test" -m "test-restored"
This results in the following output, ending in an error:
...
2024-03-25T13:38:53,062 WARN [main {}] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2024-03-25T13:38:53,174 INFO [main {}] Configuration.deprecation: hbase.client.pause.cqtbe is deprecated. Instead, use hbase.client.pause.server.overloaded
2024-03-25T13:38:53,554 INFO [main {}] impl.RestoreTablesClient: HBase table test-restored does not exist. It will be created during restore process
2024-03-25T13:38:53,593 INFO [main {}] impl.RestoreTablesClient: Restoring 'test' to 'test-restored' from full backup image file:/tmp/hbase-backup/backup_1711370230143/default/test
2024-03-25T13:38:53,707 INFO [main {}] util.BackupUtils: Creating target table 'test-restored'
2024-03-25T13:38:54,546 INFO [main {}] mapreduce.MapReduceRestoreJob: Restore test into test-restored
2024-03-25T13:38:54,646 INFO [main {}] mapreduce.HFileOutputFormat2: bulkload locality sensitive enabled
2024-03-25T13:38:54,647 INFO [main {}] mapreduce.HFileOutputFormat2: Looking up current regions for table test-restored
2024-03-25T13:38:54,669 INFO [main {}] mapreduce.HFileOutputFormat2: Configuring 1 reduce partitions to match current region count for all tables
2024-03-25T13:38:54,669 INFO [main {}] mapreduce.HFileOutputFormat2: Writing partition information to file:/tmp/hbase-tmp/partitions_0667b6e2-79ef-4cfe-97e1-abb204ee420d
2024-03-25T13:38:54,687 INFO [main {}] compress.CodecPool: Got brand-new compressor [.deflate]
2024-03-25T13:38:54,713 INFO [main {}] mapreduce.HFileOutputFormat2: Incremental output configured for tables: test-restored
2024-03-25T13:38:54,715 WARN [main {}] mapreduce.TableMapReduceUtil: The addDependencyJars(Configuration, Class<?>...) method has been deprecated since it is easy to use incorrectly. Most users should rely on addDependencyJars(Job) instead. See HBASE-8386 for more details.
2024-03-25T13:38:54,742 WARN [main {}] impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-jobtracker.properties,hadoop-metrics2.properties
2024-03-25T13:38:54,834 INFO [main {}] input.FileInputFormat: Total input files to process : 1
2024-03-25T13:38:54,853 INFO [main {}] mapreduce.JobSubmitter: number of splits:1
2024-03-25T13:38:54,964 INFO [main {}] mapreduce.JobSubmitter: Submitting tokens for job: job_local748155768_0001
2024-03-25T13:38:54,967 INFO [main {}] mapreduce.JobSubmitter: Executing with tokens: []
2024-03-25T13:38:55,076 INFO [main {}] mapred.LocalDistributedCacheManager: Creating symlink: /tmp/hadoop-dieter/mapred/local/job_local748155768_0001_0768a243-06e8-4524-8a6d-016ddd75df52/libjars <- /home/dieter/code/hbase/libjars/*
2024-03-25T13:38:55,079 WARN [main {}] fs.FileUtil: Command 'ln -s /tmp/hadoop-dieter/mapred/local/job_local748155768_0001_0768a243-06e8-4524-8a6d-016ddd75df52/libjars /home/dieter/code/hbase/libjars/*' failed 1 with: ln: failed to create symbolic link '/home/dieter/code/hbase/libjars/*': No such file or directory
2024-03-25T13:38:55,079 WARN [main {}] mapred.LocalDistributedCacheManager: Failed to create symlink: /tmp/hadoop-dieter/mapred/local/job_local748155768_0001_0768a243-06e8-4524-8a6d-016ddd75df52/libjars <- /home/dieter/code/hbase/libjars/*
2024-03-25T13:38:55,079 INFO [main {}] mapred.LocalDistributedCacheManager: Localized file:/tmp/hadoop/mapred/staging/dieter748155768/.staging/job_local748155768_0001/libjars as file:/tmp/hadoop-dieter/mapred/local/job_local748155768_0001_0768a243-06e8-4524-8a6d-016ddd75df52/libjars
2024-03-25T13:38:55,129 INFO [main {}] mapreduce.Job: The url to track the job: http://localhost:8080/
2024-03-25T13:38:55,129 INFO [main {}] mapreduce.Job: Running job: job_local748155768_0001
2024-03-25T13:38:55,129 INFO [Thread-33 {}] mapred.LocalJobRunner: OutputCommitter set in config null
2024-03-25T13:38:55,131 INFO [Thread-33 {}] output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory
2024-03-25T13:38:55,132 INFO [Thread-33 {}] output.FileOutputCommitter: File Output Committer Algorithm version is 2
2024-03-25T13:38:55,132 INFO [Thread-33 {}] output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2024-03-25T13:38:55,132 INFO [Thread-33 {}] mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2024-03-25T13:38:55,151 INFO [Thread-33 {}] mapred.LocalJobRunner: Waiting for map tasks
2024-03-25T13:38:55,151 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.LocalJobRunner: Starting task: attempt_local748155768_0001_m_000000_0
2024-03-25T13:38:55,160 INFO [LocalJobRunner Map Task Executor #0 {}] output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory
2024-03-25T13:38:55,160 INFO [LocalJobRunner Map Task Executor #0 {}] output.FileOutputCommitter: File Output Committer Algorithm version is 2
2024-03-25T13:38:55,160 INFO [LocalJobRunner Map Task Executor #0 {}] output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2024-03-25T13:38:55,167 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2024-03-25T13:38:55,197 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.MapTask: Processing split: file:/tmp/hbase-backup/backup_1711370230143/default/test/archive/data/default/test/de6966990d8204028ef5dd5156c1670e/cf/2287fb0091f24a34bb9310c8fc377831:0+5029
2024-03-25T13:38:55,226 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2024-03-25T13:38:55,226 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.MapTask: mapreduce.task.io.sort.mb: 100
2024-03-25T13:38:55,226 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.MapTask: soft limit at 83886080
2024-03-25T13:38:55,226 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.MapTask: bufstart = 0; bufvoid = 104857600
2024-03-25T13:38:55,227 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.MapTask: kvstart = 26214396; length = 6553600
2024-03-25T13:38:55,229 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2024-03-25T13:38:55,230 INFO [LocalJobRunner Map Task Executor #0 {}] mapreduce.HFileInputFormat: Initialize HFileRecordReader for file:/tmp/hbase-backup/backup_1711370230143/default/test/archive/data/default/test/de6966990d8204028ef5dd5156c1670e/cf/2287fb0091f24a34bb9310c8fc377831
2024-03-25T13:38:55,233 INFO [LocalJobRunner Map Task Executor #0 {}] mapreduce.HFileInputFormat: Seeking to start
2024-03-25T13:38:55,257 WARN [LocalJobRunner Map Task Executor #0 {}] impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2024-03-25T13:38:55,272 INFO [LocalJobRunner Map Task Executor #0 {}] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2024-03-25T13:38:55,278 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.LocalJobRunner:
2024-03-25T13:38:55,279 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.MapTask: Starting flush of map output
2024-03-25T13:38:55,279 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.MapTask: Spilling map output
2024-03-25T13:38:55,279 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.MapTask: bufstart = 0; bufend = 135; bufvoid = 104857600
2024-03-25T13:38:55,279 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214388(104857552); length = 9/6553600
2024-03-25T13:38:55,287 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.MapTask: Finished spill 0
2024-03-25T13:38:55,313 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.Task: Task:attempt_local748155768_0001_m_000000_0 is done. And is in the process of committing
2024-03-25T13:38:55,338 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.LocalJobRunner: map
2024-03-25T13:38:55,338 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.Task: Task 'attempt_local748155768_0001_m_000000_0' done.
2024-03-25T13:38:55,341 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.Task: Final Counters for attempt_local748155768_0001_m_000000_0: Counters: 22
  File System Counters
    FILE: Number of bytes read=268105
    FILE: Number of bytes written=971133
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
  Map-Reduce Framework
    Map input records=3
    Map output records=3
    Map output bytes=135
    Map output materialized bytes=147
    Input split bytes=217
    Combine input records=0
    Spilled Records=3
    Failed Shuffles=0
    Merged Map outputs=0
    GC time elapsed (ms)=0
    CPU time spent (ms)=280
    Physical memory (bytes) snapshot=412131328
    Virtual memory (bytes) snapshot=19061460992
    Total committed heap usage (bytes)=276824064
    Peak Map Physical memory (bytes)=412131328
    Peak Map Virtual memory (bytes)=19061460992
  File Input Format Counters
    Bytes Read=8701
2024-03-25T13:38:55,341 INFO [LocalJobRunner Map Task Executor #0 {}] mapred.LocalJobRunner: Finishing task: attempt_local748155768_0001_m_000000_0
2024-03-25T13:38:55,341 INFO [Thread-33 {}] mapred.LocalJobRunner: map task executor complete.
2024-03-25T13:38:55,343 INFO [Thread-33 {}] mapred.LocalJobRunner: Waiting for reduce tasks
2024-03-25T13:38:55,343 INFO [pool-11-thread-1 {}] mapred.LocalJobRunner: Starting task: attempt_local748155768_0001_r_000000_0
2024-03-25T13:38:55,346 INFO [pool-11-thread-1 {}] output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory
2024-03-25T13:38:55,346 INFO [pool-11-thread-1 {}] output.FileOutputCommitter: File Output Committer Algorithm version is 2
2024-03-25T13:38:55,346 INFO [pool-11-thread-1 {}] output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2024-03-25T13:38:55,346 INFO [pool-11-thread-1 {}] mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2024-03-25T13:38:55,366 INFO [pool-11-thread-1 {}] mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@3a7c856f
2024-03-25T13:38:55,366 WARN [pool-11-thread-1 {}] impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2024-03-25T13:38:55,373 INFO [pool-11-thread-1 {}] reduce.MergeManagerImpl: The max number of bytes for a single in-memory shuffle cannot be larger than Integer.MAX_VALUE. Setting it to Integer.MAX_VALUE
2024-03-25T13:38:55,373 INFO [pool-11-thread-1 {}] reduce.MergeManagerImpl: MergerManager: memoryLimit=11732306944, maxSingleShuffleLimit=2147483647, mergeThreshold=7743323136, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2024-03-25T13:38:55,374 INFO [EventFetcher for fetching Map Completion Events {}] reduce.EventFetcher: attempt_local748155768_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2024-03-25T13:38:55,387 INFO [localfetcher#1 {}] reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local748155768_0001_m_000000_0 decomp: 143 len: 147 to MEMORY
2024-03-25T13:38:55,388 INFO [localfetcher#1 {}] reduce.InMemoryMapOutput: Read 143 bytes from map-output for attempt_local748155768_0001_m_000000_0
2024-03-25T13:38:55,389 INFO [localfetcher#1 {}] reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 143, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->143
2024-03-25T13:38:55,389 INFO [EventFetcher for fetching Map Completion Events {}] reduce.EventFetcher: EventFetcher is interrupted.. Returning
2024-03-25T13:38:55,390 INFO [pool-11-thread-1 {}] mapred.LocalJobRunner: 1 / 1 copied.
2024-03-25T13:38:55,390 INFO [pool-11-thread-1 {}] reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2024-03-25T13:38:55,394 INFO [pool-11-thread-1 {}] mapred.Merger: Merging 1 sorted segments
2024-03-25T13:38:55,395 INFO [pool-11-thread-1 {}] mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 133 bytes
2024-03-25T13:38:55,397 INFO [pool-11-thread-1 {}] reduce.MergeManagerImpl: Merged 1 segments, 143 bytes to disk to satisfy reduce memory limit
2024-03-25T13:38:55,397 INFO [pool-11-thread-1 {}] reduce.MergeManagerImpl: Merging 1 files, 147 bytes from disk
2024-03-25T13:38:55,397 INFO [pool-11-thread-1 {}] reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
2024-03-25T13:38:55,397 INFO [pool-11-thread-1 {}] mapred.Merger: Merging 1 sorted segments
2024-03-25T13:38:55,397 INFO [pool-11-thread-1 {}] mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 133 bytes
2024-03-25T13:38:55,398 INFO [pool-11-thread-1 {}] mapred.LocalJobRunner: 1 / 1 copied.
2024-03-25T13:38:55,399 INFO [pool-11-thread-1 {}] Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2024-03-25T13:38:55,455 INFO [pool-11-thread-1 {}] mapred.Task: Task:attempt_local748155768_0001_r_000000_0 is done. And is in the process of committing
2024-03-25T13:38:55,456 INFO [pool-11-thread-1 {}] mapred.LocalJobRunner: 1 / 1 copied.
2024-03-25T13:38:55,456 INFO [pool-11-thread-1 {}] mapred.Task: Task attempt_local748155768_0001_r_000000_0 is allowed to commit now
2024-03-25T13:38:55,458 INFO [pool-11-thread-1 {}] output.FileOutputCommitter: Saved output of task 'attempt_local748155768_0001_r_000000_0' to file:/tmp/hbase-tmp/bulk_output-default-test-restored-1711370334546
2024-03-25T13:38:55,476 INFO [pool-11-thread-1 {}] mapred.LocalJobRunner: Read class java.util.TreeSet > reduce
2024-03-25T13:38:55,476 INFO [pool-11-thread-1 {}] mapred.Task: Task 'attempt_local748155768_0001_r_000000_0' done.
2024-03-25T13:38:55,476 INFO [pool-11-thread-1 {}] mapred.Task: Final Counters for attempt_local748155768_0001_r_000000_0: Counters: 29
  File System Counters
    FILE: Number of bytes read=268431
    FILE: Number of bytes written=976497
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
  Map-Reduce Framework
    Combine input records=0
    Combine output records=0
    Reduce input groups=3
    Reduce shuffle bytes=147
    Reduce input records=3
    Reduce output records=3
    Spilled Records=3
    Shuffled Maps =1
    Failed Shuffles=0
    Merged Map outputs=1
    GC time elapsed (ms)=0
    CPU time spent (ms)=190
    Physical memory (bytes) snapshot=415133696
    Virtual memory (bytes) snapshot=19068817408
    Total committed heap usage (bytes)=276824064
    Peak Reduce Physical memory (bytes)=415133696
    Peak Reduce Virtual memory (bytes)=19068817408
  Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
  File Output Format Counters
    Bytes Written=5217
2024-03-25T13:38:55,476 INFO [pool-11-thread-1 {}] mapred.LocalJobRunner: Finishing task: attempt_local748155768_0001_r_000000_0
2024-03-25T13:38:55,476 INFO [Thread-33 {}] mapred.LocalJobRunner: reduce task executor complete.
2024-03-25T13:38:56,131 INFO [main {}] mapreduce.Job: Job job_local748155768_0001 running in uber mode : false
2024-03-25T13:38:56,132 INFO [main {}] mapreduce.Job: map 100% reduce 100%
2024-03-25T13:38:56,134 INFO [main {}] mapreduce.Job: Job job_local748155768_0001 completed successfully
2024-03-25T13:38:56,147 INFO [main {}] mapreduce.Job: FILE: Number of write operations=0
  Map-Reduce Framework
    Map input records=3
    Map output records=3
    Map output bytes=135
    Map output materialized bytes=147
    Input split bytes=217
    Combine input records=0
    Combine output records=0
    Reduce input groups=3
    Reduce shuffle bytes=147
    Reduce input records=3
    Reduce output records=3
    Spilled Records=6
    Shuffled Maps =1
    Failed Shuffles=0
    Merged Map outputs=1
    GC time elapsed (ms)=0
    CPU time spent (ms)=470
    Physical memory (bytes) snapshot=827265024
    Virtual memory (bytes) snapshot=38130278400
    Total committed heap usage (bytes)=553648128
    Peak Map Physical memory (bytes)=412131328
    Peak Map Virtual memory (bytes)=19061460992
    Peak Reduce Physical memory (bytes)=415133696
    Peak Reduce Virtual memory (bytes)=19068817408
  Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
  File Input Format Counters
    Bytes Read=8701
  File Output Format Counters
    Bytes Written=5217
2024-03-25T13:38:56,254 WARN [main {}] tool.LoadIncrementalHFiles: Skipping non-directory file:/tmp/hbase-tmp/bulk_output-default-test-restored-1711370334546/_SUCCESS
2024-03-25T13:38:56,284 WARN [main {}] token.FsDelegationToken: Unknown FS URI scheme: file
2024-03-25T13:38:56,326 INFO [LoadIncrementalHFiles-0 {}] tool.LoadIncrementalHFiles: Trying to load hfile=file:/tmp/hbase-tmp/bulk_output-default-test-restored-1711370334546/cf/b0c03e04af784c77a9d12206559aa768 first=Optional[row1] last=Optional[row3]
2024-03-25T13:38:56,473 INFO [main {}] impl.RestoreTablesClient: Restoring 'test' to 'test-restored' from log dirs: file:/tmp/hbase-backup/backup_1711370245048/default/test/cf/860b10b854204226834b85212e529f29
2024-03-25T13:38:56,477 INFO [main {}] mapreduce.MapReduceRestoreJob: Restore test into test-restored
2024-03-25T13:38:56,486 ERROR [main {}] mapreduce.MapReduceRestoreJob: org.apache.hadoop.hbase.TableNotFoundException: test
org.apache.hadoop.hbase.TableNotFoundException: test
  at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:635) ~[hbase-client-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.client.HTable.getDescriptor(HTable.java:244) ~[hbase-client-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.mapreduce.MapReduceHFileSplitterJob.createSubmittableJob(MapReduceHFileSplitterJob.java:117) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.mapreduce.MapReduceHFileSplitterJob.run(MapReduceHFileSplitterJob.java:165) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreJob.run(MapReduceRestoreJob.java:84) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.util.RestoreTool.incrementalRestoreTable(RestoreTool.java:205) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.impl.RestoreTablesClient.restoreImages(RestoreTablesClient.java:186) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.impl.RestoreTablesClient.restore(RestoreTablesClient.java:229) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.impl.RestoreTablesClient.execute(RestoreTablesClient.java:265) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.restore(BackupAdminImpl.java:518) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:176) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:216) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.RestoreDriver.run(RestoreDriver.java:252) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82) ~[hadoop-common-3.3.5.jar:?]
  at org.apache.hadoop.hbase.backup.RestoreDriver.main(RestoreDriver.java:224) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
2024-03-25T13:38:56,491 ERROR [main {}] backup.RestoreDriver: Error while running restore backup
java.io.IOException: Can not restore from backup directory file:/tmp/hbase-backup/backup_1711370245048/default/test/cf/860b10b854204226834b85212e529f29 (check Hadoop and HBase logs)
  at org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreJob.run(MapReduceRestoreJob.java:103) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.util.RestoreTool.incrementalRestoreTable(RestoreTool.java:205) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.impl.RestoreTablesClient.restoreImages(RestoreTablesClient.java:186) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.impl.RestoreTablesClient.restore(RestoreTablesClient.java:229) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.impl.RestoreTablesClient.execute(RestoreTablesClient.java:265) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.restore(BackupAdminImpl.java:518) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:176) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:216) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.RestoreDriver.run(RestoreDriver.java:252) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82) ~[hadoop-common-3.3.5.jar:?]
  at org.apache.hadoop.hbase.backup.RestoreDriver.main(RestoreDriver.java:224) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
Caused by: org.apache.hadoop.hbase.TableNotFoundException: test
  at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:635) ~[hbase-client-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.client.HTable.getDescriptor(HTable.java:244) ~[hbase-client-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.mapreduce.MapReduceHFileSplitterJob.createSubmittableJob(MapReduceHFileSplitterJob.java:117) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.mapreduce.MapReduceHFileSplitterJob.run(MapReduceHFileSplitterJob.java:165) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
  at org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreJob.run(MapReduceRestoreJob.java:84) ~[hbase-backup-2.6.1-SNAPSHOT.jar:2.6.1-SNAPSHOT]
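Some analysis: the full-image phase resolves the "-m" mapping correctly (it creates and bulk-loads 'test-restored'), but in the incremental (log-dir) phase, MapReduceHFileSplitterJob.createSubmittableJob asks for the descriptor of the original table 'test', which no longer exists, per the stack trace above. The standalone snippet below is illustrative only (table names taken from the reproduction above); it performs the same lookup and fails identically against a cluster in this state:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class DescriptorLookup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
        Table table = conn.getTable(TableName.valueOf("test"))) {
      // 'test' was dropped before the restore, so this throws
      // org.apache.hadoop.hbase.TableNotFoundException: test
      // (HTable.getDescriptor -> HBaseAdmin.getTableDescriptor, as in the trace).
      // 'test-restored' exists at this point, but the incremental phase
      // apparently never substitutes the mapped name here.
      System.out.println(table.getDescriptor());
    }
  }
}

If that reading is correct, the fix would presumably be to hand the splitter job the mapped target table name for incremental images as well, the way the full-image phase already does.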