HBASE-18887

After full backup passed on hdfs root and incremental failed, full backup cannot be cleaned


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.0.0-alpha-4, 2.0.0
    • Component/s: None
    • Hadoop Flags: Reviewed

    Description


       ./bin/hbase backup create full hdfs://localhost:8020/ -t test1 

      2017-09-27 10:19:38,885 INFO [main] impl.BackupManifest: Manifest file stored to hdfs://localhost:8020/backup_1506487766386/.backup.manifest
      2017-09-27 10:19:38,937 INFO [main] impl.TableBackupClient: Backup backup_1506487766386 completed.
      Backup session backup_1506487766386 finished. Status: SUCCESS

       ./bin/hbase backup create incremental hdfs://localhost:8020/ -t test1 

      2017-09-27 10:20:48,211 INFO [main] mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/vkhandelwal/.staging/job_1506419443344_0045
      2017-09-27 10:20:48,215 ERROR [main] impl.TableBackupClient: Unexpected exception in incremental-backup: incremental copy backup_1506487845361Can not convert from directory (check Hadoop, HBase and WALPlayer M/R job logs)
      java.io.IOException: Can not convert from directory (check Hadoop, HBase and WALPlayer M/R job logs)
      at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.walToHFiles(IncrementalTableBackupClient.java:363)
      at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.convertWALsToHFiles(IncrementalTableBackupClient.java:322)
      at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:232)
      at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:601)
      at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:336)
      at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:137)
      at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:170)
      at org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:203)
      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
      at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:178)
      Caused by: java.lang.IllegalArgumentException: Can not create a Path from an empty string
      at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126)
      at org.apache.hadoop.fs.Path.<init>(Path.java:134)
      at org.apache.hadoop.util.StringUtils.stringToPath(StringUtils.java:245)
      at org.apache.hadoop.hbase.mapreduce.WALInputFormat.getInputPaths(WALInputFormat.java:301)
      at org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:274)
      at org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:264)
      at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
      at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
      at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
      at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
      at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:422)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
      at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
      at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
      at org.apache.hadoop.hbase.mapreduce.WALPlayer.run(WALPlayer.java:380)
      at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.walToHFiles(IncrementalTableBackupClient.java:354)
      ... 9 more
      2017-09-27 10:20:48,216 ERROR [main] impl.TableBackupClient: BackupId=backup_1506487845361,startts=1506487846725,failedts=1506487848216,failedphase=PREPARE_INCREMENTAL,failedmessage=Can not convert from directory (check Hadoop, HBase and WALPlayer M/R job logs)
      2017-09-27 10:20:49,919 ERROR [main] impl.TableBackupClient: Backup backup_1506487845361 failed.
      Backup session backup_1506487845361 finished. Status: SUCCESS
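
      The failure at the bottom of the trace is Hadoop's Path constructor rejecting an empty string: with the backup destination set to the filesystem root, the directory string that WALInputFormat.getInputPaths converts via StringUtils.stringToPath evidently ends up empty. A minimal sketch of just that failure mode (illustrative only, not the actual HBase code path; the class name is made up):

      {code:java}
      import org.apache.hadoop.fs.Path;

      public class EmptyPathRepro {
        public static void main(String[] args) {
          // Path.checkPathArg rejects "" with exactly the message in the trace:
          // java.lang.IllegalArgumentException: Can not create a Path from an empty string
          new Path("");
        }
      }
      {code}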

       ~/Desktop/backup-open-source/hbase-2.0.0-alpha3$ ./bin/hbase backup delete backup_150648776638 

      Please make sure that backup is enabled on the cluster. To enable backup, in hbase-site.xml, set:
      hbase.backup.enable=true
      hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner
      hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
      hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
      and restart the cluster
      2017-09-27 10:22:37,043 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
      Deleted 0 backups. Total requested: 2

       ./bin/hbase backup history | grep backup_150648776638 

      2017-09-27 10:22:18,600 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
      {ID=backup_1506487766386,Type=FULL,Tables={test1},State=COMPLETE,Start time=Wed Sep 27 10:19:27 IST 2017,End time=Wed Sep 27 10:19:38 IST 2017,Progress=100%}
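
      The same stale state can be checked from the Java API. A sketch against the backup admin classes that appear in the stack trace above (written from my reading of the hbase-backup module's BackupAdmin interface, so treat the signatures as assumptions):

      {code:java}
      import java.util.List;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.backup.BackupAdmin;
      import org.apache.hadoop.hbase.backup.BackupInfo;
      import org.apache.hadoop.hbase.backup.impl.BackupAdminImpl;
      import org.apache.hadoop.hbase.client.Connection;
      import org.apache.hadoop.hbase.client.ConnectionFactory;

      public class BackupHistoryCheck {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          try (Connection conn = ConnectionFactory.createConnection(conf);
               BackupAdmin admin = new BackupAdminImpl(conn)) {
            // The FULL session still shows as COMPLETE here,
            // matching the CLI history output above.
            List<BackupInfo> history = admin.getHistory(10);
            for (BackupInfo info : history) {
              System.out.println(info.getBackupId() + " " + info.getState());
            }
          }
        }
      }
      {code}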



      So we should have two fixes here:

      #1 Backup to the filesystem root should not be allowed at all, whether full or incremental (see the validation sketch below).
      #2 delete should work for any incorrect backup, as the backup structure might be abstracted from the user.
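
      A minimal sketch of the guard that fix #1 implies (a hypothetical helper, not the actual HBASE-18887 patch): reject the filesystem root as a backup destination before any session starts.

      {code:java}
      import java.io.IOException;
      import org.apache.hadoop.fs.Path;

      public class BackupDestinationCheck {
        // Hypothetical validation helper, for illustration only.
        static void verifyNotRoot(String backupRootDir) throws IOException {
          Path p = new Path(backupRootDir);
          // For "hdfs://localhost:8020/" getParent() is null, i.e. the path is the root.
          if (p.getParent() == null) {
            throw new IOException(
                "Backup destination must not be the filesystem root: " + backupRootDir);
          }
        }
      }
      {code}

      Fix #2 is then about making delete tolerate sessions whose on-disk layout is missing or partial, instead of refusing the whole request.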

      Attachments

        1. HBASE-18887-v1.patch (6 kB, Vladimir Rodionov)


          People

            Assignee: vrodionov (Vladimir Rodionov)
            Reporter: vishk (Vishal Khandelwal)
            Votes: 0
            Watchers: 4
