HBASE-10615

Make LoadIncrementalHFiles skip reference files

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.96.0
    • Fix Version/s: 0.99.0
    • Component/s: None
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      There are use cases where the source of hfiles for LoadIncrementalHFiles is a FileSystem copy-out/backup of an HBase table or of archived hfiles. For example,
      1. Copy-out of hbase.rootdir, table dir, region dir (after disable) or archive dir.
      2. ExportSnapshot

      In these cases the family dir may contain reference files.
      We have hit such use cases: when trying to load the files back into HBase, we get

      Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file hdfs://HDFS-AMR/tmp/restoreTemp/117182adfe861c5d2b607da91d60aa8a/info/aed3d01648384b31b29e5bad4cd80bec.d179ab341fc68e7612fcd74eaf7cafbd
              at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:570)
              at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:594)
              at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:636)
              at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplit(LoadIncrementalHFiles.java:472)
              at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:393)
              at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:391)
              at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
              at java.util.concurrent.FutureTask.run(FutureTask.java:149)
              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
              at java.lang.Thread.run(Thread.java:738)
      Caused by: java.lang.IllegalArgumentException: Invalid HFile version: 16715777 (expected to be between 2 and 2)
              at org.apache.hadoop.hbase.io.hfile.HFile.checkFormatVersion(HFile.java:927)
              at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:426)
              at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:568)
      

      It is desirable and safe to skip these reference files, since they do not contain any real data for bulk load purposes.
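
      For illustration, here is a minimal sketch (not the actual patch) of the kind of check needed while walking a family dir. The discover() helper and the queue are hypothetical; StoreFileInfo.isReference(Path) is HBase's name-based test for reference files.

      import java.io.IOException;
      import java.util.Deque;

      import org.apache.hadoop.fs.FileStatus;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hbase.regionserver.StoreFileInfo;

      class ReferenceSkippingDiscovery {
        // Build the bulk load queue from a family dir, skipping reference
        // files, which carry no data of their own.
        static void discover(FileSystem fs, Path familyDir, Deque<Path> queue)
            throws IOException {
          for (FileStatus stat : fs.listStatus(familyDir)) {
            Path hfile = stat.getPath();
            if (StoreFileInfo.isReference(hfile)) {
              // A reference's bytes are not an HFile trailer; reading it as
              // an HFile raises the CorruptHFileException shown above.
              continue;
            }
            queue.add(hfile);
          }
        }
      }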

      1. HBASE-10615-trunk-v3.patch
        3 kB
        Jerry He
      2. HBASE-10615-trunk-v2.patch
        3 kB
        Jerry He
      3. HBASE-10615-trunk.patch
        2 kB
        Jerry He

        Activity

        Jerry He added a comment -

        Attached a patch to skip reference files in discoverLoadQueue().

        stack added a comment -

        lgtm

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12631153/HBASE-10615-trunk.patch
        against trunk revision .
        ATTACHMENT ID: 12631153

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

        +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        +1 site. The mvn site goal succeeds with this patch.

        +1 core tests. The patch passed unit tests in .

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8810//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8810//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8810//console

        This message is automatically generated.

        Matteo Bertozzi added a comment -

        I'm not convinced about it...

        Let's say that you copy hbase.rootdir; then what are you going to bulk load?
        If you bulk load only /hbase/table and you skip references or links, you lose data.

        An example for links is clone_snapshot:
        the whole table is based on links... if you try to bulk load it, you end up skipping every file...

        An example for references:
        you upload the parent region data but not the daughter reference files,
        the CatalogJanitor kicks in and the parent is removed, since there are no references to the parent,
        and your data is lost...

        Can you describe the use case in more detail, and give examples of how you tested it?

        stack added a comment -

        (Thanks for jumping in Matteo Bertozzi)

        Jerry He added a comment -

        Hi, Matteo

        Thanks for the comments!

        There are two questions here.
        1. Should the bulk load throw an error or skip the file when it sees a reference file? My argument is that we should not throw an error.
        The existence of a reference file is not an error condition.
        2. Is it safe, from the user's perspective, to skip the reference file for the purpose of bulk loading? Matteo raised the issue of possible loss of data.
        My argument is that we are fine, for these reasons:
        1) The purpose of LoadIncrementalHFiles is to load the data contained in the hfiles of a given region dir into HBase safely.
        As long as this is satisfied, we are fine for the data within this scope.
        2) If we take the broader view, i.e. consider the integrity of the entire table's data:
        the user of the bulk load tool controls the bulk loading.
        For example, the user will not copy out the links in a table cloned from a snapshot and then expect to bulk load these links to get the data.
        In the reference example, the user will bulk load the parent region too.

        you upload the parent region data but not the daughter reference files
        the CatalogJanitor kicks in and the parent is removed, since there are no references to the parent
        and your data is lost...

        Why would the data be lost? I thought the hfiles in the parent region would be added or sliced into an existing live region. The bulk load tool does not care whether the input hfile region is a split parent, right? Maybe I am missing or misunderstanding something?

        Matteo Bertozzi added a comment -

        Do you have an example of the command line you want to use to bulk load after copying hbase.rootdir?
        It is still not clear to me how you want to bulk load the table/tables.

        If you put all the files together and let LoadIncrementalHFiles split based on key,
        yeah, you can probably skip the reference files.

        Same for HFileLink: if you bulk load the files in the archive, you can skip the HFileLinks,
        but you have to manually find which files you need; otherwise you may bulk load old data from the archive which shouldn't be in the table anymore.

        If you can post the usage example you have tried, it will be easier for me to verify what I've said about the parent being removed by the CatalogJanitor.

        Jerry He added a comment -

        Let me give a practical use case, related to ExportSnapshot. You can help to see if there is any loophole.

        I take snapshots on cluster A and export them to cluster B, which serves as backup storage.
        When I want to clone the table on cluster C, I can do the following on cluster C (as an alternative to restore/clone snapshot):

        1. Construct the table based on the tableInfo, possibly pre-split based on the region info stored with the snapshot.
        2. Have a program that loops through the archived regions to bulk load the region data (sketched below).

        The parent region is in the archive, and so are the daughters, if the snapshot happened to capture that moment.
        I remember you had a JIRA to include both parent and daughters in the snapshot.

        I don't see any loss of data here. I have been testing it for a while.
        I had to change LoadIncrementalHFiles to skip the reference files if they exist, to avoid the exception posted in this JIRA.
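
        As a concrete illustration of step 2, a hypothetical restore driver (the archivedTableDir and table-name arguments are assumptions, not part of any patch) could loop over the exported region dirs on cluster C and bulk load each one with the LoadIncrementalHFiles API of that era:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileStatus;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.client.HTable;
        import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

        public class RegionByRegionRestore {
          public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            FileSystem fs = FileSystem.get(conf);
            Path archivedTableDir = new Path(args[0]); // copied-out table dir
            String tableName = args[1];                // pre-created, pre-split table

            LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
            HTable table = new HTable(conf, tableName);
            try {
              // Naively treats every subdir as a region dir holding family
              // subdirs, the layout doBulkLoad() expects; with the fix,
              // reference files inside are skipped.
              for (FileStatus region : fs.listStatus(archivedTableDir)) {
                if (!region.isDirectory()) continue;
                loader.doBulkLoad(region.getPath(), table);
              }
            } finally {
              table.close();
            }
          }
        }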

        Matteo Bertozzi added a comment -

        Yeah, that works,

        if your application does all the work of resolving the links and going to the archive to look up the right files.
        Also, creating a new table with pre-splits, instead of "copying" the old .META. with the daughter information, lets you skip the ref files without any problem.

        Can you just add a WARN or INFO (your choice) that you have skipped those files? After that I'm +1.

        Jerry He added a comment -

        Attached v2 with added LOG warns.
        There is another place where we walk through the hfiles, in createTable(), when the table does not exist. We read the files twice in this case,
        but only warn once.
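
        A minimal sketch of the warn-once idea, reusing the hypothetical walk from the description above (the committed patch may arrange this differently): thread a flag through the two passes so only one of them logs.

        import java.io.IOException;
        import java.util.Deque;

        import org.apache.commons.logging.Log;
        import org.apache.commons.logging.LogFactory;
        import org.apache.hadoop.fs.FileStatus;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.hbase.regionserver.StoreFileInfo;

        class WarnOnceDiscovery {
          private static final Log LOG = LogFactory.getLog(WarnOnceDiscovery.class);

          // warnOnSkip is false for the createTable() pre-scan and true for
          // discoverLoadQueue(), so each skipped reference is warned about once.
          static void visitHFiles(FileSystem fs, Path familyDir,
              boolean warnOnSkip, Deque<Path> queue) throws IOException {
            for (FileStatus stat : fs.listStatus(familyDir)) {
              Path hfile = stat.getPath();
              if (StoreFileInfo.isReference(hfile)) {
                if (warnOnSkip) {
                  LOG.warn("Skipping reference " + hfile + " for bulk load");
                }
                continue;
              }
              queue.add(hfile);
            }
          }
        }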

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12631601/HBASE-10615-trunk-v2.patch
        against trunk revision .
        ATTACHMENT ID: 12631601

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8839//console

        This message is automatically generated.

        Ted Yu added a comment -

        Does it make sense to introduce a config parameter to enable/disable reference file skipping?

        Jerry He added a comment -

        Rebased with latest from trunk and attached v3.

        Jerry He added a comment -

        A config parameter would probably be helpful if the bulk load tool itself could, or wanted to, 'resolve' the reference/link.
        But bulk load operates on one region dir only.

        Jerry He added a comment -

        A config parameter would probably be helpful if the bulk load tool itself could, or wanted to, 'resolve' the reference/link.

        Then it would be good to give the user an option to 'skip' or 'resolve'.
        Right now the only real option is to 'skip'.
        Or 'error', which is not very meaningful.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12631640/HBASE-10615-trunk-v3.patch
        against trunk revision .
        ATTACHMENT ID: 12631640

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

        +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        -1 findbugs. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        +1 site. The mvn site goal succeeds with this patch.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.regionserver.wal.TestLogRolling

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8843//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8843//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8843//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8843//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8843//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8843//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8843//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8843//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8843//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8843//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8843//console

        This message is automatically generated.

        Jerry He added a comment -

        The findbugs warnings and the TestLogRolling failure do not seem to be caused by the patch.
        Hi, stack, Matteo Bertozzi, Ted Yu:
        are you OK with the patch?

        Matteo Bertozzi added a comment -

        +1 for me

        Ted Yu added a comment -

        Integrated to trunk.

        Thanks Jerry for the patch.

        Hudson added a comment -

        SUCCESS: Integrated in HBase-TRUNK #4981 (See https://builds.apache.org/job/HBase-TRUNK/4981/)
        HBASE-10615 Make LoadIncrementalHFiles skip reference files (Jerry He) (tedyu: rev 1574736)

        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
        Hudson added a comment -

        FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #109 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/109/)
        HBASE-10615 Make LoadIncrementalHFiles skip reference files (Jerry He) (tedyu: rev 1574736)

        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
        Enis Soztutar added a comment -

        Closing this issue after 0.99.0 release.


          People

          • Assignee:
            Jerry He
            Reporter:
            Jerry He
          • Votes:
            0
            Watchers:
            7
