HBase
HBASE-8065

Bulk load can load HFiles into an HBase table, but this mechanism can't remove prior data

    Details

    • Type: Improvement
    • Status: Open
    • Priority: Critical
    • Resolution: Unresolved
    • Affects Version/s: 0.94.0
    • Fix Version/s: None
    • Component/s: IPC/RPC, mapreduce, regionserver
    • Labels: None
    • Environment:

      hadoop-1.0.2, hbase-0.94.0

    • Release Note:
      The bulk load mechanism can optionally remove the old data.

      Description

      This patch adds one more parameter to bulk load, 'need to refresh'. When this parameter is true, bulk load first cleans the old data in the HBase table, then loads the new data.
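The proposed semantics can be sketched with a minimal in-memory stand-in (the `RefreshableStore` class below is a hypothetical illustration for this ticket, not HBase API): when the refresh flag is true, existing rows are cleared before the newly loaded contents are applied; when it is false, the load behaves as today and merges into the existing data.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in illustrating the proposed behaviour:
// bulkLoad(newRows, refresh) clears existing rows first when refresh is true.
class RefreshableStore {
    private final Map<String, String> rows = new HashMap<>();

    void bulkLoad(Map<String, String> newRows, boolean refresh) {
        if (refresh) {
            rows.clear();       // remove the prior data, as the patch proposes
        }
        rows.putAll(newRows);   // then apply the freshly loaded contents
    }

    Map<String, String> snapshot() {
        return new HashMap<>(rows);
    }
}
```

With `refresh = true` the table contents are fully replaced; with `refresh = false` the new rows are merged alongside whatever was already there.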

        Activity

        Nick Dimiduk added a comment -

        HBASE-5525 provides the truncate_preserve command that does just what I suggested. It's best to use this feature instead. Does it satisfy your need?

        Nick Dimiduk added a comment -

        Good point. Perhaps a less invasive approach would be to extend the truncate API to preserve the original topology? You might also be interested in another ticket I just opened today: HBASE-8073.

        Yuan Kang added a comment -

        After the truncate operation, the table will have only one region, instead of its former split-region layout.

        Nick Dimiduk added a comment -

        What's wrong with truncating/dropping a table before loading HFiles? LoadIncrementalHFiles needs the ability to replace data?

        Ted Yu added a comment -

        I was saying that if you take write lock, there is no need to take read lock at the same time.

        Yuan Kang added a comment -

        Ted Yu: the write/read locks are taken to prevent this process from being disturbed by other operations, whether read or write.

        Ted Yu added a comment -
        +   * @param familyPaths List of Pair<byte[] column family, String hfilePath>
        

        In javadoc, there is no need to mention parameter names.

        +    this.lock.writeLock().lock();
        +    this.lock.readLock().lock();
        

        Why are the write/read locks taken consecutively?
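Ted's point can be demonstrated with a plain `java.util.concurrent` read-write lock (a self-contained sketch, not the HRegion lock from the patch): once a thread holds the write lock, no other thread can acquire the read lock, so acquiring the read lock in addition to the write lock buys nothing.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Demonstrates that the write lock alone already excludes readers,
// so there is no need to also take the read lock.
class WriteLockExcludesReaders {
    static boolean readerCanEnterWhileWriteHeld() throws InterruptedException {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        lock.writeLock().lock();          // exclusive: blocks all other readers and writers
        final boolean[] acquired = new boolean[1];
        Thread reader = new Thread(() -> {
            // tryLock fails immediately because another thread holds the write lock
            acquired[0] = lock.readLock().tryLock();
        });
        reader.start();
        reader.join(TimeUnit.SECONDS.toMillis(5));
        lock.writeLock().unlock();
        return acquired[0];
    }
}
```

So the consecutive `writeLock().lock(); readLock().lock();` pair in the patch is redundant: the write lock already provides the exclusion that the read lock would.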

        Ted Yu added a comment -

        Is your target release 0.94?

        +        return server.bulkLoadHFilesRefresh(famPaths, regionName);
        

        How do we know that the server supports this new feature ?

        Can you provide patch for trunk ?

        It would be nice if you can upload the trunk patch onto review board.

        Thanks

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12573032/LoadIncrementalHFiles-bulkload-can-clean-olddata.patch
        against trunk revision.

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4753//console

        This message is automatically generated.

        Yuan Kang added a comment -

        This patch changes the regionserver/ipc/mapreduce code to enable bulk load to remove the old data.


  People

  • Assignee: Yuan Kang
  • Reporter: Yuan Kang
  • Votes: 0
  • Watchers: 4
