The user experience for importing data into HBase and getting a dump out of HBase is pretty poor. The existing tools, as I understand them, include:
- org.apache.hadoop.hbase.mapreduce.Import and Export,
- org.apache.hadoop.hbase.mapreduce.ImportTsv,
- org.apache.hadoop.hbase.mapreduce.CopyTable, and
- org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.
Each one provides specific features that do not necessarily overlap with the others. For instance, Import and ImportTsv could share most of their logic, using common driver code and leaving the details of the file format up to the user via a pluggable mapper. Export and CopyTable both map over a target table; only what they do with the data differs. Bulk loading via HFiles could also be a common, first-class code path, not just a special case within ImportTsv.
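To make the "common driver, pluggable format" idea concrete, here is a minimal sketch. All names here (LineParser, ImportDriver, TSV, CSV) are illustrative stand-ins, not existing HBase classes; a real implementation would emit Puts or HFiles rather than string lists.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical sketch: one shared import driver where only the
// record parser differs between tools.
public class ImportDriver {
    /** Parses one input record into its constituent fields. */
    interface LineParser extends Function<String, List<String>> {}

    // Stand-in for ImportTsv's built-in tab-separated format.
    static final LineParser TSV = line -> Arrays.asList(line.split("\t"));

    // Stand-in for a user-supplied format plugged into the same driver.
    static final LineParser CSV = line -> Arrays.asList(line.split(","));

    // Common driver logic, independent of the file format.
    static List<List<String>> run(List<String> input, LineParser parser) {
        return input.stream().map(parser).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Same driver, two formats: only the parser changes.
        System.out.println(run(List.of("r1\tv1", "r2\tv2"), TSV));
        System.out.println(run(List.of("r1,v1", "r2,v2"), CSV));
    }
}
```

The point of the sketch is that adding a new input format means writing one small parser, not cloning a whole MapReduce tool.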
The list of open issues against ImportTsv alone indicates users are using the tool, and I certainly advise it for people getting started with a new HBase deployment.
I propose a single interface for getting data into and out of HBase. It would be pluggable, allowing users to override details of their file formats and schemas. We can provide implementations that replicate existing tool behaviors as example modules. These tools are also a reasonable place, IMHO, to include support for creation and loading of snapshots.
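A rough sketch of what that single pluggable interface might look like, covering both directions (into and out of HBase). The names (TableCodec, TsvCodec, DataTool) are hypothetical and purely illustrative, not a proposed final API; the key property is that one codec serves import and export symmetrically.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of the proposed unified tool: one driver,
// with the file format supplied as a pluggable codec.
public class DataTool {
    /** Converts between an external record and a (key, value) row. */
    interface TableCodec {
        Map.Entry<String, String> decode(String record); // import path
        String encode(Map.Entry<String, String> row);    // export path
    }

    // Example codec replicating a TSV-style format.
    static class TsvCodec implements TableCodec {
        public Map.Entry<String, String> decode(String record) {
            String[] parts = record.split("\t", 2);
            return Map.entry(parts[0], parts[1]);
        }
        public String encode(Map.Entry<String, String> row) {
            return row.getKey() + "\t" + row.getValue();
        }
    }

    // Import followed by export through the same codec is lossless.
    static List<String> roundTrip(List<String> records, TableCodec codec) {
        return records.stream()
                      .map(codec::decode)
                      .map(codec::encode)
                      .collect(Collectors.toList());
    }
}
```

With this shape, the behaviors of Import, Export, and ImportTsv become example codec modules rather than separate tools.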
I started down the path of a specific tool intended to overcome some of the limitations of ImportTsv, and it has since been refactored into a more general-purpose application. Initial patches forthcoming. Comments strongly encouraged.