Details
- Type: Sub-task
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Labels: hadoop-2.6.0
- Hadoop Flags: Reviewed
Description
After reading HBASE-12848 and HBASE-12934, I wrote a patch to implement CF-level storage policy.
My main purpose is to improve random-read performance for some really hot data, which usually resides in a certain column family of a big table.
Usage:
$ hbase shell
> alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
HDFS's setStoragePolicy only takes effect when a new hfile is created in a configured directory, so I had to create sub-directories (one per column family) under the region's .tmp directory and set the storage policy on them.
Besides, I had to upgrade the hadoop version to 2.6.0 because dfs.getStoragePolicy cannot easily be invoked via reflection, and I needed this API to finish my unit test.
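Because the storage-policy methods are not present in every Hadoop version HBase runs against, calls like setStoragePolicy are typically made through reflection and tolerate the method being absent (the linked HBASE-17474 deals with the resulting NoSuchMethodException noise). A minimal self-contained sketch of that invocation pattern, where MockFileSystem is a hypothetical stand-in for Hadoop's DistributedFileSystem (whose real method takes an org.apache.hadoop.fs.Path, not a String):

```java
import java.lang.reflect.Method;

// Stand-in for an HDFS FileSystem; the real target is
// org.apache.hadoop.fs.FileSystem / DistributedFileSystem.
class MockFileSystem {
    private String policy = "HOT";
    public void setStoragePolicy(String path, String policyName) {
        this.policy = policyName;
    }
    public String getPolicy() { return policy; }
}

public class StoragePolicyInvoker {
    // Invoke setStoragePolicy reflectively; return false when the
    // running filesystem class does not provide the method.
    public static boolean trySetStoragePolicy(Object fs, String path, String policy) {
        try {
            Method m = fs.getClass().getMethod("setStoragePolicy",
                    String.class, String.class);
            m.invoke(fs, path, policy);
            return true;
        } catch (ReflectiveOperationException e) {
            return false; // older Hadoop: method absent, skip silently
        }
    }

    public static void main(String[] args) {
        MockFileSystem fs = new MockFileSystem();
        boolean ok = trySetStoragePolicy(fs,
                "/hbase/data/ns/table/region/.tmp/cf", "ALL_SSD");
        System.out.println(ok + " " + fs.getPolicy()); // prints "true ALL_SSD"
    }
}
```

This is why the per-CF sub-directories under .tmp matter: the policy is attached to a directory, and hfiles inherit it only when they are created inside that directory.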
Attachments
Issue Links
- Is contained by:
  - HBASE-15161 Umbrella: Miscellaneous improvements from production usage (Closed)
- Is related to:
  - HBASE-17474 Reduce frequency of NoSuchMethodException when calling setStoragePolicy() (Closed)
  - HBASE-19858 Backport HBASE-14061 (Support CF-level Storage Policy) to branch-1 (Closed)
- Relates to:
  - HBASE-20691 Storage policy should allow deferring to HDFS (Closed)
  - HADOOP-12161 Add getStoragePolicy API to the FileSystem interface (Resolved)
  - HDFS-8361 Choose SSD over DISK in block placement (Closed)
  - HDFS-9666 Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to improve random read (Patch Available)