Details
Type: Improvement
Status: Closed
Priority: Major
Resolution: Fixed
Fix Version/s: 1.2.3, 2.4.0, 3.2.0, 4.0.0
Description
Scan instances can be set to use the block cache in the RegionServer via the setCacheBlocks method. For input Scans to MapReduce jobs, this should be false.
https://hbase.apache.org/book.html#perf.hbase.client.blockcache
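For reference, the Scan configuration the HBase guide recommends for MapReduce input looks roughly like the sketch below (the caching value and class name are illustrative only, not taken from the Hive code):

import org.apache.hadoop.hbase.client.Scan;

public class MapReduceInputScanSketch {
  // Builds a Scan the way the HBase performance guide suggests for
  // MapReduce input: larger per-RPC caching, block caching disabled.
  public static Scan buildInputScan() {
    Scan scan = new Scan();
    scan.setCaching(500);        // rows fetched per RPC; 500 is only an example value
    scan.setCacheBlocks(false);  // do not fill the RegionServer block cache
    return scan;
  }
}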
However, from the Hive code, we can see that this is not the case.
public static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";
...
String scanCacheBlocks = tableProperties.getProperty(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
if (scanCacheBlocks != null) {
  jobProperties.put(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
}
...
String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
if (scanCacheBlocks != null) {
  scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
}
As the code above shows, if hbase.scan.cacheblock is not specified in the SERDEPROPERTIES, setCacheBlocks is never called and the default value of the HBase Scan class is used:
/**
 * Set whether blocks should be cached for this Scan.
 * <p>
 * This is true by default. When true, default settings of the table and
 * family are used (this will never override caching blocks if the block
 * cache is disabled for that family or entirely).
 *
 * @param cacheBlocks if false, default settings are overridden and blocks
 * will not be cached
 */
public Scan setCacheBlocks(boolean cacheBlocks) {
  this.cacheBlocks = cacheBlocks;
  return this;
}
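In other words, a Scan created without an explicit setCacheBlocks call reports block caching as enabled. A small sketch to illustrate the default (not code from Hive):

import org.apache.hadoop.hbase.client.Scan;

public class ScanCacheBlocksDefault {
  public static void main(String[] args) {
    Scan scan = new Scan();                      // setCacheBlocks is never called
    System.out.println(scan.getCacheBlocks());   // prints "true"
  }
}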
Hive performs full scans of the table with MapReduce/Spark, so according to the HBase docs the default behavior here should be not to cache blocks. Hive should set this value to "false" by default unless the table's SERDEPROPERTIES override it.
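One possible way to implement that default (a sketch only, assuming the existing property name and input-format code path; not the committed patch) is to fall back to "false" when the property is absent:

// Hypothetical change in the Hive HBase input-format code path:
// default to "false" instead of leaving the Scan's built-in default (true).
String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, "false");
scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));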
-- Commands for HBase --
create 'test', 't'

CREATE EXTERNAL TABLE test(value map<string,string>, row_key string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  "hbase.columns.mapping" = "t:,:key",
  "hbase.scan.cacheblock" = "true"
);