Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Abandoned
- 2.0.0
Description
The review is on Review Board: https://reviews.apache.org/r/27519/
The latest patch is "Diff Revision 2 (Latest)".
On a Hadoop cluster, a job that takes a large HBase table as input always consumes a large amount of computing resources. For example, scanning a table with 1000 regions requires a job with 1000 mappers. This patch adds support for a single mapper taking multiple regions as input.
To support multiple regions per mapper, we introduce a new configuration property, "hbase.mapreduce.scan.regionspermapper", which controls how many regions are used as input for one mapper. For example, if an HBase table has 300 regions and we set hbase.mapreduce.scan.regionspermapper = 3, then a job scanning the table will use only 300 / 3 = 100 mappers.
In this way, the number of mappers can be controlled with the following formula:
Number of Mappers = (Total region numbers) / hbase.mapreduce.scan.regionspermapper
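The formula above can be checked with a few lines of plain Java. The class and method names here are only for illustration; note that if the region count is not an exact multiple of regionspermapper, the actual split logic may round up so that the leftover regions still get a mapper, while the integer division below matches the formula as written.

```java
// Illustrative arithmetic only -- no HBase dependencies.
public class RegionsPerMapperExample {
    // Number of Mappers = (Total region numbers) / regionspermapper
    static int numMappers(int totalRegions, int regionsPerMapper) {
        return totalRegions / regionsPerMapper;
    }

    public static void main(String[] args) {
        // 300 regions with 3 regions per mapper -> 100 mappers
        System.out.println(numMappers(300, 3));
        // 1000 regions with the default of one region per mapper -> 1000 mappers
        System.out.println(numMappers(1000, 1));
    }
}
```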
This is an example of the configuration:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>
This is an example of the Java code:
TableMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, Text.class, Text.class, job);
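For context, a fuller driver might look like the sketch below. It sets the new property programmatically instead of in hbase-site.xml and then calls initTableMapperJob as above. The class names, table name, and Scan tuning values are placeholders, not part of the patch; the sketch assumes the standard org.apache.hadoop.hbase.mapreduce APIs and requires an HBase cluster to actually run.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

// Hypothetical driver showing where hbase.mapreduce.scan.regionspermapper fits in.
public class MultiRegionScanDriver {

    // Placeholder mapper; a real job would override map() with its own logic.
    static class Map extends TableMapper<Text, Text> {
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Group 3 regions per mapper, per the property introduced by this patch.
        conf.setInt("hbase.mapreduce.scan.regionspermapper", 3);

        String tablename = "mytable"; // placeholder table name

        Scan scan = new Scan();
        scan.setCaching(500);       // typical scan tuning for MR jobs
        scan.setCacheBlocks(false); // avoid polluting the block cache with a full scan

        Job job = Job.getInstance(conf, "multi-region-scan");
        TableMapReduceUtil.initTableMapperJob(
            tablename, scan, Map.class, Text.class, Text.class, job);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```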