Description
One way to access HBase from Spark is to use newAPIHadoopRDD, passing TableInputFormat as the input format class. However, there is no convenient way to set a Scan object on the job configuration, for example to apply an HBase filter.
In MapReduce, the public API TableMapReduceUtil.initTableMapperJob() (or an equivalent) accepts a Scan object, but that call does not fit naturally into a Spark program.
We need to make TableMapReduceUtil.convertScanToString() public (it is currently package private), so that a Scan object can be created, populated, converted to the string property that TableInputFormat reads, and then used by Spark.
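A sketch of the intended usage, assuming convertScanToString() has been made public; the table name "myTable" and the PrefixFilter are illustrative placeholders, and sc is an existing JavaSparkContext:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class HBaseScanExample {
  public static JavaPairRDD<ImmutableBytesWritable, Result> scanRdd(
      JavaSparkContext sc) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    conf.set(TableInputFormat.INPUT_TABLE, "myTable"); // placeholder table name

    // Create and populate a Scan, e.g. with a filter.
    Scan scan = new Scan();
    scan.setFilter(new PrefixFilter(Bytes.toBytes("row-")));

    // Serialize the Scan into the property that TableInputFormat reads.
    // This is the call this issue proposes to make public.
    conf.set(TableInputFormat.SCAN, TableMapReduceUtil.convertScanToString(scan));

    // Hand the configured job off to Spark.
    return sc.newAPIHadoopRDD(conf, TableInputFormat.class,
        ImmutableBytesWritable.class, Result.class);
  }
}
```

This sketch cannot run outside an HBase/Spark classpath; it only shows how the public method would slot into the existing newAPIHadoopRDD path without needing initTableMapperJob().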