Details
- Type: Wish
- Priority: Major
- Status: Closed
- Resolution: Fixed
Description
For users to bulk upload by writing HFiles directly to the filesystem, they currently need to write a partitioner that is intimate with how their key schema works. This issue is about providing a general partitioner: one that could never be as fair as a custom-written partitioner, but that might just work for many cases. The idea is that a user would supply the first and last keys of the dataset to upload. We'd then do BigDecimal arithmetic on the range between the start and end row keys, dividing it by the number of reducers, to come up with a key range per reducer.
(I thought jgray had done some BigDecimal work dividing keys already but I can't find it)
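A minimal sketch of what that split computation could look like. Everything here is hypothetical (the class and method names are not from any HBase API), and it uses BigInteger rather than BigDecimal, treating the right-padded row keys as unsigned big-endian integers so that integer order matches lexicographic byte order:

```java
import java.math.BigInteger;

// Hypothetical sketch of the proposed general partitioner's split logic:
// interpret the user-supplied first and last row keys as unsigned
// big-endian integers and divide the range between them evenly among
// the reducers.
public class RangeSplitter {

    // Returns (numReducers - 1) split points between startKey and
    // endKey; reducer i would handle keys in [split[i-1], split[i]).
    public static byte[][] split(byte[] startKey, byte[] endKey, int numReducers) {
        int len = Math.max(startKey.length, endKey.length);
        // Right-pad with 0x00 so both keys compare at the same width.
        BigInteger start = new BigInteger(1, rightPad(startKey, len));
        BigInteger end = new BigInteger(1, rightPad(endKey, len));
        BigInteger range = end.subtract(start);
        byte[][] splits = new byte[numReducers - 1][];
        for (int i = 1; i < numReducers; i++) {
            BigInteger point = start.add(
                range.multiply(BigInteger.valueOf(i))
                     .divide(BigInteger.valueOf(numReducers)));
            splits[i - 1] = toFixedWidth(point, len);
        }
        return splits;
    }

    private static byte[] rightPad(byte[] key, int len) {
        byte[] out = new byte[len];  // trailing bytes stay 0x00
        System.arraycopy(key, 0, out, 0, key.length);
        return out;
    }

    // Render a BigInteger as exactly len big-endian bytes, dropping the
    // sign byte toByteArray() may prepend and restoring leading zeros.
    private static byte[] toFixedWidth(BigInteger v, int len) {
        byte[] raw = v.toByteArray();
        byte[] out = new byte[len];
        int n = Math.min(raw.length, len);
        System.arraycopy(raw, raw.length - n, out, len - n, n);
        return out;
    }

    public static void main(String[] args) {
        // Four reducers over the full single-byte range 0x00..0xff.
        byte[][] splits = split(new byte[]{0x00}, new byte[]{(byte) 0xff}, 4);
        for (byte[] s : splits) {
            System.out.printf("%02x%n", s[0]);  // prints 3f, 7f, bf
        }
    }
}
```

A real version would have to decide how to handle variable-length keys and skewed key distributions; this just divides the numeric range evenly, which is exactly why it can never be as fair as a schema-aware partitioner.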