Suppose we get 100 GB of data after cuboid building, with a setting of 10 GB per region. Currently, 10 split keys are calculated, 10 regions are created, and 10 reducers are used in the 'convert to HFile' step.
With this optimization, we could calculate 100 (or more) split keys and use all of them in the 'convert to HFile' step, but sample only 10 of them to create regions. The result is still 10 regions created, but 100 reducers are used in the 'convert to HFile' step. Of course, 100 HFiles are created as well, and each region loads 10 of them. That should be fine and should not affect query performance dramatically.
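The sampling step above can be sketched as follows. This is a minimal illustration, not Kylin's actual implementation: it assumes the fine-grained split keys are already sorted, and simply keeps every Nth key as a region boundary so that each region ends up receiving N HFiles.

```python
def sample_region_keys(fine_keys, keys_per_region):
    """Keep every `keys_per_region`-th fine-grained split key as a region boundary.

    fine_keys: sorted list of split keys used by the 'convert to HFile' reducers.
    keys_per_region: how many fine-grained splits (and thus HFiles) fall into one region.
    """
    # Take keys at positions keys_per_region-1, 2*keys_per_region-1, ...
    return fine_keys[keys_per_region - 1::keys_per_region]

# 100 fine-grained split keys drive 100 reducers / 100 HFiles ...
fine_keys = ["key%03d" % i for i in range(100)]
# ... but only 10 sampled keys define the region boundaries.
region_keys = sample_region_keys(fine_keys, 10)
```

With this scheme each region boundary coincides with one of the fine-grained split keys, so every HFile falls entirely inside a single region and no HFile needs to be split during the bulk load.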