Some algorithms require that the values passed to reduce be sorted in a particular order, but extending the key with additional fields causes them to be split across different calls to reduce. (The user then collects the values until a "real" key change is detected and then processes them.)
It would be much easier if the framework let you define a second comparator that did the grouping of values for reduces. So instead of getting calls to reduce that look like:

reduce(<a,1>, [v1]); reduce(<a,2>, [v2]); reduce(<b,1>, [v3])

you could define the grouping comparator to just compare the letters and end up with:

reduce(<a,1>, [v1, v2]); reduce(<b,1>, [v3])
which is the desired outcome. Note that this assumes the "extra" part of the key is used only for sorting, because reduce will see only the first representative of each equivalence class.
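The grouping behavior described above can be sketched in plain Java without the framework. This is only an illustration of the idea, not the actual Hadoop API: `Key`, `SORT`, `GROUP`, and `group` are hypothetical names. The full comparator orders by (letter, number); the grouping comparator compares letters only, so consecutive keys that differ only in the "extra" field land in the same reduce group, and each group keeps the first key it saw as its representative.

```java
import java.util.*;

// Illustrative sketch of a "grouping comparator" (names are hypothetical,
// not Hadoop API): keys are (letter, number) pairs.
public class GroupingSketch {
    record Key(char letter, int num) {}

    // Full sort order: letter first, then the "extra" number field.
    static final Comparator<Key> SORT =
        Comparator.comparing(Key::letter).thenComparingInt(Key::num);

    // Grouping order: letter only, so <a,1> and <a,2> form one group.
    static final Comparator<Key> GROUP =
        Comparator.comparing(Key::letter);

    // Walk the already-sorted (key, value) stream and start a new reduce
    // group whenever GROUP reports a key change; each group is keyed by
    // the first representative of its equivalence class.
    static LinkedHashMap<Key, List<String>> group(
            List<Map.Entry<Key, String>> sorted) {
        LinkedHashMap<Key, List<String>> groups = new LinkedHashMap<>();
        Key rep = null;
        for (Map.Entry<Key, String> e : sorted) {
            if (rep == null || GROUP.compare(rep, e.getKey()) != 0) {
                rep = e.getKey();
                groups.put(rep, new ArrayList<>());
            }
            groups.get(rep).add(e.getValue());
        }
        return groups;
    }
}
```

Feeding the sorted stream (<a,1>,v1), (<a,2>,v2), (<b,1>,v3) through `group` produces two groups, [v1, v2] under <a,1> and [v3] under <b,1>, matching the desired reduce calls above.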
- is duplicated by
HADOOP-686 job.setOutputValueComparatorClass(theClass) should be supported