It took a while to pinpoint the cause: lines 173-181 of StandardFacetsAccumulator.
In those lines, a 'merge' is performed over categories that matched the request but reside in different partitions.
Partitions are an optimization that limits the RAM requirement per query to a constant, rather than linear in the taxonomy size (which could be millions of categories). The taxonomy is virtually "split" into partitions of constant size, a top-k is heaped from each partition, and all those top-k results are merged into a global top-k list.
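The per-partition merge described above can be sketched roughly as follows. This is a simplified illustration, not Lucene's actual code: the `CategoryCount` record and `mergeTopK` method are hypothetical stand-ins, and a bounded min-heap keeps memory proportional to k rather than to the taxonomy size.

```java
import java.util.*;

public class TopKMerge {
    // Hypothetical stand-in for a per-category count (not Lucene's API).
    record CategoryCount(String category, int count) {}

    /**
     * Merge the top-k lists produced per partition into a single global
     * top-k list, keeping memory bounded by k.
     */
    static List<CategoryCount> mergeTopK(List<List<CategoryCount>> perPartition, int k) {
        // Min-heap of size at most k: the smallest count is evicted first.
        PriorityQueue<CategoryCount> heap =
            new PriorityQueue<>(Comparator.comparingInt(CategoryCount::count));
        for (List<CategoryCount> partition : perPartition) {
            for (CategoryCount cc : partition) {
                heap.offer(cc);
                if (heap.size() > k) {
                    heap.poll(); // drop the current smallest entry
                }
            }
        }
        // Sort the surviving k entries by descending count for presentation.
        List<CategoryCount> result = new ArrayList<>(heap);
        result.sort(Comparator.comparingInt(CategoryCount::count).reversed());
        return result;
    }

    public static void main(String[] args) {
        List<List<CategoryCount>> partitions = List.of(
            List.of(new CategoryCount("author/A", 10), new CategoryCount("author/B", 7)),
            List.of(new CategoryCount("author/C", 9), new CategoryCount("author/D", 3)));
        System.out.println(mergeTopK(partitions, 2));
    }
}
```

The bug being discussed arises exactly at this merge step, when two identical requests contribute entries for the same categories.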
The proposed solution of changing hashCode and equals so that the same request would have two hash codes and not be equal to itself is very likely to break other parts of the code.
Perhaps such cases could be prevented altogether, e.g. by throwing an exception when the (exact) same request is added twice.
Is that a reasonable solution? Are there cases where it is necessary to request the same path twice?
Please note that a different count, depth, path, etc. makes a different request, so requesting "author" with count 10 and count 11 produces two different requests - and those are already handled correctly together in current versions.
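The fail-fast alternative suggested above could look roughly like this. It is only a sketch under assumed names: `FacetRequest` here is a hypothetical record standing in for the real request class, whose generated equals/hashCode make path and count both part of the identity, so "author"/10 and "author"/11 coexist while an exact duplicate is rejected at add time.

```java
import java.util.*;

public class FacetRequestList {
    // Hypothetical stand-in for a facet request (not Lucene's class).
    // The record's equals/hashCode cover both fields, so requests that
    // differ only in count are still distinct.
    record FacetRequest(String path, int numResults) {}

    private final Set<FacetRequest> seen = new HashSet<>();
    private final List<FacetRequest> requests = new ArrayList<>();

    /** Add a request, rejecting an exact duplicate up front. */
    public void add(FacetRequest r) {
        if (!seen.add(r)) {
            throw new IllegalArgumentException("Duplicate facet request: " + r);
        }
        requests.add(r);
    }

    public List<FacetRequest> requests() {
        return Collections.unmodifiableList(requests);
    }

    public static void main(String[] args) {
        FacetRequestList params = new FacetRequestList();
        params.add(new FacetRequest("author", 10));
        params.add(new FacetRequest("author", 11)); // different count: allowed
        try {
            params.add(new FacetRequest("author", 10)); // exact duplicate
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Rejecting duplicates at add time keeps equals/hashCode semantics intact, at the cost of making a previously silent (if broken) usage pattern into a hard error.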