Thanks for the review.

"Why do we need to set the actual extra bytes and records proportional to nMaps and nReds?"
If the spec expects 0 bytes/records, then the spec data that must still be written for each reduce needs to be forgiven, i.e. excluded from the expected counts. The amount of extra data will be proportional to the number of maps/reduces.
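To make the proportionality concrete, here is a minimal sketch (the class, method names, and record size are hypothetical, not from the actual patch): every reduce receives one spec record from every map, so even a job configured for 0 bytes/records carries nMaps * nReds spec records of overhead.

```java
// Hypothetical sketch of the spec-data overhead described above.
// None of these names come from the actual code; the point is only
// that the extra data scales with nMaps * nReds.
public class SpecOverhead {

  // One spec record flows from each map to each reduce, regardless of
  // how many payload bytes/records the job itself is configured to emit.
  static long extraSpecRecords(int nMaps, int nReds) {
    return (long) nMaps * nReds;
  }

  // Total overhead bytes, assuming a fixed per-record spec size.
  static long extraSpecBytes(int nMaps, int nReds, int specRecordSize) {
    return extraSpecRecords(nMaps, nReds) * specRecordSize;
  }

  public static void main(String[] args) {
    // A job with 0 configured bytes/records still ships spec data:
    // 4 maps * 2 reduces = 8 records; at 16 bytes each, 128 bytes.
    System.out.println(extraSpecRecords(4, 2)); // 8
    System.out.println(extraSpecBytes(4, 2, 16)); // 128
  }
}
```

This is why a test asserting exactly 0 bytes/records would fail without forgiving the per-reduce spec data.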
However, this is adjacent to some sloppiness in the map output, where the spec data is not written as part of the output proper, but rather as overhead. While the special case will still exist, right now it applies to all jobs. Since the test still needs to tolerate the 0-byte/0-record cases, I was planning to tighten up the shuffle in a separate issue.