(Oops... yes, Doug anticipated this in his comment and I didn't read it very carefully.)
Presumably the limit could be made dynamic, e.g. max(static_limit, number of cores in the cluster / number of active jobs).
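To make the idea concrete, here is a minimal sketch of that dynamic limit. The names (static_limit, cluster_cores, active_jobs, dynamic_slot_limit) are illustrative only, not real Hadoop configuration keys or APIs:

```python
def dynamic_slot_limit(static_limit: int, cluster_cores: int, active_jobs: int) -> int:
    """Per-job slot cap: never below the static limit, but rises to a
    fair share of the cluster's cores when few jobs are running."""
    if active_jobs <= 0:
        # No competing jobs; fall back to the static floor.
        return static_limit
    fair_share = cluster_cores // active_jobs
    return max(static_limit, fair_share)
```

With a 400-core cluster and a static limit of 10, four active jobs each get up to 100 slots, while with 100 active jobs the static floor of 10 applies.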
On further reflection, I should note that my big jobs are already limited in pretty much the way Doug suggests: they process a few large, unsplittable files, which caps the number of map slots these big jobs can eat up.
The result is pretty OK. My little jobs, with lots of maps, can slip through the cracks most of the time, and everything runs pretty well.