Affects Version/s: None
Fix Version/s: 0.11
PIG-2334 was helpful in understanding this issue. The short version: the input file size is only computed if the path begins with a whitelisted prefix.
Because HCatalog locations use the form dbname.tablename, the input file size is never computed, and the size-based parallelism optimization breaks.
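A minimal sketch of the prefix-whitelist behavior described above (class, method, and whitelist contents here are illustrative assumptions, not Pig's actual code): the size estimate only happens for locations that look like plain file paths, so an HCatalog db.table location falls through.

```java
// Hypothetical sketch of a prefix whitelist check; names and the exact
// whitelist entries are assumptions for illustration, not Pig source.
public class PrefixWhitelistSketch {
    static final String[] WHITELIST = {"/", "hdfs:", "file:"};

    // Size is only estimated when the location starts with a known prefix.
    static boolean sizeComputable(String location) {
        for (String prefix : WHITELIST) {
            if (location.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // A plain HDFS path passes the check...
        System.out.println(sizeComputable("hdfs://nn/user/data/part-00000")); // true
        // ...but an HCatalog-style db.table location does not, so the
        // size-based reducer estimate is silently skipped.
        System.out.println(sizeComputable("default.my_table")); // false
    }
}
```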
I discovered this issue while comparing two runs of the same script: one loading regular HDFS paths and one loading HCatalog db.table names. I happened to notice the difference in the "Setting number of reducers" log line.
Possible fix: Pig should just ask the loader for the size of its inputs rather than special-casing certain location types.
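The proposed fix could look roughly like the sketch below, where the reducer estimate queries the loader instead of pattern-matching the location string. The interface and names are hypothetical, invented for illustration; they are not Pig's actual API.

```java
// Illustrative sketch of "ask the loader for its input size" instead of
// special-casing location prefixes. All names here are assumptions.
public class LoaderSizeSketch {
    interface SizeAwareLoader {
        // Total input size in bytes, or -1 if the loader cannot tell.
        long getInputSizeInBytes(String location);
    }

    // Falls back to a single reducer when the size is unknown; otherwise
    // divides input size by the per-reducer byte budget, rounding up.
    static int estimateReducers(SizeAwareLoader loader, String location,
                                long bytesPerReducer) {
        long size = loader.getInputSizeInBytes(location);
        if (size < 0) {
            return 1;
        }
        return (int) Math.max(1, (size + bytesPerReducer - 1) / bytesPerReducer);
    }

    public static void main(String[] args) {
        // Pretend an HCatalog loader reports 3 GB of input for the table.
        SizeAwareLoader hcatLoader = loc -> 3_000_000_000L;
        // With a 1 GB per-reducer budget, a db.table location now gets a
        // size-based estimate just like a plain HDFS path would.
        System.out.println(
            estimateReducers(hcatLoader, "default.my_table", 1_000_000_000L)); // 3
    }
}
```

The point of the sketch is that the location string never needs to be inspected: any loader that can report a size participates in the optimization.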