While running performance tests on Pig (0.12 and 0.17), we observed a severe performance drop in a workload that reads multiple inputs through HCatLoader.
The reason is that for an MR job with multiple Hive tables as input, Pig calls setLocation on each LoadFunc (HCatLoader) instance, but only one table's information (its InputJobInfo instance) ends up tracked in the JobConf, under the config key HCatConstants.HCAT_KEY_JOB_INFO.
Each setLocation call overwrites the previously stored value, so only the last table's information is considered when Pig calls getStatistics to estimate the required reducer count.
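A minimal sketch of the overwrite behavior described above (the class, key string, and table names are illustrative stand-ins, not the real HCatalog implementation; the real code serializes an InputJobInfo object into the JobConf):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: models how repeated setLocation() calls store each
// loader's InputJobInfo under a single shared JobConf key.
public class OverwriteSketch {
    // Stand-in for HCatConstants.HCAT_KEY_JOB_INFO (value is illustrative)
    static final String HCAT_KEY_JOB_INFO = "mapreduce.lib.hcat.job.info";

    public static void main(String[] args) {
        Map<String, String> jobConf = new HashMap<>(); // stand-in for JobConf

        // Pig calls setLocation once per HCatLoader instance; each call
        // serializes its own table info into the same key, clobbering the
        // value written by the previous loader.
        jobConf.put(HCAT_KEY_JOB_INFO, "InputJobInfo{table=big_table, size=256GB}");
        jobConf.put(HCAT_KEY_JOB_INFO, "InputJobInfo{table=small_table, size=1MB}");

        // Only the last table's info survives, so getStatistics() for every
        // loader ends up reading small_table's statistics.
        System.out.println(jobConf.get(HCAT_KEY_JOB_INFO));
    }
}
```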
For example, with two input tables of 256GB and 1MB, Pig queries HCat for the size of both, but it sees either 1MB + 1MB = 2MB or 256GB + 256GB = 0.5TB, depending on the input order in the execution plan's DAG.
It should, of course, see 256.00097GB in total and accordingly use 257 reducers by default.
In the unlucky case the total appears as 2MB, and a single reducer has to struggle through the actual 256.00097GB of input.
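The reducer counts above follow from Pig's size-based estimation, which (assuming the default pig.exec.reducers.bytes.per.reducer of 1GB = 1,000,000,000 bytes) is roughly ceil(totalInputBytes / bytesPerReducer). A sketch of that arithmetic, with the class and method names being illustrative rather than Pig's actual estimator API:

```java
// Hypothetical sketch of Pig's default size-based reducer estimation:
// reducers = ceil(totalInputBytes / bytesPerReducer), where
// pig.exec.reducers.bytes.per.reducer defaults to 1GB (1,000,000,000 bytes).
public class ReducerEstimateSketch {
    static final long BYTES_PER_REDUCER = 1_000_000_000L; // Pig default

    static long estimateReducers(long totalInputBytes) {
        // Integer ceiling division
        return (totalInputBytes + BYTES_PER_REDUCER - 1) / BYTES_PER_REDUCER;
    }

    public static void main(String[] args) {
        long bigTable = 256_000_000_000L; // 256GB
        long smallTable = 1_000_000L;     // 1MB

        // Correct total (256.00097GB) -> 257 reducers
        System.out.println(estimateReducers(bigTable + smallTable));
        // Buggy "1MB counted twice" case (2MB) -> 1 reducer
        System.out.println(estimateReducers(smallTable + smallTable));
        // Buggy "256GB counted twice" case (0.5TB) -> 512 reducers
        System.out.println(estimateReducers(bigTable + bigTable));
    }
}
```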