For sequence files, it is crucial that input splits are calculated around the record boundaries (sync points) of the underlying sequence file. By default, however, Hadoop computes input splits from configuration parameters alone (such as the target split size), and the resulting byte ranges may not line up with the sequence file's boundaries. Hive's HiveIndexedInputFormat provides the extra logic to recalculate each split's boundaries so that they match the sequence file's record boundaries.
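To illustrate the idea (this is a simplified sketch, not Hive's actual implementation; the function name and parameters are hypothetical), the split-alignment logic amounts to snapping each naive byte-range end forward to the next record boundary, so that no record is counted twice by adjacent splits:

```python
import bisect

def align_splits(file_len, split_size, sync_offsets):
    """Snap naive fixed-size splits to record boundaries.

    sync_offsets: sorted byte offsets at which a record block begins
    (analogous to sequence-file sync points). Each returned split
    starts exactly where the previous one ends, and every split
    boundary coincides with a record boundary, so records are neither
    skipped nor double-counted -- the "over-reporting" problem above.
    """
    splits = []
    start = 0
    while start < file_len:
        naive_end = min(start + split_size, file_len)
        # Advance the end to the first record boundary at or after
        # the naive configuration-driven end point.
        i = bisect.bisect_left(sync_offsets, naive_end)
        end = sync_offsets[i] if i < len(sync_offsets) else file_len
        splits.append((start, end))
        start = end
    return splits
```

For example, a 100-byte file with record boundaries at 0, 25, 50, and 75 and a configured split size of 30 yields splits (0, 50) and (50, 100): each naive end is pushed forward to the next boundary rather than cutting a record in half.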
We noticed this "over-reporting" behavior on data backed by sequence files. We have sample data on which we reproduced and fixed the bug, and we verified the fix by comparing query output with the input stored as a sequence file, as an RCFile, and in the regular text format. However, we have not been able to find the right place to add a unit test that would run as part of the Hive test suite. We tried writing a "clientpositive" test in the ql module, but the output was quite verbose and I could not interpret it well. Could someone please review this change and advise on how to write a test that will run as part of Hive testing?