Sometimes, Hive creates a table backed by many empty files. Spark uses the InputFormat stored in the Hive metastore and does not combine these empty files, so it generates one task per empty file.
Hive, by contrast, uses CombineHiveInputFormat (hive.input.format) by default, which merges them.
So, in this case, Spark spends far more resources than Hive. Two changes are proposed:
1. Add a configuration to filter out empty InputSplits in HadoopRDD.
2. Add a configuration so users can customize the InputFormat class used by HadoopTableReader.
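As an illustrative sketch only, the two proposed switches might look like the following in spark-defaults.conf; the configuration key names here are assumptions for discussion, not final API:

```properties
# Proposal 1 (hypothetical key): skip InputSplits of length 0 when
# HadoopRDD computes its partitions, so no tasks are launched for them.
spark.hadoopRDD.ignoreEmptySplits=true

# Proposal 2 (hypothetical key): let the user override the InputFormat
# class that HadoopTableReader uses, instead of always taking the one
# recorded in the Hive metastore.
spark.sql.hive.tableReader.inputFormat=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
```

With something like this in place, reading the empty table would launch at most a handful of tasks instead of one per empty file.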