The fix for IMPALA-4172/IMPALA-3653 uses Hadoop's FileSystem.listFiles() API to recursively list all files under an HDFS table's parent directory, and then maps each file back to its corresponding partition. However, listFiles() and the associated file-to-partition mapping code buy us nothing: listFiles() is just a client-side recursive wrapper around listLocatedStatus(), so for a table with 10k partitions it still issues 10k listLocatedStatus() RPCs.
We should simplify the code to loop over all partitions and call listLocatedStatus() on each. This has the following benefits:
- Simpler code. Would have avoided bugs like
- Faster code. No need to map files to partitions.
- Easier to parallelize in the future.
- Easier to decouple table and partition loading in the future.
Keep in mind that for S3 tables we do still want to use the listFiles() API: a single recursive listing avoids issuing one request per partition and getting throttled by S3.
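To illustrate why listFiles() saves no RPCs on HDFS, here is a self-contained toy model (plain Java, no Hadoop dependency; the class, field, and path names are all illustrative, not Impala code). It counts one simulated RPC per directory listing, mirroring the fact that listFiles() recursively calls listLocatedStatus() under the hood, and also shows the extra file-to-partition mapping step the current code needs.

```java
import java.util.*;

// Toy model of the two listing strategies. Each directory listing counts
// as one "RPC", because FileSystem.listFiles() is a client-side recursive
// wrapper that issues one listLocatedStatus() RPC per directory anyway.
class ListingModel {
    // partition directory -> files in it (stands in for the NameNode state)
    static Map<String, List<String>> fs = new LinkedHashMap<>();
    static int rpcCount = 0;
    // results recorded by main() for comparison
    static int rpcsListFiles = 0;
    static int rpcsPerPartition = 0;

    // One simulated listLocatedStatus() RPC for a single directory.
    static List<String> listLocatedStatus(String dir) {
        rpcCount++;
        return fs.getOrDefault(dir, Collections.emptyList());
    }

    // Simulated listFiles(root, recursive=true): the client walks every
    // subdirectory itself, one listLocatedStatus() call per directory.
    static List<String> listFilesRecursive(String root, List<String> dirs) {
        rpcCount++; // listing the root directory itself
        List<String> all = new ArrayList<>();
        for (String dir : dirs) {
            all.addAll(listLocatedStatus(dir));
        }
        return all;
    }

    public static void main(String[] args) {
        int numPartitions = 10_000;
        List<String> partitions = new ArrayList<>();
        for (int i = 0; i < numPartitions; i++) {
            String dir = "/warehouse/tbl/p=" + i;
            partitions.add(dir);
            fs.put(dir, List.of(dir + "/file0.parq"));
        }

        // Strategy A (current code): listFiles() on the table root, then
        // map every file back to its partition directory.
        rpcCount = 0;
        List<String> files = listFilesRecursive("/warehouse/tbl", partitions);
        Map<String, List<String>> byPartition = new HashMap<>();
        for (String f : files) {
            String dir = f.substring(0, f.lastIndexOf('/'));
            byPartition.computeIfAbsent(dir, k -> new ArrayList<>()).add(f);
        }
        rpcsListFiles = rpcCount;

        // Strategy B (proposed): one listLocatedStatus() per partition;
        // results are already grouped by partition, no mapping needed.
        rpcCount = 0;
        for (String dir : partitions) {
            listLocatedStatus(dir);
        }
        rpcsPerPartition = rpcCount;

        System.out.println("listFiles RPCs: " + rpcsListFiles);
        System.out.println("per-partition RPCs: " + rpcsPerPartition);
    }
}
```

Either way the RPC count is on the order of one per directory, so on HDFS the recursive listing adds the mapping step without saving any round trips. In the real code the per-partition loop would call org.apache.hadoop.fs.FileSystem.listLocatedStatus(Path) on each partition directory, with S3 tables branching to listFiles(path, true) to get a bulk listing instead of one request per partition.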