When testing a large dataset (around 1.2 TB across roughly 20k files), we noticed that the Spark driver would always run out of memory when running queries with the metadata table enabled. The OOM occurred on any query, even one touching a single partition, and happened during the split-generation phase, before any tasks started executing.
Analysis of the heap dump showed that the input format was generating millions of splits for every single file. Further analysis of the code path revealed the root cause: the metadata-enabled code path was ignoring the block size when returning FileStatus objects, setting it to 0 instead. Spark itself does not set any value for the property:
As a result, minSize ends up being 1, and with a block size of 0 the input format generates splits of 1 byte because of the logic here:
This results in millions of file-split objects being created, causing the driver to run out of memory.
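To illustrate the failure mode, here is a simplified sketch of Hadoop's split-sizing arithmetic (mirroring the `computeSplitSize` formula in `FileInputFormat`); the demo class and its helper methods are hypothetical names, not code from the affected project:

```java
// Simplified sketch of Hadoop FileInputFormat split sizing, showing why a
// FileStatus block size of 0 degenerates into one split per byte.
public class SplitSizeDemo {
    // Mirrors Hadoop: splitSize = max(minSize, min(maxSize, blockSize))
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    // Number of splits for a file of the given length (ceiling division)
    static long countSplits(long fileLength, long blockSize, long minSize, long maxSize) {
        long splitSize = computeSplitSize(blockSize, minSize, maxSize);
        return (fileLength + splitSize - 1) / splitSize;
    }

    public static void main(String[] args) {
        long minSize = 1L;              // min split size defaults to 1 when unset
        long maxSize = Long.MAX_VALUE;  // max split size defaults to Long.MAX_VALUE
        long fileLength = 128L * 1024 * 1024; // a single 128 MB file

        // Healthy case: FileStatus reports a real block size -> one split
        System.out.println(countSplits(fileLength, 128L * 1024 * 1024, minSize, maxSize));

        // Buggy case: blockSize = 0 -> splitSize = max(1, min(MAX, 0)) = 1 byte,
        // so a 128 MB file yields ~134 million split objects on the driver
        System.out.println(countSplits(fileLength, 0L, minSize, maxSize));
    }
}
```

With a zero block size, the split count grows linearly with file length (one split object per byte), which for a multi-terabyte dataset easily exhausts driver heap before any task is scheduled.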