Currently, data load for the Impala development environment uses Hive to populate tpcds.store_sales. We use several INSERT statements that select from tpcds.store_sales_unpartitioned, which is loaded from text files. The inserts have this form:
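A minimal sketch of the shape, assuming the usual Hive dynamic-partition syntax (the column list is abbreviated and the partition-key range is illustrative; the real statements are generated by the dataload scripts):

    set hive.exec.dynamic.partition.mode=nonstrict;
    insert overwrite table tpcds.store_sales partition (ss_sold_date_sk)
    select ss_sold_time_sk, ss_item_sk, /* ...remaining columns... */ ss_sold_date_sk
    from tpcds.store_sales_unpartitioned
    where ss_sold_date_sk between 2450816 and 2451179;  -- one slice of the date range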
Since this inserts into a partitioned table, it creates a file per partition, and each statement touches hundreds of partitions. With the current settings, the Hive implementation of this insert keeps several hundred files open simultaneously (by my measurement, ~450). HDFS reserves a whole block for each open file (even though the resulting files are not large), and if there isn't enough disk space for all of the reservations, the inserts fail. This is a common problem in development environments, and it is currently failing the erasure coding tests.
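For a rough sense of scale: assuming the default 128 MB HDFS block size, ~450 simultaneously open files reserve about 450 × 128 MB ≈ 56 GB while the insert runs, regardless of how small the final files end up, and replication (or erasure-coding parity) on a single-machine minicluster multiplies that further, since all the datanodes share one disk.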
Impala uses clustered inserts, where the input is sorted on the partition keys and files are written one at a time (per backend). This limits the number of simultaneously open files and eliminates the corresponding disk space reservations. Switching the population of tpcds.store_sales to Impala would reduce the disk space requirement for an Impala developer environment. Alternatively, Hive likely has equivalent functionality for doing an initial sort so that only one partition needs to be written at a time; a sketch of each approach follows.
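Two possible shapes, neither verified against the dataload scripts. On the Hive side, the sort-based dynamic partition optimization (hive.optimize.sort.dynamic.partition; availability and default vary by Hive version) sorts rows by the partition key so each writer holds only one file open at a time. On the Impala side, clustered inserts are the default in recent versions, and the hint makes it explicit:

    -- Hive: pre-sort on the dynamic partition key so only one partition
    -- file is open per writer at any time
    set hive.optimize.sort.dynamic.partition=true;
    insert overwrite table tpcds.store_sales partition (ss_sold_date_sk)
    select ss_sold_time_sk, ss_item_sk, /* ...remaining columns... */ ss_sold_date_sk
    from tpcds.store_sales_unpartitioned;

    -- Impala: clustered insert (sorts by the partition columns before writing,
    -- so each backend writes one partition's file at a time)
    insert overwrite table tpcds.store_sales partition (ss_sold_date_sk)
        /* +clustered */
    select ss_sold_time_sk, ss_item_sk, /* ...remaining columns... */ ss_sold_date_sk
    from tpcds.store_sales_unpartitioned;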
Note that this only applies to the text version of store_sales, which is created from store_sales_unpartitioned. All other formats are created from the text version of store_sales. Since the text store_sales is already partitioned in the same way as each destination store_sales, Hive can be more efficient for those loads, processing a small number of partitions at a time.