Details
Type: Sub-task
Status: Open
Priority: Major
Resolution: Unresolved
Description
When running INSERT OVERWRITE queries, the files to be overwritten are deleted one by one. The reason is that, by default, hive.exec.stagingdir is inside the target table directory.
Ideally Hive would just delete the entire table directory in a single recursive call, but it can't do that since the staging data is also inside the directory. Instead it deletes each file one by one, which is very slow on S3, where every delete is a separate request.
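To make the difference concrete, here is a minimal sketch against the Hadoop FileSystem API (the table path is an illustrative assumption, and the staging-directory prefix is the documented default of hive.exec.stagingdir). It contrasts the per-file deletes forced by having staging data under the table location with the single recursive delete that would otherwise suffice:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OverwriteDeleteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical target table location on S3.
    Path tableDir = new Path("s3a://my-bucket/warehouse/my_table");
    FileSystem fs = tableDir.getFileSystem(conf);

    // What effectively happens today: list the directory and delete each
    // existing file individually, skipping the staging directory
    // (hive.exec.stagingdir, ".hive-staging" by default) that lives under
    // the table location. Each delete is a separate S3 request.
    for (FileStatus status : fs.listStatus(tableDir)) {
      if (!status.getPath().getName().startsWith(".hive-staging")) {
        fs.delete(status.getPath(), true);
      }
    }

    // What would suffice if the staging data lived outside the table
    // directory: one recursive delete of the whole location.
    // fs.delete(tableDir, true);
  }
}
{code}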
There are a few ways to fix this:
1: Move the staging directory outside the table location. This can be done by setting hive.exec.stagingdir to a different location when running on S3 (a configuration sketch follows this list). It would be nice if users didn't have to explicitly set this when running on S3 and things just worked out of the box. My understanding is that hive.exec.stagingdir was only added to support HDFS encryption zones. Since S3 doesn't have encryption zones, there should be no problem with using the value of hive.exec.scratchdir to store all intermediate data instead.
2: Multi-thread the delete operations (see the thread-pool sketch after this list).
3: See if S3AFileSystem can expose some type of bulk delete operation (a sketch of the underlying S3 API appears below).
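For option 1, a minimal sketch, assuming the HiveConf API and reusing hive.exec.scratchdir as the alternative location; in practice users would typically set the property in hive-site.xml or with a SET statement rather than programmatically:
{code:java}
import org.apache.hadoop.hive.conf.HiveConf;

public class StagingDirConfigSketch {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    // Point the staging directory at a location outside the table directory,
    // e.g. the scratch directory, so INSERT OVERWRITE never has to work
    // around staging data when clearing the target location.
    // Roughly equivalent to setting hive.exec.stagingdir in hive-site.xml
    // or running: SET hive.exec.stagingdir=<scratch location>;
    conf.set("hive.exec.stagingdir", conf.get("hive.exec.scratchdir"));
  }
}
{code}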
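For option 2, a rough sketch of what multi-threaded deletes could look like using a plain ExecutorService; the thread count and paths are illustrative, and this is not the actual Hive code path:
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ParallelDeleteSketch {
  public static void deleteChildrenInParallel(final FileSystem fs, Path dir, int threads)
      throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<Boolean>> futures = new ArrayList<>();
      for (FileStatus status : fs.listStatus(dir)) {
        final Path p = status.getPath();
        // Issue each delete on its own thread so the per-request S3 latency
        // overlaps instead of accumulating serially.
        futures.add(pool.submit(new Callable<Boolean>() {
          @Override
          public Boolean call() throws Exception {
            return fs.delete(p, true);
          }
        }));
      }
      for (Future<Boolean> f : futures) {
        f.get(); // propagate any delete failure
      }
    } finally {
      pool.shutdown();
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path tableDir = new Path("s3a://my-bucket/warehouse/my_table");
    deleteChildrenInParallel(tableDir.getFileSystem(conf), tableDir, 10);
  }
}
{code}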
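For option 3, the underlying S3 multi-object delete API (DeleteObjects) can remove up to 1000 keys per request, which is the kind of call a bulk delete in S3AFileSystem could build on. A sketch against the raw AWS SDK for Java, with a hypothetical bucket and keys; this is not an existing S3AFileSystem method:
{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;

public class BulkDeleteSketch {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    // A single DeleteObjects request removes many keys at once, so clearing a
    // table directory becomes a handful of round trips instead of one per file.
    DeleteObjectsRequest request = new DeleteObjectsRequest("my-bucket")
        .withKeys("warehouse/my_table/000000_0",
                  "warehouse/my_table/000001_0",
                  "warehouse/my_table/000002_0");
    s3.deleteObjects(request);
  }
}
{code}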