Hive / HIVE-14269 Performance optimizations for data on S3 / HIVE-15215

Investigate if staging data on S3 can always go under the scratch dir


Details

    • Type: Sub-task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Hive
    • Labels: None

Description

      When running INSERT OVERWRITE queries, the files to be overwritten are deleted one by one. The reason is that, by default, hive.exec.stagingdir is inside the target table directory.

      Ideally Hive would just delete the entire table directory with a single recursive call, but it can't do that, since the staging data is also inside the directory. Instead it deletes each file one by one, which is very slow on S3, where every delete is a separate remote request. The sketch below illustrates the pattern.
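
      A minimal sketch (not Hive's actual code) of this per-file delete pattern, against a hypothetical s3a:// table path; .hive-staging is the default hive.exec.stagingdir prefix:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PerFileDelete {
  public static void main(String[] args) throws Exception {
    Path tableDir = new Path("s3a://bucket/warehouse/t");   // hypothetical
    FileSystem fs = tableDir.getFileSystem(new Configuration());
    for (FileStatus st : fs.listStatus(tableDir)) {
      // The staging dir lives under tableDir and must survive, so one
      // recursive delete of tableDir is impossible; skip it and remove
      // everything else entry by entry.
      if (!st.getPath().getName().startsWith(".hive-staging")) {
        fs.delete(st.getPath(), true);   // one remote round trip per entry
      }
    }
  }
}
{code}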

      There are a few ways to fix this:

      1: Move the staging directory outside the table location. This can be done by setting hive.exec.stagingdir to a different location when running on S3 (first sketch after this list). It would be nice if users didn't have to set this explicitly when running on S3 and things just worked out of the box. My understanding is that hive.exec.stagingdir was only added to support HDFS encryption zones. Since S3 doesn't have encryption zones, there should be no problem with using the value of hive.exec.scratchdir to store all intermediate data instead.

      2: Multi-thread the delete operations (second sketch below).

      3: See if S3AFileSystem can expose some type of bulk delete operation; the underlying S3 DeleteObjects API already removes up to 1,000 keys per request (third sketch below).
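
      A minimal sketch of option 1, assuming the HiveConf variables ConfVars.STAGINGDIR and ConfVars.SCRATCHDIR (which back hive.exec.stagingdir and hive.exec.scratchdir); it only shows reusing the scratch dir for staging, not automatic detection of S3:

{code:java}
import org.apache.hadoop.hive.conf.HiveConf;

public class StagingUnderScratch {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    // Point the staging dir at the scratch dir. S3 has no HDFS encryption
    // zones, so the original reason for a table-local staging dir is gone.
    conf.setVar(HiveConf.ConfVars.STAGINGDIR,
        conf.getVar(HiveConf.ConfVars.SCRATCHDIR));
  }
}
{code}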
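      A minimal sketch of option 2: the same per-file deletes, issued from a fixed-size thread pool instead of sequentially. The pool size and table path are illustrative; Hadoop FileSystem instances are designed to be shared across threads:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ParallelDelete {
  public static void main(String[] args) throws Exception {
    Path tableDir = new Path("s3a://bucket/warehouse/t");    // hypothetical
    FileSystem fs = tableDir.getFileSystem(new Configuration());
    ExecutorService pool = Executors.newFixedThreadPool(16); // illustrative size
    List<Future<Boolean>> pending = new ArrayList<>();
    for (FileStatus st : fs.listStatus(tableDir)) {
      if (!st.getPath().getName().startsWith(".hive-staging")) {
        Path p = st.getPath();
        pending.add(pool.submit(() -> fs.delete(p, true)));  // delete concurrently
      }
    }
    for (Future<Boolean> f : pending) {
      f.get();   // surface any delete failure
    }
    pool.shutdown();
  }
}
{code}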
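      A minimal sketch of option 3, going through the AWS SDK directly since S3AFileSystem does not expose a bulk delete operation here; the bucket and key names are hypothetical:

{code:java}
import java.util.Arrays;
import java.util.List;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;

public class BulkDelete {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    List<String> keys = Arrays.asList(
        "warehouse/t/000000_0", "warehouse/t/000001_0");   // hypothetical keys
    // One DeleteObjects request removes up to 1,000 keys in a single
    // round trip, replacing that many individual DELETE calls.
    s3.deleteObjects(new DeleteObjectsRequest("bucket")
        .withKeys(keys.toArray(new String[0])));
  }
}
{code}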

Attachments

Activity

People

    Assignee: Unassigned
    Reporter: Sahil Takiar (stakiar)
    Votes: 0
    Watchers: 2

Dates

    Created:
    Updated: