Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
Currently, when running INSERT statements on tables located on S3, Hive writes and reads the temporary (or intermediate) files on S3 as well.
If HDFS is still the default filesystem for Hive, then we can keep such temporary files on HDFS so that these operations run faster.
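A minimal sketch of the idea (not Hive's actual implementation; the helper class and scratch-root path below are hypothetical): when the target table lives on S3 but the default filesystem is HDFS, place the INSERT intermediates under an HDFS scratch directory, so only the final move/copy touches S3.
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ScratchDirChooser {

    // Hypothetical scratch root on the default (HDFS) filesystem.
    private static final String HDFS_SCRATCH_ROOT = "/tmp/hive-staging";

    static Path chooseScratchDir(Path tableLocation, Configuration conf) throws IOException {
        // Scheme of the filesystem that holds the target table (e.g. s3a).
        String tableScheme = tableLocation.getFileSystem(conf).getUri().getScheme();
        // Scheme of the cluster's default filesystem (fs.defaultFS).
        String defaultScheme = FileSystem.get(conf).getUri().getScheme();

        boolean tableOnS3 = tableScheme != null && tableScheme.startsWith("s3");
        boolean defaultIsHdfs = "hdfs".equalsIgnoreCase(defaultScheme);

        if (tableOnS3 && defaultIsHdfs) {
            // Keep intermediates on HDFS; only the final data lands on S3.
            return new Path(HDFS_SCRATCH_ROOT, ".hive-staging-" + System.nanoTime());
        }
        // Otherwise stage next to the table, as before.
        return new Path(tableLocation, ".hive-staging-" + System.nanoTime());
    }
}
{code}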
Attachments
Issue Links
- relates to: SPARK-21514 Hive has updated with new support for S3 and InsertIntoHiveTable.scala should update also (Resolved)
- links to