Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Duplicate
- Affects Version/s: 1.2.0
- Fix Version/s: None
- Component/s: None
Description
When inserting into a Hive table from Spark SQL with dynamic partitioning, a failed task causes its retry attempts to keep failing, which eventually fails the job:

/mytable/.hive-staging_hive_2015-02-27_11-53-19_573_222-3/-ext-10000/partition=2015-02-04/part-00001 for client <ip> already exists

The retry attempt may need to clean up the output left behind by the previously failed attempt before writing to the same location.
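For context, a minimal sketch of the kind of insert that can hit this; the table, column, and partition names here are hypothetical, not taken from the report:

{code:scala}
import org.apache.spark.sql.hive.HiveContext

// sc is an existing SparkContext
val hiveContext = new HiveContext(sc)

// Enable Hive dynamic partitioning
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")

// Insert with a dynamic partition column; if a task writing one of the
// staging part files fails, its retries hit the "already exists" error above.
hiveContext.sql(
  """INSERT INTO TABLE mytable PARTITION (dt)
    |SELECT value, dt FROM source_table
  """.stripMargin)
{code}

And a sketch of the clean-up idea described above (a hypothetical helper, not Spark's actual code): before a retried attempt writes its part file, remove anything left at the same path by the failed attempt.

{code:scala}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

def cleanUpBeforeRetry(outputFile: String, conf: Configuration): Unit = {
  val path = new Path(outputFile)
  val fs = path.getFileSystem(conf)
  if (fs.exists(path)) {
    fs.delete(path, false) // non-recursive: a single leftover part file
  }
}
{code}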
Attachments
Issue Links
- duplicates
  - SPARK-8379 LeaseExpiredException when using dynamic partition with speculative execution (Resolved)
- is related to
  - SPARK-6369 InsertIntoHiveTable and Parquet Relation should use logic from SparkHadoopWriter (Resolved)
  - SPARK-3007 Add "Dynamic Partition" support to Spark Sql hive (Resolved)
- links to