Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Incomplete
Affects Version/s: 1.5.0
Fix Version/s: None
Description
When the same Spark job is submitted twice via spark-submit so that the two instances run concurrently, both jobs fail: concurrent writes (Append) to a Hive table are not allowed.
ERROR InsertIntoHadoopFsRelation: Aborting job.
java.io.IOException: Failed to rename FileStatus to hdfs://nameservice1/user/hive/warehouse/aaa.db/table1t/part-r-00050-00e873af-e3ab-4730-881f-e8a1b22077e0.gz.parquet
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:371)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:384)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:326)
at parquet.hadoop.ParquetOutputCommitter.commitJob(ParquetOutputCommitter.java:46)
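For context, a minimal sketch of the kind of job that hits this (the table name is taken from the path in the error above; the column names and data are hypothetical, since the reporter's actual job is not included in the report). Submitting two instances of it at the same time makes both appends race on the same table directory:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.hive.HiveContext

object ConcurrentAppendJob {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ConcurrentAppendJob"))
    val hiveContext = new HiveContext(sc)
    import hiveContext.implicits._

    // Hypothetical input; the real job presumably builds its DataFrame from source data.
    val df = sc.parallelize(1 to 1000).map(i => (i, s"value_$i")).toDF("id", "payload")

    // Append into the existing Parquet-backed Hive table. When two instances of this
    // job commit at the same time, both go through InsertIntoHadoopFsRelation and
    // FileOutputCommitter.commitJob on the same table directory, and the losing job
    // aborts with the "Failed to rename FileStatus ..." IOException shown above.
    df.write.mode(SaveMode.Append).insertInto("aaa.table1t")

    sc.stop()
  }
}

Running, for example, spark-submit --class ConcurrentAppendJob job.jar twice in parallel reproduces the failure: both jobs stage their output under the same _temporary directory inside the table location, so when one job commits, the other finds its files already moved or removed and the rename in FileOutputCommitter.mergePaths fails.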