Description
The Hadoop S3A staging committer has problems when more than one Spark SQL query is launched simultaneously, because it uses the job ID as the path in the cluster filesystem through which commit information is passed from tasks to the job committer.
If two queries are launched in the same second, their job IDs collide: the output of job 1 includes whatever job 2 files have been written so far, and job 2 then fails with a FileNotFoundException. A sketch of the collision follows.
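A minimal sketch of why same-second launches collide, assuming the job ID's timestamp component has only second granularity as described above; the object and method names here are illustrative, not Spark's actual internals.
{code:scala}
import java.text.SimpleDateFormat
import java.util.Date

object JobIdCollision {
  // Timestamp formatted at second granularity, so two jobs started in the
  // same second share the same prefix.
  private val format = new SimpleDateFormat("yyyyMMddHHmmss")

  // Illustrative job ID: timestamp plus a stage/sequence number.
  def illustrativeJobId(startTime: Date, stageId: Int): String =
    f"job_${format.format(startTime)}_$stageId%04d"

  def main(args: Array[String]): Unit = {
    val now = new Date()
    val id1 = illustrativeJobId(now, 0)
    val id2 = illustrativeJobId(now, 0) // second query, same second, same stage id
    assert(id1 == id2)                  // collision: both jobs resolve to the same staging path
    println(s"$id1 == $id2")
  }
}
{code}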
Proposed:
Have the job configuration set "spark.sql.sources.writeJobUUID" to the value of WriteJobDescription.uuid.
That is the property name which previously served this purpose; any committers already written against it will pick up the new value without needing any changes. A sketch of the propagation follows.
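A hedged sketch of the proposal, not the actual Spark patch: when Spark builds the Hadoop configuration for a write job, it would also publish the per-query UUID under "spark.sql.sources.writeJobUUID", so committers that already read this key see a value unique to the query rather than the second-granularity job ID. The helper name below is illustrative; jobUuid stands in for WriteJobDescription.uuid.
{code:scala}
import java.util.UUID
import org.apache.hadoop.conf.Configuration

object WriteJobUUIDPropagation {
  // Publish the per-query UUID in the Hadoop job configuration so committers
  // keyed on "spark.sql.sources.writeJobUUID" use it instead of the job ID.
  def propagateWriteJobUUID(conf: Configuration, jobUuid: String): Unit =
    conf.set("spark.sql.sources.writeJobUUID", jobUuid)

  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    propagateWriteJobUUID(conf, UUID.randomUUID().toString)
    println(conf.get("spark.sql.sources.writeJobUUID"))
  }
}
{code}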
Issue Links
- is depended upon by: SPARK-31911 Using S3A staging committer, pending uploads are committed more than once and listed incorrectly in _SUCCESS data (Resolved)
- is related to: SPARK-33402 Jobs launched in same second have duplicate MapReduce JobIDs (Resolved)
- relates to: HADOOP-17318 S3A committer to support concurrent jobs with same app attempt ID & dest dir (Resolved)