Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
- Version: 1.4.0
- Labels: None
Description
When a stage is retried, more than one attempt of a task may run concurrently on the same executor. This currently causes problems for ShuffleMapTasks, since all attempts try to write to the same output files. This is resolved by https://github.com/apache/spark/pull/9610, which takes a first-writer-wins approach.
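The first-writer-wins idea can be illustrated with a small sketch (this is an assumption-laden illustration, not Spark's actual implementation): each attempt writes to its own temporary file, then tries to publish it to the final path with a rename that fails if the destination already exists. Later attempts detect the failure and discard their output, so exactly one attempt's files survive.

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FirstWriterWins {
    // Try to publish this attempt's temp file as the final output.
    // Files.move without REPLACE_EXISTING throws FileAlreadyExistsException
    // when the destination exists, so only the first attempt to commit wins.
    static boolean commit(Path tmp, Path dest) throws IOException {
        try {
            Files.move(tmp, dest);
            return true;                   // this attempt's output won
        } catch (FileAlreadyExistsException e) {
            Files.deleteIfExists(tmp);     // another attempt won; drop ours
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("shuffle");
        // Hypothetical shuffle output file name, for illustration only.
        Path dest = dir.resolve("shuffle_0_0_0.data");

        Path tmp1 = Files.write(dir.resolve("attempt1.tmp"), "attempt-1".getBytes());
        Path tmp2 = Files.write(dir.resolve("attempt2.tmp"), "attempt-2".getBytes());

        System.out.println(commit(tmp1, dest)); // true:  first attempt commits
        System.out.println(commit(tmp2, dest)); // false: second attempt loses
        System.out.println(new String(Files.readAllBytes(dest))); // attempt-1
    }
}
```

In the real fix the commit decision is coordinated so that map output registered with the driver always matches the files on disk; the sketch above only shows the core invariant that losing attempts never overwrite the winner's files.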
Attachments
Issue Links
- is related to
  - SPARK-7829 SortShuffleWriter writes inconsistent data & index files on stage retry (Resolved)
  - SPARK-18113 Sending AskPermissionToCommitOutput failed, driver enter into task deadloop (Resolved)
- is required by
  - SPARK-7308 Should there be multiple concurrent attempts for one stage? (Resolved)