Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 2.4.7, 3.0.2, 3.1.1, 3.2.0
- Labels: None
Description
One customer encountered an application error. From the log, it was caused by accessing a non-existent broadcast value, namely the broadcasted map statuses. There is a race condition.
After the map statuses are broadcasted, the executors obtain the serialized broadcasted map statuses. If any fetch failure happens afterwards, the Spark scheduler invalidates the cached map statuses and destroys the broadcast value holding them. When an executor then tries to deserialize the serialized map statuses and access the destroyed broadcast value, an IOException is thrown. Currently we don't catch it in MapOutputTrackerWorker, so the exception fails the application.
Instead, we should throw a fetch failure exception in such a case and let the Spark scheduler handle it.
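Below is a minimal sketch of the intended handling, not the actual Spark patch: MapStatus, MetadataFetchFailedException, and the deserialize parameter are simplified illustrative stand-ins for Spark's internals, not the real MapOutputTrackerWorker code. The point is only that an IOException raised while deserializing the (possibly destroyed) broadcasted map statuses gets converted into a fetch-failure exception instead of propagating and killing the application.
{code:scala}
import java.io.IOException

// Illustrative stand-ins; the real classes live in Spark's shuffle/scheduler code.
case class MapStatus(location: String)
class MetadataFetchFailedException(shuffleId: Int, reduceId: Int, message: String)
  extends Exception(message)

object MapStatusDeserialization {

  // If the broadcast backing the serialized map statuses was destroyed between
  // the fetch and the deserialization (the race described above), deserialization
  // fails with an IOException. Translate it into a fetch-failure exception so the
  // scheduler can invalidate the map output and rerun the map stage.
  def deserializeMapStatuses(
      shuffleId: Int,
      deserialize: () => Array[MapStatus]): Array[MapStatus] = {
    try {
      deserialize()
    } catch {
      case e: IOException =>
        throw new MetadataFetchFailedException(
          shuffleId,
          -1,
          s"Unable to deserialize broadcasted map statuses: ${e.getMessage}")
    }
  }
}
{code}
With this translation, the error surfaces as an ordinary fetch failure, which the scheduler already knows how to handle by recomputing the missing map output rather than failing the whole application.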
Attachments
Issue Links
- duplicates
  - SPARK-30849 Application failed due to failed to get MapStatuses broadcast (Resolved)
- is related to
  - SPARK-38101 MetadataFetchFailedException due to decommission block migrations (Open)
- links to