Details

Type: Bug
Status: Resolved
Priority: Blocker
Resolution: Fixed
Fix Version/s: 3.0.0
Labels: None
Description
A Spark user reported a `FetchFailedException: Stream is corrupted` error after upgrading their workload to 3.0. The issue happens when the shuffle output data from a single task is very large (~5 GB). It was introduced by https://github.com/apache/spark/commit/abef84a868e9e15f346eea315bbab0ec8ac8e389 : `PartitionWriterStream` declared the partition length as an `int` value, while it should be a `long` value.
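For illustration only, here is a minimal, self-contained sketch (not the actual Spark code; the names and chunk size are made up) of why counting bytes written with an `int` breaks once a single partition grows past `Integer.MAX_VALUE` (about 2 GB): the counter silently wraps, so the recorded partition length no longer matches what was actually written, and readers that trust that length see a corrupted stream.

```java
// Hypothetical demo of the int-overflow behavior; not Spark's PartitionWriterStream.
public class PartitionLengthOverflowDemo {
    public static void main(String[] args) {
        long fiveGb = 5L * 1024 * 1024 * 1024;    // ~5 GB written by one task
        long chunk = 64 * 1024;                   // pretend we write 64 KB at a time

        int intCount = 0;                         // int bookkeeping: wraps past 2 GB
        long longCount = 0L;                      // long bookkeeping: stays correct
        for (long written = 0; written < fiveGb; written += chunk) {
            intCount += (int) chunk;
            longCount += chunk;
        }

        // The int counter wraps modulo 2^32: 1073741824 instead of 5368709120.
        System.out.println("int counter:  " + intCount);
        System.out.println("long counter: " + longCount);
    }
}
```

With `long` bookkeeping the recorded length matches the bytes actually written, which is why the partition length needs to be a `long`.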