Details
- Type: Bug
- Status: Resolved
- Priority: Blocker
- Resolution: Fixed
- Fix Version: 1.1.0
- Component: None
- Environment: Standalone EC2 Spark shell
Description
This issue was originally reported in SPARK-2916 in the context of MLlib, but we were able to reproduce it with a simple Spark shell command:
(1 to 10000).foreach { i => sc.parallelize(1 to 1000, 48).sum }
We still do not have a full understanding of the issue, but we have gleaned the following so far. When the driver runs a GC, it attempts to clean up all the broadcast blocks that have gone out of scope at once. This causes the driver to send many blocking RemoveBroadcast messages to the executors, which in turn send blocking UpdateBlockInfo messages back to the driver; both calls block until they receive the expected responses. We suspect that the high frequency at which we send these blocking messages causes either dropped messages or an internal deadlock somewhere.
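The blocking round trips described above can be sketched with plain Scala futures. This is a hypothetical model, not Spark's actual ContextCleaner or BlockManager code: removeBroadcast here is a stand-in for the executor-side ack, and each Await.result models the driver blocking on one RemoveBroadcast before issuing the next, so a single GC that frees many broadcasts serializes many round trips on the cleaner thread.

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

implicit val ec: ExecutionContext = ExecutionContext.global

// Hypothetical stand-in for an executor acknowledging a RemoveBroadcast
// message; the sleep models executor-side work plus the network round trip.
def removeBroadcast(id: Long): Future[Boolean] = Future {
  Thread.sleep(1)
  true
}

val start = System.nanoTime()
// The "driver" blocks on each ack before sending the next message, as the
// cleaner did: 50 out-of-scope broadcasts mean 50 back-to-back blocking waits.
val acks = (1L to 50L).map(id => Await.result(removeBroadcast(id), 5.seconds))
val elapsedMs = (System.nanoTime() - start) / 1000000L

assert(acks.forall(identity))
assert(elapsedMs >= 50) // the blocking waits accumulate serially
```

In the real code path these were Akka ask calls with timeouts rather than plain futures, so each blocking wait is also a point where a dropped message surfaces as a timeout instead of a stall.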
Unfortunately, reproducibility is highly environment-dependent: for instance, we were able to reproduce it on a 6-node cluster in us-west-2, but not in us-west-1.
Attachments
Issue Links
- contains
  - SPARK-2916 [MLlib] While running regression tests with dense vectors of length greater than 1000, the treeAggregate blows up after several iterations (Resolved)
- relates to
  - SPARK-3139 Akka timeouts from ContextCleaner when cleaning shuffles (Resolved)
- links to