Details
- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 1.5.0
- Fix Version/s: None
- Environment: Spark 1.5.0 RC1, CentOS 6, Oracle Java 7
Description
When running Spark in coarse-grained mode with the external shuffle service and dynamic allocation enabled, the driver does not release executors if a dataset is cached.
The console output does warn about this:
> 15/08/26 17:29:58 WARN SparkContext: Dynamic allocation currently does not support cached RDDs. Cached data for RDD 9 will be lost when executors are removed.
However, even after the default idle timeout of 1 minute, the executors are not released. When I perform the same initial setup (loading the data, etc.) but without caching, the executors are released as expected.
Is this intended behaviour?
If it is, the console warning is misleading, since it implies the executors will be removed despite the cached data.
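For reference, a minimal launch configuration along these lines reproduces the setup described above; the master URL and application jar are placeholders, and the timeout shown is simply the documented default made explicit:

```shell
# Launch with dynamic allocation and the external shuffle service enabled.
# (The external shuffle service must also be running on each worker node.)
spark-submit \
  --master spark://master:7077 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s \
  example-app.jar
```

With this configuration, an application that calls `.cache()` on an RDD keeps its executors past the idle timeout, while the same application without caching releases them.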