Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Incomplete
- Affects Version/s: 2.0.0
- Fix Version/s: None
Description
Streaming dynamic allocation bounds the number of executors with spark.streaming.dynamicAllocation.minExecutors (lower bound) and spark.streaming.dynamicAllocation.maxExecutors (upper bound). However, minExecutors does not seem to be used when starting on YARN or other cluster managers. I think we should honor minExecutors as the initial number of executors at startup when streaming dynamic allocation is enabled, like what we do in core Spark dynamic allocation.
From my understanding this is an issue that should be fixed, but I'm not sure whether it is a by-design choice. What is your opinion, andrewor14 and tdas?
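For illustration, core Spark dynamic allocation picks its startup executor count as the maximum of the configured initial, minimum, and explicitly requested instance counts. A minimal sketch of applying the same rule to the streaming bounds (the function name and defaults are hypothetical, not Spark's actual API):

```python
def streaming_initial_executors(min_executors: int,
                                requested_instances: int = 0) -> int:
    """Hypothetical helper: choose the startup executor count so that
    streaming dynamic allocation's lower bound is honored, mirroring
    core dynamic allocation's max(min, requested) behavior."""
    return max(min_executors, requested_instances)

# With minExecutors=2 and no explicit executor instance request, start with 2.
print(streaming_initial_executors(2))      # -> 2
# An explicit, larger instance request still wins over the lower bound.
print(streaming_initial_executors(2, 5))   # -> 5
```

Under this scheme, starting on YARN with streaming dynamic allocation enabled would never launch fewer executors than minExecutors, which is the behavior proposed above.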