Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Version: 0.6.0
Description
Currently, we load Zeppelin's Spark configuration at runtime in the RemoteInterpreter process rather than passing it before the process starts (through the --conf arguments of spark-submit). This is fine for most Spark configuration, but for some settings it introduces weird issues, like ZEPPELIN-1242. For example, if you specify spark.master as yarn-client in spark-defaults.conf but as local on the Zeppelin side, the Spark interpreter fails to start due to this inconsistency. Another case is that spark.driver.memory won't take effect, since driver memory must be set before the driver JVM is launched.
So I propose specifying Zeppelin's Spark configuration through the --conf arguments of spark-submit.
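As a minimal sketch of the proposed behavior (the helper name and the property set are illustrative, not Zeppelin's actual implementation), the interpreter launcher could translate its Spark-side properties into `--conf` flags before invoking spark-submit:

```python
def to_spark_submit_args(properties):
    """Translate interpreter properties whose keys start with 'spark.'
    into --conf arguments for spark-submit, so they take effect before
    the driver JVM starts rather than being applied at runtime."""
    args = []
    for key, value in sorted(properties.items()):
        if key.startswith("spark."):
            args += ["--conf", "{}={}".format(key, value)]
    return args

# Hypothetical interpreter settings for illustration.
props = {
    "spark.master": "yarn-client",
    "spark.driver.memory": "4g",
    "zeppelin.spark.useHiveContext": "true",  # non-spark.* keys are skipped
}
print(" ".join(to_spark_submit_args(props)))
# --conf spark.driver.memory=4g --conf spark.master=yarn-client
```

This way, settings such as spark.driver.memory reach spark-submit on the command line, avoiding the inconsistency between spark-defaults.conf and Zeppelin-side values.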
Attachments
Issue Links
- blocks
  - ZEPPELIN-2715 spark.jars.packages doesn't take effect (Closed)
  - ZEPPELIN-2720 spark.driver.memory won't take effect for zeppelin spark interpreter (Closed)
- is related to
  - ZEPPELIN-1460 Make spark configuration less confusing and ambiguous, more intuitive (Closed)
- links to