Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 0.9.0
- Component/s: None
Description
Spark lets you define the driver's resource usage when it runs on YARN or Kubernetes in cluster mode.
The following configuration values are used to request or limit those resources:
- spark.driver.memory
- spark.driver.memoryOverhead
- spark.driver.cores
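For illustration, these values could be set in spark-defaults.conf or passed as --conf flags to spark-submit (the numbers below are placeholders, not recommendations):

    spark.driver.memory          2g
    spark.driver.memoryOverhead  512m
    spark.driver.cores           2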
Zeppelin should apply these configuration values when setting up a Spark interpreter on YARN or Kubernetes.
A correct resource definition is particularly important on Kubernetes, where resource requests and limits determine how the driver pod is scheduled.
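As a rough sketch of what this means on Kubernetes: Spark derives the driver pod's resource requests from these settings, so with the illustrative values above the pod spec would contain something like:

    resources:
      requests:
        cpu: "2"          # from spark.driver.cores
        memory: "2560Mi"  # spark.driver.memory + spark.driver.memoryOverhead (2g + 512m)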
My goal behind this:
- Create multiple Spark interpreters with different resource usage, as sketched below.
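For example, two interpreter settings in the same Spark interpreter group could be sized differently via their properties (interpreter names and values here are hypothetical):

    spark_small (group: spark)       # hypothetical small interpreter
      spark.driver.memory = 1g
      spark.driver.cores  = 1

    spark_large (group: spark)       # hypothetical large interpreter
      spark.driver.memory = 8g
      spark.driver.cores  = 4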