Description
In yarn-cluster mode, jars passed to spark-submit's --jars argument should be distributed to executors through the distributed cache, not by having executors fetch them.
Currently, Spark tries to distribute the jars both ways, which can cause executor errors when an executor attempts to overwrite a distributed-cache symlink it lacks write permission for.
It looks like this was introduced by SPARK-2260, which sets spark.jars in yarn-cluster mode. Setting spark.jars is necessary for standalone cluster deploy mode, but harmful for yarn-cluster deploy mode.
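A minimal sketch of the kind of guard this implies, i.e. only setting spark.jars when the deploy mode actually needs it. SubmitArgs and sysPropsFor here are illustrative stand-ins for spark-submit's argument handling, not Spark's actual internals:

{code:scala}
object JarDistribution {
  // Simplified stand-in for spark-submit's parsed arguments (hypothetical).
  case class SubmitArgs(master: String, deployMode: String, jars: Seq[String])

  def sysPropsFor(args: SubmitArgs): Map[String, String] = {
    val isYarnCluster =
      args.master.startsWith("yarn") && args.deployMode == "cluster"
    if (args.jars.nonEmpty && !isYarnCluster) {
      // Standalone cluster mode (SPARK-2260) needs spark.jars so the jars
      // get fetched; in yarn-cluster mode the distributed cache already
      // ships them, so setting spark.jars would distribute them twice.
      Map("spark.jars" -> args.jars.mkString(","))
    } else {
      Map.empty
    }
  }

  def main(argv: Array[String]): Unit = {
    val yarn = SubmitArgs("yarn", "cluster", Seq("hdfs:///libs/app.jar"))
    val standalone = SubmitArgs("spark://host:7077", "cluster", Seq("app.jar"))
    println(sysPropsFor(yarn))       // Map() -- distributed cache only
    println(sysPropsFor(standalone)) // Map(spark.jars -> app.jar)
  }
}
{code}

Under this sketch, yarn-cluster executors would receive the jars only through the distributed cache, while standalone cluster mode keeps the spark.jars behavior that SPARK-2260 introduced.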
Issue Links
- is related to: SPARK-2260 Spark submit standalone-cluster mode is broken (Resolved)