Details
- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Duplicate
- Affects Version/s: 2.1.1, 2.2.0
- Fix Version/s: None
Description
When a driver program in Client mode runs in a Docker container, it binds to the IP address of the container, not of the host machine. This container IP address is reachable only from within the host machine; the master and worker nodes cannot reach it.
For example, the host machine has the IP address 192.168.216.10. When Docker starts a container, it places it on a special bridge network and assigns it an IP address such as 172.17.0.2. The Spark nodes on the 192.168.216.0 network cannot reach the bridge network that contains the container, so the driver program cannot communicate with the Spark cluster.
Spark already provides the SPARK_PUBLIC_DNS environment variable for this purpose. However, in this scenario setting SPARK_PUBLIC_DNS to the host machine's IP address does not work.
Topic on StackOverflow: https://stackoverflow.com/questions/45489248/running-spark-driver-program-in-docker-container-no-connection-back-from-execu
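For context, the approach that eventually came out of SPARK-4563 is to separate the address the driver binds to from the address it advertises to the cluster. Below is a minimal sketch of that workaround, assuming Spark 2.1+ (where spark.driver.bindAddress is available) and a standalone cluster; the master URL, IP addresses, and port numbers are illustrative placeholders, and the fixed driver ports would still have to be published from the container to the host (e.g. with docker run -p).

```scala
import org.apache.spark.sql.SparkSession

object DriverInDocker {
  def main(args: Array[String]): Unit = {
    // Sketch only: master URL, host IP, and ports below are placeholders.
    val spark = SparkSession.builder()
      .appName("driver-in-docker")
      .master("spark://192.168.216.5:7077")          // standalone master on the cluster network (illustrative)
      .config("spark.driver.bindAddress", "0.0.0.0") // bind inside the container
      .config("spark.driver.host", "192.168.216.10") // advertise the Docker host's IP to the cluster
      .config("spark.driver.port", "7001")           // fix the ports so they can be published,
      .config("spark.blockManager.port", "7002")     // e.g. `docker run -p 7001:7001 -p 7002:7002`
      .getOrCreate()

    spark.range(100).count()                         // trivial job to exercise driver/executor traffic
    spark.stop()
  }
}
```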
Issue Links
- duplicates SPARK-6680: Be able to specifie IP for spark-shell(spark driver) blocker for Docker integration (Resolved)
- duplicates SPARK-4563: Allow spark driver to bind to different ip then advertise ip (Resolved)