Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 3.1.1
- Fix Version/s: None
Description
Hello!
I have run into a problem: if I don't call sparkContext.stop() explicitly, the Spark driver process does not terminate even after its main method has completed. This behaviour differs from Spark on YARN, where stopping the SparkContext manually is not required.
It looks like the problem is caused by non-daemon threads, which prevent the driver JVM process from terminating.
If I don't call sparkContext.stop(), I see at least two non-daemon threads:
Thread[OkHttp kubernetes.default.svc,5,main]
Thread[OkHttp kubernetes.default.svc Writer,5,main]
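For reference, the remaining non-daemon threads can be listed with a small diagnostic sketch like the one below, run after the application logic finishes (the object and method names are hypothetical, not Spark API):

```scala
// Diagnostic sketch: print every live non-daemon thread; these are the
// threads that keep the driver JVM from exiting after main() returns.
object ThreadDump {
  def printNonDaemonThreads(): Unit = {
    Thread.getAllStackTraces.keySet.forEach { t =>
      if (!t.isDaemon) println(t)
    }
  }
}
```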
Could you please tell me whether it is possible to solve this problem?
The Docker image from the official spark-3.1.1 hadoop3.2 release is used.
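As a workaround, stopping the context explicitly in a finally block lets the driver JVM exit. A minimal sketch, assuming a SparkSession-based application (the object name and job logic are placeholders):

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical driver showing the explicit-stop workaround on Kubernetes.
object ExampleApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("example").getOrCreate()
    try {
      spark.range(10).count() // application logic goes here
    } finally {
      // Without this call, the non-daemon OkHttp threads from the
      // Kubernetes client keep the JVM alive.
      spark.stop()
    }
  }
}
```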
Issue Links
- duplicates: SPARK-27812 kubernetes client import non-daemon thread which block jvm exit (Resolved)