Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Version/s: 0.9.0, 1.0.0
Description
Sockets from Spark to Python workers are not cleaned up over the duration of a job, causing the total number of open file descriptors to grow to roughly the number of partitions in the job. These usually go away if the job completes successfully, but if the job is cancelled (and possibly on exceptions, though I haven't investigated), the socket file descriptors remain open indefinitely.
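To illustrate the failure mode, here is a minimal, self-contained sketch (not Spark code; the function names are hypothetical): one socket is opened per "partition" and never closed, so the descriptors accumulate, whereas closing each socket in a `finally` block releases its descriptor immediately, even when the task body raises or is cancelled.

```python
import socket

def per_partition_leaky(n):
    # Simulates the bug: one socket per partition, never closed.
    # The file descriptors stay open until GC or process exit.
    return [socket.socket() for _ in range(n)]

def per_partition_clean(n):
    # The cleanup pattern: close each socket when the task
    # finishes, whether it succeeds, fails, or is cancelled.
    for _ in range(n):
        s = socket.socket()
        try:
            pass  # exchange data with the Python worker here
        finally:
            s.close()  # descriptor released immediately

socks = per_partition_leaky(100)
print(all(s.fileno() >= 0 for s in socks))   # descriptors still open
for s in socks:
    s.close()
per_partition_clean(100)  # leaves no descriptors behind
```

In CPython a leaked socket is eventually closed when the object is garbage-collected, which is why the descriptors "usually go away" for successful jobs; on cancellation the references can be retained, so the explicit `finally` close is what guarantees cleanup.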