Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.1.0
    • Fix Version/s: 1.0.3, 1.1.2, 1.2.1, 1.3.0
    • Component/s: Spark Core
    • Labels: None

    Description

      If an exception occurs when creating the DAGScheduler, then other resources in SparkContext may be leaked / not cleaned up.

      Edit (joshrosen): this issue was originally a duplicate of SPARK-4194, but I've converted it into a subtask and revised it to reflect the scope of the PR opened against it. The original PR description is reproduced below:

      When a client creates a SparkContext, there are currently many vals to initialize during object construction, but if initializing one of these vals fails, for example by throwing an exception, the resources held by the SparkContext are not released properly.
      For example, the SparkUI object is created and bound to the HTTP server during initialization using
      ui.foreach(_.bind())
      but if anything goes wrong after this code (say, an exception is thrown while creating the DAGScheduler), the SparkUI server is not stopped, so the port bind will fail when the client creates another SparkContext. Basically, this leads to a situation where the client cannot create another SparkContext in the same process, which I think is unreasonable.
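
      To make the failure mode concrete, here is a minimal, self-contained Scala sketch of the same pattern (names are illustrative, not Spark's actual internals): an eagerly initialized val binds a server, a later val throws during construction, and the bound port is leaked for the rest of the JVM's life.

      import java.net.ServerSocket

      // Illustrative stand-in for SparkContext's eager field initialization.
      class LeakyContext(port: Int) {
        // Analogous to ui.foreach(_.bind()): the server is bound as soon as construction starts.
        val uiServer: ServerSocket = new ServerSocket(port)

        // Analogous to creating the DAGScheduler; the constructor aborts here...
        val scheduler: AnyRef = throw new RuntimeException("failed to create scheduler")

        // ...and uiServer is never closed, so the port stays bound in this JVM.
      }

      object LeakDemo extends App {
        try new LeakyContext(4040) catch { case e: Exception => println(s"first attempt failed: $e") }
        // The second attempt typically fails with "Address already in use",
        // even though no usable context exists in the process.
        try new LeakyContext(4040) catch { case e: Exception => println(s"second attempt failed: $e") }
      }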

      So I suggest refactoring the SparkContext code to release resources when initialization fails.
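
      A minimal sketch of the kind of refactoring suggested above (again illustrative, not the code from the actual PR): remember which resources have been started and stop them before re-throwing when a later initialization step fails.

      import java.net.ServerSocket

      // Illustrative cleanup-on-failure during construction; names are not Spark's internals.
      class SafeContext(port: Int) {
        private var uiServer: Option[ServerSocket] = None

        try {
          uiServer = Some(new ServerSocket(port)) // bind the UI server first
          createScheduler()                       // may throw, like DAGScheduler creation
        } catch {
          case e: Throwable =>
            // Release whatever was already started, then propagate the failure,
            // so the port is free if the caller retries in the same process.
            uiServer.foreach(_.close())
            throw e
        }

        private def createScheduler(): Unit =
          throw new RuntimeException("failed to create scheduler") // simulated failure
      }

      With this shape, a failed construction no longer leaves the UI port bound, so creating another context in the same process can succeed.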

          People

            Assignee: Dale Richardson (tigerquoll)
            Reporter: Jacky Li (jackylk)
            Votes: 0
            Watchers: 4
