Spark / SPARK-25183

Spark HiveServer2 registers shutdown hook with JVM, not ShutdownHookManager; race conditions can arise


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.2.0
    • Fix Version/s: 2.4.0
    • Component/s: SQL
    • Labels:
      None

      Description

      Spark's HiveServer2 registers a shutdown hook directly with the JVM via Runtime.addShutdownHook(). That hook can run in parallel with the shutdown sequences of the Spark and Hadoop ShutdownHookManagers, which execute their hooks in a defined order.

      This has some risks:

      • The filesystem may be shut down before the rename of the logs completes (SPARK-6933).
      • Delays in renames on object stores may block the FileSystem close operation; on clusters that put a timeout on the FileSystem.closeAll() shutdown hook (HADOOP-12950), this can force a kill of that shutdown hook, among other problems.

      General outcome: the logs aren't present after shutdown.

      Proposed fix:

      • register the hook with org.apache.spark.util.ShutdownHookManager instead
      • use HADOOP-15679 to make the shutdown wait time configurable, so that O(data) renames don't trigger the timeout.
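
      The point of the fix is ordering: hooks registered via Runtime.addShutdownHook() start in parallel with no defined sequence, while a shutdown-hook manager registers a single JVM hook and runs its own hooks by priority. The sketch below illustrates that pattern; the class and method names are illustrative, not Spark's actual API, and the priorities are made up for the demo.

      ```java
      import java.util.ArrayList;
      import java.util.Comparator;
      import java.util.List;
      import java.util.PriorityQueue;

      /**
       * Minimal sketch of a priority-ordered shutdown-hook manager, in the
       * spirit of org.apache.spark.util.ShutdownHookManager. Names and
       * priorities here are illustrative assumptions, not Spark's real API.
       */
      public class OrderedShutdownHooks {
          private record Hook(int priority, long seq, Runnable body) {}

          private final PriorityQueue<Hook> hooks = new PriorityQueue<>(
                  Comparator.comparingInt(Hook::priority).reversed() // higher priority first
                            .thenComparingLong(Hook::seq));          // FIFO among equal priorities
          private long counter = 0;

          /** Register a hook; higher priority runs earlier. */
          public synchronized void add(int priority, Runnable body) {
              hooks.add(new Hook(priority, counter++, body));
          }

          /** Run all hooks in priority order, invoked from a single JVM hook. */
          public synchronized void runAll() {
              while (!hooks.isEmpty()) {
                  hooks.poll().body().run();
              }
          }

          // Demo: the log-rename hook (higher priority) is guaranteed to run
          // before the filesystem-close hook -- the ordering that plain
          // Runtime.addShutdownHook() registration does not provide.
          public static void main(String[] args) {
              OrderedShutdownHooks mgr = new OrderedShutdownHooks();
              List<String> order = new ArrayList<>();
              mgr.add(10, () -> order.add("close filesystems"));  // runs last
              mgr.add(50, () -> order.add("rename event logs"));  // runs first
              // The manager itself is registered once with the JVM:
              // Runtime.getRuntime().addShutdownHook(new Thread(mgr::runAll));
              mgr.runAll();
              System.out.println(order); // [rename event logs, close filesystems]
          }
      }
      ```

      With this shape, the timeout question (HADOOP-15679) reduces to how long the single managing hook is allowed to run, rather than a per-hook race against FileSystem.closeAll().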


              People

              • Assignee: Steve Loughran (stevel@apache.org)
              • Reporter: Steve Loughran (stevel@apache.org)
              • Votes: 0
              • Watchers: 3

                Dates

                • Created:
                • Updated:
                • Resolved: