[SPARK-25183] Spark HiveServer2 registers shutdown hook with JVM, not ShutdownHookManager; race conditions can arise


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.2.0
    • Fix Version/s: 2.4.0
    • Component/s: SQL
    • Labels: None

    Description

      Spark's HiveServer2 registers a shutdown hook directly with the JVM via Runtime.addShutdownHook(). That hook can run in parallel with the hooks managed by the Spark and Hadoop ShutdownHookManager classes, which execute their shutdown work in an ordered sequence.

      This creates some risks:

      • The filesystem may be shut down before the rename of the event logs completes (SPARK-6933).
      • Slow renames on object stores may block the FileSystem close operation; on clusters where the FileSystem.closeAll() shutdown hook has a timeout (HADOOP-12950), this can force a kill of that hook, among other problems.

      General outcome: the event logs aren't present.

      Proposed fix:

      • Register the hook with org.apache.spark.util.ShutdownHookManager instead.
      • Use HADOOP-15679 to make the shutdown wait time configurable, so that O(data) renames don't trigger timeouts.
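      The ordered-hook idea behind the proposed fix can be sketched in plain Java. This is an illustrative model, not Spark's actual ShutdownHookManager API: the class, method names, and priority values below are hypothetical. The key point is that one JVM-level hook drives a single priority-ordered sequence, rather than each component registering its own hook via Runtime.addShutdownHook() and racing the others.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a priority-ordered shutdown hook manager, in the
// spirit of org.apache.spark.util.ShutdownHookManager. The class and method
// names and the priority values are hypothetical, not Spark's API.
public class OrderedShutdownHooks {
    private static final class Hook {
        final int priority;
        final String name;
        final Runnable body;
        Hook(int priority, String name, Runnable body) {
            this.priority = priority;
            this.name = name;
            this.body = body;
        }
    }

    private final List<Hook> hooks = new ArrayList<>();

    // Register a hook; higher-priority hooks run earlier.
    public synchronized void add(int priority, String name, Runnable body) {
        hooks.add(new Hook(priority, name, body));
    }

    // Run all hooks sequentially in descending priority order, so that e.g.
    // the event-log rename completes before the filesystems are closed.
    // Returns the hook names in execution order.
    public synchronized List<String> runAll() {
        hooks.sort((a, b) -> Integer.compare(b.priority, a.priority));
        List<String> ran = new ArrayList<>();
        for (Hook h : hooks) {
            System.out.println("running: " + h.name);
            h.body.run();
            ran.add(h.name);
        }
        return ran;
    }

    public static void main(String[] args) {
        OrderedShutdownHooks mgr = new OrderedShutdownHooks();
        // A single JVM-level hook drives the whole ordered sequence, instead
        // of each component racing with its own Runtime.addShutdownHook() hook.
        Runtime.getRuntime().addShutdownHook(new Thread(mgr::runAll));

        mgr.add(50, "stop HiveServer2", () -> {});
        mgr.add(40, "rename event logs", () -> {});
        mgr.add(10, "close filesystems", () -> {});
    }
}
```

      Because the JVM gives no ordering guarantee between separately registered shutdown hooks (they may even run concurrently), routing everything through one ordered manager is what makes "rename logs, then close filesystems" enforceable.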

            People

              Assignee: Steve Loughran (stevel@apache.org)
              Reporter: Steve Loughran (stevel@apache.org)
              Votes: 0
              Watchers: 3
