SPARK-35160: Spark application submitted despite failing to get Hive delegation token


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.1.1
    • Fix Version/s: None
    • Component/s: Security
    • Labels: None

    Description

      Currently, when running on YARN, a Spark SQL application is still submitted even if it fails to get a Hive delegation token. Eventually, the application fails when it tries to connect to the Hive metastore without a valid delegation token.

      Is there any reason for this design?

      cc jerryshao, who originally implemented this in https://issues.apache.org/jira/browse/SPARK-14743

      I'd propose to fail immediately like HadoopFSDelegationTokenProvider.

       

      Update:

      After https://github.com/apache/spark/pull/23418, HadoopFSDelegationTokenProvider no longer fails on non-fatal exceptions. However, the author changed that behavior only to keep it consistent with the other providers.
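
      For illustration, below is a minimal sketch of the two behaviors under discussion: the lenient pattern, where a provider catches non-fatal failures, logs a warning, and lets the submission continue, versus the fail-fast pattern proposed here, where the exception propagates and spark-submit aborts. This is not the actual Spark provider code; the object and method names are made up for the example.

      {code:scala}
      import scala.util.control.NonFatal

      // Illustrative sketch only; not the real HiveDelegationTokenProvider.
      object TokenProviderSketch {

        // Lenient (current) behavior: a non-fatal failure while fetching the Hive
        // delegation token is logged and swallowed, so the YARN submission proceeds
        // and the application only fails later when it reaches the metastore.
        def obtainTokenLenient(fetchToken: () => String): Option[String] = {
          try {
            Some(fetchToken())
          } catch {
            case NonFatal(e) =>
              println(s"WARN: failed to get Hive delegation token: ${e.getMessage}")
              None
          }
        }

        // Fail-fast (proposed) behavior: let the exception propagate so spark-submit
        // aborts immediately instead of launching an application that cannot work.
        def obtainTokenFailFast(fetchToken: () => String): String = fetchToken()

        def main(args: Array[String]): Unit = {
          val broken: () => String =
            () => throw new RuntimeException("cannot connect to Hive metastore")

          // Lenient: prints a warning and returns None; submission continues.
          println(obtainTokenLenient(broken))

          // Fail fast: uncommenting the next line would abort right here.
          // obtainTokenFailFast(broken)
        }
      }
      {code}

      In this sketch, the change proposed by this ticket amounts to rethrowing (or not catching) the exception in the lenient path instead of returning None.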


          People

            Assignee: Unassigned
            Reporter: Manu Zhang (mauzhang)
            Votes: 0
            Watchers: 2
