Details
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 3.1.1
- Fix Version/s: None
- Labels: None
Description
Currently, when running on YARN, a Spark SQL application is still submitted even if it fails to obtain a Hive delegation token. The application then eventually fails when it tries to connect to the Hive metastore without a valid delegation token.
Is there any reason for this design?
cc jerryshao, who originally implemented this in https://issues.apache.org/jira/browse/SPARK-14743
I'd propose to fail immediately, as HadoopFSDelegationTokenProvider does.
Update:
After https://github.com/apache/spark/pull/23418, HadoopFSDelegationTokenProvider no longer fails on non-fatal exceptions either. However, the author changed that behavior only to keep it consistent with the other providers.
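To illustrate the difference between the two behaviors, here is a minimal, self-contained sketch. It does not use Spark's actual classes; `TokenProvider`, `obtainLenient`, and `obtainStrict` are hypothetical names standing in for the provider pattern described above.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for a delegation token provider (not Spark's real API).
interface TokenProvider {
    String serviceName();
    void obtainTokens() throws Exception; // throws if the token cannot be fetched
}

class TokenManagerSketch {
    // Current behavior: log the failure and continue, so the application is
    // submitted without a valid token and fails later at the metastore.
    static List<String> obtainLenient(List<TokenProvider> providers) {
        List<String> obtained = new ArrayList<>();
        for (TokenProvider p : providers) {
            try {
                p.obtainTokens();
                obtained.add(p.serviceName());
            } catch (Exception e) {
                System.out.println("Failed to get token from " + p.serviceName()
                        + ": " + e.getMessage());
            }
        }
        return obtained;
    }

    // Proposed behavior: rethrow, so the submission fails immediately instead
    // of deferring the error to the first metastore connection.
    static List<String> obtainStrict(List<TokenProvider> providers) throws Exception {
        List<String> obtained = new ArrayList<>();
        for (TokenProvider p : providers) {
            p.obtainTokens();
            obtained.add(p.serviceName());
        }
        return obtained;
    }
}
```

With a provider whose fetch fails, `obtainLenient` returns an empty list and the job goes on, while `obtainStrict` surfaces the error at submission time.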