Description
Currently, for a Hive query, HoS (Hive on Spark) needs to get a session twice: once in SparkSetReducerParallelism, and again when submitting the actual job.
The issue is that sometimes there is a problem launching the YARN application (e.g., the user lacks permission), and the user then has to wait through two timeouts, because both session initializations fail. This turns out to happen frequently.
This JIRA proposes to fail the query in SparkSetReducerParallelism, when it cannot initialize the session.
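The proposed behavior can be sketched as follows. This is a hypothetical illustration, not actual Hive code: the class, method names, and exception type are invented for the sketch. It contrasts the current flow (the first session failure is deferred, so a second attempt at job submission hits a second timeout) with the proposed fail-fast flow.

```java
// Hypothetical sketch of the fail-fast proposal; names are illustrative,
// not real Hive on Spark APIs.
public class FailFastSketch {
    static class SessionInitException extends Exception {
        SessionInitException(String msg) { super(msg); }
    }

    // Simulates acquiring a Spark session; fails when YARN rejects the app
    // (e.g., missing permission to launch the application master).
    static void getSparkSession(boolean yarnHealthy) throws SessionInitException {
        if (!yarnHealthy) throw new SessionInitException("cannot launch AM");
    }

    // Proposed: the session failure in SparkSetReducerParallelism aborts the
    // query immediately, so the user waits through one timeout, not two.
    static String compileQuery(boolean yarnHealthy) {
        try {
            getSparkSession(yarnHealthy); // first (and only) session attempt
        } catch (SessionInitException e) {
            return "FAILED: " + e.getMessage(); // fail fast, no second attempt
        }
        return "COMPILED";
    }

    public static void main(String[] args) {
        System.out.println(compileQuery(false));
        System.out.println(compileQuery(true));
    }
}
```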
Attachments
Issue Links
- duplicates HIVE-12649: Hive on Spark will resubmitted application when not enough resouces to launch yarn application master (Resolved)