Description
When a query does not set the number of reducers, Hive on Spark estimates it, and computing that estimate requires submitting a Spark application. If the YARN queue's resources are insufficient, that application stays pending. Because the estimate may be requested more than once, multiple pending applications can accumulate. The failure to initialize the session is currently soft, so it does not prevent subsequent processing. We should make it a hard failure.
The relevant code path is:
at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:112)
at org.apache.hadoop.hive.ql.optimizer.spark.SetSparkReducerParallelism.process(SetSparkReducerParallelism.java:115)
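The soft-versus-hard distinction can be illustrated with a minimal, self-contained sketch. The class and method names below (`SessionInitException`, `initSession`, the estimator methods) are hypothetical stand-ins, not Hive's actual API: a soft failure swallows the session-initialization error and falls back to a default estimate, so the optimizer keeps going and may trigger further session attempts; a hard failure propagates the error so the query fails fast.

```java
// Minimal sketch of soft vs. hard failure handling during session
// initialization. SessionInitException and initSession are hypothetical
// stand-ins for Hive's SparkSession setup, not the real classes.
public class FailureModes {
    static class SessionInitException extends RuntimeException {
        SessionInitException(String msg) { super(msg); }
    }

    // Simulates session initialization failing (e.g., YARN queue full).
    static void initSession() {
        throw new SessionInitException("failed to get Spark memory/core info");
    }

    // Soft failure: swallow the error and fall back to a default estimate.
    // The query continues, and each later estimate call may submit another
    // application that ends up pending on the queue.
    static int estimateReducersSoft(int defaultParallelism) {
        try {
            initSession();
            return 42; // would be computed from cluster memory/core info
        } catch (SessionInitException e) {
            // Current behavior: log and continue with the default.
            return defaultParallelism;
        }
    }

    // Hard failure: let the error propagate so the whole query fails fast
    // instead of silently retrying session creation on every estimate call.
    static int estimateReducersHard() {
        initSession();
        return 42; // unreachable when initialization fails
    }
}
```

With the hard variant, one failed initialization aborts the query, which avoids leaving several pending applications behind.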
Issue Links
- is duplicated by HIVE-10476 "Hive query should fail when it fails to initialize a session in SetSparkReducerParallelism [Spark Branch]" (Closed)