Description
I'm trying out the JDBC server and found that it always occupies all cores in the cluster. The reason is that when SparkSQLEnv creates the HiveContext, it doesn't set anything related to spark.cores.max or spark.executor.memory.
See SparkSQLEnv.scala, lines 41-43: https://github.com/apache/spark/blob/8032fe2fae3ac40a02c6018c52e76584a14b3438/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala
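A minimal sketch of the kind of fix being suggested, assuming the context is built by hand as in SparkSQLEnv; the property names are real Spark configuration keys, but the values here are placeholder examples, not recommendations:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Cap the resources the JDBC server's context claims, instead of
// letting it grab every core in the cluster by default.
val conf = new SparkConf()
  .setAppName("SparkSQL::thriftserver")
  .set("spark.cores.max", "4")        // example value; tune per cluster
  .set("spark.executor.memory", "2g") // example value; tune per cluster

val sc = new SparkContext(conf)
val hiveContext = new HiveContext(sc)
```

Equivalently, these properties could be supplied externally (e.g. in spark-defaults.conf or via --conf flags when launching the server) rather than hard-coded, as long as SparkSQLEnv picks them up when building the SparkConf.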
Issue Links
- is related to SPARK-2410 Thrift/JDBC Server (Resolved)