Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Done
- Fix Version: 0.6.0
- Component: None
- Labels: None
Description
An org.apache.spark.sql.AnalysisException occurs when I use the thriftserver to execute the following SQL. When Hive is not used as the metastore, does the thriftserver not support CREATE TABLE?
0: jdbc:hive2://localhost:10090> CREATE TABLE test(key INT, val STRING);
Error: java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.apache.spark.sql.AnalysisException: Hive support is required to CREATE Hive TABLE (AS SELECT);;
'CreateTable `test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, ErrorIfExists
org.apache.spark.sql.execution.datasources.HiveOnlyCheck$$anonfun$apply$12.apply(rules.scala:392)
org.apache.spark.sql.execution.datasources.HiveOnlyCheck$$anonfun$apply$12.apply(rules.scala:390)
org.apache.spark.sql.catalyst.trees.TreeNode.foreach(TreeNode.scala:117)
org.apache.spark.sql.execution.datasources.HiveOnlyCheck$.apply(rules.scala:390)
org.apache.spark.sql.execution.datasources.HiveOnlyCheck$.apply(rules.scala:388)
org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$2.apply(CheckAnalysis.scala:386)
org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$2.apply(CheckAnalysis.scala:386)
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:386)
org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:95)
org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:108)
org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:78)
org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
org.apache.livy.thriftserver.session.SqlJob.executeSql(SqlJob.java:74)
org.apache.livy.thriftserver.session.SqlJob.call(SqlJob.java:64)
org.apache.livy.thriftserver.session.SqlJob.call(SqlJob.java:35)
org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:64)
org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:31)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748) (state=,code=0)
0: jdbc:hive2://localhost:10090>
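A possible workaround, assuming Hive support is disabled in the backing Spark session: Spark treats a plain CREATE TABLE without a USING clause as a Hive-style table (which is what HiveOnlyCheck rejects here), so declaring a data source explicitly avoids the Hive SerDe path. The choice of parquet below is only illustrative.

-- datasource table; should not require Hive support
CREATE TABLE test(key INT, val STRING) USING parquet;

Alternatively, if a Hive metastore is available, enabling Hive support in the Spark session (for example via spark.sql.catalogImplementation=hive) should let the original statement run unchanged.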