HIVE-18655

Apache Hive 2.1.1 on Apache Spark 2.0


Details

    • Type: Bug
    • Status: Open
    • Priority: Blocker
    • Resolution: Unresolved
    • Affects Version/s: 2.1.1
    • Fix Version/s: None
    • Component/s: Hive, HiveServer2, Spark
    • Labels: None
    • Environment: Apache Hive 2.1.1; Apache Spark 2.0, prebuilt package (Hive jars removed); Apache Hadoop 2.8

    Description

       

      Hi,
       
      When connecting through Beeline, Hive is not able to create the Spark client.
       
      select count(*) from student;
      Query ID = hadoop_20180208184224_f86b5aeb-f27b-4156-bd77-0aab54c0ec67
      Total jobs = 1
      Launching Job 1 out of 1
      In order to change the average load for a reducer (in bytes):
        set hive.exec.reducers.bytes.per.reducer=<number>
      In order to limit the maximum number of reducers:
        set hive.exec.reducers.max=<number>
      In order to set a constant number of reducers:
        set mapreduce.job.reduces=<number>
      Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
      FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
      Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask (state=08S01,code=1)
      I installed the prebuilt Spark 2.0 package in standalone cluster mode.
       
      My hive-site.xml (also placed in spark/conf; the Hive jars were removed from the HDFS path):
       
      <property>
        <name>spark.master</name>
        <value>yarn</value>
        <description>Spark Master URL</description>
      </property>
      <property>
        <name>spark.eventLog.enabled</name>
        <value>true</value>
        <description>Spark Event Log</description>
      </property>
      <property>
        <name>spark.eventLog.dir</name>
        <value>hdfs://xx.xxx.xx.xx:9000/user/spark/eventLogging</value>
        <description>Spark event log folder</description>
      </property>
      <property>
        <name>spark.executor.memory</name>
        <value>512m</value>
        <description>Spark executor memory</description>
      </property>
      <property>
        <name>spark.serializer</name>
        <value>org.apache.spark.serializer.KryoSerializer</value>
        <description>Spark serializer</description>
      </property>
      <property>
        <name>spark.yarn.jars</name>
        <value>hdfs://xx.xxx.xx.xx:9000:/user/spark/spark-jars/*</value>
      </property>
      <property>
        <name>spark.submit.deployMode</name>
        <value>cluster</value>
        <description>Spark Master URL</description>
      </property>
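      One thing worth double-checking in the configuration above: the spark.yarn.jars value contains an extra colon after the port (9000:/user/...), which makes it a malformed HDFS URI, while the spark.eventLog.dir entry is well formed. A quick sketch with Python's urllib (keeping the redacted host from the report) illustrates the difference; whether this is the root cause of the Spark client failure is an assumption, but the URI as written will not parse.

```python
from urllib.parse import urlsplit

# spark.yarn.jars as written in the report, with a stray colon after the port.
malformed = "hdfs://xx.xxx.xx.xx:9000:/user/spark/spark-jars/*"
# The same value without the extra colon.
corrected = "hdfs://xx.xxx.xx.xx:9000/user/spark/spark-jars/*"

print(urlsplit(corrected).port)  # 9000 -- parses cleanly

try:
    urlsplit(malformed).port
except ValueError as err:
    # The trailing colon makes the port component unparseable.
    print("malformed URI:", err)
```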
       
      My yarn-site.xml:
       
      <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>40960</value>
      </property>
      <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>2048</value>
      </property>
      <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>8192</value>
      </property>
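      A side note on these settings: YARN sizes each executor container from spark.executor.memory plus Spark's off-heap overhead (by default max(384 MB, 10% of executor memory) in Spark 2.x), rounded up to a multiple of yarn.scheduler.minimum-allocation-mb. A minimal sketch of that arithmetic, assuming those defaults, shows the 512m executors above still occupy 2 GB containers:

```python
import math

def yarn_container_mb(executor_mb, min_alloc_mb):
    """Rough YARN container sizing for a Spark executor.

    Assumes Spark 2.x's default memoryOverhead of max(384 MB, 10% of
    executor memory) and a scheduler that rounds each request up to a
    multiple of yarn.scheduler.minimum-allocation-mb.
    """
    overhead_mb = max(384, int(0.10 * executor_mb))
    requested = executor_mb + overhead_mb
    # Round the request up to the next multiple of the minimum allocation.
    return math.ceil(requested / min_alloc_mb) * min_alloc_mb

# Settings from the report: 512m executors, 2048 MB minimum allocation.
print(yarn_container_mb(512, 2048))  # 2048 -- each executor gets a 2 GB container
```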

People

    Assignee: Unassigned
    Reporter: AbdulMateen
