Spark / SPARK-17245

NPE thrown by ClientWrapper.conf


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.6.2
    • Fix Version/s: 1.6.3
    • Component/s: SQL
    • Labels: None

    Description

      This issue has been fixed in Spark 2.0. It seems ClientWrapper.conf is trying to access the thread-local SessionState, which has not been set.

      java.lang.NullPointerException 
      at org.apache.spark.sql.hive.client.ClientWrapper.conf(ClientWrapper.scala:225) 
      at org.apache.spark.sql.hive.client.ClientWrapper.client(ClientWrapper.scala:279) 
      at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:291) 
      at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:246) 
      at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:245) 
      at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:288) 
      at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:493) 
      at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:483) 
      at org.apache.spark.sql.hive.client.ClientWrapper.addJar(ClientWrapper.scala:603) 
      at org.apache.spark.sql.hive.HiveContext.addJar(HiveContext.scala:654) 
      at org.apache.spark.sql.hive.execution.AddJar.run(commands.scala:105)
      at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58) 
      at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56) 
      at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70) 
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132) 
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130) 
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) 
      at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130) 
      at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55) 
      at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55) 
      at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145) 
      at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130) 
      at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52) 
      at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:816) 
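
      For illustration, a minimal Scala sketch of how this NPE can arise (the ClientWrapper internals are paraphrased and the object name is hypothetical; SessionState here is the Hive class org.apache.hadoop.hive.ql.session.SessionState, whose instance is held in a thread-local):

      import org.apache.hadoop.hive.conf.HiveConf
      import org.apache.hadoop.hive.ql.session.SessionState

      object SessionStateNpeSketch {
        // Roughly what ClientWrapper.conf does: read the thread-local Hive
        // SessionState and return its HiveConf.
        def conf: HiveConf = SessionState.get().getConf

        def main(args: Array[String]): Unit = {
          // On a thread where SessionState.start(...) was never called,
          // SessionState.get() returns null, so calling .getConf on it throws
          // a NullPointerException like the one seen at ClientWrapper.scala:225.
          conf
        }
      }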
      


          People

            Assignee: Yin Huai (yhuai)
            Reporter: Yin Huai (yhuai)
            Votes: 0
            Watchers: 4
