Spark / SPARK-6446

Spark SQL Hive query does not work on Spark 1.3


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 1.3.0
    • Fix Version/s: None
    • Component/s: Spark Core, SQL
    • Labels: None

    Description

      Hi,

      I am running Hive queries from Spark SQL and getting the exception below on Spark 1.3, whereas the same queries work on Spark 1.2.

      15/03/21 19:22:24 INFO metastore: Trying to connect to metastore with URI thrift://hdfs://yy.yy.yy-yy-east-xx-01.zzz.com:9083
      15/03/21 19:22:24 INFO metastore: Connected to metastore.
      15/03/21 19:22:24 ERROR FunctionRegistry: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Invalid method name: 'get_functions'
      spark-sql> select percentile_approx(area_id,0.95) as count from test_db.test_servicemap where data_partition_folder='2015_01_21_17_00';
      15/03/21 19:22:30 WARN HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
      15/03/21 19:22:30 INFO ParseDriver: Parsing command: select value from test_db.test_servicemap where data_partition_folder='2015_01_21_17_00'
      15/03/21 19:22:30 INFO ParseDriver: Parse Completed
      15/03/21 19:23:31 ERROR SparkSQLDriver: Failed in [select value from test_db.test_servicemap where data_partition_folder='2015_03_21_17_00']
      java.lang.IllegalArgumentException: Wrong FS: hdfs://hdfs://xx.xx.xx-xx-east-xx-01.zzz.com:8020/user/hive/data/servicemap/2015_01_07_12_30, expected: file:///
      at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:643)
      at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:463)
      at org.apache.hadoop.fs.FilterFileSystem.makeQualified(FilterFileSystem.java:118)
      at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache$$anonfun$6.apply(newParquet.scala:252)
      at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache$$anonfun$6.apply(newParquet.scala:251)
      at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
      at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
      at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
      at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
      at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
      at scala.collection.AbstractTraversable.map(Traversable.scala:105)
      at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache.refresh(newParquet.scala:251)
      at org.apache.spark.sql.parquet.ParquetRelation2.<init>(newParquet.scala:370)
      at org.apache.spark.sql.hive.HiveMetastoreCatalog.org$apache$spark$sql$hive$HiveMetastoreCatalog$$convertToParquetRelation(HiveMetastoreCatalog.scala:235)
      at org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$$anonfun$1.applyOrElse(HiveMetastoreCatalog.scala:480)
      at org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$$anonfun$1.applyOrElse(HiveMetastoreCatalog.scala:452)
      at scala.PartialFunction$Lifted.apply(PartialFunction.scala:218)
      at scala.PartialFunction$Lifted.apply(PartialFunction.scala:214)
      at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collect$1.apply(TreeNode.scala:119)
      at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collect$1.apply(TreeNode.scala:119)
      at org.apache.spark.sql.catalyst.trees.TreeNode.foreach(TreeNode.scala:78)
      at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreach$1.apply(TreeNode.scala:79)
      at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreach$1.apply(TreeNode.scala:79)
      at scala.collection.immutable.List.foreach(List.scala:318)
      at org.apache.spark.sql.catalyst.trees.TreeNode.foreach(TreeNode.scala:79)
      at org.apache.spark.sql.catalyst.trees.TreeNode.collect(TreeNode.scala:119)
      at org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$.apply(HiveMetastoreCatalog.scala:452)
      at org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$.apply(HiveMetastoreCatalog.scala:445)
      at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
      at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
      at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
      at scala.collection.immutable.List.foldLeft(List.scala:84)
      at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
      at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
      at scala.collection.immutable.List.foreach(List.scala:318)
      at org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
      at org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:1071)
      at org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:1071)
      at org.apache.spark.sql.SQLContext$QueryExecution.assertAnalyzed(SQLContext.scala:1069)
      at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
      at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
      at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:92)
      at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
      at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:275)
      at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
      at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:211)
      at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
      at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
      at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
      at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
      at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
      java.lang.IllegalArgumentException: Wrong FS: hdfs://xx.xx.xx-xx-east-xx-01.zzz.com:8020/user/hive/data/servicemap/2015_01_07_12_30, expected: file:///
      at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:643)
      at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:463)
      at org.apache.hadoop.fs.FilterFileSystem.makeQualified(FilterFileSystem.java:118)
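
      As a possible workaround (a minimal, untested sketch): the trace goes through HiveMetastoreCatalog$ParquetConversions into ParquetRelation2, so disabling the metastore Parquet conversion should make Spark 1.3 read the table through the Hive SerDe path instead of the failing code path. The table and query below are taken from the log above; the object name and everything else here are assumptions, not a verified fix.

      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.sql.hive.HiveContext

      object WrongFsWorkaround {
        def main(args: Array[String]): Unit = {
          val sc = new SparkContext(new SparkConf().setAppName("WrongFsWorkaround"))
          val hiveContext = new HiveContext(sc)

          // Skip the ParquetConversions -> ParquetRelation2 path seen in the
          // stack trace; Spark then plans the scan through the Hive SerDe.
          hiveContext.setConf("spark.sql.hive.convertMetastoreParquet", "false")

          hiveContext.sql(
            "select value from test_db.test_servicemap " +
            "where data_partition_folder='2015_01_21_17_00'"
          ).collect().foreach(println)
        }
      }

      The same setting can be tried directly in the spark-sql shell with SET spark.sql.hive.convertMetastoreParquet=false; before running the query.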

      Thanks
      Pankaj


          People

            Assignee: Unassigned
            Reporter: pankaj (pankajch)
            Votes: 0
            Watchers: 2

