[SPARK-15730] [Spark SQL] The value of the 'hiveconf' parameter in the spark-sql CLI does not take effect in the spark-sql session


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 2.0.0
    • Fix Version/s: 2.0.0
    • Component/s: SQL
    • Labels: None

    Description

      /usr/lib/spark/bin/spark-sql -v --driver-memory 4g --executor-memory 7g --executor-cores 5 --num-executors 31 --master yarn-client --conf spark.yarn.executor.memoryOverhead=1024 --hiveconf RESULT_TABLE=test_result01
      
      spark-sql> use test;
      16/06/02 21:36:15 INFO execution.SparkSqlParser: Parsing command: use test
      16/06/02 21:36:15 INFO spark.SparkContext: Starting job: processCmd at CliDriver.java:376
      16/06/02 21:36:15 INFO scheduler.DAGScheduler: Got job 2 (processCmd at CliDriver.java:376) with 1 output partitions
      16/06/02 21:36:15 INFO scheduler.DAGScheduler: Final stage: ResultStage 2 (processCmd at CliDriver.java:376)
      16/06/02 21:36:15 INFO scheduler.DAGScheduler: Parents of final stage: List()
      16/06/02 21:36:15 INFO scheduler.DAGScheduler: Missing parents: List()
      16/06/02 21:36:15 INFO scheduler.DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[8] at processCmd at CliDriver.java:376), which has no missing parents
      16/06/02 21:36:15 INFO memory.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.2 KB, free 2.4 GB)
      16/06/02 21:36:15 INFO memory.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1964.0 B, free 2.4 GB)
      16/06/02 21:36:15 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.3.11:36189 (size: 1964.0 B, free: 2.4 GB)
      16/06/02 21:36:15 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1012
      16/06/02 21:36:15 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[8] at processCmd at CliDriver.java:376)
      16/06/02 21:36:15 INFO cluster.YarnScheduler: Adding task set 2.0 with 1 tasks
      16/06/02 21:36:15 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, 192.168.3.13, partition 0, PROCESS_LOCAL, 5362 bytes)
      16/06/02 21:36:15 INFO cluster.YarnClientSchedulerBackend: Launching task 2 on executor id: 10 hostname: 192.168.3.13.
      16/06/02 21:36:16 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on hw-node3:45924 (size: 1964.0 B, free: 4.4 GB)
      16/06/02 21:36:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 1934 ms on 192.168.3.13 (1/1)
      16/06/02 21:36:17 INFO cluster.YarnScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool
      16/06/02 21:36:17 INFO scheduler.DAGScheduler: ResultStage 2 (processCmd at CliDriver.java:376) finished in 1.937 s
      16/06/02 21:36:17 INFO scheduler.DAGScheduler: Job 2 finished: processCmd at CliDriver.java:376, took 1.962631 s
      Time taken: 2.027 seconds
      16/06/02 21:36:17 INFO CliDriver: Time taken: 2.027 seconds
      spark-sql> DROP TABLE IF EXISTS ${hiveconf:RESULT_TABLE};
      16/06/02 21:36:36 INFO execution.SparkSqlParser: Parsing command: DROP TABLE IF EXISTS ${hiveconf:RESULT_TABLE}
      Error in query:
      mismatched input '$' expecting {'ADD', 'AS', 'ALL', 'GROUP', 'BY', 'GROUPING', 'SETS', 'CUBE', 'ROLLUP', 'ORDER', 'LIMIT', 'AT', 'IN', 'NO', 'EXISTS', 'BETWEEN', 'LIKE', RLIKE, 'IS', 'NULL', 'TRUE', 'FALSE', 'NULLS', 'ASC', 'DESC', 'FOR', 'OUTER', 'LATERAL', 'WINDOW', 'OVER', 'PARTITION', 'RANGE', 'ROWS', 'PRECEDING', 'FOLLOWING', 'CURRENT', 'ROW', 'WITH', 'VALUES', 'CREATE', 'TABLE', 'VIEW', 'REPLACE', 'INSERT', 'DELETE', 'INTO', 'DESCRIBE', 'EXPLAIN', 'FORMAT', 'LOGICAL', 'CODEGEN', 'SHOW', 'TABLES', 'COLUMNS', 'COLUMN', 'USE', 'PARTITIONS', 'FUNCTIONS', 'DROP', 'TO', 'TABLESAMPLE', 'ALTER', 'RENAME', 'ARRAY', 'MAP', 'STRUCT', 'COMMENT', 'SET', 'RESET', 'DATA', 'START', 'TRANSACTION', 'COMMIT', 'ROLLBACK', 'IF', 'PERCENT', 'BUCKET', 'OUT', 'OF', 'SORT', 'CLUSTER', 'DISTRIBUTE', 'OVERWRITE', 'TRANSFORM', 'REDUCE', 'USING', 'SERDE', 'SERDEPROPERTIES', 'RECORDREADER', 'RECORDWRITER', 'DELIMITED', 'FIELDS', 'TERMINATED', 'COLLECTION', 'ITEMS', 'KEYS', 'ESCAPED', 'LINES', 'SEPARATED', 'EXTENDED', 'REFRESH', 'CLEAR', 'CACHE', 'UNCACHE', 'LAZY', 'FORMATTED', TEMPORARY, 'OPTIONS', 'UNSET', 'TBLPROPERTIES', 'DBPROPERTIES', 'BUCKETS', 'SKEWED', 'STORED', 'DIRECTORIES', 'LOCATION', 'EXCHANGE', 'ARCHIVE', 'UNARCHIVE', 'FILEFORMAT', 'TOUCH', 'COMPACT', 'CONCATENATE', 'CHANGE', 'CASCADE', 'RESTRICT', 'CLUSTERED', 'SORTED', 'PURGE', 'INPUTFORMAT', 'OUTPUTFORMAT', DATABASES, 'DFS', 'TRUNCATE', 'ANALYZE', 'COMPUTE', 'LIST', 'STATISTICS', 'PARTITIONED', 'EXTERNAL', 'DEFINED', 'REVOKE', 'GRANT', 'LOCK', 'UNLOCK', 'MSCK', 'REPAIR', 'EXPORT', 'IMPORT', 'LOAD', 'ROLE', 'ROLES', 'COMPACTIONS', 'PRINCIPALS', 'TRANSACTIONS', 'INDEX', 'INDEXES', 'LOCKS', 'OPTION', 'LOCAL', 'INPATH', IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 21)
      
      == SQL ==
      DROP TABLE IF EXISTS ${hiveconf:RESULT_TABLE}
      ---------------------^^^
      
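      For reference, the variable substitution the CLI is expected to perform before handing the statement to the parser (which the error above shows is not happening) can be sketched as follows. This is a minimal illustration of Hive-style `${hiveconf:NAME}` expansion, not Spark's actual implementation; the function name and regex here are hypothetical.

      ```python
      import re

      def substitute_hiveconf(sql: str, hiveconf: dict) -> str:
          """Replace ${hiveconf:NAME} placeholders with values supplied via
          --hiveconf, mimicking Hive-style variable substitution.
          Placeholders with no matching variable are left untouched."""
          def repl(match):
              name = match.group(1)
              return hiveconf.get(name, match.group(0))
          return re.sub(r"\$\{hiveconf:([^}]+)\}", repl, sql)

      # With RESULT_TABLE=test_result01 (as passed on the command line above),
      # the failing statement should reach the parser fully expanded:
      conf = {"RESULT_TABLE": "test_result01"}
      print(substitute_hiveconf("DROP TABLE IF EXISTS ${hiveconf:RESULT_TABLE}", conf))
      # → DROP TABLE IF EXISTS test_result01
      ```

      Because the 2.0.0 CLI skips this expansion step, the literal `${hiveconf:RESULT_TABLE}` text reaches SparkSqlParser, which rejects the `$` character.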


          People

            Assignee: Cheng Hao
            Reporter: Yi Zhou
            Votes: 0
            Watchers: 4
