Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Cannot Reproduce
- Affects Version/s: 2.3.0
- Fix Version/s: None
- Component/s: None
Description
The startup log reports the warehouse dir as file:/root/create/spark/spark-2.3.0-SNAPSHOT-bin-2.6.5/spark-warehouse, but when a table is created it actually ends up under /user/hive/warehouse (the create_table call below uses location file:/user/hive/warehouse/t).
[root@wangyuming01 spark-2.3.0-SNAPSHOT-bin-2.6.5]# bin/spark-sql
17/09/22 21:32:40 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (org.apache.hadoop.conf.Configuration).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/09/22 21:32:45 INFO SparkContext: Running Spark version 2.3.0-SNAPSHOT
17/09/22 21:32:45 INFO SparkContext: Submitted application: SparkSQL::192.168.77.55
17/09/22 21:32:45 INFO SecurityManager: Changing view acls to: root
17/09/22 21:32:45 INFO SecurityManager: Changing modify acls to: root
17/09/22 21:32:45 INFO SecurityManager: Changing view acls groups to:
17/09/22 21:32:45 INFO SecurityManager: Changing modify acls groups to:
17/09/22 21:32:45 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
17/09/22 21:32:45 INFO Utils: Successfully started service 'sparkDriver' on port 43676.
17/09/22 21:32:45 INFO SparkEnv: Registering MapOutputTracker
17/09/22 21:32:45 INFO SparkEnv: Registering BlockManagerMaster
17/09/22 21:32:45 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/09/22 21:32:45 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/09/22 21:32:45 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-f536509f-4e3e-4e08-ae7b-8d9499f8e4a4
17/09/22 21:32:45 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
17/09/22 21:32:45 INFO SparkEnv: Registering OutputCommitCoordinator
17/09/22 21:32:45 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/09/22 21:32:45 INFO Utils: Successfully started service 'SparkUI' on port 4041.
17/09/22 21:32:45 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://wangyuming01:4041
17/09/22 21:32:45 INFO Executor: Starting executor ID driver on host localhost
17/09/22 21:32:45 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44426.
17/09/22 21:32:45 INFO NettyBlockTransferService: Server created on wangyuming01:44426
17/09/22 21:32:45 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/09/22 21:32:45 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, wangyuming01, 44426, None)
17/09/22 21:32:45 INFO BlockManagerMasterEndpoint: Registering block manager wangyuming01:44426 with 366.3 MB RAM, BlockManagerId(driver, wangyuming01, 44426, None)
17/09/22 21:32:45 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, wangyuming01, 44426, None)
17/09/22 21:32:45 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, wangyuming01, 44426, None)
17/09/22 21:32:45 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/root/create/spark/spark-2.3.0-SNAPSHOT-bin-2.6.5/spark-warehouse').
17/09/22 21:32:45 INFO SharedState: Warehouse path is 'file:/root/create/spark/spark-2.3.0-SNAPSHOT-bin-2.6.5/spark-warehouse'.
17/09/22 21:32:46 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
17/09/22 21:32:46 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.2) is file:/root/create/spark/spark-2.3.0-SNAPSHOT-bin-2.6.5/spark-warehouse
17/09/22 21:32:46 INFO metastore: Mestastore configuration hive.metastore.warehouse.dir changed from /user/hive/warehouse to file:/root/create/spark/spark-2.3.0-SNAPSHOT-bin-2.6.5/spark-warehouse
17/09/22 21:32:46 INFO HiveMetaStore: 0: Shutting down the object store...
17/09/22 21:32:46 INFO audit: ugi=root ip=unknown-ip-addr cmd=Shutting down the object store...
17/09/22 21:32:46 INFO HiveMetaStore: 0: Metastore shutdown complete.
17/09/22 21:32:46 INFO audit: ugi=root ip=unknown-ip-addr cmd=Metastore shutdown complete.
17/09/22 21:32:46 INFO HiveMetaStore: 0: get_database: default
17/09/22 21:32:46 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
17/09/22 21:32:46 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/09/22 21:32:46 INFO ObjectStore: ObjectStore, initialize called
17/09/22 21:32:46 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
17/09/22 21:32:46 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
17/09/22 21:32:46 INFO ObjectStore: Initialized ObjectStore
17/09/22 21:32:46 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.2) is file:/root/create/spark/spark-2.3.0-SNAPSHOT-bin-2.6.5/spark-warehouse
17/09/22 21:32:46 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
spark-sql> create table t(id int);
17/09/22 21:32:51 INFO HiveMetaStore: 0: get_database: global_temp
17/09/22 21:32:51 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: global_temp
17/09/22 21:32:51 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
17/09/22 21:32:51 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.2) is file:/root/create/spark/spark-2.3.0-SNAPSHOT-bin-2.6.5/spark-warehouse
17/09/22 21:32:52 INFO HiveMetaStore: 0: get_database: default
17/09/22 21:32:52 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
17/09/22 21:32:52 INFO HiveMetaStore: 0: get_database: default
17/09/22 21:32:52 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
17/09/22 21:32:52 INFO HiveMetaStore: 0: get_table : db=default tbl=t
17/09/22 21:32:52 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=t
17/09/22 21:32:52 INFO HiveMetaStore: 0: get_database: default
17/09/22 21:32:52 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
17/09/22 21:32:52 INFO HiveMetaStore: 0: create_table: Table(tableName:t, dbName:default, owner:root, createTime:1506087171, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null)], location:file:/user/hive/warehouse/t, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"id","type":"integer","nullable":true,"metadata":{}}]}, spark.sql.sources.schema.numParts=1, spark.sql.create.version=2.3.0-SNAPSHOT}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null))
17/09/22 21:32:52 INFO audit: ugi=root ip=unknown-ip-addr cmd=create_table: Table(tableName:t, dbName:default, owner:root, createTime:1506087171, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null)], location:file:/user/hive/warehouse/t, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"id","type":"integer","nullable":true,"metadata":{}}]}, spark.sql.sources.schema.numParts=1, spark.sql.create.version=2.3.0-SNAPSHOT}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null))
17/09/22 21:32:52 WARN HiveMetaStore: Location: file:/user/hive/warehouse/t specified for non-external table:t
17/09/22 21:32:52 INFO FileUtils: Creating directory if it doesn't exist: file:/user/hive/warehouse/t
Time taken: 2.881 seconds
17/09/22 21:32:54 INFO SparkSQLCLIDriver: Time taken: 2.881 seconds
spark-sql>
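For anyone trying to reproduce this, a minimal sketch of how one might compare the warehouse setting Spark reports with the location the metastore actually assigns to a managed table. This is Scala using a plain SparkSession (not the spark-sql CLI); the app name and the table name t_check are illustrative and not part of the original report.

import org.apache.spark.sql.SparkSession

// Illustrative check only; in spark-shell a `spark` session already exists
// and the builder lines below can be skipped.
val spark = SparkSession.builder()
  .appName("warehouse-dir-check")
  .enableHiveSupport()
  .getOrCreate()

// The value SharedState reports at startup (file:/.../spark-warehouse in the log above).
println(spark.conf.get("spark.sql.warehouse.dir"))

// Create a managed table and ask the metastore where it actually placed it.
spark.sql("CREATE TABLE t_check (id INT)")
spark.sql("DESCRIBE FORMATTED t_check")
  .filter("col_name = 'Location'")
  .show(truncate = false)

// Drop the illustrative table again.
spark.sql("DROP TABLE t_check")

If the Location row points at /user/hive/warehouse while spark.sql.warehouse.dir points at the local spark-warehouse directory, that matches the mismatch in the spark-sql log above; the related SPARK-21428 attributes the CLI behaviour to the CliSessionState/IsolatedClientLoader handling.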
Issue Links
- relates to SPARK-21428 CliSessionState never be recognized because of IsolatedClientLoader (Resolved)