Description
I'm trying to use DataFrames with Spark 2.0. It was built with Hive support, but when I try to run a DataFrame command I get this error:
16/05/31 20:24:21 ERROR ApplicationMaster: User class threw exception: java.lang.IllegalArgumentException: Wrong FS: file:/grid/2/tmp/yarn-local/usercache/tgraves/appcache/application_1464289177693_1036410/container_e14_1464289177693_1036410_01_000001/spark-warehouse, expected: hdfs://nn1.com:8020
java.lang.IllegalArgumentException: Wrong FS: file:/grid/2/tmp/yarn-local/usercache/tgraves/appcache/application_1464289177693_1036410/container_e14_1464289177693_1036410_01_000001/spark-warehouse, expected: hdfs://nn1.com:8020
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:648)
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:194)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1880)
at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.liftedTree1$1(InMemoryCatalog.scala:123)
at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.createDatabase(InMemoryCatalog.scala:122)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:142)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:84)
at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:94)
at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:94)
at org.apache.spark.sql.internal.SessionState$$anon$1.<init>(SessionState.scala:110)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:110)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:109)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:62)
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:371)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:154)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:419)
at yahoo.spark.SparkFlickrLargeJoin$.main(SparkFlickrLargeJoin.scala:26)
at yahoo.spark.SparkFlickrLargeJoin.main(SparkFlickrLargeJoin.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:617)
It seems https://issues.apache.org/jira/browse/SPARK-15565 changed the default warehouse location to the local filesystem. Even before that change it didn't work, just with a different error: https://issues.apache.org/jira/browse/SPARK-15034.
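Until this is fixed, one possible workaround is to point spark.sql.warehouse.dir at an HDFS location explicitly when building the SparkSession, so the catalog does not fall back to a local file: path inside the YARN container. This is a sketch only; the namenode URI comes from the error above, but the warehouse path under it is an assumed example:

```scala
import org.apache.spark.sql.SparkSession

// Workaround sketch: set the warehouse dir to an explicit HDFS URI so the
// catalog's mkdirs call targets the same filesystem the cluster expects.
// The path /user/tgraves/spark-warehouse is a placeholder; substitute a
// directory your application user can write to.
val spark = SparkSession.builder()
  .appName("warehouse-dir-workaround")
  .config("spark.sql.warehouse.dir",
    "hdfs://nn1.com:8020/user/tgraves/spark-warehouse")
  .enableHiveSupport()
  .getOrCreate()
```

The same setting can be passed on the command line with --conf spark.sql.warehouse.dir=... at spark-submit time instead of in code.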
Issue Links
- duplicates: SPARK-15659 Ensure FileSystem is gotten from path in InMemoryCatalog (Resolved)
- is broken by: SPARK-15565 The default value of spark.sql.warehouse.dir needs to explicitly point to local filesystem (Resolved)