Details
Type: IT Help
Status: Resolved
Priority: Major
Resolution: Invalid
Affects Version/s: 2.3.0
Fix Version/s: None
Component/s: None
Environment: python 2.7, java jdk 10
Description
Hi all,
I am new to Spark and PySpark. I tried to load a local CSV file with the function spark.read.csv(), but I got the following traceback:
>>> df=spark.read.csv("/Users/jzeng/employee.txt")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jzeng/spark-2.3.0-bin-hadoop2.7/python/pyspark/sql/readwriter.py", line 439, in csv
return self._df(self._jreader.csv(self._spark._sc._jvm.PythonUtils.toSeq(path)))
File "/Users/jzeng/spark-2.3.0-bin-hadoop2.7/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
File "/Users/jzeng/spark-2.3.0-bin-hadoop2.7/python/pyspark/sql/utils.py", line 79, in deco
raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u'Unable to locate hive jars to connect to metastore. Please set spark.sql.hive.metastore.jars.'
Do you have any suggestions on how to solve this problem?
Thanks.
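A likely explanation, given the environment listed above (this is an assumption, not confirmed in the report): Spark 2.3.x only supports Java 8, and running it under JDK 10 can make the Hive client loader fail, which surfaces as the "Unable to locate hive jars to connect to metastore" error even for a plain CSV read. A minimal sketch of two workarounds, assuming macOS (the `/Users/jzeng` paths suggest it) and a JDK 8 installation being present:

```shell
# Sketch 1: point Spark at Java 8 before launching pyspark.
# /usr/libexec/java_home is a standard macOS utility; on Linux, set
# JAVA_HOME to a JDK 8 install path instead.
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)
pyspark

# Sketch 2: bypass the Hive metastore entirely, since reading a CSV
# does not need Hive. spark.sql.catalogImplementation is a real Spark
# config key; "in-memory" selects the non-Hive catalog.
pyspark --conf spark.sql.catalogImplementation=in-memory
```

Either approach avoids the Hive jar lookup that the traceback shows failing; the first addresses the JDK mismatch directly, the second sidesteps Hive support altogether.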