Details

Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Description
OS: Windows 10
Spark version: 1.6.1
Java version: 1.8
I am trying to launch a simple Spark job from Eclipse after starting the Spark master and registering one worker. The JavaRDDs are created successfully; however, an NPE is thrown when the collect() operation is executed. These are the steps I performed:
1. Downloaded Spark 1.6.1
2. Built it locally with the 'sbt package' and 'sbt assembly' commands
3. Started the master with 'spark-class org.apache.spark.deploy.master.Master'
4. Started the worker with 'spark-class org.apache.spark.deploy.worker.Worker spark://master:7077 -c 2'
5. Verified in the Spark UI that both master and worker are up and have enough resources
6. Created a Maven project in Eclipse with the Spark dependency
7. Executed the attached "SparkCrud.java" in Eclipse
8. An NPE is thrown; the logs are attached as "Logs.log"
It seems Spark is trying to execute Hadoop binaries, even though I am not using Hadoop anywhere at all. I also tried placing winutils.exe in C:\ and setting the "hadoop.home.dir" system property (as suggested in another JIRA), but that does not seem to have done the trick.
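For reference, a minimal sketch of the workaround attempted above: Hadoop's Shell class resolves the binary as %HADOOP_HOME%\bin\winutils.exe, so the property must point at a directory that contains a "bin" subfolder holding winutils.exe, not at the folder containing the exe itself. The C:\hadoop path below is hypothetical; the property has to be set before the SparkContext is created.

```java
import java.io.File;

public class HadoopHomeWorkaround {
    public static void main(String[] args) {
        // Hypothetical install path -- adjust to wherever winutils.exe actually lives.
        String hadoopHome = "C:\\hadoop";

        // Hadoop's Shell class reads this property (or the HADOOP_HOME env var)
        // and appends "\bin\winutils.exe" to it, so set it before any Spark code runs.
        System.setProperty("hadoop.home.dir", hadoopHome);

        // Sanity check: warn if the binary is not where Hadoop will look for it.
        File winutils = new File(hadoopHome, "bin" + File.separator + "winutils.exe");
        if (!winutils.isFile()) {
            System.err.println("winutils.exe not found at " + winutils.getAbsolutePath());
        }

        // ...only after this point, create the SparkConf / JavaSparkContext...
    }
}
```

If the property points at C:\ itself while winutils.exe sits directly in C:\ (with no bin subfolder), the lookup still fails with the same "null\bin\winutils.exe" error, which may explain why the workaround did not help here.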
Attachments
Issue Links

- duplicates: SPARK-2356 "Exception: Could not locate executable null\bin\winutils.exe in the Hadoop" (Resolved)