Description
If I just use spark-class to try to run an example on YARN, it errors out because of the echo added to compute_classpath.sh under the Hive assembly check:
echo "Hive assembly found, including hive support. If this isn't desired run sbt hive/clean."
Because that echo goes to stdout, its text is captured along with the classpath, so the launched command looks like:
exec /home/y/share/yjava_jdk/java//bin/java -cp Hive assembly found, including hive support. If this isn't desired run sbt hive/clean.
/home/user1/user1cs-spark/assembly/target/scala-2.10/spark-assembly-1.0.0-SNAPSHOT-hadoop0.23.10.jar:/home/user1/user1cs-spark/conf:/home/user1/user1cs-spark/lib_managed/jars/datanucleus-api-jdo-3.2.1.jar:/home/user1/user1cs-spark/lib_managed/jars/datanucleus-core-3.2.2.jar:/home/user1/user1cs-spark/lib_managed/jars/datanucleus-rdbms-3.2.1.jar:/home/user1/user1cs-spark/sql/hive/target/scala-2.10//spark-hive-assembly-1.0.0-SNAPSHOT-hadoop0.23.10.jar:/home/user1/yarn_install//conf -Djava.library.path= -Xms512m -Xmx512m org.apache.spark.deploy.yarn.Client --jar examples/target/scala-2.10/spark-examples-assembly-1.0.0-SNAPSHOT.jar --class org.apache.spark.examples.SparkPi --args yarn-cluster
Issue Links
- relates to SPARK-1251: Support for optimizing and executing structured queries (Resolved)