SPARK-6491: Spark will put the current working dir to the CLASSPATH

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.3.0
    • Fix Version/s: 1.3.1
    • Component/s: Spark Submit
    • Labels:
      None

      Description

      When running "bin/compute-classpath.sh", the output is:

      :/spark/conf:/spark/assembly/target/scala-2.10/spark-assembly-1.3.0-hadoop2.5.0-cdh5.2.0.jar:/spark/lib_managed/jars/datanucleus-rdbms-3.2.9.jar:/spark/lib_managed/jars/datanucleus-api-jdo-3.2.6.jar:/spark/lib_managed/jars/datanucleus-core-3.2.10.jar

      Because the classpath begins with ":", the JVM treats the empty entry before the first ":" as the current working directory and adds it to the CLASSPATH, which Spark users do not expect.
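
      As a rough illustration (a hand-written sketch, not the actual compute-classpath.sh logic; the append pattern and SomeMainClass are placeholders), an unguarded shell append is one way such a leading ":" appears, and the JVM then resolves the empty entry to the current working directory:

      # Hypothetical sketch, not the real compute-classpath.sh code.
      # Unguarded append: when CLASSPATH starts out empty, the result begins with ":".
      CLASSPATH="$CLASSPATH:$SPARK_HOME/conf"                  # -> ":/spark/conf"

      # The JVM treats the empty entry before the first ":" as ".", so files in the
      # launch directory become silently visible on the classpath:
      java -cp ":/spark/conf" SomeMainClass                    # also sees ./core-site.xml

      # A guarded append avoids the empty entry:
      CLASSPATH="${CLASSPATH:+$CLASSPATH:}$SPARK_HOME/conf"    # -> "/spark/conf"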

      For example, if I start spark-shell from /root and a "core-site.xml" exists under /root/, Spark uses that file as the Hadoop configuration, even though I have already set HADOOP_CONF_DIR=/etc/hadoop/conf.
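
      A hedged reproduction sketch of that scenario (paths follow the description above; the file contents do not matter):

      cd /root                                  # a core-site.xml happens to live here
      export HADOOP_CONF_DIR=/etc/hadoop/conf   # the intended Hadoop configuration
      /spark/bin/spark-shell                    # with the leading ":" on the classpath,
                                                # Hadoop's Configuration picks up
                                                # /root/core-site.xml instead of the
                                                # files under /etc/hadoop/conf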


    People

    • Assignee: Liangliang Gu (marsishandsome)
    • Reporter: Liangliang Gu (marsishandsome)
    • Votes: 0
    • Watchers: 2
