SPARK-15531

spark-class tries to use too much memory when running Launcher


    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.6.1, 2.0.0
    • Fix Version/s: 2.0.0
    • Component/s: Deploy
    • Labels:
    • Environment: Linux running in Univa or Sun Grid Engine


      When running Java on a server with a lot of memory but a rather small virtual memory ulimit, Java will try to allocate a large memory pool and fail:

      # System has 128GB of RAM but virtual memory ulimit set to 7.5G
      $ ulimit -v
      $ java -client
      Error occurred during initialization of VM
      Could not reserve enough space for object heap
      Error: Could not create the Java Virtual Machine.
      Error: A fatal exception has occurred. Program will exit.
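
      For context, here is one way to see the heap size the JVM picks by default (run in a shell without the restrictive ulimit); HotSpot ergonomics typically size the maximum heap at about a quarter of physical RAM, so roughly 32GB on this box, which cannot be reserved under a 7.5G virtual memory limit:

      # Run without the ulimit in place: with no -Xmx, HotSpot sizes MaxHeapSize
      # to about 1/4 of physical RAM (~32GB here), which is why the reservation
      # fails once ulimit -v caps the address space at 7.5G.
      $ java -XX:+PrintFlagsFinal -version | grep MaxHeapSize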

      This is a known Java issue, but it is unlikely to be fixed.

      As a result, various Spark processes (spark-submit, master, or workers) fail to start when spark-class tries to run org.apache.spark.launcher.Main.
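
      Until spark-class is fixed, a stopgap sketch (not the proposed fix) is to cap the heap externally via JAVA_TOOL_OPTIONS; the JVM reads it before the command line, so the explicit -Xmx values that other Spark scripts pass still take precedence:

      # Stopgap sketch: the JVM picks up JAVA_TOOL_OPTIONS and treats it as if
      # prepended to the command line, so any explicit -Xmx passed later wins.
      $ export JAVA_TOOL_OPTIONS=-Xmx128m
      $ spark-submit --version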

      To fix this, add -Xmx128m to this line in spark-class:

      "$RUNNER" -Xmx128m -cp "$LAUNCH_CLASSPATH" org.apache.spark.launcher.Main "$\@"


      We've been using 128m and that works in our setup. Considering all the launcher does is analyze the arguments and environment variables and spit out a command, it should be plenty. All other calls to Java seem to include some value for -Xmx, so this is not an issue elsewhere.

      I don't mind submitting a PR, but I'm sure somebody has opinions on the 128m value (bigger, smaller, configurable, ...), so I'd rather it be discussed first.
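
      If configurability wins out, a minimal sketch (SPARK_LAUNCHER_MEMORY is a hypothetical name, not an existing Spark variable) would default to 128m while staying overridable:

      # Hypothetical sketch: SPARK_LAUNCHER_MEMORY is an assumed variable name;
      # it defaults to 128m when unset so the launcher JVM heap stays small.
      "$RUNNER" -Xmx"${SPARK_LAUNCHER_MEMORY:-128m}" -cp "$LAUNCH_CLASSPATH" org.apache.spark.launcher.Main "$@"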




            • Assignee:
              srowen Sean Owen
            • Reporter:
              mathieulongtin mathieu longtin
            • Votes:
              0
            • Watchers:
              3


              • Created: