Spark / SPARK-4299

In spark-submit, the driver-memory value is used for the SPARK_SUBMIT_DRIVER_MEMORY value


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 1.1.0
    • Fix Version/s: None
    • Component/s: Spark Core
    • Labels: None

    Description

      In the spark-submit script, the lines below:

      elif [ "$1" = "--driver-memory" ]; then
        export SPARK_SUBMIT_DRIVER_MEMORY=$2

      are wrong: spark-submit is not the process that hosts the driver when you're in yarn-cluster mode. So when I launch spark-submit on a light server with only 2 GB of memory and want to allocate 4 GB to the driver (which will run under the resource manager on a big fat YARN server with, say, 64 GB of RAM), spark-submit fails with an OutOfMemoryError.
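      A minimal sketch of how the argument handling could avoid this: only size the local spark-submit JVM from --driver-memory when the driver actually runs locally (client mode), and leave the value to be forwarded to the cluster-side driver otherwise. The SPARK_SUBMIT_DRIVER_MEMORY variable name comes from the script itself; the parse_args helper and default values here are illustrative, not the actual spark-submit code.

```shell
#!/usr/bin/env sh
# Illustrative sketch: gate the local-JVM memory export on the deploy mode.

DEPLOY_MODE="client"   # spark-submit's default deploy mode
DRIVER_MEMORY=""

parse_args() {
  while [ $# -gt 0 ]; do
    case "$1" in
      --deploy-mode)   DEPLOY_MODE="$2";   shift 2 ;;
      --driver-memory) DRIVER_MEMORY="$2"; shift 2 ;;
      *)               shift ;;
    esac
  done
  # Only the client-mode launcher JVM hosts the driver, so only then
  # should --driver-memory size this local process.
  if [ "$DEPLOY_MODE" = "client" ] && [ -n "$DRIVER_MEMORY" ]; then
    export SPARK_SUBMIT_DRIVER_MEMORY="$DRIVER_MEMORY"
  fi
}

# In yarn-cluster mode the export is skipped, so a 4g driver request no
# longer inflates the small submitting host's JVM.
parse_args --deploy-mode cluster --driver-memory 4g
echo "cluster mode -> SPARK_SUBMIT_DRIVER_MEMORY='${SPARK_SUBMIT_DRIVER_MEMORY:-unset}'"
```

      With this guard, the 4 GB request still reaches the driver container on the YARN side (via the forwarded arguments), while the 2 GB submitting host keeps its default launcher heap.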

      Attachments

        Issue Links

          Activity

            People

              Assignee: Unassigned
              Reporter: Virgile Devaux (vdevaux)
              Votes: 0
              Watchers: 2

              Dates

                Created:
                Updated:
                Resolved:

                Time Tracking

                  Estimated: 0.5h
                  Remaining: 0.5h
                  Logged: Not Specified