SPARK-8726: Wrong spark.executor.memory when using different EC2 master and worker machine types


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.4.0
    • Fix Version/s: 1.4.0
    • Component/s: EC2
    • Labels: None

      Description

      (this is a mirror of MESOS-2985)

      By default, spark.executor.memory is set to min(slave_ram_kb, master_ram_kb). When the master and the workers use the same instance type this goes unnoticed, but when they use different types (a sensible setup, since the master cannot be a spot instance and a large master machine would waste resources), the default amount of memory given to each executor is capped by the RAM available on the master. For example, in a cluster with an m1.small master (1.7GB RAM) and one m1.large worker (7.5GB RAM), spark.executor.memory ends up set to 512MB.
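
      The report traces the default to min(slave_ram_kb, master_ram_kb). The sketch below (Python; the names INSTANCE_RAM_GB and default_executor_memory_mb and the 1GB-headroom formula are illustrative assumptions, not the actual spark-ec2 code) contrasts that min()-based default with sizing executors from the worker's RAM alone.

      {code:python}
# Minimal sketch of the sizing bug; all names and the headroom formula are
# illustrative assumptions, not the real spark-ec2 implementation.

# Hypothetical RAM table (GB) for the instance types in the example above.
INSTANCE_RAM_GB = {
    "m1.small": 1.7,
    "m1.large": 7.5,
}

def default_executor_memory_mb(ram_gb):
    """Illustrative default: leave ~1GB of headroom, never go below 512MB."""
    return max(512, int((ram_gb - 1.0) * 1024))

def buggy_default(master_type, worker_type):
    # Behavior described in the report: the default is derived from
    # min(master RAM, worker RAM), so a small master caps every executor.
    ram_gb = min(INSTANCE_RAM_GB[master_type], INSTANCE_RAM_GB[worker_type])
    return default_executor_memory_mb(ram_gb)

def fixed_default(master_type, worker_type):
    # Intended behavior: size executors from the worker's RAM only.
    return default_executor_memory_mb(INSTANCE_RAM_GB[worker_type])

# m1.small master + m1.large worker: the buggy value is capped by the master,
# while the fixed value reflects the worker's 7.5GB.
print(buggy_default("m1.small", "m1.large"))
print(fixed_default("m1.small", "m1.large"))
{code}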


            People

            • Assignee: Stefano Parmesan
            • Reporter: Stefano Parmesan
            • Votes: 0
            • Watchers: 1
