Mesos
MESOS-2985

Wrong spark.executor.memory when using different EC2 master and worker machine types


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix

    Description

      (this is a mirror of SPARK-8726)

By default, spark.executor.memory is set to min(slave_ram_kb, master_ram_kb). When the master and the workers use the same instance type this goes unnoticed, but when they differ (which is reasonable, since the master cannot be a spot instance and a big master machine would waste resources) the default amount of memory given to each worker is capped at the RAM available on the master. For example, in a cluster with an m1.small master (1.7GB RAM) and one m1.large worker (7.5GB RAM), spark.executor.memory ends up set to 512MB.
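The capping behavior described above can be sketched as follows. This is an illustrative reconstruction, not the actual spark-ec2 code: the function name, the RAM table, and the headroom tiering are assumptions chosen to reproduce the 512MB result from the example.

```python
# Hypothetical sketch of the buggy default spark.executor.memory
# computation described in this issue (names are illustrative).

# Approximate RAM in MB for the EC2 instance types from the example.
INSTANCE_RAM_MB = {
    "m1.small": 1740,   # ~1.7 GB
    "m1.large": 7680,   # ~7.5 GB
}

def default_executor_memory_mb(master_type, worker_type):
    """Return the default executor memory in MB.

    The min() over master and worker RAM is the bug: the master's RAM
    should not constrain how much memory worker executors get.
    """
    ram_mb = min(INSTANCE_RAM_MB[master_type], INSTANCE_RAM_MB[worker_type])
    # Assumed headroom rule: leave room for the OS and Spark daemons,
    # falling back to a small fixed executor size on tiny machines.
    if ram_mb <= 2048:
        return 512
    return ram_mb - 1024

# Mixed cluster: the m1.small master caps the m1.large worker at 512MB.
print(default_executor_memory_mb("m1.small", "m1.large"))  # → 512
# Uniform m1.large cluster: the worker's RAM is used as expected.
print(default_executor_memory_mb("m1.large", "m1.large"))  # → 6656
```

Under this sketch, the fix would be to derive the default from the worker instance type alone rather than taking the minimum across master and workers.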


People

    Assignee: Unassigned
    Reporter: Stefano Parmesan (parmesan)
