[SPARK-3535] Spark on Mesos not correctly setting heap overhead


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.1.0
    • Fix Version/s: 1.1.1, 1.2.0
    • Component/s: Mesos
    • Labels: None

    Description

    Spark on Mesos does not account for any memory overhead. The result is that tasks are OOM-killed nearly 95% of the time.

    As with the Hadoop on Mesos project, Spark should set aside 15-25% of the executor memory for JVM overhead.

    For example, see: https://github.com/mesos/hadoop/blob/master/src/main/java/org/apache/hadoop/mapred/ResourcePolicy.java#L55-L63
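
    For illustration, here is a minimal sketch of how the Mesos resource request could reserve such an overhead on top of the executor heap. The object name, constant names, and the 15% / 384 MB values are hypothetical placeholders in the suggested range, not the actual patch:

        // Hypothetical sketch: request more memory from Mesos than the JVM
        // heap (-Xmx) so that off-heap allocations, thread stacks, and GC
        // bookkeeping do not push the executor past its resource limit.
        object MemoryOverhead {
          // Assumed values in the 15-25% range suggested above.
          val OverheadFraction = 0.15
          val OverheadMinimumMb = 384

          // Total memory (in MB) to request from Mesos for an executor
          // whose JVM heap is executorMemoryMb.
          def totalMemoryMb(executorMemoryMb: Int): Int =
            executorMemoryMb +
              math.max((executorMemoryMb * OverheadFraction).toInt, OverheadMinimumMb)
        }

    With a 4096 MB executor heap this would request 4096 + max(614, 384) = 4710 MB from Mesos, leaving 614 MB of headroom for non-heap JVM usage instead of letting the task be OOM-killed.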


    People

    • Assignee: Brenden Matthews (brenden)
    • Reporter: Brenden Matthews (brenden)
    • Votes: 1
    • Watchers: 5
