SPARK-5395: Large number of Python workers causing resource depletion


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.2.0, 1.3.0
    • Fix Version/s: 1.2.2, 1.3.0
    • Component/s: PySpark
    • Labels: None
    • Environment: AWS ElasticMapReduce

    Description

      During job execution, a large number of Python workers accumulate, eventually causing YARN to kill containers for exceeding their memory allocation (in the case below that is about 8G for executors plus 6G for overhead per container). Each pyspark.daemon process adds its resident memory to the container's total, so the accumulated workers eventually push the container past its limit.

      In this instance, 97 pyspark.daemon processes had accumulated by the time the container was killed (see the process-tree dump below and the counting sketch after it).

      2015-01-23 15:36:53,654 INFO [Reporter] yarn.YarnAllocationHandler (Logging.scala:logInfo(59)) - Container marked as failed: container_1421692415636_0052_01_000030. Exit status: 143. Diagnostics: Container [pid=35211,containerID=container_1421692415636_0052_01_000030] is running beyond physical memory limits. Current usage: 14.9 GB of 14.5 GB physical memory used; 41.3 GB of 72.5 GB virtual memory used. Killing container.
      Dump of the process-tree for container_1421692415636_0052_01_000030 :
      |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
      |- 54101 36625 36625 35211 (python) 78 1 332730368 16834 python -m pyspark.daemon
      |- 52140 36625 36625 35211 (python) 58 1 332730368 16837 python -m pyspark.daemon
      |- 36625 35228 36625 35211 (python) 65 604 331685888 17694 python -m pyspark.daemon
      	[...]
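
      To spot this condition before YARN kills the container, the daemon processes can be counted directly on an executor host. A minimal sketch, assuming a Linux host with pgrep available (not part of the original report):

      import subprocess

      # Count live "python -m pyspark.daemon" processes on this host.
      # pgrep exits non-zero when nothing matches, so treat that as zero.
      try:
          out = subprocess.check_output(["pgrep", "-fc", "pyspark.daemon"])
          count = int(out.decode().strip())
      except subprocess.CalledProcessError:
          count = 0
      print("pyspark.daemon processes:", count)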
      

      The configuration uses 64 containers with 2 cores each.
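
      For context, the figures above roughly correspond to a submission along the following lines. This is a sketch, not the reporter's actual configuration: the overhead value of 6144 MB is an assumption matching the ~6G of overhead mentioned above, and spark.yarn.executor.memoryOverhead is the Spark 1.2-era name of that setting.

      from pyspark import SparkConf, SparkContext

      # 64 executors with 2 cores and an 8 GB heap each, plus YARN memory
      # overhead on top of the heap (value in MB for Spark 1.2).
      conf = (SparkConf()
              .setAppName("example-job")
              .set("spark.executor.instances", "64")
              .set("spark.executor.cores", "2")
              .set("spark.executor.memory", "8g")
              .set("spark.yarn.executor.memoryOverhead", "6144"))
      sc = SparkContext(conf=conf)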

      Full output here: https://gist.github.com/skrasser/e3e2ee8dede5ef6b082c

      Mailing list discussion: https://www.mail-archive.com/user@spark.apache.org/msg20102.html


          People

            Assignee: Davies Liu (davies)
            Reporter: Sven Krasser (skrasser)
            Votes: 3
            Watchers: 7
