Spark / SPARK-27648

In Spark 2.4 Structured Streaming: the executor storage memory increases over time


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 2.4.0
    • Fix Version/s: None
    • Component/s: Structured Streaming
    • Labels: None

    Description

      Business logic of the Spark program:
      Read a topic from Kafka, aggregate the streaming data, and write the result to another Kafka topic.
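
      For context, below is a minimal sketch of such a Kafka-to-Kafka aggregation in Structured Streaming. The broker address, topic names, checkpoint path and the hourly count are placeholders for illustration only, not the actual job code.

      {code:scala}
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.functions._

      object HourAggSketch {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().appName("hour-agg-sketch").getOrCreate()
          import spark.implicits._

          // Read the source topic from Kafka (broker and topic names are placeholders).
          val source = spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker1:9092")
            .option("subscribe", "input-topic")
            .load()

          // Hourly count per key, as a stand-in for the real aggregation; the
          // watermark bounds how long old aggregation state is kept.
          val agg = source
            .selectExpr("CAST(key AS STRING) AS key", "timestamp")
            .withWatermark("timestamp", "1 hour")
            .groupBy(window($"timestamp", "1 hour"), $"key")
            .count()

          // Write the aggregated rows back to another Kafka topic.
          val query = agg
            .selectExpr("CAST(key AS STRING) AS key", "CAST(count AS STRING) AS value")
            .writeStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker1:9092")
            .option("topic", "output-topic")
            .option("checkpointLocation", "/tmp/hour-agg-checkpoint")
            .outputMode("update")
            .start()

          query.awaitTermination()
        }
      }
      {code}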

      Problem Description:
      1) When using Spark Structured Streaming in a CDH environment (Spark 2.2), out-of-memory problems occurred frequently, because too many versions of state were kept in memory (this has been addressed in Spark 2.4; see the configuration sketch after this item).

      {code}
      /spark-submit \
      --conf "spark.yarn.executor.memoryOverhead=4096M" \
      --num-executors 15 \
      --executor-memory 3G \
      --executor-cores 2 \
      --driver-memory 6G
      {code}
      

      With these submit resources under Spark 2.2, executor memory exceptions occurred and the job normally could not run for more than one day.

      The solution was to set the executor memory much larger than before.

      My spark-submit script is as follows:

      {code}
      /spark-submit \
      --conf "spark.yarn.executor.memoryOverhead=4096M" \
      --num-executors 15 \
      --executor-memory 46G \
      --executor-cores 3 \
      --driver-memory 6G \
      ...
      {code}

      In this case, the Spark program ran stably for a long time, and the executor storage memory stayed below 10 MB (it has now been running stably for more than 20 days).
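
      As a side note on the state-version issue mentioned in 1): in Spark 2.4 the number of state-store versions that executors keep in memory can reportedly be tuned through the SQL streaming state-store settings. The sketch below is only an illustration; the configuration names and default values are taken from my reading of the Spark 2.4 documentation and should be verified against your version, they are not part of the original job.

      {code:scala}
      import org.apache.spark.sql.SparkSession

      // Sketch only: assumes the default HDFS-backed state store provider.
      val spark = SparkSession.builder()
        .appName("hour-agg-sketch")
        // Minimum number of batches whose state must remain recoverable (default 100).
        .config("spark.sql.streaming.minBatchesToRetain", "100")
        // Spark 2.4: cap on state-store versions cached in executor memory (default 2).
        .config("spark.sql.streaming.maxBatchesToRetainInMemory", "2")
        .getOrCreate()
      {code}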

      2) From the Spark 2.4 upgrade notes, we can see that the problem of high memory consumption by state storage has been solved in Spark 2.4.
      So we upgraded to Spark 2.4 under CDH, ran the Spark program again, and found that memory use was indeed reduced.
      But a new problem arose: as the running time increases, the executor storage memory keeps growing (see Executors -> Storage Memory in the Spark UI, reached via the YARN ResourceManager UI).
      The program has now been running for 14 days (under Spark 2.2, with the same submit resources, it normally could not run for more than one day before executor memory exceptions occurred).
      The submit script used under Spark 2.4 is as follows:

      {code}
      /spark-submit \
      --conf "spark.yarn.executor.memoryOverhead=4096M" \
      --num-executors 15 \
      --executor-memory 3G \
      --executor-cores 2 \
      --driver-memory 6G
      {code}
      

      Under Spark 2.4, I recorded the executor storage memory as time went by while the Spark program was running:

      Run time (hours)   Storage memory (used / total)   Growth rate (MB/hour)
      23.5               41.6 MB / 1.5 GB                1.770
      108.4              460.2 MB / 1.5 GB               4.245
      131.7              559.1 MB / 1.5 GB               4.245
      135.4              575.0 MB / 1.5 GB               4.247
      153.6              641.2 MB / 1.5 GB               4.174
      219.0              888.1 MB / 1.5 GB               4.055
      263.0              1126.4 MB / 1.5 GB              4.283
      309.0              1228.8 MB / 1.5 GB              3.977
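
      (The growth-rate column is simply the used storage memory divided by the elapsed run time; for example, 460.2 MB / 108.4 h ≈ 4.25 MB/hour, i.e. roughly 4 MB of executor storage memory accumulates per hour of running.)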

      Attachments

        1. image-2019-06-02-19-43-21-652.png (446 kB, tommy duan)
        2. image-2019-05-27-10-10-30-460.png (334 kB, tommy duan)
        3. image-2019-05-24-10-20-25-723.png (517 kB, tommy duan)
        4. image-2019-05-10-17-49-42-034.png (61 kB, tommy duan)
        5. image-2019-05-09-17-51-14-036.png (361 kB, tommy duan)
        6. houragg(1).out (7.54 MB, tommy duan)
        7. houragg_with_state1_state2.xlsx (83 kB, tommy duan)
        8. houragg_with_state1_state2.csv (76 kB, tommy duan)
        9. houragg_filter.csv (45 kB, tommy duan)


            People

              Assignee: Unassigned
              Reporter: tommy duan (yy3b2007com)
              Votes: 0
              Watchers: 7
