Spark / SPARK-6334

spark.local.dir not getting cleared during ALS


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 1.2.0
    • Fix Version/s: None
    • Component/s: MLlib
    • Labels: None

    Description

      When running a bigger ALS training job, Spark spills huge amounts of temporary data into the local dir (in my case yarn/local/usercache/antony.mayi/appcache/..., running on YARN from CDH 5.3.2), eventually causing all the disks of all nodes to run out of space. In my case I have 12 TB of available disk capacity before kicking off the ALS, but it all gets used, and YARN kills the containers when usage reaches 90%.

      Even with all the recommended options (configuring checkpointing and forcing GC whenever possible), the temporary data still doesn't get cleared.
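
      For reference, a configuration-level workaround sometimes suggested for long-lived Spark 1.x jobs is the periodic cleaner. A minimal sketch, assuming spark.cleaner.ttl (in seconds) is acceptable for the workload, since it also expires metadata of long-lived RDDs:

      from pyspark import SparkConf, SparkContext

      # assumption: Spark 1.x, where spark.cleaner.ttl periodically drops old
      # metadata and shuffle data; too low a value can break long-running jobs
      conf = SparkConf().set('spark.cleaner.ttl', '3600')
      sc = SparkContext(conf=conf)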

      Here is my (pseudo)code (PySpark):

      from pyspark import StorageLevel
      from pyspark.mllib.recommendation import ALS

      sc.setCheckpointDir('/tmp')  # checkpointing truncates the RDD lineage
      training = sc.pickleFile('/tmp/dataset').repartition(768).persist(StorageLevel.MEMORY_AND_DISK)
      model = ALS.trainImplicit(training, 50, 15, lambda_=0.1, blocks=-1, alpha=40)
      sc._jvm.System.gc()  # force a driver-side GC to nudge Spark's shuffle cleanup
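
      For context, Spark's ContextCleaner reclaims shuffle files only after the corresponding driver-side objects become unreachable and a GC runs, which is what the System.gc() call above tries to trigger. A minimal follow-up sketch (training.unpersist() is the standard RDD method; it frees the persisted blocks but not the shuffle output ALS produces internally):

      training.unpersist()  # releases the MEMORY_AND_DISK blocks after training
      sc._jvm.System.gc()   # GC the driver so ContextCleaner can drop unreferenced shuffle files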
      

      The training RDD has about 3.5 billion items (~60 GB on disk). After about 6 hours the ALS consumes all 12 TB of disk space with local-dir data and gets killed. My cluster has 192 cores and 1.5 TB of RAM; for this task I am using 37 executors with 4 cores and 28+4 GB of RAM each.
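
      For scale, a rough back-of-the-envelope check using only the numbers above (the assumption that the spill accrues evenly across the 15 iterations is illustrative, not measured):

      dataset_gb, partitions = 60.0, 768
      print(dataset_gb * 1024 / partitions)  # ~80 MB of input per partition
      disk_tb, iterations = 12.0, 15
      print(disk_tb * 1024 / iterations)     # ~820 GB of uncleared spill per iteration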

      This is the graph of the disk consumption pattern (attachment als-diskusage.png), showing the space being eaten up from 7% to 90% during the ALS run (90% is when YARN kills the containers):

      Attachments

        1. als-diskusage.png (11 kB, Antony Mayi)
        2. gc.png (12 kB, Antony Mayi)



            People

              Assignee: Unassigned
              Reporter: Antony Mayi (antonymayi)
              Votes: 0
              Watchers: 6
