
[SPARK-7214] Unrolling never evicts blocks when MemoryStore is nearly full

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.6.0
    • Component/s: Block Manager, Spark Core
    • Labels: None
    • Target Version/s:

    Description

    When less than spark.storage.unrollMemoryThreshold (default 1 MB) of free space remains in the MemoryStore, new blocks computed through unrollSafely (e.g. any cached RDD split) will always fail to unroll, even if old blocks could be dropped to make room for them.
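
    The following is a minimal, self-contained sketch of the problem, using a simplified model of the store (the class SimpleMemoryStore and its methods are hypothetical stand-ins, not Spark's actual internals). The point it illustrates: a reservation that only checks free space fails outright once free memory drops below the threshold, whereas one that may first evict cached blocks succeeds.

    {code:scala}
    import scala.collection.mutable

    object UnrollSketch {
      final case class Block(id: String, size: Long)

      // Hypothetical stand-in for the MemoryStore; tracks cached blocks plus
      // memory reserved for in-progress unrolling.
      class SimpleMemoryStore(maxMemory: Long) {
        private val blocks = mutable.LinkedHashMap.empty[String, Block]
        private var unrollMemory = 0L

        def freeMemory: Long =
          maxMemory - blocks.valuesIterator.map(_.size).sum - unrollMemory

        def put(b: Block): Unit = blocks(b.id) = b

        // Buggy behavior: the initial unroll reservation only checks free
        // space and never drops cached blocks, so it fails whenever
        // freeMemory < amount even though evictable blocks exist.
        def reserveUnrollMemoryBuggy(amount: Long): Boolean =
          if (freeMemory >= amount) { unrollMemory += amount; true } else false

        // Desired behavior: evict cached blocks (oldest first) to cover the
        // shortfall before giving up on the reservation.
        def reserveUnrollMemoryFixed(amount: Long): Boolean = {
          if (freeMemory < amount) evictToFree(amount - freeMemory)
          reserveUnrollMemoryBuggy(amount)
        }

        private def evictToFree(needed: Long): Unit = {
          var freed = 0L
          while (freed < needed && blocks.nonEmpty) {
            val (id, b) = blocks.head // oldest block in insertion order
            blocks.remove(id)
            freed += b.size
          }
        }
      }

      def main(args: Array[String]): Unit = {
        val threshold = 1L << 20                     // 1 MB initial unroll reservation
        val store = new SimpleMemoryStore(10L << 20) // 10 MB of storage memory
        store.put(Block("rdd_0_0", 10L << 20))       // store is now completely full

        println(store.reserveUnrollMemoryBuggy(threshold)) // false: < 1 MB free, never evicts
        println(store.reserveUnrollMemoryFixed(threshold)) // true: drops rdd_0_0, then reserves
      }
    }
    {code}

    The actual fix that shipped in 1.6.0 is structured differently, but the effect is the same: acquiring unroll memory can drop existing blocks instead of failing immediately.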

    Attachments

    Activity

    People

    • Assignee: andrewor14 Andrew Or
    • Reporter: woggle Charles Reiss
    • Votes: 0
    • Watchers: 3

    Dates

    • Created:
    • Updated:
    • Resolved: