SPARK-1912

Compression memory issue during reduce


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.9.2, 1.0.1, 1.1.0
    • Component/s: Spark Core
    • Labels: None

      Description

      When we need to read a compressed block, we first create a compression stream instance (LZF or Snappy) and use it to wrap that block.
      Say a reducer task needs to read 1000 local shuffle blocks: it prepares all 1000 blocks up front, which means creating 1000 compression stream instances to wrap them. Each compression stream allocates some buffer memory on initialization, so keeping many instances alive at the same time is a problem.
      The reducer actually reads the shuffle blocks one by one, so why create all the compression streams up front? We could do it lazily: create the compression stream for a block only when that block is first read.
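      A minimal sketch of the idea in Scala. The BlockFetcher trait and fetchRaw method are hypothetical stand-ins, not Spark's actual shuffle API; compressedInputStream only mirrors the shape of Spark's CompressionCodec interface:
      {code:scala}
      import java.io.InputStream

      // Hypothetical stand-ins for the shuffle fetcher and codec interfaces.
      trait BlockFetcher { def fetchRaw(blockId: String): InputStream }
      trait CompressionCodec { def compressedInputStream(s: InputStream): InputStream }

      // Eager wrapping (current behaviour): every block is wrapped up front,
      // so all compression streams, and their internal buffers, exist at once.
      def eagerStreams(ids: Seq[String], fetcher: BlockFetcher,
                       codec: CompressionCodec): Seq[InputStream] =
        ids.map(id => codec.compressedInputStream(fetcher.fetchRaw(id)))

      // Lazy wrapping (proposed): the Iterator defers both the fetch and the
      // wrap until the reducer actually asks for the next block, so only one
      // compression stream needs to be live at a time.
      def lazyStreams(ids: Seq[String], fetcher: BlockFetcher,
                      codec: CompressionCodec): Iterator[InputStream] =
        ids.iterator.map(id => codec.compressedInputStream(fetcher.fetchRaw(id)))
      {code}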


    People

    • Assignee: cloud_fan Wenchen Fan
    • Reporter: cloud_fan Wenchen Fan
    • Votes: 0
    • Watchers: 3
