Spark > SPARK-33235 Push-based Shuffle Improvement Tasks > SPARK-33331

Limit the number of pending blocks in memory and store blocks that collide


    Details

    • Type: Sub-task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.1.0
    • Fix Version/s: None
    • Component/s: Shuffle
    • Labels:
      None

      Description

      This JIRA addresses the following two points:
      1. In RemoteBlockPushResolver, bytes that cannot be merged immediately are stored in memory: the stream callback maintains a list of deferredBufs, and a block that cannot be merged right away is appended to this list. There is currently no limit on the number of pending blocks, so we should cap how many are held in memory. There has been a discussion around this here:
      https://github.com/apache/spark/pull/30062#discussion_r514026014
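A minimal sketch of the cap described in point 1. The class and method names here (`DeferredBlockBuffer`, `tryDefer`, `maxDeferredBlocks`) are hypothetical illustrations, not the actual RemoteBlockPushResolver API:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: cap the number of deferred (unmerged) block
// buffers a stream callback may hold in memory, instead of letting
// the deferredBufs list grow without bound.
public class DeferredBlockBuffer {
    private final int maxDeferredBlocks;
    private final Deque<ByteBuffer> deferredBufs = new ArrayDeque<>();

    public DeferredBlockBuffer(int maxDeferredBlocks) {
        this.maxDeferredBlocks = maxDeferredBlocks;
    }

    /**
     * Try to defer a block that cannot be merged yet. Returns false
     * when the limit is reached, signalling the caller to stop
     * buffering (e.g. drop the block or abort the stream).
     */
    public boolean tryDefer(ByteBuffer buf) {
        if (deferredBufs.size() >= maxDeferredBlocks) {
            return false; // limit reached: do not hold more data in memory
        }
        deferredBufs.addLast(buf);
        return true;
    }

    /** Remove and return the oldest deferred block, or null if empty. */
    public ByteBuffer pollOldest() {
        return deferredBufs.pollFirst();
    }

    public int size() {
        return deferredBufs.size();
    }
}
```

The key design question, as raised in the linked discussion, is what the caller does when `tryDefer` returns false; the actual policy in Spark would be decided in the PR review.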

      2. When a stream never gets an opportunity to merge, RemoteBlockPushResolver currently ignores the data from that stream. An alternative is to store that stream's data in AppShufflePartitionInfo when this worst case is reached. This may increase the memory usage of the shuffle service, but given the limit introduced in point 1 it is worth trying.
      More information can be found in this discussion:
      https://github.com/apache/spark/pull/30062#discussion_r517524546
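The alternative in point 2 could be sketched as follows. Everything here is an assumption for illustration: `StashingPartitionInfo`, the map-task id key, and the byte-budget fallback are hypothetical names and policies, not the actual AppShufflePartitionInfo implementation:

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of point 2: instead of discarding data from a
// stream that never gets to merge, stash it in the partition's state
// for a later merge attempt, under a byte budget so shuffle-service
// memory stays bounded (the limit motivated by point 1).
public class StashingPartitionInfo {
    // Colliding stream data, keyed by a hypothetical map-task id.
    private final Map<Integer, ByteBuffer> stashedBlocks = new HashMap<>();
    private final long maxStashedBytes;
    private long stashedBytes = 0;

    public StashingPartitionInfo(long maxStashedBytes) {
        this.maxStashedBytes = maxStashedBytes;
    }

    /**
     * Stash a colliding block if it fits under the byte budget.
     * Returns false when over budget, in which case the caller falls
     * back to the current behavior of ignoring the stream's data.
     */
    public boolean tryStash(int mapId, ByteBuffer data) {
        if (stashedBytes + data.remaining() > maxStashedBytes) {
            return false; // over budget: ignore the stream as today
        }
        stashedBytes += data.remaining();
        stashedBlocks.put(mapId, data);
        return true;
    }

    /** Remove and return a stashed block so it can finally be merged. */
    public ByteBuffer takeStashed(int mapId) {
        ByteBuffer b = stashedBlocks.remove(mapId);
        if (b != null) {
            stashedBytes -= b.remaining();
        }
        return b;
    }
}
```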

        Attachments

          Activity

            People

            • Assignee:
              Unassigned
              Reporter:
              csingh Chandni Singh

              Dates

              • Created:
                Updated:
