Details

    • Type: Improvement
    • Status: Closed
    • Priority: Critical
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Spark Core
    • Labels: None
    • Environment: all

Description

The underlying abstraction for blocks in Spark is a ByteBuffer, which limits the size of a block to 2 GB.
This has implications not just for managed blocks in use, but also for shuffle blocks (memory-mapped blocks are limited to 2 GB even though the API accepts a long size), serialization/deserialization via byte-array-backed output streams (SPARK-1391), etc.

This is a severe limitation for using Spark on non-trivial datasets.
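
For illustration, here is a minimal sketch (not part of the original report; the file path and object name are hypothetical) showing both halves of the limit: ByteBuffer capacity is declared as an Int, and FileChannel.map throws for any size above Integer.MAX_VALUE even though its size parameter is a long.

{code:scala}
import java.io.RandomAccessFile
import java.nio.channels.FileChannel

object TwoGBLimitDemo {
  def main(args: Array[String]): Unit = {
    // ByteBuffer.allocate(capacity: Int) takes an Int, so a single
    // buffer can never hold more than Integer.MAX_VALUE (~2 GB) bytes.

    // FileChannel.map declares a long size, but the result is a
    // MappedByteBuffer (an Int-indexed ByteBuffer), so it rejects
    // any size greater than Integer.MAX_VALUE.
    val raf = new RandomAccessFile("/tmp/large-shuffle-block", "r") // hypothetical path
    try {
      val threeGB = 3L * 1024 * 1024 * 1024
      raf.getChannel.map(FileChannel.MapMode.READ_ONLY, 0L, threeGB)
    } catch {
      case e: IllegalArgumentException =>
        // OpenJDK reports "Size exceeds Integer.MAX_VALUE" here
        println(s"mmap failed as expected: ${e.getMessage}")
    } finally {
      raf.close()
    }
  }
}
{code}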

Attachments

    1. 2g_fix_proposal.pdf (76 kB, Mridul Muralidharan)

People

    • Assignee: Unassigned
    • Reporter: Mridul Muralidharan (mridulm80)
    • Votes: 16
    • Watchers: 59
