Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
Description
A key component in eliminating the 2GB limit on blocks is creating a proper abstraction for storing more than 2GB. Currently Spark is limited by its reliance on nio ByteBuffer and netty ByteBuf, both of which index bytes with an Int and are therefore capped at 2GB. This task will introduce the new abstraction along with the relevant implementation and utilities, without affecting the existing implementation at all.
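Because ByteBuffer and ByteBuf are Int-indexed, the usual way past the 2GB ceiling is to compose many Int-sized chunks behind a Long-indexed facade. The Scala sketch below illustrates one plausible shape for such an abstraction; all names here (LargeByteBuffer, allocate, chunkSize) are hypothetical illustrations for this ticket's idea, not the actual API Spark introduced.

{code:scala}
import java.nio.ByteBuffer

// Illustrative sketch only: a large-buffer abstraction backed by multiple
// ByteBuffer chunks, so total capacity is addressed by a Long rather than
// an Int. Names are hypothetical, not the actual Spark API.
class LargeByteBuffer(chunks: Array[ByteBuffer]) {
  // Total size across all chunks; may exceed Integer.MAX_VALUE.
  val size: Long = chunks.map(_.remaining().toLong).sum

  // Read the byte at an absolute Long offset by locating the owning chunk.
  def get(offset: Long): Byte = {
    require(offset >= 0 && offset < size, s"offset $offset out of bounds")
    var remaining = offset
    var i = 0
    while (remaining >= chunks(i).remaining()) {
      remaining -= chunks(i).remaining()
      i += 1
    }
    chunks(i).get(chunks(i).position() + remaining.toInt)
  }
}

object LargeByteBuffer {
  // Allocate totalSize bytes as a sequence of chunks no larger than chunkSize.
  // 64MB chunks are an arbitrary illustrative default.
  def allocate(totalSize: Long, chunkSize: Int = 64 * 1024 * 1024): LargeByteBuffer = {
    val numChunks = ((totalSize + chunkSize - 1) / chunkSize).toInt
    val chunks = Array.tabulate(numChunks) { i =>
      val len = math.min(chunkSize.toLong, totalSize - i.toLong * chunkSize).toInt
      ByteBuffer.allocate(len)
    }
    new LargeByteBuffer(chunks)
  }
}

// Example: a 3GB buffer, which a single ByteBuffer could not represent.
// val buf = LargeByteBuffer.allocate(3L * 1024 * 1024 * 1024)
{code}

The key design point is that no single chunk ever exceeds Int.MaxValue, so each chunk remains a plain ByteBuffer (or could be wrapped by netty) while the facade exposes Long sizes and offsets.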
Attachments
Issue Links
- is related to:
  - SPARK-5928 Remote Shuffle Blocks cannot be more than 2 GB (Resolved)
  - SPARK-3151 DiskStore attempts to map any size BlockId without checking MappedByteBuffer limit (Resolved)
  - SPARK-1476 2GB limit in spark for blocks (Closed)
  - SPARK-1391 BlockManager cannot transfer blocks larger than 2G in size (Closed)