SPARK-3151: DiskStore attempts to map any size BlockId without checking MappedByteBuffer limit

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.0.2
    • Fix Version/s: 2.3.0
    • Component/s: Block Manager, Spark Core
    • Labels: None
    • Environment: IBM 64-bit JVM PPC64

    Description

      DiskStore attempts to memory-map the block file in def getBytes. If the file is larger than 2 GB (Integer.MAX_VALUE bytes), the maximum size accepted by FileChannel.map, the memory map fails.

      Some(channel.map(MapMode.READ_ONLY, segment.offset, segment.length)) // line 104
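
      Below is a minimal sketch of the kind of guard that is needed, assuming a hypothetical SegmentReader helper (names, thresholds, and structure are illustrative, not Spark's actual DiskStore code): memory-map only when the segment length is at most Integer.MAX_VALUE bytes, and otherwise read the segment back as a sequence of smaller heap buffers, which is roughly the direction the eventual fix took.

      import java.io.{File, IOException, RandomAccessFile}
      import java.nio.ByteBuffer
      import java.nio.channels.FileChannel.MapMode

      // Illustrative sketch only: map the segment when it fits under the
      // FileChannel.map limit, otherwise read it as several smaller buffers.
      object SegmentReader {
        private val MaxMapBytes: Long = Int.MaxValue.toLong // hard limit of FileChannel.map
        private val ChunkBytes: Int = 64 * 1024 * 1024      // arbitrary 64 MB chunk size

        def getBytes(file: File, offset: Long, length: Long): Seq[ByteBuffer] = {
          val channel = new RandomAccessFile(file, "r").getChannel
          try {
            if (length <= MaxMapBytes) {
              // Within the 2 GB limit: a single memory-mapped buffer is fine.
              Seq(channel.map(MapMode.READ_ONLY, offset, length))
            } else {
              // Above the limit: read the segment as a sequence of smaller buffers,
              // since neither FileChannel.map nor a single ByteBuffer can hold it.
              channel.position(offset)
              val chunks = Seq.newBuilder[ByteBuffer]
              var remaining = length
              while (remaining > 0) {
                val buf = ByteBuffer.allocate(math.min(remaining, ChunkBytes.toLong).toInt)
                while (buf.hasRemaining) {
                  if (channel.read(buf) == -1) {
                    throw new IOException("Reached EOF before reading the full segment")
                  }
                }
                buf.flip()
                chunks += buf
                remaining -= buf.limit()
              }
              chunks.result()
            }
          } finally {
            channel.close()
          }
        }
      }

      For segments within the limit this behaves like the existing mapping code; for larger segments the caller has to consume a sequence of buffers rather than a single MappedByteBuffer.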
      


              People

              • Assignee: Eyal Farago (eyalfa)
              • Reporter: Damon Brown (damonab)
              • Votes: 2
              • Watchers: 12
