SPARK-3151: DiskStore attempts to map any size BlockId without checking MappedByteBuffer limit


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.0.2
    • Fix Version/s: 2.3.0
    • Component/s: Block Manager, Spark Core
    • Labels: None
    • Environment: IBM 64-bit JVM PPC64

    Description

      DiskStore attempts to memory-map the block file in def getBytes. If the segment is larger than 2 GB (Integer.MAX_VALUE), the maximum size accepted by FileChannel.map, the mapping fails.

      Some(channel.map(MapMode.READ_ONLY, segment.offset, segment.length)) // DiskStore.scala, line 104
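
      As a rough illustration of the boundary involved (this is not the actual fix that shipped in 2.3.0; the readSegment helper, the chunk size, and the heap-buffer fallback are assumptions), a sketch in Scala that memory-maps a segment only when its length is within FileChannel.map's Integer.MAX_VALUE limit, and otherwise reads it into heap buffers chunk by chunk:

      import java.io.{EOFException, File}
      import java.nio.ByteBuffer
      import java.nio.channels.FileChannel
      import java.nio.channels.FileChannel.MapMode
      import java.nio.file.StandardOpenOption

      // Hypothetical helper, not DiskStore's real code: guard the memory map
      // behind the 2 GB limit and fall back to chunked heap reads above it.
      def readSegment(file: File, offset: Long, length: Long): Seq[ByteBuffer] = {
        val channel = FileChannel.open(file.toPath, StandardOpenOption.READ)
        try {
          if (length <= Int.MaxValue) {
            // Fits in a single MappedByteBuffer, so mapping is safe.
            Seq(channel.map(MapMode.READ_ONLY, offset, length))
          } else {
            // Larger than Integer.MAX_VALUE: FileChannel.map would reject the
            // request, so read the segment chunk by chunk instead.
            val chunkSize = 64L * 1024 * 1024 // 64 MB per chunk (arbitrary)
            Iterator.iterate(0L)(_ + chunkSize).takeWhile(_ < length).map { start =>
              val size = math.min(chunkSize, length - start).toInt
              val buf = ByteBuffer.allocate(size)
              channel.position(offset + start)
              var read = 0
              while (read < size) {
                val n = channel.read(buf)
                if (n < 0) throw new EOFException(s"Unexpected end of ${file.getName}")
                read += n
              }
              buf.flip()
              buf
            }.toVector
          }
        } finally {
          channel.close()
        }
      }

      Reading into heap buffers trades the zero-copy behaviour of a memory map for correctness on segments past the 2 GB boundary.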
      

    People

      Assignee: Eyal Farago (eyalfa)
      Reporter: Damon Brown (damonab)
      Votes: 2
      Watchers: 12
