Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Duplicate
- Environment: AWS EMR 3.2.30-49.59.amzn1.x86_64 #1 SMP x86_64 GNU/Linux; Spark 1.0.0-SNAPSHOT built for Hadoop 1.0.4, built 2014-03-18
Description
The Executor may fail when trying to mmap a file bigger than Integer.MAX_VALUE, due to a constraint of FileChannel.map (http://docs.oracle.com/javase/7/docs/api/java/nio/channels/FileChannel.html#map(java.nio.channels.FileChannel.MapMode, long, long)): although the signature takes longs, the size argument must be less than Integer.MAX_VALUE. This manifests as the following backtrace:
java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:828)
at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:98)
at org.apache.spark.storage.BlockManager.doGetLocal(BlockManager.scala:337)
at org.apache.spark.storage.BlockManager.getLocal(BlockManager.scala:281)
at org.apache.spark.storage.BlockManager.get(BlockManager.scala:430)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:38)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:220)
at org.apache.spark.api.python.PythonRDD$$anon$2.run(PythonRDD.scala:85)
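The size check happens inside FileChannel.map before any I/O, so the failure is reproducible without actually creating a >2GB file. A minimal standalone sketch (class name and helper method are illustrative, not from Spark) that triggers the same IllegalArgumentException on an empty temp file:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

public class MapLimitDemo {
    // Request a mapping one byte larger than Integer.MAX_VALUE and
    // return the resulting error message. map() validates the size
    // argument up front, so even an empty file reproduces the check.
    static String attemptOversizedMap() throws IOException {
        File f = File.createTempFile("map-limit", ".bin");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "r");
             FileChannel ch = raf.getChannel()) {
            ch.map(FileChannel.MapMode.READ_ONLY, 0, Integer.MAX_VALUE + 1L);
            return null; // unreachable: map() throws before mapping
        } catch (IllegalArgumentException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) throws IOException {
        // Prints the same message seen in the executor backtrace above.
        System.out.println(attemptOversizedMap());
    }
}
```

This mirrors the call DiskStore.getBytes makes: it passes the block file's length straight through to map, so any on-disk block over Integer.MAX_VALUE bytes hits this check.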
Attachments
Issue Links
- duplicates: SPARK-1476 "2GB limit in spark for blocks" (Closed)
- is broken by: SPARK-1476 "2GB limit in spark for blocks" (Closed)