Spark / SPARK-12418

spark shuffle FAILED_TO_UNCOMPRESS


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 1.5.1
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Environment:

      hadoop 2.3.0
      spark 1.5.1

      Description

      When using the default compression codec (snappy), I get the following error while Spark is performing a shuffle:

      Job aborted due to stage failure: Task 19 in stage 2.3 failed 4 times, most recent failure: Lost task 19.3 in stage 2.3 (TID 10311, 192.168.6.36): java.io.IOException: FAILED_TO_UNCOMPRESS(5)
      at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:84)
      at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
      at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:444)
      at org.xerial.snappy.Snappy.uncompress(Snappy.java:480)
      at org.xerial.snappy.SnappyInputStream.readFully(SnappyInputStream.java:135)
      at org.xerial.snappy.SnappyInputStream.readHeader(SnappyInputStream.java:92)
      at org.xerial.snappy.SnappyInputStream.<init>(SnappyInputStream.java:58)
      at org.apache.spark.io.SnappyCompressionCodec.compressedInputStream(CompressionCodec.scala:159)
      at org.apache.spark.storage.BlockManager.wrapForCompression(BlockManager.scala:1179)
      at org.apache.spark.shuffle.hash.HashShuffleReader$$anonfun$3.apply(HashShuffleReader.scala:53)
      at org.apache.spark.shuffle.hash.HashShuffleReader$$anonfun$3.apply(HashShuffleReader.scala:52)
      at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
      at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
      at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
      at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
      at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
      at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
      at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
      at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      at org.apache.spark.scheduler.Task.run(Task.scala:88)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)
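
      FAILED_TO_UNCOMPRESS(5) from the snappy codec while reading shuffle blocks is a known symptom (this issue was resolved as a duplicate of an existing report). A workaround sometimes suggested while the root cause is investigated is to switch the I/O compression codec away from snappy. A minimal sketch, assuming a standard spark-submit invocation; the application class and JAR names below are hypothetical placeholders:

```shell
# Sketch: override the default I/O compression codec (snappy) with lz4.
# In Spark 1.5.x, spark.io.compression.codec controls the codec used for
# shuffle outputs, RDD partition serialization, and broadcast variables;
# valid values include snappy, lz4, and lzf.
# com.example.MyJob and my-job.jar are hypothetical placeholders.
spark-submit \
  --conf spark.io.compression.codec=lz4 \
  --class com.example.MyJob \
  my-job.jar
```

      The same setting can also be placed in conf/spark-defaults.conf (as `spark.io.compression.codec lz4`) so it applies to every job on the cluster.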


              People

              • Assignee: Unassigned
              • Reporter: blackproof dirk.zhang
              • Votes: 0
              • Watchers: 2
