SPARK-37695 (sub-task of SPARK-35275: Add checksum for shuffle blocks)

Skip diagnosis on merged blocks from push-based shuffle


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.2.0, 3.3.0
    • Fix Version/s: 3.2.1, 3.3.0
    • Component/s: Shuffle
    • Labels: None

    Description

      Shuffle corruption diagnosis is not yet supported for push-based shuffle, so we should skip diagnosis on merged blocks; otherwise it can fail:

      21/12/19 18:46:37 WARN TaskSetManager: Lost task 166.0 in stage 1921.0 (TID 138855) (beta-spark5 executor 218): java.lang.AssertionError: assertion failed: Expected ShuffleBlockId, but got shuffleChunk_464_0_5645_0
      	at scala.Predef$.assert(Predef.scala:223)
      	at org.apache.spark.storage.ShuffleBlockFetcherIterator.diagnoseCorruption(ShuffleBlockFetcherIterator.scala:1043)
      	at org.apache.spark.storage.BufferReleasingInputStream.$anonfun$tryOrFetchFailedException$1(ShuffleBlockFetcherIterator.scala:1308)
      	at scala.Option.map(Option.scala:230)
      	at org.apache.spark.storage.BufferReleasingInputStream.tryOrFetchFailedException(ShuffleBlockFetcherIterator.scala:1307)
      	at org.apache.spark.storage.BufferReleasingInputStream.read(ShuffleBlockFetcherIterator.scala:1293)
      	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
      	at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
      	at java.io.DataInputStream.readInt(DataInputStream.java:387)
      	at org.apache.spark.sql.execution.UnsafeRowSerializerInstance$$anon$2$$anon$3.readSize(UnsafeRowSerializer.scala:113)
      	at org.apache.spark.sql.execution.UnsafeRowSerializerInstance$$anon$2$$anon$3.<init>(UnsafeRowSerializer.scala:120)
      	at org.apache.spark.sql.execution.UnsafeRowSerializerInstance$$anon$2.asKeyValueIterator(UnsafeRowSerializer.scala:110)
      	at org.apache.spark.shuffle.BlockStoreShuffleReader.$anonfun$read$2(BlockStoreShuffleReader.scala:98)
      	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
      	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
      	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
      	at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31)
      	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
      	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
      	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage9.sort_addToSorter_0$(Unknown Source)
      	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage9.processNext(Unknown Source)
      	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
      	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
      	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage10.smj_findNextJoinRows_0$(Unknown Source)
      	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage10.processNext(Unknown Source)
      	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
      	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$2.hasNext(WholeStageCodegenExec.scala:778)
      	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
      	at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:179)
      	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
      	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
      	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
      	at org.apache.spark.scheduler.Task.run(Task.scala:136)
      	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:507)
      	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1468)
      	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:510)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at java.lang.Thread.run(Thread.java:748) 
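
      The assertion comes from ShuffleBlockFetcherIterator.diagnoseCorruption, which expects only ShuffleBlockId and therefore fails on shuffleChunk_* blocks fetched via push-based shuffle. The guard below is a minimal, self-contained Scala sketch of the intended behavior (skip diagnosis for merged-block chunks instead of asserting); it is not the actual Spark patch, and the block-id case classes are simplified stand-ins for Spark's own types:

      // Sketch only: simplified stand-ins for Spark's BlockId hierarchy.
      sealed trait BlockId
      case class ShuffleBlockId(shuffleId: Int, mapId: Long, reduceId: Int) extends BlockId
      // Chunk of a merged (push-based) shuffle block; fields simplified for this sketch.
      case class ShuffleBlockChunkId(shuffleId: Int, reduceId: Int, chunkId: Int) extends BlockId

      object DiagnosisSketch {
        // Instead of `assert`-ing that the block is a ShuffleBlockId, pattern-match
        // and skip the unsupported merged-chunk case with an informative message.
        def diagnoseCorruption(blockId: BlockId): String = blockId match {
          case ShuffleBlockId(shuffleId, mapId, reduceId) =>
            // The real code would recompute and compare checksums here (SPARK-35275).
            s"ran corruption diagnosis for shuffle $shuffleId map $mapId reduce $reduceId"
          case chunk: ShuffleBlockChunkId =>
            s"corruption diagnosis is skipped for merged block chunk $chunk (push-based shuffle)"
        }

        def main(args: Array[String]): Unit = {
          println(diagnoseCorruption(ShuffleBlockId(464, 5645L, 0)))    // normal path
          println(diagnoseCorruption(ShuffleBlockChunkId(464, 5645, 0))) // no longer asserts
        }
      }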



People

    • Assignee: Cheng Pan (chengpan)
    • Reporter: wuyi (Ngone51)
    • Votes: 0
    • Watchers: 2
