[SPARK-21269] MetadataFetchFailedException: Missing an output location for shuffle 0


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Cannot Reproduce
    • Affects Version/s: 2.3.0
    • Fix Version/s: None
    • Component/s: Spark Core
    • Labels: None

    Description

      This can be reproduced on a Spark cluster, but not in local mode:

      1. Start a Spark context with spark.reducer.maxReqSizeShuffleToMem=1K and spark.serializer=org.apache.spark.serializer.KryoSerializer:

      $ spark-shell --conf spark.reducer.maxReqSizeShuffleToMem=1K --conf spark.serializer=org.apache.spark.serializer.KryoSerializer
      

      2. Run a job that triggers a shuffle:

      scala> sc.parallelize(0 until 3000000, 10).repartition(2001).count()
      
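      The two steps above can also be packaged as a standalone application. The sketch below is illustrative only (the object name and packaging are not part of this report; in practice the job would be submitted to the cluster with spark-submit):

      import org.apache.spark.sql.SparkSession

      object ShuffleRepro {
        def main(args: Array[String]): Unit = {
          // Same settings as the spark-shell flags above.
          val spark = SparkSession.builder()
            .appName("SPARK-21269 repro")
            .config("spark.reducer.maxReqSizeShuffleToMem", "1K")
            .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            .getOrCreate()

          val sc = spark.sparkContext
          // 2001 output partitions: presumably chosen to exceed Spark's
          // 2000-partition threshold for HighlyCompressedMapStatus.
          println(sc.parallelize(0 until 3000000, 10).repartition(2001).count())

          spark.stop()
        }
      }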

      The error messages:

      17/06/30 21:33:29 WARN TaskSetManager: Lost task 117.0 in stage 1.0 (TID 127, jqhadoop-test47-27.int.yihaodian.com, executor 140): FetchFailed(null, shuffleId=0, mapId=-1, reduceId=117, message=
      org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0
              at org.apache.spark.MapOutputTracker$$anonfun$convertMapStatuses$2.apply(MapOutputTracker.scala:808)
              at org.apache.spark.MapOutputTracker$$anonfun$convertMapStatuses$2.apply(MapOutputTracker.scala:804)
              at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
              at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
              at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
              at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
              at org.apache.spark.MapOutputTracker$.convertMapStatuses(MapOutputTracker.scala:804)
              at org.apache.spark.MapOutputTrackerWorker.getMapSizesByExecutorId(MapOutputTracker.scala:618)
              at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:49)
              at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:105)
              at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
              at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
              at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:100)
              at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:99)
              at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
              at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
              at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
              at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1802)
              at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1159)
              at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1159)
              at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2065)
              at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2065)
              at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
              at org.apache.spark.scheduler.Task.run(Task.scala:108)
              at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:341)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
              at java.lang.Thread.run(Thread.java:745)
      
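      When trying to reproduce this, it may also be worth confirming from the shell that both settings actually reached the driver before running the job (a minimal sanity check; the expected values are simply the ones passed on the command line):

      scala> sc.getConf.get("spark.reducer.maxReqSizeShuffleToMem")
      scala> sc.getConf.get("spark.serializer")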

          People

            Assignee: Unassigned
            Reporter: Yuming Wang
            Votes: 0
            Watchers: 2
