Spark / SPARK-1115

Better error message when python worker process dies


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.9.1, 1.0.0
    • Component/s: PySpark
    • Labels: None

    Description

      Right now all I see is:

      java.io.EOFException
      	at java.io.DataInputStream.readInt(DataInputStream.java:392)
      	at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:177)
      	at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:55)
      	at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:42)
      	at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:89)
      	at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:53)
      	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
      	at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
      	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
      	at org.apache.spark.scheduler.Task.run(Task.scala:53)
      	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
      	at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
      	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      	at java.lang.Thread.run(Thread.java:724)
      


          People

            Assignee: bouk Bouke van der Bijl
            Reporter: sandy Sanford Ryza
