Spark / SPARK-20466

HadoopRDD#addLocalConfiguration throws NPE


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.0.2
    • Fix Version/s: 2.1.3, 2.2.1, 2.3.0
    • Component/s: Spark Core, YARN
    • Labels: None

    Description

      In Spark 2.0.2, HadoopRDD#addLocalConfiguration throws a NullPointerException:

        17/04/23 08:19:55 ERROR executor.Executor: Exception in task 439.0 in stage 16.0 (TID 986)
      java.lang.NullPointerException
          at org.apache.spark.rdd.HadoopRDD$.addLocalConfiguration(HadoopRDD.scala:373)
          at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:243)
          at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
          at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
          at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
          at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
          at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
          at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
          at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
          at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
          at org.apache.spark.scheduler.Task.run(Task.scala:86)
          at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
          at java.lang.Thread.run(Thread.java:745)

      Suggested change: guard against a null JobConf so the configuration is only set when one is present, avoiding the NPE:

       
      
        /** Add Hadoop configuration specific to a single partition and attempt. */
        def addLocalConfiguration(jobTrackerId: String, jobId: Int, splitId: Int, attemptId: Int,
                                  conf: JobConf): Unit = {
          val jobID = new JobID(jobTrackerId, jobId)
          val taId = new TaskAttemptID(new TaskID(jobID, TaskType.MAP, splitId), attemptId)
          if (conf != null) {
            conf.set("mapred.tip.id", taId.getTaskID.toString)
            conf.set("mapred.task.id", taId.toString)
            conf.setBoolean("mapred.task.is.map", true)
            conf.setInt("mapred.task.partition", splitId)
            conf.set("mapred.job.id", jobID.toString)
          }
        }
      
      
      
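      The null-guard pattern above can be shown in isolation. The sketch below is illustrative only and is not part of the actual patch: `SimpleConf` is a hypothetical stand-in for Hadoop's `JobConf`, used to demonstrate that the guarded method no longer throws when handed a null configuration.

```scala
import scala.collection.mutable

// Hypothetical stand-in for Hadoop's JobConf, for illustration only.
class SimpleConf {
  private val props = mutable.Map.empty[String, String]
  def set(key: String, value: String): Unit = props(key) = value
  def get(key: String): Option[String] = props.get(key)
}

object AddLocalConfigurationSketch {
  /** Mirrors the suggested fix: skip all sets when conf is null. */
  def addLocalConfiguration(taskId: String, splitId: Int, conf: SimpleConf): Unit = {
    if (conf != null) {
      conf.set("mapred.task.id", taskId)
      conf.set("mapred.task.partition", splitId.toString)
    }
  }
}
```

      With the guard in place, a call with a null conf is a no-op instead of an NPE, matching the behavior the suggestion intends for HadoopRDD.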

      Attachments

        Activity


          People

            Assignee: Sahil Takiar (stakiar)
            Reporter: liyunzhang (kellyzly)
            Votes: 1
            Watchers: 6

            Dates

              Created:
              Updated:
              Resolved:
