Hadoop Common / HADOOP-18402

S3A committer NPE in Spark job abort


Details

    • Reviewed

    Description

      An NPE is raised during Spark's HadoopMapReduceCommitProtocol.abortJob when the jobID is null.

      - save()/findClass() - non-partitioned table - Overwrite *** FAILED ***
        java.lang.NullPointerException:
        at org.apache.hadoop.fs.s3a.commit.impl.CommitContext.<init>(CommitContext.java:159)
        at org.apache.hadoop.fs.s3a.commit.impl.CommitOperations.createCommitContext(CommitOperations.java:652)
        at org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.initiateJobOperation(AbstractS3ACommitter.java:856)
        at org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.abortJob(AbstractS3ACommitter.java:909)
        at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.abortJob(HadoopMapReduceCommitProtocol.scala:252)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:268)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:191)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:113)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:111)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:125)
        ...
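
      For context, here is a minimal sketch of the failure pattern in Java. The class and member names are hypothetical, not the actual org.apache.hadoop.fs.s3a.commit.impl.CommitContext; it only illustrates how a constructor that dereferences the job ID without a null check fails when an abort path supplies no jobID, and how a null guard avoids it.

      // Hypothetical sketch of the NPE pattern; not the real CommitContext.
      public class CommitContextSketch {

          private final String auditText;

          CommitContextSketch(String jobId) {
              // Dereferencing jobId with no null check: an abort invoked
              // before a job ID exists throws the NPE here.
              this.auditText = "job " + jobId.trim();
          }

          // Defensive variant: substitute a placeholder when the caller
          // (e.g. an abortJob() path) has no job ID to pass.
          static CommitContextSketch create(String jobId) {
              return new CommitContextSketch(jobId != null ? jobId : "(unknown)");
          }

          public static void main(String[] args) {
              System.out.println(create(null).auditText);  // prints "job (unknown)"
              new CommitContextSketch(null);               // throws NullPointerException
          }
      }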
      
      


            People

              Assignee: Steve Loughran (stevel@apache.org)
              Reporter: Steve Loughran (stevel@apache.org)
              Votes: 0
              Watchers: 3
