Details
- Type: Bug
- Status: Resolved
- Priority: Blocker
- Resolution: Fixed
- 3.3.5
Description
An NPE occurs in Spark's HadoopMapReduceCommitProtocol.abortJob when the jobId is null:
- save()/findClass() - non-partitioned table - Overwrite *** FAILED ***
  java.lang.NullPointerException:
    at org.apache.hadoop.fs.s3a.commit.impl.CommitContext.<init>(CommitContext.java:159)
    at org.apache.hadoop.fs.s3a.commit.impl.CommitOperations.createCommitContext(CommitOperations.java:652)
    at org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.initiateJobOperation(AbstractS3ACommitter.java:856)
    at org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.abortJob(AbstractS3ACommitter.java:909)
    at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.abortJob(HadoopMapReduceCommitProtocol.scala:252)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:268)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:191)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:113)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:111)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:125)
    ...
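The failure mode can be illustrated with a minimal, self-contained sketch (hypothetical class and method names, not the actual Spark/Hadoop code): a constructor that dereferences its job ID, as CommitContext.<init> does at the frame above, throws an NPE when the abort path reaches it with a null jobId. A defensive guard that validates the argument before construction, as a fix in the abort path might, fails fast with a clearer message instead:

```java
import java.util.Objects;

// Hypothetical stand-in for a commit context whose constructor
// dereferences the job ID (mirrors CommitContext.<init> NPE-ing
// when handed a null jobId).
class CommitContextSketch {
    private final String normalizedJobId;

    CommitContextSketch(String jobId) {
        this.normalizedJobId = jobId.toLowerCase(); // NPE when jobId == null
    }
}

public class AbortJobNpeDemo {
    // Defensive variant: validate before constructing, as a null guard
    // placed in the abort path would.
    static CommitContextSketch safeCreate(String jobId) {
        Objects.requireNonNull(jobId, "jobId must not be null when aborting a job");
        return new CommitContextSketch(jobId);
    }

    public static void main(String[] args) {
        try {
            new CommitContextSketch(null); // reproduces the raw NPE
        } catch (NullPointerException e) {
            System.out.println("NPE reproduced");
        }
        try {
            safeCreate(null); // guarded path fails with an explicit message
        } catch (NullPointerException e) {
            System.out.println("guarded: " + e.getMessage());
        }
    }
}
```

The guarded variant still throws, but the message points at the missing jobId rather than an anonymous dereference deep inside the committer.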
Attachments
Issue Links
- is broken by: HADOOP-17833 Improve Magic Committer Performance (Resolved)
- links to