SPARK-13286

JDBC driver doesn't report full exception


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.6.0
    • Fix Version/s: 2.0.1, 2.1.0
    • Component/s: SQL
    • Labels: None

    Description

      Testing some failure scenarios (inserting data into PostgreSQL where there is a schema mismatch), an exception is thrown (fine so far), but it doesn't report the actual SQL error. It refers to a getNextException call, which is beyond my non-existent Java skills to deal with correctly. Supporting this would help users see the SQL error quickly and resolve the underlying problem.

      Caused by: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO core VALUES('5fdf5...',....) was aborted.  Call getNextException to see the cause.
      	at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2746)
      	at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:457)
      	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1887)
      	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:405)
      	at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2893)
      	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:185)
      	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:248)
      	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:247)
      	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
      	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
      	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
      	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
      	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      	at org.apache.spark.scheduler.Task.run(Task.scala:89)
      	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      	at java.lang.Thread.run(Thread.java:745)
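      As a sketch of what resolving this could look like (a hypothetical helper, not necessarily how Spark's actual fix is written): when the batch fails, walk SQLException.getNextException and attach the chained exception as the cause (or as a suppressed exception) before rethrowing, so the real SQL error appears in the logged stack trace.

      import java.sql.{SQLException, Statement}

      object BatchErrorReporting {
        // Hypothetical wrapper around Statement.executeBatch: if the driver
        // reports a failure, surface the chained exception (getNextException)
        // instead of only "Call getNextException to see the cause."
        def executeBatchWithFullError(stmt: Statement): Array[Int] = {
          try {
            stmt.executeBatch()
          } catch {
            case e: SQLException =>
              val next = e.getNextException
              if (next != null && next != e.getCause) {
                // Attach the real SQL error so it shows up when the failed
                // task's exception is reported.
                if (e.getCause == null) e.initCause(next) else e.addSuppressed(next)
              }
              throw e
          }
        }
      }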
      

    People

      Assignee: Davies Liu (davies)
      Reporter: Adrian Bridgett (abridgett)
      Votes: 1
      Watchers: 5
