[SPARK-26077] Reserved SQL words are not escaped by JDBC writer for table name


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 2.3.2
    • Fix Version/s: None
    • Component/s: SQL

    Description

      This bug is similar to SPARK-16387, but this time it is the table name that is not escaped. The JDBC writer backtick-quotes column names (e.g. `order`), but emits the table name unquoted, so the generated statement CREATE TABLE condition (`order` TEXT) is rejected by MySQL.
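
      For context, here is a minimal sketch (runnable in the same spark-shell) of the dialect-level quoting that the writer already applies to column names; judging by the error below, the same call is not made for the table name when the CREATE TABLE statement is built:

      import org.apache.spark.sql.jdbc.JdbcDialects

      // The dialect is resolved from the JDBC URL; the MySQL dialect quotes with backticks.
      val dialect = JdbcDialects.get("jdbc:mysql://root@localhost:3306/test")
      dialect.quoteIdentifier("order")     // `order`     -- this is what happens to column names
      dialect.quoteIdentifier("condition") // `condition` -- the table name would need the same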

      How to reproduce:

      1/ Start spark-shell with the MySQL connector JAR on the classpath

      spark-shell --jars ./mysql-connector-java-8.0.13.jar

       

      2/ Run the following code
       
      import spark.implicits._

      // Write a single-column DataFrame to a table whose name is a reserved word.
      (spark
        .createDataset(Seq("a", "b", "c"))
        .toDF("order")                  // "order" is a reserved word: the column gets quoted
        .write
        .format("jdbc")
        .option("url", "jdbc:mysql://root@localhost:3306/test")
        .option("driver", "com.mysql.cj.jdbc.Driver")
        .option("dbtable", "condition") // "condition" is a reserved word: the table name is not quoted
        .save())
       
      Here condition is a reserved word in MySQL.
       
      Error message:
       
      java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'condition (`order` TEXT )' at line 1
      at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
      at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
      at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
      at com.mysql.cj.jdbc.StatementImpl.executeUpdateInternal(StatementImpl.java:1355)
      at com.mysql.cj.jdbc.StatementImpl.executeLargeUpdate(StatementImpl.java:2128)
      at com.mysql.cj.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1264)
      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createTable(JdbcUtils.scala:844)
      at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:95)
      at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
      at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
      at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
      at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
      at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
      at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
      at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
      at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:656)
      at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
      at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
      ... 59 elided
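
      A possible workaround (an assumption on my side, not an official recommendation): since the dbtable option appears to be passed through verbatim into the generated SQL, pre-quoting the table name with MySQL backticks avoids the error:

      (spark
        .createDataset(Seq("a", "b", "c"))
        .toDF("order")
        .write
        .format("jdbc")
        .option("url", "jdbc:mysql://root@localhost:3306/test")
        .option("driver", "com.mysql.cj.jdbc.Driver")
        .option("dbtable", "`condition`") // backtick-quoted by hand; MySQL-specific
        .save())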
       
       
       


      People

        Assignee: Unassigned
        Reporter: Eugene Golovan (golovan)
