Spark / SPARK-12563

"No suitable driver" when calling JdbcUtils.saveTable in isolation

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Duplicate
    • Affects Version/s: 1.5.2
    • Fix Version/s: None
    • Component/s: SQL
    • Labels: None

      Description

      When the following function is called in isolation

      JdbcUtils.saveTable(df, url, table, properties)

      the following exception is thrown:

      Exception in thread "main" java.sql.SQLException: No suitable driver
      at java.sql.DriverManager.getDriver(DriverManager.java:315)
      at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.getDriverClassName(DriverRegistry.scala:55)
      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.saveTable(JdbcUtils.scala:212)
      at com.pul.sive.TestThingy$$anonfun$main$2.apply(TestThingy.scala:77)
      at com.pul.sive.TestThingy$$anonfun$main$2.apply(TestThingy.scala:69)
      at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
      at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
      at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
      at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
      at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
      at com.pul.sive.TestThingy$.main(TestThingy.scala:69)
      at com.pul.sive.TestThingy.main(TestThingy.scala)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:497)
      at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
      at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
      at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
      at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
      at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

      However, the above call succeeds if the following is invoked immediately beforehand:

      JdbcUtils.createConnection(url, properties)

      It appears that JdbcUtils.saveTable attempts to look up the driver in DriverRegistry before reading the contents of the properties argument. JdbcUtils.createConnection registers the driver with DriverRegistry as a side effect, which is why the lookup succeeds when it is called first.
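The failure mode can be reproduced outside Spark with plain java.sql, since DriverRegistry.getDriverClassName ultimately falls through to DriverManager.getDriver (visible in the stack trace above). This is a minimal sketch, not Spark code: the "jdbc:example:" URL scheme and StubDriver are made up for illustration. The lookup fails until some driver accepting the URL is registered, which mirrors the side effect JdbcUtils.createConnection has.

```java
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.Properties;
import java.util.logging.Logger;

public class DriverLookupDemo {
    // Minimal stand-in driver; "jdbc:example:" is a made-up URL scheme.
    public static class StubDriver implements Driver {
        public boolean acceptsURL(String url) { return url.startsWith("jdbc:example:"); }
        public Connection connect(String url, Properties info) { return null; }
        public int getMajorVersion() { return 1; }
        public int getMinorVersion() { return 0; }
        public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
            return new DriverPropertyInfo[0];
        }
        public boolean jdbcCompliant() { return false; }
        public Logger getParentLogger() throws SQLFeatureNotSupportedException {
            throw new SQLFeatureNotSupportedException();
        }
    }

    public static void main(String[] args) throws SQLException {
        String url = "jdbc:example://localhost/db";
        try {
            // No registered driver accepts this URL yet, so the lookup fails,
            // just as it does inside JdbcUtils.saveTable.
            DriverManager.getDriver(url);
        } catch (SQLException e) {
            System.out.println(e.getMessage()); // prints "No suitable driver"
        }
        // Registering a matching driver first (the side effect that
        // JdbcUtils.createConnection happens to have) makes the lookup succeed.
        DriverManager.registerDriver(new StubDriver());
        System.out.println(DriverManager.getDriver(url).getClass().getSimpleName());
    }
}
```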

      However, it also appears that DataFrame.write.jdbc(url, table, properties) accomplishes the same thing with more flexibility, so I am not sure whether JdbcUtils.saveTable is redundant.
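As a workaround sketch (not verified against every Spark release; "org.postgresql.Driver" is only an example driver class, and `df`, `url`, and `table` are assumed to exist in a running Spark application), naming the driver class explicitly in the connection properties lets Spark register it before the DriverManager lookup:

```java
import java.util.Properties;

// Assumes a live Spark application with a DataFrame `df` in scope.
Properties properties = new Properties();
properties.setProperty("driver", "org.postgresql.Driver"); // example driver class
df.write().jdbc(url, table, properties);
```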


              People

              • Assignee: Unassigned
              • Reporter: Sonya Huang
              • Votes: 0
              • Watchers: 2
