DataFrameReader JDBC methods throw an IllegalStateException when:
1. the JDBC driver is contained in a user-provided jar, and
2. the user does not specify which driver to use, but instead allows Spark to infer the driver class from the JDBC URL.
This broke some of our database ETL jobs at @premisedata when we upgraded from 1.6.0 to 1.6.1.
I have tracked the problem down to a regression introduced in the fix for
The issue is that DriverRegistry.register is never called on the executors when the driver class is inferred from the JDBC URL rather than specified explicitly.
The problem can be demonstrated within spark-shell, provided you're running in cluster mode and have deployed a JDBC driver (e.g. org.postgresql.Driver) via the --jars argument:
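A minimal reproduction might look like the following (the connection URL, table name, and jar path are placeholders, and the snippet assumes a Spark 1.6.x spark-shell with a `sqlContext` in scope):

```scala
// Launched as, e.g. (jar path and cluster manager are illustrative):
//   spark-shell --master yarn --jars /path/to/postgresql.jar

// No "driver" property is set, so Spark must infer the class from the URL.
val props = new java.util.Properties()

val df = sqlContext.read.jdbc(
  "jdbc:postgresql://dbhost/dbname", "some_table", props)

// On 1.6.1 this fails on the executors with an IllegalStateException,
// because the inferred driver class was never passed to
// DriverRegistry.register there.
df.count()
```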
A sufficient fix is to apply DriverRegistry.register to the `driverClass` variable rather than to `userSpecifiedDriverClass`, at the code link provided above. I will submit a PR for this shortly.
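The registration logic in question has roughly the following shape (a paraphrase for illustration, not the exact source), and the fix amounts to moving the register call so it covers the inferred class as well:

```scala
// Paraphrased sketch of the connection setup, not the exact Spark source.
val userSpecifiedDriverClass: Option[String] =
  Option(properties.getProperty("driver"))

// Before the fix: only an explicitly specified driver is registered,
// so the URL-inferred case is silently skipped on the executors.
// userSpecifiedDriverClass.foreach(DriverRegistry.register)

val driverClass: String = userSpecifiedDriverClass.getOrElse {
  java.sql.DriverManager.getDriver(url).getClass.getCanonicalName
}

// After the fix: register whichever class will actually be used,
// whether user-specified or inferred from the URL.
DriverRegistry.register(driverClass)
```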
In the meantime, a temporary workaround is to specify the JDBC driver class explicitly in the Properties object passed to DataFrameReader.jdbc (or in the options map used by other entry points), which forces the executors to register the class properly.
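For example, with a PostgreSQL driver (the class name and connection details are illustrative):

```scala
val props = new java.util.Properties()

// Naming the driver class explicitly sidesteps the URL-inference path,
// so DriverRegistry.register is invoked on the executors as in 1.6.0.
props.setProperty("driver", "org.postgresql.Driver")

val df = sqlContext.read.jdbc(
  "jdbc:postgresql://dbhost/dbname", "some_table", props)
```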