Description
When creating a DataFrame via the jdbc method of SQLContext:
DataFrame df = sql.jdbc(url, fullTableName);
If the table contains a column of type NVARCHAR, the following exception is thrown:
Caused by: java.sql.SQLException: Unsupported type -9
	at org.apache.spark.sql.jdbc.JDBCRDD$.getCatalystType(JDBCRDD.scala:78)
	at org.apache.spark.sql.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:112)
	at org.apache.spark.sql.jdbc.JDBCRelation.<init>(JDBCRelation.scala:133)
	at org.apache.spark.sql.SQLContext.jdbc(SQLContext.scala:900)
	at org.apache.spark.sql.SQLContext.jdbc(SQLContext.scala:852)
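For reference, the type code -9 in the message is the JDBC constant for NVARCHAR, which can be confirmed directly against java.sql.Types (the class name Check below is just for illustration):

```java
import java.sql.Types;

public class Check {
    public static void main(String[] args) {
        // -9 is the JDBC 4.0 constant for the national-character varying type
        System.out.println(Types.NVARCHAR); // -9
        // Its fixed-length counterpart, which JDBCRDD does handle
        System.out.println(Types.NCHAR);    // -15
    }
}
```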
Comparing the type handling in JDBCRDD.scala against the constants in java.sql.Types, the only type not supported by JDBCRDD.scala is NVARCHAR. Since NCHAR is supported, this looks like an oversight rather than an intentional omission.
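The shape of the fix can be sketched with a hypothetical Java mirror of the mapping that JDBCRDD.getCatalystType performs (the real method is Scala and returns Catalyst DataType values; the method and class names below are illustrative only). The point is that NVARCHAR should fall into the same string bucket as NCHAR and VARCHAR instead of hitting the unsupported-type branch:

```java
import java.sql.Types;

public class TypeMapping {
    // Hypothetical mirror of the string-type cases in JDBCRDD.getCatalystType.
    // Adding the NVARCHAR case alongside NCHAR resolves the reported error.
    static String toCatalystTypeName(int sqlType) {
        switch (sqlType) {
            case Types.CHAR:
            case Types.VARCHAR:
            case Types.NCHAR:
            case Types.NVARCHAR: // the missing case that triggers "Unsupported type -9"
                return "StringType";
            default:
                throw new IllegalArgumentException("Unsupported type " + sqlType);
        }
    }

    public static void main(String[] args) {
        System.out.println(toCatalystTypeName(Types.NVARCHAR)); // StringType
    }
}
```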