Description
ShortType and FloatType are not correctly mapped to the right JDBC types when using the JDBC connector. This results in tables and Spark DataFrames being created with unintended types. The issue was observed when validating against SQL Server.
Some example issues:
- Writing from a DataFrame with a ShortType column creates a SQL table column of type INTEGER instead of SMALLINT, producing a larger table than expected.
- Reading that table back results in a DataFrame column of type IntegerType instead of ShortType (see the repro sketch after this list).
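A minimal repro sketch of the ShortType round trip is below. The JDBC URL, table name, and credentials are placeholders and assume a reachable SQL Server instance with the Microsoft JDBC driver on the classpath.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("short-type-jdbc-repro").getOrCreate()
import spark.implicits._

val url = "jdbc:sqlserver://localhost:1433;databaseName=testdb"  // placeholder
val props = new java.util.Properties()
props.setProperty("user", "sa")          // placeholder
props.setProperty("password", "secret")  // placeholder

// DataFrame with a single ShortType column.
val df = Seq(1.toShort, 2.toShort).toDF("c")
df.printSchema()  // c: short

// Write path: the created SQL Server column comes out as INTEGER instead of SMALLINT.
df.write.jdbc(url, "short_repro", props)

// Read path: the schema reports c as integer instead of short.
spark.read.jdbc(url, "short_repro", props).printSchema()
```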
FloatType has an issue on the read path. On the write path the Spark data type FloatType is correctly mapped to the equivalent JDBC data type REAL, but on the read path, when JDBC data types are converted to Catalyst data types (getCatalystType), REAL is incorrectly mapped to DoubleType rather than FloatType.
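As a sketch of a possible workaround (not the committed fix), a custom JdbcDialect can be registered to supply the intended mappings for both issues; the object name below is illustrative, and precedence over the built-in SQL Server dialect depends on registration order in the Spark version in use.

```scala
import java.sql.Types
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
import org.apache.spark.sql.types._

object SqlServerShortFloatDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean =
    url.startsWith("jdbc:sqlserver")

  // Write path: map ShortType to SMALLINT instead of the default INTEGER.
  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case ShortType => Some(JdbcType("SMALLINT", Types.SMALLINT))
    case _         => None
  }

  // Read path: map SMALLINT back to ShortType and REAL back to FloatType.
  override def getCatalystType(
      sqlType: Int,
      typeName: String,
      size: Int,
      md: MetadataBuilder): Option[DataType] = sqlType match {
    case Types.SMALLINT => Some(ShortType)
    case Types.REAL     => Some(FloatType)
    case _              => None
  }
}

// Register before issuing JDBC reads/writes so the mappings above are consulted.
JdbcDialects.registerDialect(SqlServerShortFloatDialect)
```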
Issue Links
- Is contained by: SPARK-28151 [JDBC Connector] ByteType is not correctly mapped for read/write of SQLServer tables (In Progress)