Description
Using Oracle 11g as a data source with ojdbc7.jar. When importing data into a Scala app, I get the exception "Overflowed precision"; sometimes I instead get "Unscaled value too large for precision".
This issue likely affects older versions as well, but this was the version I verified it on.
I narrowed it down to the schema detection code trying to set the precision to 0 and the scale to -127, which are the values Oracle's JDBC driver reports for a NUMBER column declared without an explicit precision or scale.
A pull request with a proposed fix will follow.
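In the meantime, a custom dialect registered on the client side can intercept the bad metadata before Spark builds an invalid DecimalType. Below is a minimal sketch assuming Spark's public JdbcDialect extension point (org.apache.spark.sql.jdbc); the dialect name OracleUnboundedNumberDialect and the fallback DecimalType(38, 10) are illustrative choices of mine, not part of the proposed patch.

{code:scala}
import java.sql.Types

import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}
import org.apache.spark.sql.types.{DataType, DecimalType, MetadataBuilder}

object OracleUnboundedNumberDialect extends JdbcDialect {

  // Claim Oracle JDBC URLs so this dialect is consulted for them.
  override def canHandle(url: String): Boolean =
    url.startsWith("jdbc:oracle")

  // Oracle reports precision 0 (and scale -127) for a NUMBER column
  // declared without precision/scale; map that case to a decimal type
  // Spark can actually represent instead of failing.
  override def getCatalystType(
      sqlType: Int,
      typeName: String,
      size: Int,
      md: MetadataBuilder): Option[DataType] = {
    if (sqlType == Types.NUMERIC && size == 0) {
      Some(DecimalType(38, 10)) // 38 is Spark's maximum decimal precision
    } else {
      None // defer to Spark's default type mapping for everything else
    }
  }
}

// Register once before reading the table:
// JdbcDialects.registerDialect(OracleUnboundedNumberDialect)
{code}

The fixed scale of 10 is a trade-off: it caps the fractional digits preserved from an unbounded NUMBER, but it keeps the load from failing outright.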
Issue Links
- is duplicated by SPARK-10909 Spark sql jdbc fails for Oracle NUMBER type columns (Resolved)
- relates to SPARK-10909 Spark sql jdbc fails for Oracle NUMBER type columns (Resolved)