SPARK-10648

Spark-SQL JDBC fails to set a default precision and scale when they are not defined in an Oracle schema


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.5.0
    • Fix Version/s: 1.4.2, 1.5.3, 1.6.0
    • Component/s: SQL
    • Labels: None
    • Environment: Oracle 11g, ojdbc7.jar

    • Target Version/s:

    Description

      Using Oracle 11g as a data source with ojdbc7.jar: when importing data into a Scala app over JDBC, I get the exception "Overflowed precision", and sometimes "Unscaled value too large for precision" instead.
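
      For reference, here is a minimal reproduction sketch. The connection URL, credentials, and table name are hypothetical, and the table is assumed to contain a column declared simply as NUMBER (no explicit precision or scale):

      {code:scala}
      import java.util.Properties

      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.sql.SQLContext

      val sc = new SparkContext(new SparkConf().setAppName("oracle-number-repro"))
      val sqlContext = new SQLContext(sc)

      // Hypothetical connection details; adjust for the actual environment.
      val props = new Properties()
      props.setProperty("user", "scott")
      props.setProperty("password", "tiger")
      props.setProperty("driver", "oracle.jdbc.OracleDriver")

      // SOME_TABLE is assumed to contain a column declared as plain NUMBER.
      val df = sqlContext.read.jdbc(
        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "SOME_TABLE", props)

      // The schema is inferred from the JDBC metadata when the DataFrame is
      // created; materializing rows is what then fails with "Overflowed
      // precision" or "Unscaled value too large for precision".
      df.show()
      {code}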

      This issue likely affects older versions as well, but this was the version I verified it on.

      I narrowed it down to schema detection: for columns whose precision and scale are not defined in the Oracle schema, the JDBC metadata reports a precision of 0 and a scale of -127, and Spark uses those values as-is when building the inferred schema.

      I have a proposed pull request to follow.
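
      Until a fix is released, one possible workaround (a sketch only, not the proposed pull request itself) is to register a custom JdbcDialect that substitutes a bounded DecimalType when Oracle reports the metadata of an unbounded NUMBER column; the (38, 10) precision and scale below are an arbitrary assumption and should be chosen to fit the actual data:

      {code:scala}
      import java.sql.Types

      import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}
      import org.apache.spark.sql.types.{DataType, DecimalType, MetadataBuilder}

      // For NUMBER columns declared without precision/scale, the Oracle JDBC
      // metadata reports precision 0 (passed here as `size`) and scale -127.
      // Map that case to a bounded decimal instead of passing the values through.
      object OracleUnboundedNumberDialect extends JdbcDialect {
        override def canHandle(url: String): Boolean =
          url.startsWith("jdbc:oracle")

        override def getCatalystType(
            sqlType: Int,
            typeName: String,
            size: Int,
            md: MetadataBuilder): Option[DataType] = {
          if (sqlType == Types.NUMERIC && size == 0) Some(DecimalType(38, 10)) else None
        }
      }

      // Register the dialect before reading from the Oracle JDBC source.
      JdbcDialects.registerDialect(OracleUnboundedNumberDialect)
      {code}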

    People

    • Assignee: Travis Hegner
    • Reporter: Travis Hegner
    • Votes: 1
    • Watchers: 6
