SPARK-10648

Spark-SQL JDBC fails to set a default precision and scale when they are not defined in an oracle schema.


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.5.0
    • Fix Version/s: 1.4.2, 1.5.3, 1.6.0
    • Component/s: SQL
    • Labels: None
    • Environment: Oracle 11g, ojdbc7.jar

    Description

      Using Oracle 11g as a data source with ojdbc7.jar. When importing data into a Scala app, I get the exception "Overflowed precision"; sometimes I instead get the exception "Unscaled value too large for precision".
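
      For reference, a minimal reproduction along these lines should trigger the errors described above; the connection URL and table name below are placeholders, and any Oracle table with a plain NUMBER column (no declared precision or scale) is enough.

{code:scala}
// Hypothetical connection details and table name; substitute your own.
val df = sqlContext.read.format("jdbc").options(Map(
  "url"     -> "jdbc:oracle:thin:@//dbhost:1521/ORCL",
  "driver"  -> "oracle.jdbc.OracleDriver",
  "dbtable" -> "SOME_TABLE"  // contains a NUMBER column with no precision/scale
)).load()

// Materializing the rows fails with "Overflowed precision" or
// "Unscaled value too large for precision".
df.show()
{code}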

      This issue likely affects older versions as well, but 1.5.0 is the version I verified it on.

      I narrowed it down to the schema detection code setting the precision to 0 and the scale to -127 for Oracle NUMBER columns that do not declare a precision or scale.
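
      Until a fix lands, one possible application-side workaround (a sketch, not the submitted fix) is to register a custom JdbcDialect that substitutes an explicit DecimalType whenever Oracle reports the sentinel precision of 0; the fallback precision/scale of (38, 10) below is an assumption and should be chosen to fit the actual data.

{code:scala}
import java.sql.Types

import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}
import org.apache.spark.sql.types.{DataType, DecimalType, MetadataBuilder}

// Sketch: map Oracle NUMBER columns that report precision 0 (scale -127)
// to a concrete DecimalType instead of the invalid default mapping.
object OracleUnboundedNumberDialect extends JdbcDialect {

  override def canHandle(url: String): Boolean = url.startsWith("jdbc:oracle")

  override def getCatalystType(
      sqlType: Int,
      typeName: String,
      size: Int,
      md: MetadataBuilder): Option[DataType] = {
    if (sqlType == Types.NUMERIC && size == 0) {
      // Assumed fallback precision/scale; adjust for your data.
      Some(DecimalType(38, 10))
    } else {
      None // defer to Spark's built-in mapping for everything else
    }
  }
}

JdbcDialects.registerDialect(OracleUnboundedNumberDialect)
{code}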

      I have a proposed pull request to follow.

    People

      Assignee: Travis Hegner
      Reporter: Travis Hegner
      Votes: 1
      Watchers: 6
