SPARK-10648

Spark-SQL JDBC fails to set a default precision and scale when they are not defined in an Oracle schema.


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.5.0
    • Fix Version/s: 1.4.2, 1.5.3, 1.6.0
    • Component/s: SQL
    • Labels: None
    • Environment: Oracle 11g, ojdbc7.jar
    Description

      Using Oracle 11g as a datasource with ojdbc7.jar. When importing data into a Scala app, I get an exception "Overflowed precision"; sometimes I instead get "Unscaled value too large for precision".
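      A minimal sketch of how the exception is hit, assuming Spark's DataFrameReader.jdbc API; the JDBC URL, credentials, and table name below are hypothetical placeholders, and TEST_TABLE is assumed to have a column declared simply as NUMBER, with no precision or scale defined:

        import java.util.Properties

        import org.apache.spark.{SparkConf, SparkContext}
        import org.apache.spark.sql.SQLContext

        object OracleDecimalRepro {
          def main(args: Array[String]): Unit = {
            val sc = new SparkContext(new SparkConf().setAppName("OracleDecimalRepro"))
            val sqlContext = new SQLContext(sc)

            val props = new Properties()
            props.setProperty("user", "scott")       // hypothetical credentials
            props.setProperty("password", "tiger")
            props.setProperty("driver", "oracle.jdbc.OracleDriver")

            // TEST_TABLE is assumed to contain a column declared simply as NUMBER,
            // i.e. with no precision or scale defined in the Oracle schema.
            val df = sqlContext.read.jdbc(
              "jdbc:oracle:thin:@//dbhost:1521/ORCL", "TEST_TABLE", props)

            // Any action that materializes the NUMBER column fails with
            // "Overflowed precision" or "Unscaled value too large for precision".
            df.collect().foreach(println)
          }
        }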

      This issue likely affects older versions as well, but this was the version I verified it on.

      I narrowed it down to the schema detection code: for a column whose precision and scale are not defined in the Oracle schema, it ends up setting the precision to 0 and the scale to -127.

      I have a proposed pull request to follow.
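      Until a fix lands, one possible workaround is to register a custom JdbcDialect that substitutes a bounded DecimalType whenever the driver reports a precision of 0. The sketch below is only an illustration against the public org.apache.spark.sql.jdbc.JdbcDialect API, not the proposed patch itself; the dialect name and the 38/10 precision/scale are assumptions:

        import java.sql.Types

        import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}
        import org.apache.spark.sql.types.{DataType, DecimalType, MetadataBuilder}

        // Maps Oracle NUMBER columns whose precision is reported as 0 (i.e. not
        // defined in the schema) to a bounded DecimalType, instead of letting the
        // unusable precision 0 / scale -127 values through.
        object OracleUnboundedNumberDialect extends JdbcDialect {

          override def canHandle(url: String): Boolean = url.startsWith("jdbc:oracle")

          override def getCatalystType(
              sqlType: Int,
              typeName: String,
              size: Int,
              md: MetadataBuilder): Option[DataType] = {
            // `size` carries the precision reported by the driver; 0 means undefined.
            if (sqlType == Types.NUMERIC && size == 0) Some(DecimalType(38, 10)) else None
          }
        }

      Registering the dialect in the driver before the read, e.g. JdbcDialects.registerDialect(OracleUnboundedNumberDialect), makes the JDBC relation use this mapping for any jdbc:oracle URL.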

      Attachments

        Issue Links

        Activity


          People

            Assignee: Travis Hegner
            Reporter: Travis Hegner
            Votes: 1
            Watchers: 6

            Dates

              Created:
              Updated:
              Resolved:
