Spark / SPARK-20427

Issue with Spark interpreting Oracle datatype NUMBER


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.1.0
    • Fix Version/s: 2.3.0
    • Component/s: SQL
    • Labels: None

    Description

      Oracle has a data type NUMBER. When a field in a table is defined as NUMBER, the type has two components, precision and scale:
      NUMBER(p,s) has precision p and scale s.
      Precision can range from 1 to 38.
      Scale can range from -84 to 127.
      When reading such a field, Spark can create decimals with precision exceeding 38. In our case it created fields with precision 44,
      calculated as the sum of the precision (in our case 34 digits) and the scale (10):

      "...java.lang.IllegalArgumentException: requirement failed: Decimal precision 44 exceeds max precision 38...".

      The result was that a data frame read from a table in one schema could not be inserted into the identical table in another schema.
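      A minimal sketch of the failing requirement described above, in plain Python (this is an illustration, not Spark's actual source; the names `to_spark_decimal` and `MAX_PRECISION` are assumptions modeled on the error message):

```python
# Illustration of why reading an Oracle NUMBER(34,10) column failed:
# the inferred precision was computed as precision + scale = 44,
# above Spark SQL's DecimalType maximum of 38.
MAX_PRECISION = 38  # Spark's documented upper bound for DecimalType

def to_spark_decimal(precision, scale):
    """Mimic the 'requirement failed' check that Spark raises."""
    if precision > MAX_PRECISION:
        raise ValueError(
            f"requirement failed: Decimal precision {precision} "
            f"exceeds max precision {MAX_PRECISION}")
    return (precision, scale)

# Oracle column defined as NUMBER(34, 10); the faulty mapping summed
# the two components instead of keeping the declared precision of 34:
oracle_precision, oracle_scale = 34, 10
try:
    to_spark_decimal(oracle_precision + oracle_scale, oracle_scale)
except ValueError as err:
    print(err)
```

      Until the 2.3.0 fix, possible workarounds include casting the column to a narrower type in the source query pushed to Oracle, or supplying an explicit schema when reading over JDBC, so that Spark never infers a precision above 38.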

    People

    Assignee: yumwang Yuming Wang
    Reporter: alextornado Alexander Andrushenko
    Votes: 0
    Watchers: 15

    Dates

    Created:
    Updated:
    Resolved: