Description
Oracle has a data type NUMBER. When defining a field of type NUMBER in a table, the field has two components: precision and scale.
For example, NUMBER(p,s) has precision p and scale s.
Precision can range from 1 to 38.
Scale can range from -84 to 127.
When reading such a field, Spark can create numbers with precision exceeding 38. In our case it created fields with precision 44,
calculated as the sum of the precision (34 digits in our case) and the scale (10):
"...java.lang.IllegalArgumentException: requirement failed: Decimal precision 44 exceeds max precision 38...".
As a result, a data frame read from a table in one schema could not be inserted into the identical table in another schema.
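The arithmetic behind the failure can be sketched as follows. This is a minimal Python illustration (function names are hypothetical, not Spark's actual internals); the constant 38 corresponds to Spark's maximum supported decimal precision:

```python
# Spark's maximum supported decimal precision (DecimalType.MAX_PRECISION).
MAX_PRECISION = 38

def inferred_precision(precision: int, scale: int) -> int:
    # Sketch of the reported behaviour: the inferred precision was
    # computed as the Oracle column's precision plus its scale.
    return precision + scale

def require_valid(precision: int, scale: int) -> None:
    # Mirrors the "requirement failed" check from the stack trace.
    p = inferred_precision(precision, scale)
    if p > MAX_PRECISION:
        raise ValueError(
            f"requirement failed: Decimal precision {p} "
            f"exceeds max precision {MAX_PRECISION}")

# The case from this report: NUMBER with precision 34 and scale 10
# yields an inferred precision of 44, which exceeds 38 and fails.
try:
    require_valid(34, 10)
except ValueError as e:
    print(e)
```

This reproduces why a perfectly valid Oracle column (precision 34 is within Oracle's 1-38 range) can still trip Spark's 38-digit limit once the scale is added on top.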
Attachments
Issue Links
- contains
  - SPARK-20921 While reading from oracle database, it converts to wrong type. (Resolved)
- is related to
  - SPARK-22002 Read JDBC table use custom schema support specify partial fields (Resolved)
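The related SPARK-22002 added a `customSchema` option to the JDBC data source, which can serve as a workaround by overriding the inferred decimal type. A hedged configuration sketch (assumes a running SparkSession `spark`, a valid `jdbc_url`, and a placeholder table/column name; not runnable as-is):

```python
# Sketch only: SOME_SCHEMA.SOME_TABLE and AMOUNT are placeholders.
df = (spark.read
      .format("jdbc")
      .option("url", jdbc_url)
      .option("dbtable", "SOME_SCHEMA.SOME_TABLE")
      # Force the NUMBER(34,10) column to a type Spark can hold,
      # instead of letting the dialect infer an out-of-range precision.
      .option("customSchema", "AMOUNT DECIMAL(38, 10)")
      .load())
```

Capping the declared precision at 38 keeps the column within Spark's `DecimalType` limit, at the cost of fewer integral digits than Oracle would allow.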