Description
Hi, I am using the MySQL JDBC Driver along with Spark to run some SQL queries.
When multiplying a LongType column by a decimal literal in scientific notation, say
spark.sql("select some_int * 2.34E10 from t")
the literal 2.34E10 is treated as decimal(3,-8), and some_int is cast to decimal(20,0).
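A minimal sketch of how I hit this in spark-shell; the view t and column some_int below are just placeholders standing in for my real table read over the MySQL JDBC driver (the failure only depends on the column being a long/bigint):

import spark.implicits._

// Placeholder view with a long (bigint) column instead of the real JDBC table.
Seq(1L, 2L, 3L).toDF("some_int").createOrReplaceTempView("t")

// Under default settings this fails the scale >= 0 assertion during analysis.
spark.sql("select some_int * 2.34E10 from t").show()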
So according to the rules in the comments:
/*
 *   Operation    Result Precision                        Result Scale
 *   ------------------------------------------------------------------------
 *   e1 + e2      max(s1, s2) + max(p1-s1, p2-s2) + 1     max(s1, s2)
 *   e1 - e2      max(s1, s2) + max(p1-s1, p2-s2) + 1     max(s1, s2)
 *   e1 * e2      p1 + p2 + 1                             s1 + s2
 *   e1 / e2      p1 - s1 + s2 + max(6, s1 + p2 + 1)      max(6, s1 + p2 + 1)
 *   e1 % e2      min(p1-s1, p2-s2) + max(s1, s2)         max(s1, s2)
 *   e1 union e2  max(s1, s2) + max(p1-s1, p2-s2)         max(s1, s2)
 */
their multiplication will be decimal(3+20+1, -8+0) = decimal(24, -8), which fails the assert (scale >= 0) at DecimalType.scala:166.
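To spell out the arithmetic, here is the e1 * e2 row of the table applied to the two operand types Spark picks here (just a sketch of the computation, not Spark code):

object MultiplyRuleExample {
  // precision = p1 + p2 + 1, scale = s1 + s2, per the e1 * e2 rule above
  def resultType(p1: Int, s1: Int, p2: Int, s2: Int): (Int, Int) =
    (p1 + p2 + 1, s1 + s2)

  def main(args: Array[String]): Unit = {
    // 2.34E10 -> decimal(3,-8): p1=3, s1=-8; bigint column -> decimal(20,0): p2=20, s2=0
    val (p, s) = resultType(3, -8, 20, 0)
    println(s"decimal($p,$s)") // decimal(24,-8): the negative scale trips the assert
  }
}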
My current workaround is to set spark.sql.decimalOperations.allowPrecisionLoss to false.
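For reference, this is roughly how I apply the workaround in the session (the same flag can also be passed with --conf at submit time):

// Disable precision-loss handling before running the query.
spark.conf.set("spark.sql.decimalOperations.allowPrecisionLoss", "false")
spark.sql("select some_int * 2.34E10 from t").show()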