Details
Type: Bug
Status: Open
Priority: Major
Resolution: Unresolved
Affects Version/s: 2.0.2, 2.1.3, 2.2.3, 2.3.4, 2.4.4, 3.0.0
Fix Version/s: None
Description
The following compares Spark on master at commit de00ac8a05aedb3a150c8c10f76d1fe5496b1df3, running with set spark.sql.ansi.enabled=true;, against the default behavior of PostgreSQL 16.
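For reference, ANSI mode can be enabled per session in the spark-sql shell (equivalently, pass --conf spark.sql.ansi.enabled=true when launching; the echoed key/value line below is how spark-sql typically confirms a SET):

spark-sql> set spark.sql.ansi.enabled=true;
spark.sql.ansi.enabled	true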
Case 1:
select tinyint(128) * tinyint(2); -- 0
select smallint(2147483647) * smallint(2); -- -2
select int(2147483647) * int(2); -- -2
select smallint((-32768)) * smallint(-1); -- -32768
With ANSI mode enabled, this case is no longer an issue: all four of the above statements now raise CAST_OVERFLOW or ARITHMETIC_OVERFLOW errors instead of silently returning the wrapped two's-complement result.
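For completeness, ANSI mode still allows opting back into null-on-overflow per expression via the try_* functions; a minimal sketch, assuming a Spark build recent enough to ship try_cast and try_multiply (not part of the report above):

spark-sql> select try_cast(128 as tinyint);    -- NULL instead of CAST_OVERFLOW
NULL
spark-sql> select try_multiply(2147483647, 2); -- NULL instead of ARITHMETIC_OVERFLOW
NULL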
Case 2:
spark-sql> select cast('10e-70' as float), cast('-10e-70' as float);
0.0	-0.0

postgres=# select cast('10e-70' as float), cast('-10e-70' as float);
 float8 | float8
--------+--------
  1e-69 | -1e-69
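Part of the divergence here is type mapping: in PostgreSQL, bare float means float8 (double precision), where 1e-69 is representable, while Spark's float is a 4-byte IEEE-754 type whose smallest subnormal is roughly 1.4e-45, so the parse silently underflows to zero. Casting to double on the Spark side should recover the value (a hypothetical transcript, not from the report):

spark-sql> select cast('10e-70' as double), cast('-10e-70' as double);
1.0E-69	-1.0E-69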
Case 3:
spark-sql> select cast('10e-400' as double), cast('-10e-400' as double);
0.0	-0.0

postgres=# select cast('10e-400' as double precision), cast('-10e-400' as double precision);
ERROR:  "10e-400" is out of range for type double precision
LINE 1: select cast('10e-400' as double precision), cast('-10e-400' ...
                    ^
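The underflow is also invisible to Spark's error machinery: the string parses "successfully" to zero, so even try_cast would return 0.0 rather than NULL (a hypothetical transcript, assuming try_cast is available):

spark-sql> select try_cast('10e-400' as double);
0.0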
Case 4:
spark-sql (default)> select exp(1.2345678901234E200);
Infinity

postgres=# select exp(1.2345678901234E200);
ERROR:  value overflows numeric format
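The literal is also typed differently on each side: PostgreSQL parses 1.2345678901234E200 as numeric, and exp(numeric) overflows the numeric format, while Spark types it as double, whose exp overflows to Infinity. Forcing float8 on the PostgreSQL side should still error rather than yield Infinity (a hypothetical transcript, from memory of PostgreSQL's float overflow message):

postgres=# select exp(1.2345678901234E200::float8);
ERROR:  value out of range: overflow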
Attachments
Issue Links
- is related to
  SPARK-44444 Use ANSI SQL mode by default (Resolved)
- relates to
  SPARK-23179 Support option to throw exception if overflow occurs during Decimal arithmetic (Resolved)
  SPARK-26218 Throw exception on overflow for integers (Resolved)