I agree with the comments of Amit.
The following code in ConditionalNode.bindExpression() determines the result type of NULLIF():
- Set the result type of this conditional to be the dominant type
- of the result expressions.
DataTypeDescriptor.getDominantType() returns the type with the higher typePrecedence value.
The typePrecedence values for CHAR and INT are defined in TypeId.java:
CHAR_PRECEDENCE = 0
INT_PRECEDENCE = 50
So the "dominant" type between CHAR(1) and INT is INT.
The result type of NULLIF() is set to INT.
At the end of bindExpression(), a CAST node is inserted into thenExpression, to convert the NULL constant from CHAR to INT.
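To make the mechanism concrete, here is a minimal sketch of the precedence-based selection described above. The class, method names and the two-entry table are hypothetical simplifications; only the precedence values (CHAR_PRECEDENCE = 0, INT_PRECEDENCE = 50) are taken from TypeId.java:

```java
import java.util.Map;

public class DominantTypeDemo {
    // Hypothetical simplified precedence table, mirroring the values
    // defined in TypeId.java: CHAR_PRECEDENCE = 0, INT_PRECEDENCE = 50.
    static final Map<String, Integer> PRECEDENCE = Map.of(
        "CHAR", 0,
        "INT", 50
    );

    // Simplified stand-in for DataTypeDescriptor.getDominantType():
    // the dominant type is the one with the higher precedence value.
    static String dominantType(String a, String b) {
        return PRECEDENCE.get(a) >= PRECEDENCE.get(b) ? a : b;
    }

    public static void main(String[] args) {
        // For NULLIF(CHAR(1), INT): INT wins, so the CHAR-typed
        // expression gets wrapped in a CAST-to-INT node.
        System.out.println(dominantType("CHAR", "INT")); // prints INT
    }
}
```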
In ConditionalNode.bindExpression() there is the following comment:
- If it is comparable, then we are ok. Note that we
- could in fact allow any expressions that are convertible()
- since we are going to generate a cast node, but that might
- be confusing to users...
Maybe this explains why the type compatibility check (the one that Amit suggests commenting out) was added to bindExpression(): it "might be confusing to users..."?
To prevent this "user confusion", the term "built-in broadening conversion" should be clarified.
What is a "broadening conversion"? Is the return value type of NULLIF() the "broadened type"?
But then, why is a conversion from CHAR to INT a "broadening conversion"? CHAR would be the "broader" type, not INT: all INT values can be converted to CHAR, but not all CHAR values can be converted to INT.
Or is the return type the "narrower" type?
A better solution would be to define that the result type of NULLIF() is the "dominant" type, which is the one that comes first in the following list:
USER-defined, BLOB, LONGVARBIT, VARBIT, BIT, BOOLEAN, TIME, TIMESTAMP, DATE, DOUBLE, REAL, DECIMAL, NUMERIC, LONGINT, INT, SMALLINT, TINYINT, REF, NATIONAL_LONGVARCHAR, NATIONAL_VARCHAR, NATIONAL_CHAR, CLOB, NCLOB, LONGVARCHAR, VARCHAR, CHAR.
For DECIMAL/NUMERIC types, scale and precision are broadened to cover both input types.
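A sketch of this proposal, with the list above encoded directly. The class and method names are made up for illustration, and the DECIMAL/NUMERIC broadening rule shown (scale = max of the scales, precision = max of the integral digit counts plus that scale) is an assumed common formula, not taken from the Derby source:

```java
import java.util.List;

public class NullifResultTypeProposal {
    // Proposed dominance order, most dominant first (from the list above).
    static final List<String> DOMINANCE_ORDER = List.of(
        "USER-defined", "BLOB", "LONGVARBIT", "VARBIT", "BIT", "BOOLEAN",
        "TIME", "TIMESTAMP", "DATE", "DOUBLE", "REAL", "DECIMAL", "NUMERIC",
        "LONGINT", "INT", "SMALLINT", "TINYINT", "REF",
        "NATIONAL_LONGVARCHAR", "NATIONAL_VARCHAR", "NATIONAL_CHAR",
        "CLOB", "NCLOB", "LONGVARCHAR", "VARCHAR", "CHAR");

    // The result type of NULLIF(a, b) is whichever input type
    // appears earlier in the list.
    static String nullifResultType(String a, String b) {
        return DOMINANCE_ORDER.indexOf(a) <= DOMINANCE_ORDER.indexOf(b)
            ? a : b;
    }

    // Assumed broadening rule for DECIMAL/NUMERIC inputs: the result
    // keeps the larger scale and enough integral digits for either input.
    // Returns { precision, scale }.
    static int[] broadenDecimal(int p1, int s1, int p2, int s2) {
        int scale = Math.max(s1, s2);
        int precision = Math.max(p1 - s1, p2 - s2) + scale;
        return new int[] { precision, scale };
    }
}
```

For example, NULLIF on CHAR and INT would yield INT (INT precedes CHAR in the list), and combining DECIMAL(5,2) with DECIMAL(4,3) would yield DECIMAL(6,3) under the assumed rule.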