Details
- Type: Bug
- Status: Open
- Priority: Normal
- Resolution: Unresolved
Description
Unlike double or float, when decimal is used as a primary key in Cassandra we get 3 != 3.0 even though 3 <= 3.0 and 3 >= 3.0:
cqlsh:keyspace1> create table testdec (key decimal primary key, value int);
cqlsh:keyspace1> insert into testdec (key, value) values (3.0, 3);
cqlsh:keyspace1> select * from testdec;
key | value
------+-------
3.0 | 3
(1 rows)
cqlsh:keyspace1> select * from testdec where key = 3;
key | value
-----+-------
(0 rows)
cqlsh:keyspace1> select * from testdec where key = 3.0;
key | value
-----+-------
3.0 | 3
(1 rows)
cqlsh:keyspace1> select * from testdec where key >= 3 and key <= 3 ALLOW FILTERING;
key | value
-----+-------
3.0 | 3
(1 rows)
The reason for this is that we use the key's bytes (as produced by BigDecimal) to form the token:
cqlsh:keyspace1> select * from testdec where token(key) = token(3);
key | value
-----+-------
(0 rows)
cqlsh:keyspace1> select * from testdec where token(key) = token(3.0);
key | value
-----+-------
3.0 | 3
(1 rows)
as well as to check key matches in BigTableReader.getPosition.
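The root of the mismatch can be seen in plain Java, independent of Cassandra: `BigDecimal` treats 3 and 3.0 as numerically equal (`compareTo`) but not identical (`equals`), because their unscaled value and scale differ, and it is those components that end up in the serialized key bytes. A minimal demonstration:

```java
import java.math.BigDecimal;

public class DecimalKeyDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("3");    // unscaled value 3,  scale 0
        BigDecimal b = new BigDecimal("3.0");  // unscaled value 30, scale 1

        // Numerically equal under compareTo...
        System.out.println(a.compareTo(b));    // 0
        // ...but not equal under equals(), because the scales differ.
        System.out.println(a.equals(b));       // false

        // The components that get serialized differ, so the key bytes
        // (and hence the token) differ too.
        System.out.println(a.unscaledValue() + " / " + a.scale());  // 3 / 0
        System.out.println(b.unscaledValue() + " / " + b.scale());  // 30 / 1
    }
}
```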
The solution is to always store a canonical form of each key. For decimal, that means a representation whose unscaled value is not divisible by 10, i.e. with trailing zeros stripped.
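One way to produce such a canonical form is `BigDecimal.stripTrailingZeros()`, which reduces the representation until the unscaled value is no longer divisible by 10. The sketch below (the `canonical` helper is hypothetical, not Cassandra code) also normalizes zero, since older JDKs had a known quirk where stripping `0.00` left a non-zero scale:

```java
import java.math.BigDecimal;

public class CanonicalDecimal {
    // Hypothetical canonicalizer: after this, two numerically equal
    // decimals share one representation, so their serialized bytes match.
    static BigDecimal canonical(BigDecimal d) {
        // Force a single representation for zero regardless of JDK behavior.
        if (d.compareTo(BigDecimal.ZERO) == 0) {
            return BigDecimal.ZERO;
        }
        return d.stripTrailingZeros();
    }

    public static void main(String[] args) {
        BigDecimal a = canonical(new BigDecimal("3"));
        BigDecimal b = canonical(new BigDecimal("3.0"));
        System.out.println(a.equals(b));  // true: same unscaled value and scale
    }
}
```

With keys canonicalized at write time, the token and the byte comparison in BigTableReader.getPosition both see identical bytes for 3 and 3.0.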
This problem may be affecting other types as well.