Description
I have been doing some testing with Decimal values for the RAPIDS Accelerator for Apache Spark. While adding new corner cases, I enabled the maximum supported value for a sort and started to get failures. On closer inspection, the CPU itself is sorting incorrectly: every value of "999999999999999999.50" or above is placed as a contiguous chunk in the wrong location in the output.
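For reference, here is a sketch of how an equivalent input file could be generated. The column name `a` is taken from the repro below; Decimal(20, 2) is an assumption inferred from the values shown:

import java.math.BigDecimal
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{DecimalType, StructField, StructType}

// Single Decimal(20, 2) column "a" covering the suspect boundary values.
val schema = StructType(Seq(StructField("a", DecimalType(20, 2))))
val rows = Seq(
  "-999999999999999999.99",
  "999999999999999999.49",
  "999999999999999999.50",
  "999999999999999999.99"
).map(s => Row(new BigDecimal(s)))
spark.createDataFrame(spark.sparkContext.parallelize(rows), schema)
  .write.mode("overwrite").parquet("input.parquet")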
This reproduces in local mode with 12 tasks:

import org.apache.spark.sql.functions.col
spark.read.parquet("input.parquet").orderBy(col("a")).collect().foreach(System.err.println)
Notice that the last entry printed is [999999999999999999.49], while [999999999999999999.99] appears near the top of the output, next to [-999999999999999999.99].
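A quick way to confirm the misordering programmatically, rather than by eye, is to compare adjacent collected values with BigDecimal semantics. This is only a sketch, not part of the original repro:

// Collect the sorted column and check that it is non-decreasing,
// using BigDecimal comparison independent of Spark's own sort.
val sorted = spark.read.parquet("input.parquet")
  .orderBy(col("a"))
  .collect()
  .map(_.getDecimal(0))
val isOrdered = sorted.sliding(2).forall {
  case Array(x, y) => x.compareTo(y) <= 0
  case _           => true
}
assert(isOrdered, "CPU sort produced out-of-order Decimal values")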