I'm in the process of migrating from Spark 2.4.x to Spark 3.0.0, and I'm noticing a behaviour change in one of our aggregations. I think I've tracked it down to how Spark 3 treats nullable fields within a struct column that's used as a grouping key.
Here's a simple test I've been able to set up to repro it:
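(I'm sketching this from memory, so the case classes `A` and `B` below are stand-ins for our real types; the essential shape is a struct column `b` whose single field `c` is nullable, and where the struct itself can also be null.)

```scala
import org.apache.spark.sql.SparkSession

// Stand-in types: `b` is a nullable struct whose field `c` is also nullable.
case class B(c: String)
case class A(a: String, b: B)

object GroupByNullStructRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("groupBy-null-struct-repro")
      .getOrCreate()
    import spark.implicits._

    val ds = Seq(
      A("one", B("hello")),
      A("two", B(null)), // struct present, but its `c` field is null
      A("three", null)   // the struct itself is null
    ).toDS()

    // Group on the whole struct column and count.
    ds.groupBy($"b").count().show()

    spark.stop()
  }
}
```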
Spark 2.4.6 has the expected result:
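With the sketch above, it looks roughly like this (from memory, so the exact `show()` rendering may differ), with `B(null)` and the null struct collapsing into a single null key:

```
+-------+-----+
|      b|count|
+-------+-----+
|[hello]|    1|
|   null|    2|
+-------+-----+
```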
But Spark 3.0.0 has an unexpected result:
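Again roughly, the struct with the null field now surfaces as its own `[null]` key:

```
+-------+-----+
|      b|count|
+-------+-----+
|[hello]|    1|
| [null]|    1|
|   null|    1|
+-------+-----+
```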
Notice how it has keyed one of the values in `b` as `[null]`; that is, an instance of `B` with a null value for its `c` property, rather than a null for the value as a whole.
Is this an intended change?