Details
Description
df.sort(...).count()
performs a shuffle and sort before counting. This is wasteful: the sort is not required for a count, and it makes me wonder how smart the algebraic optimiser really is. The row count may already be known from metadata (as with parquet files), so we should not shuffle just to perform a count.
This may look trivial, but if the optimiser fails to recognise this case, I wonder what else it is missing, especially in more complex operations.
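To illustrate the kind of rewrite being asked for, here is a minimal sketch of an optimiser rule that drops a sort feeding directly into a global count. The plan node classes and the rule name are illustrative toys, not Spark's actual Catalyst API; they only model the shape of the rewrite.

```python
# Toy logical-plan model; class and rule names are illustrative,
# NOT Spark's actual Catalyst classes.
from dataclasses import dataclass
from typing import List

@dataclass
class Plan:
    pass

@dataclass
class Scan(Plan):
    table: str

@dataclass
class Sort(Plan):
    keys: List[str]
    child: Plan

@dataclass
class Count(Plan):
    child: Plan

def eliminate_sort_under_count(plan: Plan) -> Plan:
    """A global count is order-insensitive, so any Sort feeding
    directly into a Count can be removed from the plan."""
    if isinstance(plan, Count):
        child = plan.child
        while isinstance(child, Sort):  # drop stacked sorts too
            child = child.child
        return Count(child=eliminate_sort_under_count(child))
    if isinstance(plan, Sort):
        return Sort(keys=plan.keys, child=eliminate_sort_under_count(plan.child))
    return plan

# df.sort("x").count() corresponds to this plan shape:
plan = Count(child=Sort(keys=["x"], child=Scan(table="t")))
optimized = eliminate_sort_under_count(plan)
print(optimized)  # Count(child=Scan(table='t'))
```

The rule only fires when the sort's output order cannot be observed by the parent operator; a real optimiser would also have to check that nothing between the sort and the count (e.g. a limit) depends on ordering.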