Spark currently allows MapType expressions to be used as input to hash expressions, but I think that this should be prohibited because Spark SQL does not support map equality.
Currently, Spark SQL's map hash codes are sensitive to the insertion order of map entries: two maps containing the same key-value pairs can hash to different values. This behavior might be surprising to Scala developers, since scala.collection.Map computes an order-insensitive hash code (via MurmurHash3's unordered hash).
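As a rough illustration of the problem (plain Python, not Spark's actual Murmur3-based implementation; both function names below are hypothetical), a hash that folds a seed over entries in iteration order gives logically equal maps different hashes, while a commutative combine does not:

```python
def order_sensitive_hash(m):
    # Hypothetical sketch: fold a seed over entries in iteration order,
    # mimicking a map hash that does not normalize entry order.
    h = 42
    for k, v in m.items():
        h = hash((h, k, v))
    return h

def order_insensitive_hash(m):
    # Combining per-entry hashes with XOR (commutative and associative)
    # makes the result independent of iteration order.
    h = 0
    for k, v in m.items():
        h ^= hash((k, v))
    return h

a = {1: 10, 2: 20}
b = {2: 20, 1: 10}  # same entries, different insertion order
assert a == b       # the maps are equal as values

# Equal maps, unequal hashes under the order-sensitive scheme
# (a collision is astronomically unlikely here):
print(order_sensitive_hash(a) != order_sensitive_hash(b))
# Equal hashes under the order-insensitive scheme:
print(order_insensitive_hash(a) == order_insensitive_hash(b))
```

An order-insensitive combine (or sorting entries by key before hashing) is one way hashing could be made safe if comparable map types were supported.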
If we decide that this should be an error, it might also be a good idea to add a spark.sql.legacy flag as an escape hatch to re-enable the old, buggy behavior, in case applications were relying on it in situations where it happens to be safe by accident (e.g. maps that only ever contain one entry).
Alternatively, we could support hashing here if we implemented support for comparable map types (SPARK-18134).