The task result of a shuffle map stage is not the query result; it is only the map status plus metrics accumulator updates. Aside from the metrics, whose size can vary, the total task result size depends solely on the number of tasks, and the number of tasks can grow large regardless of the stage's output size. For example, `CartesianProduct` generates the square of `spark.sql.shuffle.partitions` tasks: with the setting at 200 you get 40,000 tasks, and at 500 you get 250,000 tasks, which can easily exceed the default limit of `spark.driver.maxResultSize`:
However, the map status and accumulator updates are only used by the driver to update the overall map stats and metrics of the query; they are not cached on the driver, so they won't cause catastrophic memory issues there. We should therefore remove this check for shuffle map stage tasks.
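The quadratic blow-up described above can be sketched with a few lines (an illustrative back-of-the-envelope model, not Spark internals; the function name `cartesian_task_count` is made up for this sketch):

```python
# Illustrative sketch only, not Spark code: show how CartesianProduct's
# task count squares with spark.sql.shuffle.partitions, so the aggregate
# task-result payload the driver collects grows quadratically even
# though each per-task map status is individually small.

def cartesian_task_count(shuffle_partitions: int) -> int:
    # Each of the N partitions on one side is paired with each of the
    # N partitions on the other side, yielding N * N tasks.
    return shuffle_partitions ** 2

for partitions in (200, 500):
    print(partitions, "->", cartesian_task_count(partitions), "tasks")
```

This reproduces the 40,000 and 250,000 figures above: doubling and a half the partition count more than sextuples the task count, and with it the total size of map statuses shipped back to the driver.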