When FlatMapGroupsInPandasExec or AggregateInPandasExec runs, the preceding shuffle uses the default of 200 partitions from "spark.sql.shuffle.partitions". If the data is small, e.g. in testing, many of the partitions are empty but are still processed like any other: ArrowPythonRunner.compute is called for each one and starts worker threads that do nothing, since there are no rows to iterate over. Skipping the computation for empty partitions would avoid this overhead and save time overall.
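A minimal, Spark-free sketch of the idea, where a hypothetical `compute` stands in for ArrowPythonRunner.compute (which in Spark spins up writer/reader threads even for an empty partition): peek at the partition's iterator first and skip the expensive call entirely when there is nothing to process.

```python
import itertools
from typing import Iterable, Iterator, List


def compute(partition: Iterator[int]) -> List[int]:
    # Stand-in for the expensive per-partition runner
    # (ArrowPythonRunner.compute in Spark): imagine this starts
    # threads and opens a Python worker before iterating.
    return [x * 2 for x in partition]


def map_partitions(partitions: Iterable[List[int]]) -> List[List[int]]:
    results = []
    for part in partitions:
        it = iter(part)
        # Peek at the first element; if the partition is empty,
        # skip compute() entirely instead of starting idle threads.
        first = next(it, None)
        if first is None:
            results.append([])
            continue
        # Re-attach the peeked element and run the real computation.
        results.append(compute(itertools.chain([first], it)))
    return results
```

The guard mirrors what an `if (iter.isEmpty) Iterator.empty else ...` check would do in the Scala exec nodes, assuming the iterator can be inspected before the runner is started.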