I would like to propose a refactor of the physical/execution planning based on my experience implementing distributed execution in Ballista.
This will likely need subtasks, but here is an overview of the changes I am proposing.
We should extend the ExecutionPlan trait so that each operator can specify its required input partitioning and its output partitioning, and then add an optimizer rule that inserts any repartitioning or reordering steps needed to satisfy those requirements.
For example, these are the methods to be added to ExecutionPlan. This design is based on Apache Spark.
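As a rough sketch of what such methods could look like (all type and method names here are hypothetical, chosen for illustration; the actual API would use physical expressions rather than column names):

```rust
/// How an operator's output is partitioned (hypothetical names).
#[derive(Debug, Clone, PartialEq)]
pub enum Partitioning {
    /// A known number of partitions with no particular distribution scheme
    UnknownPartitioning(usize),
    /// Rows distributed by hashing the given columns into N partitions
    Hash(Vec<String>, usize),
}

/// The distribution an operator requires of its input.
#[derive(Debug, Clone, PartialEq)]
pub enum Distribution {
    /// No requirement; any input partitioning is acceptable
    UnspecifiedDistribution,
    /// All input rows must be coalesced into a single partition
    SinglePartition,
    /// Input rows must be hash-partitioned on the given columns
    HashPartitioned(Vec<String>),
}

/// Simplified trait showing the two proposed methods.
pub trait ExecutionPlan {
    /// How this operator partitions its output
    fn output_partitioning(&self) -> Partitioning;
    /// The distribution this operator requires of its input
    /// (defaults to "no requirement")
    fn required_input_distribution(&self) -> Distribution {
        Distribution::UnspecifiedDistribution
    }
}

/// Example: a final hash aggregate requires all partial results
/// to arrive in a single partition.
pub struct FinalHashAggregateExec;

impl ExecutionPlan for FinalHashAggregateExec {
    fn output_partitioning(&self) -> Partitioning {
        Partitioning::UnknownPartitioning(1)
    }
    fn required_input_distribution(&self) -> Distribution {
        Distribution::SinglePartition
    }
}
```

The optimizer rule would then compare each operator's required input distribution against its child's output partitioning and insert a repartition or merge step only where they disagree.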
A good example of applying this rule is hash aggregation, where we perform a partial aggregate in parallel across partitions, then coalesce the results into a single partition and apply a final hash aggregate.
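To illustrate the partial/final aggregate pattern itself (plain Rust, not the DataFusion API): each partition computes a partial state independently, and those states are then coalesced and combined by a final step. Here the aggregate is AVG, carried as a (sum, count) pair so the partials compose correctly:

```rust
/// Partial aggregate for one partition: (sum, count) is enough
/// state to finish an AVG later.
fn partial_aggregate(partition: &[i64]) -> (i64, i64) {
    (partition.iter().sum(), partition.len() as i64)
}

/// Final aggregate over the coalesced partial states.
fn final_aggregate(partials: &[(i64, i64)]) -> f64 {
    let (sum, count) = partials
        .iter()
        .fold((0i64, 0i64), |(s, c), (ps, pc)| (s + ps, c + pc));
    sum as f64 / count as f64
}

fn main() {
    // Two input partitions, processed independently (in parallel in practice)
    let partitions = vec![vec![1, 2, 3], vec![4, 5]];
    let partials: Vec<_> = partitions.iter().map(|p| partial_aggregate(p)).collect();
    // Coalesce the partials into a single partition, then finalize
    let avg = final_aggregate(&partials);
    assert_eq!(avg, 3.0);
}
```

The coalesce step between the two phases is exactly the kind of operation the proposed optimizer rule would insert automatically, because the final aggregate requires a single-partition input.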
Another example is SortMergeExec, which would specify the sort order required of its children so that the rule can insert sort steps where needed.
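The ordering check the rule would perform for an operator like SortMergeExec might look like this sketch (`SortKey` and `needs_sort` are hypothetical names): a child's output ordering satisfies the requirement when the required keys form a prefix of it; otherwise a sort must be inserted.

```rust
/// Hypothetical sort-key description: column name plus direction.
#[derive(Debug, Clone, PartialEq)]
struct SortKey {
    column: String,
    ascending: bool,
}

/// Decide whether a sort step must be inserted above a child whose
/// output ordering is `child_output` (None = no known ordering).
fn needs_sort(required: &[SortKey], child_output: Option<&[SortKey]>) -> bool {
    match child_output {
        // Satisfied when the required keys are a prefix of the
        // child's actual output ordering.
        Some(actual) => !actual.starts_with(required),
        None => !required.is_empty(),
    }
}
```

With this, the rule can skip redundant sorts, for example when a child is already sorted on the merge keys because of an earlier operator.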