Description
The physical plan of "select colA from t order by colB limit M" is TakeOrderedAndProject.
Currently, TakeOrderedAndProject sorts all of its input in memory; see https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala#L158
Shall we add a config so that, when the limit (M) is too large, the operator falls back to a disk-backed (spillable) sort? That would avoid running out of memory for large limits.
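The proposed behavior could be sketched as follows (a minimal illustration, not Spark code: `SPILL_THRESHOLD` is a hypothetical stand-in for the suggested config, and the full sort stands in for a disk-backed sort). For a small M, a bounded heap keeps only M rows in memory; past the threshold, we would switch to a sort that can spill:

```python
import heapq

# Hypothetical config: above this limit, stop using the in-memory
# bounded heap and fall back to a (potentially external) full sort.
SPILL_THRESHOLD = 100_000

def take_ordered(rows, limit, key=lambda r: r):
    if limit <= SPILL_THRESHOLD:
        # Small limit: bounded-heap top-M selection, O(limit) memory.
        # This mirrors what TakeOrderedAndProject does today.
        return heapq.nsmallest(limit, rows, key=key)
    # Large limit: a full sort, which in Spark could be a spillable
    # external sort instead of an in-memory one.
    return sorted(rows, key=key)[:limit]
```

The key point is that the in-memory strategy's footprint grows with M, so a threshold-based switch to a spillable sort bounds memory usage regardless of the limit.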