Currently, VectorGroupByOperator::checkHashModeEfficiency compares the number of hash table entries with the number of input records that have been processed. For grouping sets, it accounts for the grouping set length as well.
The issue is that the condition becomes invalid after a large number of input records has been processed, which prevents the system from ever switching over to streaming mode.
E.g., assume 500,000 input records processed with 9 grouping sets and 100,000 entries in the hash table. The hash table would never cross 4,500,000 entries, since its max size is 1M by default.
It would be good to compare the number of input records (adjusted for grouping sets) with the number of output records (along with the hash table size) to determine hashing or streaming mode.
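A minimal sketch of the proposed check, assuming a hypothetical reduction-ratio threshold (the class, method names, and the 0.5 cutoff below are illustrative, not the actual Hive implementation):

```java
// Hypothetical sketch of an output-based hash mode efficiency check.
// Not the actual Hive code: names and the MIN_REDUCTION threshold are assumed.
public class HashModeEfficiencyCheck {

    // Assumed threshold: stay in hash mode only while aggregation removes
    // at least half of the grouping-set-adjusted input records.
    static final double MIN_REDUCTION = 0.5;

    static boolean shouldSwitchToStreaming(long inputRecords,
                                           long outputRecords,
                                           int groupingSetCount) {
        // Each input record expands into one entry per grouping set.
        long adjustedInput = inputRecords * Math.max(1, groupingSetCount);
        if (adjustedInput == 0) {
            return false; // nothing processed yet; stay in hash mode
        }
        // If output volume stays close to the adjusted input volume,
        // hashing is not reducing the data and streaming is cheaper.
        return outputRecords > adjustedInput * MIN_REDUCTION;
    }

    public static void main(String[] args) {
        // Scenario from the report: 500,000 inputs and 9 grouping sets give
        // an adjusted input of 4,500,000, which a hash table capped at 1M
        // entries can never reach, so an entry-count check never fires.
        // An output-based check still can:
        System.out.println(shouldSwitchToStreaming(500_000, 4_000_000, 9)); // true: little reduction, switch
        System.out.println(shouldSwitchToStreaming(500_000, 100_000, 9));   // false: strong reduction, keep hashing
    }
}
```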