Apache Drill: DRILL-6115

SingleMergeExchange does not scale when many minor fragments are allocated for a query.




      SingleMergeExchange is created when a global order is required in the output. The following query produces a SingleMergeExchange in its plan.

      0: jdbc:drill:zk=local> explain plan for select L_LINENUMBER from dfs.`/drill/tables/lineitem` order by L_LINENUMBER;
      | text | json |
      | 00-00 Screen
      00-01 Project(L_LINENUMBER=[$0])
      00-02 SingleMergeExchange(sort0=[0])
      01-01 SelectionVectorRemover
      01-02 Sort(sort0=[$0], dir0=[ASC])
      01-03 HashToRandomExchange(dist0=[[$0]])
      02-01 Scan(table=[[dfs, /drill/tables/lineitem]], groupscan=[JsonTableGroupScan [ScanSpec=JsonScanSpec [tableName=maprfs:///drill/tables/lineitem, condition=null], columns=[`L_LINENUMBER`], maxwidth=15]])

      On a 10-node cluster, if the table is huge, Drill can spawn many minor fragments, all of which are merged on a single node by one merge receiver. This creates heavy memory pressure on the receiver node and an execution bottleneck. To address this issue, the merge receiver should be a multiphase merge receiver.

      Ideally, for a large cluster, one could introduce a tree of merges so that merging is done in parallel. As a first step, however, I think it is better to use the existing infrastructure for multiplexing operators to generate an OrderedMux, so that all the minor fragments belonging to one drillbit are merged locally and the merged data is then sent to the receiver operator.
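To illustrate the idea (not Drill's actual operator code), the sketch below models each minor fragment as a sorted list and performs the two-level merge: a local k-way merge per drillbit, followed by a final merge of one pre-merged stream per drillbit at the receiver. The class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class TwoLevelMerge {
    // k-way merge of already-sorted inputs using a priority queue,
    // analogous to what a merge receiver does with incoming fragments.
    static List<Integer> kWayMerge(List<List<Integer>> sortedInputs) {
        // Each heap entry: {value, inputIndex, positionInInput}
        PriorityQueue<int[]> heap =
            new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[0]));
        for (int i = 0; i < sortedInputs.size(); i++) {
            if (!sortedInputs.get(i).isEmpty()) {
                heap.add(new int[]{sortedInputs.get(i).get(0), i, 0});
            }
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            out.add(top[0]);
            List<Integer> src = sortedInputs.get(top[1]);
            int next = top[2] + 1;
            if (next < src.size()) {
                heap.add(new int[]{src.get(next), top[1], next});
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // 4 "minor fragments" spread over 2 "drillbits" (2 each).
        List<List<Integer>> bit1 = List.of(List.of(1, 5, 9), List.of(2, 6));
        List<List<Integer>> bit2 = List.of(List.of(3, 7), List.of(4, 8));

        // Phase 1 (OrderedMux): each drillbit merges its local fragments;
        // in Drill these local merges would run in parallel.
        List<Integer> local1 = kWayMerge(bit1);
        List<Integer> local2 = kWayMerge(bit2);

        // Phase 2: the receiver merges only one stream per drillbit.
        List<Integer> global = kWayMerge(List.of(local1, local2));
        System.out.println(global); // [1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```

The key point is that the final merge's fan-in drops from the total number of minor fragments to the number of drillbits, while each local merge stays small and runs concurrently.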

      For example, on a 10-node cluster where each node processes 14 minor fragments:

      The current code merges all 140 minor fragments at the single merge receiver.
      The proposed version has two levels of merging: first, a 14-way merge inside each drillbit, done in parallel; then the 10 resulting streams are merged at the receiver node.
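A quick back-of-envelope check of the fan-in numbers above, assuming 10 drillbits with 14 minor fragments each (the variable names are illustrative):

```java
public class FanIn {
    public static void main(String[] args) {
        int drillbits = 10;
        int fragmentsPerBit = 14;

        // Single-phase: the lone merge receiver reads every minor fragment.
        int singlePhaseFanIn = drillbits * fragmentsPerBit;

        // Two-phase: each drillbit first merges its own 14 fragments
        // (in parallel), then the receiver merges one stream per drillbit.
        int localFanIn = fragmentsPerBit;
        int receiverFanIn = drillbits;

        System.out.println("receiver fan-in: " + singlePhaseFanIn
            + " -> " + receiverFanIn
            + " (plus a parallel " + localFanIn + "-way local merge)");
    }
}
```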



              Hanumath Rao Maduri (hanu.ncr)
              Vlad Rozov