• Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: Impala 3.4.0
    • Component/s: Backend
    • Environment: None
    • Labels: ghx-label-8


      The current entry in the runtime profile for PLAN_ROOT_SINK does not contain much useful information:

      PLAN_ROOT_SINK:(Total: 234.996ms, non-child: 234.996ms, % non-child: 100.00%)
          - PeakMemoryUsage: 0

      There are several additional counters we could add to the PlanRootSink (either the BufferedPlanRootSink or the BlockingPlanRootSink):

      • Amount of time spent blocking inside the PlanRootSink: both the time the client thread spends waiting for rows to become available and the time the Impala fragment thread spends waiting for the client to consume rows
        • Similar to the RowBatchQueueGetWaitTime and RowBatchQueuePutWaitTime counters inside the scan nodes
        • The difference between these counters and the ones in ClientRequestState (e.g. ClientFetchWaitTimer and RowMaterializationTimer) should be documented
      • For BufferedPlanRootSink there are already several buffer pool counters; we should make sure they are exposed in the PLAN_ROOT_SINK section
      • Track the number of rows sent (e.g. rows passed to PlanRootSink::Send) and the number of rows fetched (the latter might need to be tracked in the ClientRequestState)
        • For BlockingPlanRootSink the sent and fetched values should be pretty much the same, but for BufferedPlanRootSink the distinction is more useful
        • Similar to RowsReturned in each exec node
      • The rate at which rows are sent and fetched
        • Should be useful when debugging the performance of row fetching (e.g. if the send rate is much higher than the fetch rate, the client may be the bottleneck)
        • Similar to RowsReturnedRate in each exec node

      Open to other suggestions for counters that folks think are useful.


              Assignee: Sahil Takiar (stakiar)
              Reporter: Sahil Takiar (stakiar)