Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Fix Version/s: None
    • Component/s: Tools
    • Labels:

      Description

CASSANDRA-7918 introduces graph output from a stress run, but these graphs are a little limited. Attached to the ticket is an example of some improved graphs which can serve as the basis for improvements, which I will briefly describe. They should not be taken as the exact end goal, but we should aim for at least their functionality, preferably with some JavaScript advantages thrown in, such as the hiding of datasets/graphs for clarity. Any ideas for improvements are definitely encouraged.

      Some overarching design principles:

      • Display on one screen all of the information necessary to get a good idea of how two or more branches compare to each other. Ideally we will reintroduce this, painting multiple graphs onto one screen, stretched to fit.
      • Axes must be truncated to only the interesting dimensions, to ensure there is no wasted space.
      • Each graph displaying multiple kinds of data should use colour and shape to help easily distinguish the different datasets.
      • Each graph should be tailored to the data it is representing, and we should have multiple views of each kind of data.

      The data can roughly be partitioned into three kinds:

      • throughput
      • latency
      • gc

      These can each be viewed in different ways:

      • as a continuous plot of:
        • raw data
        • scaled/compared to a "base" branch, or other metric
        • cumulatively
      • as box plots
        • ideally, these will plot median, outer quartiles, outer deciles and absolute limits of the distribution, so the shape of the data can be best understood

      Each view compresses the information differently, losing different detail, so that collectively they help us understand the data.
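As a sketch of the box plot described above (median, outer quartiles, outer deciles and absolute limits), the summary statistics could be computed along these lines. The function and field names are illustrative, not from the patch; d3 provides d3.quantile for the same purpose.

```javascript
// Linear-interpolation quantile over a pre-sorted array.
function quantile(sorted, q) {
  const pos = (sorted.length - 1) * q;
  const lo = Math.floor(pos), hi = Math.ceil(pos);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (pos - lo);
}

// Summary for one box: absolute limits, outer deciles, outer quartiles, median.
function boxplotSummary(values) {
  const s = values.slice().sort((a, b) => a - b);
  return {
    min: s[0],
    p10: quantile(s, 0.10),
    q1: quantile(s, 0.25),
    median: quantile(s, 0.50),
    q3: quantile(s, 0.75),
    p90: quantile(s, 0.90),
    max: s[s.length - 1]
  };
}
```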

      Some basic rules for presentation that work well:

      • Latency information should be plotted to a logarithmic scale, to avoid high latencies drowning out low ones
      • GC information should be plotted cumulatively, to avoid differing throughputs giving the impression of worse GC. It should also have a line that is rescaled by the amount of work (number of operations) completed
      • Throughput should be plotted as the actual numbers
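The cumulative-GC rule above, with its second line rescaled by work done, could be sketched as follows. The interval field names (gcMs, ops) are assumptions for illustration, not the actual stress output format.

```javascript
// For each interval, emit cumulative GC time and GC time per operation
// completed so far, so differing throughputs don't masquerade as worse GC.
function gcSeries(intervals) {
  let gcTotal = 0, opsTotal = 0;
  return intervals.map(iv => {
    gcTotal += iv.gcMs;
    opsTotal += iv.ops;
    return {
      time: iv.time,
      cumulativeGcMs: gcTotal,
      gcMsPerOp: opsTotal > 0 ? gcTotal / opsTotal : 0
    };
  });
}
```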

      To walk the graphs top-left to bottom-right, we have:

      • Spot throughput comparison of branches to the baseline branch, as an improvement ratio (which can of course be negative, but is not in this example)
      • Raw throughput of all branches (no baseline)
      • Raw throughput as a box plot
      • Latency percentiles, compared to baseline. The percentage improvement at any point in time vs baseline is calculated, and then multiplied by the overall median for the entire run. This simply permits the non-baseline branches to scatter their wins/losses around a relatively clustered line for each percentile. It's probably the most "dishonest" graph, but comparing something like latency, where each data point can have very high variance, is difficult, and this gives you an idea of the clustering of improvements/losses.
      • Latency percentiles, raw, each with a different shape; lowest percentiles plotted as a solid line as they vary least, with higher percentiles each getting their own subtly different shape to scatter.
      • Latency box plots
      • GC time, plotted cumulatively and also scaled by work done
      • GC Mb, plotted cumulatively and also scaled by work done
      • GC time, raw
      • GC time as a box plot

      Most of these graphs introduce the concept of a "baseline" branch. Ideally, this baseline would be selected via a dropdown so the JavaScript can transform the output dynamically. This would permit more interesting comparisons to be made on the fly.

      There are also some complexities, such as deciding which datapoints to compare against baseline when times get out-of-whack (due to GC, etc, causing a lack of output for a period). The version I uploaded does a merge of the times, permitting a small degree of variance, and ignoring those datapoints we cannot pair. One option here might be to change stress' behaviour to always print to a strict schedule, instead of trying to get absolutely accurate apportionment of timings. If this makes things much simpler, it can be done.
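The timestamp merge described above could look something like this sketch: walk the two series in time order, pair datapoints whose times fall within a tolerance, and ignore those that cannot be paired. This is an illustration of the approach, not the actual uploaded code.

```javascript
// Pair branch/baseline datapoints whose timestamps agree within `tolerance`
// seconds; unpaired points (e.g. gaps caused by GC stalls) are dropped.
// Both inputs are assumed sorted by ascending `time`.
function mergeByTime(branch, baseline, tolerance) {
  const pairs = [];
  let i = 0, j = 0;
  while (i < branch.length && j < baseline.length) {
    const dt = branch[i].time - baseline[j].time;
    if (Math.abs(dt) <= tolerance) {
      pairs.push([branch[i], baseline[j]]);
      i++; j++;
    } else if (dt < 0) {
      i++;  // branch point has no partner within tolerance; skip it
    } else {
      j++;  // baseline point has no partner within tolerance; skip it
    }
  }
  return pairs;
}
```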

      As previously stated (though it may be lost in the wall of text), these should be taken as a starting point / signpost, rather than a golden rule for the end goal. But ideally they will be the lower bound of what we can deliver.

      Attachments: reads.svg (322 kB, Benedict)

        Issue Links

          Activity

          benedict Benedict added a comment - edited

          A thought occurs on further clarifying / honestifying the improvement ratios: instead of calculating the ratio as (branch - baseline) / baseline, calculate it as (branch - baseline) / (branch >= baseline ? baseline : branch).

          The effect of this is to make

          • positive results: improvement ratio over baseline
          • negative results: improvement ratio (of baseline) over the branch

          What this means is that when baseline is 20% faster than branch, it moves the graph just as much as if branch is 20% faster than baseline. Without this, such a scenario would result in the baseline win being scaled down to 16.6% (-0.2/1.2).

          For the logarithmic latency plots we are instead best off calculating the exact ratio of baseline (not improvement ratio), as a 50% reduction is probably best plotted equivalently to a doubling of latency. So this suggestion only affects spot throughput comparison plots.
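The suggested symmetric ratio is a one-liner; the sketch below just makes the arithmetic from the comment concrete (the function name is illustrative):

```javascript
// Symmetric improvement ratio: divide by the smaller of the two throughputs,
// so a 20% baseline win moves the graph exactly as far as a 20% branch win.
function improvementRatio(branch, baseline) {
  return (branch - baseline) / (branch >= baseline ? baseline : branch);
}
```

With the plain ratio (branch - baseline) / baseline, a branch at 1.0 against a baseline at 1.2 would show only -0.2/1.2 ≈ -16.6%; the symmetric form shows -20%, mirroring the +20% the branch would get in the opposite case.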

          shawn.kumar Shawn Kumar added a comment -

          Just a quick update: the code for this lives here. I have built off what Ryan had already written, but the changes were quite significant, since the code was previously pretty much limited to displaying raw metrics and organized for that purpose. Implemented so far:

          • support for multiple datasets per revision (see the lat_all graph)
          • support for baseline-requiring graphs (see throughput % improvement)
          • fixing/rebuilding the existing functions for these graphs (i.e. scaling, legends, colouring, etc.)

          Still remaining:

          • box plot support (currently working on this using the d3plus library)
          • logarithmic scaling
          • fleshing out data processing for the remaining graphs
          • adding legend entries / changing line styles for different datasets under the same revision
          • the aesthetic/UI changes, namely the 'aggregating' screen showing all graphs

          benedict Benedict added a comment -

          How is this progressing? When do you think we'll have some example graphs to take a look at?


            People

            • Assignee: enigmacurry Ryan McGuire
            • Reporter: benedict Benedict
            • Votes: 0
            • Watchers: 6
