• Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.6
    • Fix Version/s: 0.6
    • Labels:


      Note: this ticket has evolved into a general ticket about improving the performance of the platform.

      In order to track performance improvements, we need some reproducible performance benchmarks. Here are some ideas of what we'd need:

      • use PEs that do nothing but create a new message and forward it; this lets us focus on the overhead of the platform itself
      • measure the maximum throughput achievable without dropping messages on a given host (in a setup with 1 adapter node and 1 or 2 app nodes)
      • measure end-to-end processing latency (average, median, etc.)
      • use a very simple app, with only 1 PE prototype
      • vary the number of keys
      • use a slightly more complex app (at least 2 communicating PE prototypes), in order to take inter-PE communication and related optimizations into account
      • start measurements only after a warmup phase
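
      As an illustration of the last two points (warmup phase, then average/median latency), here is a minimal, self-contained Java sketch. It is not S4 code; the class and method names are hypothetical, and it simply discards an initial run of warmup samples before computing the statistics:

```java
import java.util.Arrays;

// Hypothetical helper: drops the first `warmup` latency samples (JIT
// compilation, cold caches) and reports average and median over the rest.
public class LatencyStats {

    // Returns {average, median} over samples[warmup..] in the same unit
    // as the input (e.g. nanoseconds).
    static double[] summarize(long[] samples, int warmup) {
        long[] steady = Arrays.copyOfRange(samples, warmup, samples.length);
        Arrays.sort(steady);
        double sum = 0;
        for (long s : steady) sum += s;
        double avg = sum / steady.length;
        int n = steady.length;
        double median = (n % 2 == 1)
                ? steady[n / 2]
                : (steady[n / 2 - 1] + steady[n / 2]) / 2.0;
        return new double[] { avg, median };
    }

    public static void main(String[] args) {
        // Synthetic samples: the first two are warmup outliers.
        long[] samples = { 900, 850, 100, 120, 110, 130, 140 };
        double[] stats = summarize(samples, 2);
        System.out.printf("avg=%.1f median=%.1f%n", stats[0], stats[1]);
    }
}
```

      A real benchmark would collect the samples by timestamping each message at injection and at the end of the PE chain; the aggregation step would look like the above.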

      Some tests could be part of the test suite (enabled by a dedicated option for performance-related tests). That would allow performance to be tracked over time.
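
      One way to make such tests opt-in is to guard them behind a JVM system property; this is only a sketch of the idea, and the property name "perf.tests" is an assumption, not an existing S4 convention:

```java
// Hypothetical guard: the performance test body only runs when launched
// with -Dperf.tests=true, so regular test runs stay fast.
public class PerfTestGuard {

    static boolean perfTestsEnabled() {
        // Boolean.getBoolean returns true only if the named system
        // property exists and equals "true".
        return Boolean.getBoolean("perf.tests");
    }

    public static void main(String[] args) {
        if (!perfTestsEnabled()) {
            System.out.println("performance tests skipped (enable with -Dperf.tests=true)");
            return;
        }
        // ... run throughput/latency measurements here ...
    }
}
```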

      We could also add a simple injection mechanism that works out of the box with the example app bundled with new S4 apps (generated through the "s4 newApp" command).



            • Assignee:
              Matthieu Morel
            • Votes:
              0
            • Watchers:
              4


              • Created: