ActiveMQ Artemis / ARTEMIS-2852

Huge performance decrease between versions 2.2.0 and 2.13.0


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.16.0
    • Component/s: None
    • Labels: None

    Description

      Hi,

      Recently, we started to prepare a new revision of our blog-post in which we test various implementations of replicated queues. Previous version can be found here:  https://softwaremill.com/mqperf/

      We updated the artemis binary to 2.13.0, regenerated the configuration file and applied all the performance tricks you told us about last time. In particular these were:

      • the Xmx Java parameter bumped to 16G (now bumped to 48G)
      • in broker.xml, the global-max-size setting changed to 8G (this one we forgot to set, but we suspect that it is not the issue)
      • journal-type set to MAPPED
      • journal-datasync, journal-sync-non-transactional and journal-sync-transactional all set to false
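
      For reference, the broker.xml settings listed above would look roughly like this (a sketch only; the surrounding acceptor/address configuration is omitted, and the Xmx change goes in artemis.profile rather than broker.xml):

      ```xml
      <!-- Fragment of broker.xml (inside <configuration>); illustrates the
           settings quoted above, not the full reporter's configuration. -->
      <core xmlns="urn:activemq:core">
        <!-- 8 GiB global memory limit for all addresses, written out in bytes -->
        <global-max-size>8589934592</global-max-size>

        <!-- memory-mapped journal instead of the default NIO/AIO journal -->
        <journal-type>MAPPED</journal-type>

        <!-- disable disk syncs for maximum throughput (at the cost of durability) -->
        <journal-datasync>false</journal-datasync>
        <journal-sync-non-transactional>false</journal-sync-non-transactional>
        <journal-sync-transactional>false</journal-sync-transactional>
      </core>
      ```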

      Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 Mbps), and we decided to always run twice as many receivers as senders.

      From our tests it looks like version 2.13.0 does not scale as well with the increase of senders and receivers as version 2.2.0 (previously tested). Basically, it is not scaling at all: the throughput stays at almost the same level, while previously it used to grow linearly.

      Here you can find our tests results for both versions: https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing

      We are aware that there is now a dedicated page in the documentation about performance tuning, but we are surprised that the same settings as before perform much worse.

      Maybe there is an obvious property we overlooked which should be turned on?

      All changes between those versions, together with the final configuration, can be found in this merged commit: https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a


      Charts showing the machines' usage are in the attachments. Memory consumed by the artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either.

      P.S. I wanted to ask this question on the mailing list/Nabble forum first, but it seems that I don't have permission to do so even though I registered & subscribed. Is that intentional?


      Attachments

        1. Selection_451.png
          9 kB
          Kasper Kondzielski
        2. Selection_441.png
          73 kB
          Kasper Kondzielski
        3. Selection_440.png
          65 kB
          Kasper Kondzielski
        4. Selection_434.png
          74 kB
          Kasper Kondzielski
        5. Selection_433.png
          68 kB
          Kasper Kondzielski


            People

              Assignee: Francesco Nigro (nigro.fra@gmail.com)
              Reporter: Kasper Kondzielski (kkondzielski)
              Votes: 1
              Watchers: 7
