Qpid / QPID-4873

Optimizations in Java client to reduce queue memory footprint



    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.23
    • Fix Version/s: 0.23
    • Component/s: Java Common, JMS AMQP 0-x
    • Labels: None


      My team is using the Java broker and Java client, version 0.16, and we are looking to lower the client's memory footprint on our servers. We did some heap analysis and found that the consumption comes mostly from AMQAnyDestination instances, each with a retained size of roughly 3KB; since we have 6000 queues on each of our 2 brokers, this amounts to about 33MB, which is valuable real estate for us. In our analysis we found a few possible optimizations in the Qpid code that would reduce the per-queue heap consumption and don't seem high risk, and we would like to propose the following changes (I will attach a patch file).

      (I had originally emailed the users list 2 weeks ago, and Rob Godfrey asked me to raise a JIRA with the changes in a patch file – http://mail-archives.apache.org/mod_mbox/qpid-users/201305.mbox/%3CCACsaS94F0MQeyAKTN3yoU=j-MPc6oFWZgtCtj68GAwOcN=508g@mail.gmail.com%3E)

      The changes I attach here are against trunk, and I've redone the numbers and analysis running with the latest client.

      1. In Address / AddressParser, caching/reusing the options Maps for queues created with the same options string. (This optimization gives us the most significant savings.)

      All our queues are created with the same options string, which means each corresponding AMQDestination has an Address that has an _options Map that is the same for all queues, i.e., 12K copies of the same map. As far as we can tell, the _options map is effectively immutable, i.e., there is no code path by which an Address’s _options map can be modified. (Is this correct?) So a possible improvement is that in org.apache.qpid.messaging.util.AddressParser, we cache the options map for each options string that we've already encountered, and if the options string passed in has already been seen, we use the stored options map for that Address. This way, for queues having the same options, their Address options will reference the same Map. (For our queues, each Address _options Map currently takes up 1416 B.)
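To illustrate the idea, here is a minimal sketch of such a cache. The class name `OptionsCache`, the method names, and the toy parser are illustrative only, not the actual Qpid AddressParser internals; the real patch would hook into AddressParser's existing parsing logic.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: share one immutable options Map per distinct
// options string, instead of one Map per destination instance.
public class OptionsCache {
    private static final ConcurrentHashMap<String, Map<String, Object>> CACHE =
            new ConcurrentHashMap<String, Map<String, Object>>();

    // Returns a shared, unmodifiable options map for the given options
    // string, parsing it only the first time that string is seen.
    public static Map<String, Object> getOptions(String optionsString) {
        Map<String, Object> cached = CACHE.get(optionsString);
        if (cached == null) {
            Map<String, Object> parsed =
                    Collections.unmodifiableMap(parseOptions(optionsString));
            // putIfAbsent handles the race where two threads parse at once;
            // both end up holding the same winning instance.
            Map<String, Object> prior = CACHE.putIfAbsent(optionsString, parsed);
            cached = (prior != null) ? prior : parsed;
        }
        return cached;
    }

    // Stand-in for the real address-options parser.
    private static Map<String, Object> parseOptions(String s) {
        Map<String, Object> m = new HashMap<String, Object>();
        for (String pair : s.split(";")) {
            String[] kv = pair.split(":", 2);
            if (kv.length == 2) m.put(kv[0].trim(), kv[1].trim());
        }
        return m;
    }
}
```

Wrapping the cached map in Collections.unmodifiableMap also makes the immutability assumption explicit: any code path that did try to modify a shared map would now fail fast instead of silently corrupting other destinations.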

      2. AMQDestination's _link field – org.apache.qpid.client.messaging.address.Link

      Optimization A: org.apache.qpid.client.messaging.address.Link$Subscription's args field is by default a new HashMap with default capacity 16. In our use case it remains empty for all queues. A possible optimization is to set the default value as Collections.emptyMap() instead. As far as we can tell, Subscription.getArgs() is not used to get the map and then modify it. For us this saves 128B per queue.

      Optimization B: Similarly, Link has a _bindings List that is by default a new ArrayList with a default capacity of 10. In our use case it remains empty for all queues, and as far as we can tell this list is not modified after it is set. If we make the default value Collections.emptyList() instead, it will save us 80B per queue.
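Both optimizations follow the same pattern, sketched below. The class is a simplified stand-in for the real Link, with only the fields relevant here; the JDK's shared immutable singletons cost nothing per instance, and callers that do have bindings or args still install a real collection through the setter.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Simplified stand-in for org.apache.qpid.client.messaging.address.Link,
// showing the proposed empty-collection defaults only.
public class Link {
    // Before: new ArrayList() with default capacity 10 (~80B per queue,
    // even when it stays empty). After: one shared immutable instance.
    private List<String> _bindings = Collections.emptyList();

    public List<String> getBindings() { return _bindings; }

    public void setBindings(List<String> bindings) { _bindings = bindings; }

    public static class Subscription {
        // Before: new HashMap() with default capacity 16 (~128B per queue).
        private Map<String, Object> _args = Collections.emptyMap();

        public Map<String, Object> getArgs() { return _args; }

        public void setArgs(Map<String, Object> args) { _args = args; }
    }
}
```

Note that Collections.emptyList() and Collections.emptyMap() are immutable, so this change is only safe under the stated assumption that no code path mutates these collections in place after construction.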

      3. AMQDestination's _node field – org.apache.qpid.client.messaging.address.Node

      Node has a _bindings List that is by default a new ArrayList with the default capacity. In our use case _bindings remains empty for all queues, and I don't see getBindings() being used to get the list and then modify it. I also don't see addBindings() being called anywhere in the client. So a possible optimization is to set the default value as Collections.emptyList() instead. For us this saves 80B per queue.
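Since addBindings() does exist on Node even if nothing calls it today, the empty-list default needs one guard: Collections.emptyList() is immutable, so a mutator must first swap in a real list. A sketch of that guard, with illustrative names rather than the actual Qpid source:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Simplified stand-in for org.apache.qpid.client.messaging.address.Node.
public class Node {
    // Shared immutable default; no per-instance allocation for the
    // common case where a queue has no bindings.
    private List<String> _bindings = Collections.emptyList();

    public List<String> getBindings() { return _bindings; }

    // The immutable default cannot be added to, so lazily replace it
    // with a real ArrayList on the first (currently unused) mutation.
    public void addBinding(String binding) {
        if (_bindings.isEmpty()) {
            _bindings = new ArrayList<String>();
        }
        _bindings.add(binding);
    }
}
```

This keeps the API behavior unchanged for any future caller while still paying the allocation cost only for nodes that actually have bindings.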

      The changes in AddressHelper.getBindings() are for the case when there are node or link properties defined, but no bindings.

      Overall: Originally, each queue took up about 2760B for us; with these optimizations, that goes down to 1024B, a reduction of about 63% per queue.

      We'd appreciate feedback on these changes and whether we are making any incorrect assumptions. I've also added relevant tests to AMQDestinationTest but am not sure if that's the best place. Thanks a lot!




            Assignee: Rajith Muditha Attapattu
            Reporter: Helen Kwong
            Votes: 0
            Watchers: 4