Camel / CAMEL-5683

JMS connection leak with request/reply producer on temporary queues

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.10.0
    • Fix Version/s: 2.9.4, 2.10.2, 2.11.0
    • Component/s: camel-jms
    • Labels:
      None
    • Environment:

      Apache Camel 2.10.0
      ActiveMQ 5.6.0
      Spring 3.2.1.RELEASE
      Java 1.6.0_27
      SunOS HOST 5.10 Generic_144488-09 sun4v sparc SUNW,SPARC-Enterprise-T5220

    • Estimated Complexity:
      Unknown

      Description

      Over time I see the number of temporary queues in ActiveMQ slowly climb. Using JMX information and memory dumps in MAT, I believe the cause is a connection leak in Apache Camel.

      My environment contains 2 ActiveMQ brokers in a network of brokers configuration. There are about 15 separate applications which use Apache Camel to connect to the broker using the ActiveMQ/JMS component. The various applications have different load profiles and route configurations.

      In the more active client applications, I found that ActiveMQ was listing 300+ consumers when, based on my configuration, I would expect no more than 75. The vast majority of the consumers are sitting on a temporary queue. Over time, the 300 number increments by one or two over about a 4 hour period.

      I did a memory dump on one of the more active client applications and found about 275 DefaultMessageListenerContainers. Using MAT, I can see that some of the containers are referenced by JmsProducers in the ProducerCache; however I can also see a large number of listener containers that are no longer being referenced at all. I was also able to match up a soft-referenced producer/listener endpoint with an unreferenced listener, which means a second producer was created at some point.

      Looking through the ProducerCache code, it looks like the LRU cache uses soft-references to producers, in my case a JmsProducer. This seems problematic for two reasons:

      • If memory gets constrained and the GC cleans up a producer, it is never properly stopped.
      • If the cache gets full and the map removes the LRU producer, it is never properly stopped.

      What I believe is happening is that my application is sending a few request/reply messages to a JmsProducer. The producer creates a TemporaryReplyManager which creates a DefaultMessageListenerContainer. At some point, the JmsProducer is claimed by the GC (either via the soft-reference or because the cache is full) and the reply manager is never stopped. This causes the listener container to continue to listen on the temporary queue, consuming local resources and, more importantly, consuming resources on the JMS broker.

      I haven't had a chance to write an application to reproduce this behavior, but I will attach one of my route configurations and a screenshot of the MAT analysis looking at DefaultMessageListenerContainers. If needed, I could provide the entire memory dump for analysis (although I'd rather not post it publicly). The leak rate depends on memory usage and producer count in the client application, because the ProducerCache must have some churn. Like I said, in our production system we see about 12 temporary queues abandoned per client per day.

      Unless I'm missing something, it looks like the producer cache would need to be much smarter to support stopping a producer when the soft-reference is reclaimed or a member of the cache is ejected from the LRU list.
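      To see the failure mode in isolation, here is a minimal sketch (hypothetical FakeProducer and SoftProducerCache types, not Camel's actual cache code): once a soft-referenced value is reclaimed, there is no hook left through which stop() could ever run.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for a stateful producer that must be stopped.
class FakeProducer {
    boolean stopped;
    void stop() { stopped = true; }
}

// A toy cache that holds values only via SoftReference. Once the GC clears
// a referent, the value disappears from the cache without stop() ever running.
class SoftProducerCache {
    private final Map<String, SoftReference<FakeProducer>> map = new HashMap<>();

    void put(String key, FakeProducer p) {
        map.put(key, new SoftReference<>(p));
    }

    FakeProducer get(String key) {
        SoftReference<FakeProducer> ref = map.get(key);
        return ref == null ? null : ref.get();
    }

    // Simulates GC reclamation for demonstration; a real GC would do this
    // under memory pressure. Note that stop() is never called on the referent.
    void simulateReclamation() {
        map.values().forEach(SoftReference::clear);
    }
}
```

The same silent drop happens on LRU eviction: the entry leaves the map, and nothing calls stop().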

      1. MAT Snapshot.png
        231 kB
        Michael Pilone
      2. Route Configuration.txt
        8 kB
        Michael Pilone
      3. Consumer List.txt
        95 kB
        Michael Pilone
      4. CamelConnectionLeak.zip
        10 kB
        Michael Pilone
      5. CamelConnectionLeak-ProducerTemplate.zip
        10 kB
        Michael Pilone

          Activity

          Michael Pilone added a comment -

          Attached screenshot from MAT analysis.

          Michael Pilone added a comment -

          Attached the route configuration for the JMS client application being analyzed.

          Attached the list of consumers as ActiveMQ sees it for the client application being analyzed.

          Michael Pilone added a comment -

          I attached a test case which reproduces the problem. The test case has 3 JMS request/reply routes. It runs in a loop, sending a message, consuming a bunch of memory, then sending another message. As the GC starts to run, the producer on route 2 is reclaimed and a consumer is leaked. Instructions for running it are in the LeakMain class.

          Michael Pilone added a comment -

          It would be great to get a work-around for this. As of now, we have to restart our services every couple of days to keep them from exhausting ActiveMQ resources with hundreds of temporary queues.

          Raúl Kripalani added a comment -

          Michael,

          Many thanks for such a detailed description, test case and bug report!

           Have you tried setting the size of the ProducerCache to zero? Check [1] for instructions on how to do this. Beware that I haven't tested it; it's just a suggested workaround. If you have static endpoint URIs, then I don't think you should experience any churn or performance hit from having a non-existent ProducerCache.

          Regards,
          Raúl.

          [1] http://camel.apache.org/how-do-i-configure-the-default-maximum-cache-size-for-producercache-or-producertemplate.html

          Michael Pilone added a comment -

          Raul, thanks for the suggestion. I gave it a try but it didn't work. I set a few breakpoints and found that the configuration of my test case creates the ProducerCache in SendProcessor.java line 152. The cache is hard coded to a size of 1.

          If you then set a breakpoint in ProducerCache.java line 385 where the producer is created using the endpoint, you can see that the producer is occasionally no longer in the cache and must be recreated which means it must have been reclaimed via a GC soft-reference.

          Claus Ibsen added a comment -

           Yeah, it does not make much sense to use a producer cache in the send processor, as it's single-producer based. So if we just store the Producer as a strong reference, then there is no issue like this.

          Claus Ibsen added a comment -

          I have committed a fix on trunk, and backporting to 2.10 and 2.9 branches.
          You are welcome to give those a try.

          Michael Pilone added a comment -

           Claus, thanks for the quick fix. I'll try building the source and verifying the fix. Your change in the SendProcessor looks like it will solve my problem, but doesn't the problem still exist if I were using the DefaultProducerTemplate? I could probably hack my test case to use the template rather than a gateway proxy and route configuration, and I think Camel would continue to leak listeners.

          For example, the sample documentation for ProducerTemplate shows:

          ProducerTemplate template;
          // send to default endpoint
          template.sendBody("<hello>world!</hello>");
          // send to a specific queue
          template.sendBody("activemq:MyQueue", "<hello>world!</hello>");

          The second send to ActiveMQ, if it was request/reply, would put a JmsProducer in the ProducerCache with a listener/consumer which could/would later leak.

          Raúl Kripalani added a comment -

          Maybe we need to override the finalize() method of the JmsProducer (and review all other producers), but take a look at this post which suggests another approach: http://stackoverflow.com/questions/1638859/gracefully-finalizing-the-softreference-referent.

          Michael Pilone added a comment -

           I compiled the code from the 2.10.x branch and confirmed that your change does appear to fix the issue when using the SendProcessor. However, I also confirmed my previous comment that the problem still exists when using the DefaultProducerTemplate (or any other code that uses the ProducerCache with the LRU map implementation). I'll attach an updated test case which uses the ProducerTemplate to reproduce the problem. The current cache implementation is going to be a problem with any producer that requires a stop call to clean up properly.

          You might want to look at modifying the ProducerCache to support a ReferenceQueue with the SoftReferences. Then the ProducerCache could drain the queue and stop all the reclaimed producers before creating a new producer.
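           One way to realize the ReferenceQueue idea is sketched below (hypothetical StoppableReference and ReferenceDrainer names, not Camel code). A subtlety: by the time a reference is enqueued, the referent is already unreachable, so the stop action must be carried on the reference object itself rather than fetched from the producer.

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;

// The referent is gone by the time this reference is enqueued, so the
// cleanup action must live on the reference object itself.
class StoppableReference<T> extends SoftReference<T> {
    final Runnable stopAction;

    StoppableReference(T referent, Runnable stopAction, ReferenceQueue<? super T> queue) {
        super(referent, queue);
        this.stopAction = stopAction;
    }
}

class ReferenceDrainer {
    // Drain the queue and run each pending stop action, e.g. before
    // creating a new producer. Returns how many actions ran.
    static int drainAndStop(ReferenceQueue<Object> queue) {
        int stopped = 0;
        Reference<?> ref;
        while ((ref = queue.poll()) != null) {
            if (ref instanceof StoppableReference) {
                ((StoppableReference<?>) ref).stopAction.run();
                stopped++;
            }
        }
        return stopped;
    }
}
```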

          Even with that fix, it might be a good idea to have an easy way (e.g. via a context property) to disable the soft-references in the cache and rely only on max cache size. If I know I'm only going to have 3 or 4 producers but a lot of memory churn, it would be nice to know that my producers would stay in the cache until I completely fill it. This could be really valuable if producer construction/teardown were expensive.

          Michael Pilone added a comment -

          Reopening because the problem still exists when using the ProducerTemplate (or anything else using the ProducerCache).

          Michael Pilone added a comment -

          Attached an updated test case that shows the same problem when using the ProducerTemplate.

          Michael Pilone added a comment -

           Raul, I agree. I need to refresh my page before commenting.

          The more I think about it the trickier the problem gets. Using the ReferenceQueue on the SoftReferences would help cleanup producers in the GC case, but you would need to make sure the cache also handles the case where the LRU item is evicted when the capacity is reached. In the eviction case, there is no ReferenceQueue to hold the item for later cleanup.

           It might make sense to remove the SoftReference support and just keep the LRU/capacity behavior. Then add a listener interface or "evicted queue" to the LRU hashmap to collect items (i.e. producers) that have been evicted and are pending cleanup. It seems like the use of SoftReferences undermines the LRU concept, because the GC is deciding when to collect an item rather than letting the map track the last used time. In theory the GC is supposed to be biased against SoftRef collection, but it seems pretty aggressive from my simple tests.

           Something like java.util.LinkedHashMap gives you a removeEldestEntry method, which would be a nice place to hook in producer shutdown code and would avoid these problems.
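           The removeEldestEntry approach can be sketched as follows (hypothetical StoppableService and StoppingLruCache names, not Camel's LRUCache): the eldest entry is stopped at the moment the map decides to evict it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal stand-in for a producer that must be stopped on eviction.
class StoppableService {
    boolean stopped;
    void stop() { stopped = true; }
}

// An access-ordered LRU map that stops the eldest entry before evicting it.
class StoppingLruCache extends LinkedHashMap<String, StoppableService> {
    private final int maxSize;

    StoppingLruCache(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, StoppableService> eldest) {
        if (size() > maxSize) {
            eldest.getValue().stop(); // shutdown hook runs before eviction
            return true;
        }
        return false;
    }
}
```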

          Raúl Kripalani added a comment -

          Beware that the LRU and the cleanup of the SoftReferences kick in at different times. They cater for different situations:

          • LRU logic is valuable when your recipientList can generate many, many different producers. In a hypothetical case, if there are 2000 users and each user has a dedicated JMS topic where you want to publish messages to from your Camel route, you may end up with 2000 items in the ProducerCache, even if 1000 users are no longer active. The LRU allows Camel to vacuum potentially irrelevant producers. There is a max. producer cache size you can set to control the threshold.
          • SoftReferences are valuable in near-OOM situations. It allows the JVM to 'intelligently' dispose of objects that can be recreated later, once the memory exhaustion subsides.

           Both functionalities are thus valuable. We just need to address the memory leak in SoftReferences, perhaps by using finalize().

          Michael Pilone added a comment -

          I can understand the need for the two different mechanisms, but I'd suggest that you find an approach where both the ReferenceQueue from collected SoftRef and the LRU evictions end up in the same place to support producer shutdown. Maybe the LRU evictions could be put on the same reference queue.

           Using finalizers means that each stateful producer needs to properly implement a finalizer and ensure that it is safe to call even if the producer was properly stopped previously. That seems like asking for trouble, given the number of disparate producer implementations. You already have an API/mechanism for stopping producers, so you just want to make sure the cache uses that mechanism in all automatic cache removal cases. Just my opinion though.

          Claus Ibsen added a comment -

           The DefaultProducerTemplate constructor allows you to pass in your own map cache, so you can just pass in the LRUCache (not the soft one) or use an unlimited cache, etc.

          Claus Ibsen added a comment -

          The LRUCache now stops the service when evicting the entry.

          Claus Ibsen added a comment -

          1)
           I think it may make sense to let the DefaultProducerTemplate / DefaultConsumerTemplate use a non-soft cache (e.g. just LRUCache), as they are created by end users, who would thus be able to control this. For example, they can lower the cache size to reduce memory usage if using a lot of different producers. And now the elements that get evicted will be stopped as well.

          2)
           Then there are some internal caches in Camel, such as some based on Class/Method introspection, which can safely be soft/weak based, as there is no "stop" logic needed.

          3)
           Whether some of the EIPs which use a ProducerCache should be non-soft based, we can take a look. It may make sense.
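           Point 2 can be illustrated with a recomputable lookup cache (a generic sketch with a hypothetical MethodCache class, not Camel's introspection code): because a reclaimed entry can simply be recomputed on the next request, no stop logic is needed and soft references are safe here, unlike for stateful producers.

```java
import java.lang.ref.SoftReference;
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A recomputable lookup cache: if the GC reclaims an entry, the next
// request just recomputes it. Nothing stateful is lost, so soft
// references are harmless in this kind of cache.
class MethodCache {
    private final Map<String, SoftReference<Method[]>> cache = new ConcurrentHashMap<>();

    Method[] methodsOf(Class<?> type) {
        String key = type.getName();
        SoftReference<Method[]> ref = cache.get(key);
        Method[] methods = (ref == null) ? null : ref.get();
        if (methods == null) {
            methods = type.getDeclaredMethods(); // cheap to recompute
            cache.put(key, new SoftReference<>(methods));
        }
        return methods;
    }
}
```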

          Claus Ibsen added a comment -

          I have committed a fix for 1+2+3, so we use a non-soft cache for the producer/consumer caches in Camel. And they are stopped on eviction as well.

           Michael, feel free to give it a test run.

          Michael Pilone added a comment -

          I ran 2.10.2-SNAPSHOT through my test cases and everything looks good. Thanks for your attention to the matter and a good, complete solution. Now I just need to decide if I want to run with a SNAPSHOT in production or wait for 2.10.2 final!


            People

            • Assignee: Claus Ibsen
            • Reporter: Michael Pilone
            • Votes: 0
            • Watchers: 3
