CAMEL-2740

Using static queue as a reply queue in InOut pattern causes memory leak

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.2.0
    • Fix Version/s: 2.7.5, 2.8.3, 2.9.0
    • Component/s: camel-jms
    • Labels:
      None
    • Environment:

      Originally reported on Windows 2003 Server and recently confirmed on OS X in a standalone unit test.

    Description

      I am running JBoss, ActiveMQ and Camel for my application. In the InOut pattern, I am using a predefined static queue as the reply queue. After running for a while, the memory usage of JBoss keeps growing until it hits an OutOfMemoryError and the app server hangs completely. Monitoring the threads in JConsole, I can see that the JMS connections/sessions keep growing.

      But once I switch to a temporary queue for the InOut pattern, the problem goes away.
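
      For reference, a minimal sketch of the two request styles described above, using the Camel JMS component's replyTo option (the endpoint and queue names are illustrative assumptions, not the application's real ones):

      // Static (named) reply queue: the replyTo option pins replies to a
      // predefined destination. This is the configuration reported to leak.
      Object reply = template.requestBody(
              "jms:queue:APP.REQUEST?replyTo=APP.REPLY", "ping");

      // Temporary reply queue: with no replyTo option, the JMS component
      // creates a temporary destination for replies, which does not leak.
      Object reply2 = template.requestBody("jms:queue:APP.REQUEST", "ping");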

        Activity

        Claus Ibsen added a comment -

        Could you post more details, e.g. which Camel route do you use?

        And how have you set up ActiveMQ?

        Claus Ibsen added a comment -

        And which versions of the various pieces of software are you using, e.g. JBoss, AMQ, JDK, etc.?

        Qingyi Gu added a comment -

        Here are the software versions:

        JBoss 4.2.2
        AMQ 5.3.1
        JDK 1.5.0_22
        Camel 2.2.0

        Qingyi Gu added a comment -

        Here are some more details.

        AMQ: I have two ActiveMQ brokers with one network connector between them. See the config below.

        <networkConnector name="server1" uri="static://(https://localhost:61617?proxyHost=server1&proxyPort=80)" duplex="true">
          <!-- limit store and forward to specific queues -->
          <dynamicallyIncludedDestinations>
            <queue physicalName="TO_SERVER1.>"/>
            <queue physicalName="TO_SCA.>"/>
          </dynamicallyIncludedDestinations>
          <staticallyIncludedDestinations>
            <queue physicalName="TO_SERVER1.SYNC_RESP"/>
            <queue physicalName="TO_SCA.SSO.SYNC_REQ"/>
          </staticallyIncludedDestinations>
        </networkConnector>

        Camel route on the consumer side:

        from("jms:queue:TO_SCA.SSO.SYNC_REQ?concurrentConsumers=25")
            .choice()
                .when(header(JMS_HEADER_TYPE).isEqualTo("TYPE1"))
                    .beanRef("service1")
                .when(header(JMS_HEADER_TYPE).isEqualTo("TYPE2"))
                    .beanRef("service2")
                .otherwise()
                    .beanRef("unknownService")
            .end();

        On the producer side:

        HashMap<String, Object> reqHeaders = new HashMap<String, Object>();
        reqHeaders.put("JMSType", "TYPE1");

        // Options: route replies to a predefined static queue
        String options = "?replyTo=TO_EC.SERVER1.SYNC_RESP";

        // Send Message
        String outMsg = (String) camelTemplate.sendBodyAndHeaders(
                "jms:queue:TO_SCA.SSO.SYNC_REQ" + options,
                ExchangePattern.InOut,
                inMsg,
                reqHeaders);

        Claus Ibsen added a comment -

        Can you create a small project and attach it as a zip file? Then it's easier to look into this.

        Claus Ibsen added a comment -

        See this FAQ about the correct way of using templates:
        http://camel.apache.org/why-does-camel-use-too-many-threads-with-producertemplate.html

        Also try Camel 2.5, because the request-reply logic over JMS has been refactored a bit to cater for the async routing engine.
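
        In short, the FAQ's advice is to create the template once and reuse it for all messages. A minimal sketch, assuming a camelContext is in scope (names are illustrative):

        // Create the ProducerTemplate once, e.g. at application startup.
        // Each template owns producers and thread pools, so creating one
        // per message leaks threads and JMS resources.
        ProducerTemplate template = camelContext.createProducerTemplate();

        // ... reuse the same template instance for every message ...

        // Stop it once, at shutdown, to release its resources.
        template.stop();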

        Claus Ibsen added a comment -

        User doesn't respond

        Claus Ibsen added a comment -

        Closing all resolved tickets from 2010 or older

        David Valeri added a comment -

        I have attached a screenshot showing memory usage for the attached test case. The test case uses a simple JUnit test to throw a bunch of small messages at JMS with the InOut MEP, using the async capabilities of the ProducerTemplate to easily ramp up the traffic. It doesn't wait for the generated Futures, so it isn't really usable for correctness testing; it is just intended to reproduce the issue. Also note that killing the build will likely not terminate the forked JVM.

        It takes less than 8K messages to consume the available memory. The test crawls along for a bit after this point and eventually runs out of memory and crashes somewhere in the low 8K message range. It only takes a couple of minutes to reach this point. Based on real-world observation, the issue appears to be driven not by message frequency but by message count. That is, it can take a minute or a week to encounter enough messages, but eventually you run out of memory.

        The heap is occupied mostly by character arrays that appear to contain message IDs / correlation IDs. I did not traverse the object graph in the heap, but it would appear that there is some sort of issue with JMS filters or connections not getting cleaned up. I did not attach the heap dump, as it is trivial to generate from the attached test code.

        Removing the replyTo URI parameter and using a temporary destination for replies resolves the issue. Memory usage stays in an acceptable range and message throughput is relatively constant, although it does appear to slow slightly over time for a currently unknown reason.
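
        A minimal sketch of that style of load generator, assuming a started CamelContext and illustrative queue names (this is not the attached test's exact code):

        // Fire InOut requests without awaiting the returned Futures; this
        // only reproduces the leak, it does not verify the replies.
        ProducerTemplate template = camelContext.createProducerTemplate();
        for (int i = 0; i < 1000000; i++) {
            Future<Object> f = template.asyncRequestBody(
                    "jms:queue:APP.REQUEST?replyTo=APP.REPLY", "msg-" + i);
        }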

        David Valeri added a comment -

        Also, the attached test uses 2.9-RC1. The steady slowdown with a temporary reply destination was due to Eclipse's console window slowing down the logging when the test ran inside Eclipse. With Eclipse out of the mix, the temporary reply destination performed flawlessly in a 30-minute test run.

        Claus Ibsen added a comment -

        David, please create a new ticket instead of reopening old tickets; we prefer it that way. You can link the new ticket to the old one.

        Claus Ibsen added a comment -

        Okay, I can reproduce the issue (at about 8,000 msgs) and have a patch which improves this, but I now hit an OOME at 100,000 msgs.

        Claus Ibsen added a comment -

        Okay, I was using the default Maven Surefire memory settings, which of course are low.

        So I increased the memory settings, and the test was able to run all 1,000,000 messages:

        
          <build>
            <plugins>
              <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <configuration>
                  <argLine>-Xmx1024m -XX:MaxPermSize=512m</argLine>
                </configuration>
              </plugin>
            </plugins>
          </build>
        
        Claus Ibsen added a comment -

        Thanks for the sample project to reproduce the issue.

        Claus Ibsen added a comment -

        Well, the sample project from David itself also causes high memory usage, as it creates 1,000,000 tasks on the executor service pool, all of which are held in memory. A sample with, for example, 50,000 tasks does not take up nearly as much memory.
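
        A hedged sketch of one way to keep the sample itself lean: drain the Futures in bounded batches so tasks do not pile up on the executor (the batch size of 1,000 is an arbitrary assumption):

        // Assumes the enclosing method is declared to throw Exception.
        List<Future<Object>> inFlight = new ArrayList<Future<Object>>();
        for (int i = 0; i < 1000000; i++) {
            inFlight.add(template.asyncRequestBody(
                    "jms:queue:APP.REQUEST?replyTo=APP.REPLY", "msg-" + i));
            if (inFlight.size() >= 1000) {
                for (Future<Object> f : inFlight) {
                    f.get(); // wait for the replies, releasing the queued tasks
                }
                inFlight.clear();
            }
        }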


          People

          • Assignee:
            Claus Ibsen
          • Reporter:
            Qingyi Gu
          • Votes:
            0
          • Watchers:
            1
