ActiveMQ / AMQ-1927

activemq producer hangs (using spring)

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 5.1.0
    • Fix Version/s: NEEDS_REVIEW
    • Component/s: Broker
    • Labels:
      None
    • Environment:

      suse linux 10.3
      sun jdk "1.6.0_06"
      tomcat 6.0.16
      spring framework 2.0

      Description

      We have an internal ActiveMQ queue configured using the Spring framework (configuration below). During a high-volume message test, the message producer hangs. See the stack trace below.

      May be related to AMQ-1641 or AMQ-1490.

      "pool-2-thread-2" prio=10 tid=0x00002aaaf2c20000 nid=0x297e waiting on condition [0x000000004173f000..0x000000004173fc20]
      java.lang.Thread.State: WAITING (parking)
      at sun.misc.Unsafe.park(Native Method)

      • parking to wait for <0x00002aaae859af40> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:317)
        at org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:40)
        at org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:80)
        at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1195)
        at org.apache.activemq.ActiveMQSession.send(ActiveMQSession.java:1644)
      • locked <0x00002aaab3e433d8> (a java.lang.Object)
        at org.apache.activemq.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:227)
        at org.apache.activemq.pool.PooledProducer.send(PooledProducer.java:74)
      • locked <0x00002aaab3e42d08> (a org.apache.activemq.ActiveMQMessageProducer)
        at org.apache.activemq.pool.PooledProducer.send(PooledProducer.java:59)
        at org.springframework.jms.core.JmsTemplate.doSend(JmsTemplate.java:534)
        at org.springframework.jms.core.JmsTemplate.doSend(JmsTemplate.java:511)
        at org.springframework.jms.core.JmsTemplate$2.doInJms(JmsTemplate.java:477)
        at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:428)
        at org.springframework.jms.core.JmsTemplate.send(JmsTemplate.java:475)

      <amq:broker id="broker" useJmx="true" persistent="false" brokerName="fb">

        <amq:managementContext>
          <amq:managementContext connectorPort="2011" jmxDomainName="org.apache.activemq"/>
        </amq:managementContext>

        <amq:transportConnectors>
          <amq:transportConnector uri="tcp://localhost:0" />
          <amq:transportConnector uri="tcp://localhost:61616" />
        </amq:transportConnectors>
      </amq:broker>

      <!-- ActiveMQ destinations to use -->
      <amq:queue id="inboundEvents" physicalName="fb.inbound.events">
      </amq:queue>

      <bean id="jmsFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
        <property name="connectionFactory">
          <bean class="org.apache.activemq.ActiveMQConnectionFactory">
            <property name="brokerURL" value="vm://localhost"/>
          </bean>
        </property>
      </bean>

      <bean id="simpleJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
        <property name="connectionFactory" ref="jmsFactory"/>
      </bean>

      <!-- consumers -->
      <bean id="inboundEventConsumer" class="jms.WrapperConsumer" init-method="start" destroy-method="stop">
        <property name="myId" value="fb.consumer.events"/>
        <property name="template" ref="simpleJmsTemplate"/>
        <property name="destination" ref="inboundEvents"/>
      </bean>

      <!-- producers -->
      <bean id="inboundEventProducer" class="jms.WrapperProducer">
        <property name="template" ref="simpleJmsTemplate"/>
        <property name="destination" ref="inboundEvents"/>
      </bean>
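
      For reference, the embedded <amq:broker> above defines no destinationPolicy, so producer flow control runs with its default limits. A minimal sketch of a policy that disables flow control for queues, along the lines suggested in the comments below (the 5mb limit is illustrative, not taken from this report), would sit inside the <amq:broker> element:

      <!-- illustrative only: disable producer flow control for all queues -->
      <amq:destinationPolicy>
        <amq:policyMap>
          <amq:policyEntries>
            <amq:policyEntry queue=">" producerFlowControl="false" memoryLimit="5mb"/>
          </amq:policyEntries>
        </amq:policyMap>
      </amq:destinationPolicy>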

        Activity

        Timothy Bish added a comment -

        No report from the user on this one and no test case provided. Suggest testing against a newer release.

        Bruce Snyder added a comment -

        Jeff, I notice that you have a destination policy entry for all topics restricting them to using only 1mb of memory. Please change or remove this policy entry to see if this helps at all.
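
        For illustration, applying that suggestion to the topic policy entry in Jeff's configuration below would mean raising or removing the memoryLimit attribute; the value here is only an example, not taken from this report:

        <!-- example only: same entry with a larger (or no) per-topic memory limit -->
        <policyEntry topic=">" producerFlowControl="false" memoryLimit="32mb">
          ...
        </policyEntry>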

        Jeff Gutierrez added a comment - edited

        I've been trying to get the AMQ 5.1.0 broker to work with our Spring-based application. I'm seeing the same behavior that Juliano was seeing, but setting producerFlowControl to "false" didn't work for me. (See my activemq.xml below.) The AMQ broker actually stopped receiving and distributing messages (though /admin remained responsive). After restarting the AMQ broker, the producers and consumers started working again. Note that the producers and consumers were NOT restarted.

        I also tried the following to no avail:

        • set jms.prefetchPolicy.queuePrefetch=1
        • without the failover protocol
        • AMQ 5.0.0 client with AMQ 5.1.0 broker
        • no parameters in the URL

        Configuration:

        • Centos (kernel 2.6.18-92)
        • Java 1.6.0_05
        • ActiveMQ 5.1.0 (java -Xmx256m -Xss128k -Xms128m -Dorg.apache.activemq.UseDedicatedTaskRunner=true)
        • Spring 2.5.5
        • Camel 1.3.0
        • JMeter (the load produces messages to 3 queues at throughputs of 10/sec, 1/sec, and 1/sec)

        AMQ URL used by producer: failover:(tcp://host1.internal:12345)?jms.prefetchPolicy.queuePrefetch=1&jms.redeliveryPolicy.allPrefetchValues=1&jms.redeliveryPolicy.initialRedeliveryDelay=2000&jms.redeliveryPolicy.maximumRedeliveries=24&jms.redeliveryPolicy.useCollisionAvoidance=true

        In activemq.xml:

        <broker>
          <!-- Destination specific policies using destination names or wildcards -->
          <destinationPolicy>
            <policyMap>
              <policyEntries>

                <policyEntry topic=">" producerFlowControl="false" memoryLimit="1mb">
                  <dispatchPolicy>
                    <strictOrderDispatchPolicy />
                  </dispatchPolicy>
                  <subscriptionRecoveryPolicy>
                    <lastImageSubscriptionRecoveryPolicy />
                  </subscriptionRecoveryPolicy>
                </policyEntry>

                <policyEntry queue=">">
                  <dispatchPolicy>
                    <strictOrderDispatchPolicy />
                  </dispatchPolicy>
                  <subscriptionRecoveryPolicy>
                    <lastImageSubscriptionRecoveryPolicy />
                  </subscriptionRecoveryPolicy>
                  <deadLetterStrategy>
                    <individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true"/>
                  </deadLetterStrategy>
                </policyEntry>

              </policyEntries>
            </policyMap>
          </destinationPolicy>

          <!-- The transport connectors ActiveMQ will listen to -->
          <transportConnectors>
            <transportConnector name="openwire" uri="tcp://host1.internal:12345" />
          </transportConnectors>

          <!-- Use the following to set the broker memory limit -->
          <systemUsage>
            <systemUsage>
              <memoryUsage>
                <memoryUsage limit="64 mb" percentUsageMinDelta="20" />
              </memoryUsage>
              <tempUsage>
                <tempUsage limit="100 mb" />
              </tempUsage>
              <storeUsage>
                <storeUsage limit="1 g" name="host1.internal" />
              </storeUsage>
            </systemUsage>
          </systemUsage>

          <!-- Use the following to configure how ActiveMQ is exposed in JMX -->
          <managementContext>
            <managementContext connectorPort="56789" jmxDomainName="org.apache.activemq" />
          </managementContext>
        </broker>
        ...

        Thread blocked on the producer side:
        "resin-tcp-connection-*:6102-9" daemon prio=10 tid=0xa7a5d400 nid=0x6946 waiting on condition [0xa68b5000..0xa68b7130]
           java.lang.Thread.State: WAITING (parking)
                at sun.misc.Unsafe.park(Native Method)
                - parking to wait for <0xb0b3d848> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
                at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
                at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
                at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:317)

        Do you guys see any issue with my configuration?

        I'd appreciate any information.

        Thanks,
        Jeff

        Ryan Witcher added a comment -

        We saw the same problem, and turning off producerFlowControl worked.

        We think the underlying issue is that MemoryPercentageUsed never seemed to go down (as seen in JConsole). So once MemoryPercentageUsed reached 100%, the system would still block even after consumers removed messages from the queue.

        The following issue claims this was fixed in 5.1.0, but we aren't so sure:
        https://issues.apache.org/activemq/browse/AMQ-1644

        Juliano Carniel added a comment -

        Just to give the feedback I promised:
        I tried with producerFlowControl turned on, and the lock problem persists. In one of our systems, at the end of a transaction, when it produces a message to a queue, the call "stays running" and the thread goes into the WAITING state, the same as reported above.
        When I tried this with producerFlowControl turned off, it worked perfectly.

        Thanks.

        Juliano Carniel added a comment -

        Hi Bruce,

        just as feedback: the last information I gave turned out to be wrong; the cause was actually a Tomcat NIO problem. I have updated to Tomcat 6.0.18 and the problem is solved. I haven't tried the new Tomcat with producerFlowControl turned on yet, but as soon as I do I will reply here.
        I think this could be a Tomcat problem, given that this issue was reported against Tomcat 6.0.16, which is the version I was using.

        Thanks in advance.

        Juliano Carniel added a comment -

        I'll try that. But one question: is there any problem with having, say, 10 consumers on the same queue?
        Maybe I'll try 1 consumer with a prefetch of about 10 messages instead of 10 consumers.

        In this application we don't have a large volume of messages; we average about 5 messages per minute.

        Just for the record, this is the configuration I have on the URI today:
        failover:(tcp://10.11.20.12:61616?wireFormat.maxInactivityDuration=10000&connectionTimeout=30000)?maxReconnectAttempts=4&initialReconnectDelay=1000

        If you see anything that is not right, please tell me.

        Thanks.
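
        For illustration only (the bean id and listener reference are hypothetical, not from this report, and property names should be checked against the Spring release in use), running several concurrent consumers on one queue with Spring is usually expressed through DefaultMessageListenerContainer rather than through the prefetch setting:

        <!-- hypothetical sketch: 10 concurrent consumers on one queue -->
        <bean id="inboundEventListenerContainer"
              class="org.springframework.jms.listener.DefaultMessageListenerContainer">
          <property name="connectionFactory" ref="jmsFactory"/>
          <property name="destination" ref="inboundEvents"/>
          <property name="messageListener" ref="inboundEventListener"/>
          <property name="concurrentConsumers" value="10"/>
        </bean>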

        Bruce Snyder added a comment -

        I'm not sure yet if this is the issue, but have you tried adjusting the consumer prefetch limit?
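
        For illustration, the prefetch limit can be set on the client connection factory via the broker URL (this form already appears in Jeff's producer URL above; the host, port, and value here are examples only):

        <!-- example only: queue prefetch of 10 set on the connection URL -->
        <bean class="org.apache.activemq.ActiveMQConnectionFactory">
          <property name="brokerURL" value="tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=10"/>
        </bean>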

        Juliano Carniel added a comment -

        I've tried adding your suggested settings to the queue policyEntry, like this:
        <policyEntry queue=">" producerFlowControl="false" memoryLimit="5mb"/>

        It worked: it was deadlocking at the producer and no longer does.

        Although that part works, it now gets stuck at the consumer, as the stack trace below shows. The problem is that it keeps one processor core at 100%, which is very strange.

        "listenerContainerQuickActivation-4" prio=10 tid=0x00002aaebc906800 nid=0x44d5 in Object.wait() [0x0000000043cd6000..0x0000000043cd6c90]
        java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)

        • waiting on <0x00002aabb0190468> (a java.lang.Object)
          at org.apache.activemq.MessageDispatchChannel.dequeue(MessageDispatchChannel.java:77)
        • locked <0x00002aabb0190468> (a java.lang.Object)
          at org.apache.activemq.ActiveMQMessageConsumer.dequeue(ActiveMQMessageConsumer.java:409)
          at org.apache.activemq.ActiveMQMessageConsumer.receive(ActiveMQMessageConsumer.java:521)
          at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveMessage(AbstractPollingMessageListenerContainer.java:375)
          at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:300)
          at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:254)
          at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:870)
          at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:810)
          at java.lang.Thread.run(Thread.java:619)

        Thanks for any tip you could give.

        Juliano Carniel added a comment -

        Hi... I'm facing the same problem as described above. My configuration varies a little, but follows the same idea.
        My system is:

        • CentOS
        • Dual QuadCore processor
        • JVM 1.6 update 7
        • Tomcat 6.0.16
        • Spring 2.0
        • ActiveMQ 5.1

        I have made the change you suggested on a test server and it apparently worked, but it's hard to be sure because the problem is difficult to reproduce; at least it doesn't break anything. I guess I will try it in production as soon as we have a maintenance window.

        Thanks

        Bruce Snyder added a comment -

        Is this the producer flow control kicking in? Try disabling it in the configuration to see if your results are different.


          People

          • Assignee: Unassigned
          • Reporter: Randy
          • Votes: 6
          • Watchers: 8
