Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 5.2.0
    • Fix Version/s: 5.15.0, 5.14.4
    • Component/s: Broker
    • Labels:
      None
    • Environment:

      Found on Windows but reproduced under Linux

    • Regression:
      Regression

      Description

      When you "purge" queue from web admin console, it zeroes queue message
      counter. But if you have an active consumer at that time which
      pre-fetched messages than your consumer will keep sending ack as it
      process messages from its buffer. ActiveMQ will keep decrement counter
      upon receiving each ack. So when consumer is done queue will show
      MINUS<consumer buffer size>.
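      The arithmetic behind the symptom can be sketched in a few lines
      (illustration only, not broker code; the class name and numbers are
      made up, assuming a consumer that held 100 prefetched messages when
      the purge ran):

      public class PurgeCounterSketch {
          public static void main(String[] args) {
              long queueSize = 200;   // messages currently counted on the broker
              int prefetched = 100;   // messages already sitting in the consumer's prefetch buffer

              queueSize = 0;          // "purge" from the web console zeroes the counter

              // The consumer keeps working through its buffer; the broker
              // decrements the counter for every ack it receives.
              for (int i = 0; i < prefetched; i++) {
                  queueSize--;
              }

              System.out.println("Queue size shown in the console: " + queueSize);   // prints -100
          }
      }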

      1. Main.java
        4 kB
        Vadim Chekan
      2. Picture 6.png
        121 kB
        Brian Moran
      3. QueuePurgeTest.java.diff.txt
        4 kB
        Bruce Snyder

        Issue Links

          Activity

          vchekan Vadim Chekan added a comment -

           The attached program reproduces a queue with -100 messages.

          bsnyder Bruce Snyder added a comment -

          I've created the following test case out of the attached Main.java class:

          package org.apache.activemq.broker.region;
          
          import javax.jms.Connection;
          import javax.jms.ConnectionFactory;
          import javax.jms.JMSException;
          import javax.jms.Message;
          import javax.jms.MessageConsumer;
          import javax.jms.MessageProducer;
          import javax.jms.Queue;
          import javax.jms.Session;
          import javax.jms.TextMessage;
          import javax.management.MBeanServerInvocationHandler;
          import javax.management.MalformedObjectNameException;
          import javax.management.ObjectName;
          
          import junit.framework.TestCase;
          
          import org.apache.activemq.ActiveMQConnectionFactory;
          import org.apache.activemq.broker.BrokerService;
          import org.apache.activemq.broker.jmx.QueueViewMBean;
          
          public class QueuePurgeTest extends TestCase {
              
              BrokerService broker; 
              ConnectionFactory factory; 
              Connection connection;
              Session session;
              Queue queue;
              MessageConsumer consumer;
          
              protected void setUp() throws Exception {
                  broker = new BrokerService();
                  broker.setUseJmx(true);
                  broker.setPersistent(false);
                  broker.addConnector("tcp://localhost:0");
                  broker.start();
                  
                  factory = new ActiveMQConnectionFactory("vm://localhost");
                  
                  connection = factory.createConnection();
                  connection.start();
              }
          
              protected void tearDown() throws Exception {
                  consumer.close();
                  session.close();
                  connection.stop();
                  connection.close();
                  broker.stop();
              }
              
              public void testPurgeQueueWithActiveConsumer() throws Exception {
                  createProducerAndSendMessages();        
                  QueueViewMBean proxy = getProxyToQueueViewMBean();
                  createConsumer();
                  proxy.purge();
                  assertEquals("Queue size is not zero, it's " + proxy.getQueueSize(), 0, proxy.getQueueSize());
              }
          
              private QueueViewMBean getProxyToQueueViewMBean()
                      throws MalformedObjectNameException, JMSException {
                  ObjectName queueViewMBeanName = new ObjectName("org.apache.activemq" + ":Type=Queue,Destination=" + 
                          queue.getQueueName() + ",BrokerName=localhost");
                  QueueViewMBean proxy = (QueueViewMBean)MBeanServerInvocationHandler.newProxyInstance(
                          broker.getManagementContext().getMBeanServer(), 
                          queueViewMBeanName, QueueViewMBean.class, true);
                   
                   return proxy;
              }
              
              private void createProducerAndSendMessages() throws Exception {
                  session =  connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
                  queue = session.createQueue("test1");
                  MessageProducer producer = session.createProducer(queue);
                  for(int i=0; i<10000; i++) {
                      TextMessage message = session.createTextMessage("message "+i);
                      producer.send(message);
                  }
                  producer.close();
              }
              
              private void createConsumer() throws Exception {
                  consumer = session.createConsumer(queue);
                  // wait for buffer fill out
                  Thread.sleep(5*1000);
                  
                  for(int i = 0; i < 100; ++i) {
                      Message message = consumer.receive();
                      message.acknowledge();
                  }
              }
          
          }
          

           What I'm finding is that the failure is intermittent, but I am able to see it once in a while:

          junit.framework.AssertionFailedError: Queue size is not zero expected:<0> but was:<-60>
          at junit.framework.Assert.fail(Assert.java:47)
          at junit.framework.Assert.failNotEquals(Assert.java:282)
          at junit.framework.Assert.assertEquals(Assert.java:64)
          at junit.framework.Assert.assertEquals(Assert.java:136)
          at org.apache.activemq.broker.region.QueuePurgeTest.testPurgeQueueWithActiveConsumer(QueuePurgeTest.java:56)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:585)
          at junit.framework.TestCase.runTest(TestCase.java:154)
          at junit.framework.TestCase.runBare(TestCase.java:127)
          at junit.framework.TestResult$1.protect(TestResult.java:106)
          at junit.framework.TestResult.runProtected(TestResult.java:124)
          at junit.framework.TestResult.run(TestResult.java:109)
          at junit.framework.TestCase.run(TestCase.java:118)
          at junit.framework.TestSuite.runTest(TestSuite.java:208)
          at junit.framework.TestSuite.run(TestSuite.java:203)
          at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130)
          at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
          at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
          at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
          at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
          at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)

          vchekan Vadim Chekan added a comment -

          Perhaps it can be made more reproducible if purge is run in a thread parallel to the consumer. I'll try to rewrite it later tonight.
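           A rough sketch of that idea against the test case above (a
           hypothetical, untested variant that reuses the test's session,
           queue, consumer and proxy helpers): the receive/ack loop runs on
           its own thread while purge() is invoked from the test thread.

           public void testPurgeQueueWhileConsumerIsAcking() throws Exception {
               createProducerAndSendMessages();
               final QueueViewMBean proxy = getProxyToQueueViewMBean();
               consumer = session.createConsumer(queue);

               // Ack prefetched messages on a separate thread so the purge below
               // races with the acks instead of running strictly after them.
               Thread acker = new Thread(new Runnable() {
                   public void run() {
                       try {
                           for (int i = 0; i < 100; i++) {
                               Message message = consumer.receive(1000);
                               if (message != null) {
                                   message.acknowledge();
                               }
                           }
                       } catch (JMSException ignored) {
                       }
                   }
               });
               acker.start();

               proxy.purge();   // purge while acks may still be arriving
               acker.join();

               assertTrue("Queue size should never go negative, was " + proxy.getQueueSize(),
                       proxy.getQueueSize() >= 0);
           }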

          rajdavies Rob Davies added a comment -

          Fixed by SVN revision 697921

          ghopper Gordon Hopper added a comment -

          1693 displays the same symptom of negative message counts.

          bmoran Brian Moran added a comment -

          I can reproduce this in 5.2.0.
          I am going to attach a screenshot.

          bmoran Brian Moran added a comment -

          Screenshot of negative messages, reproducible in 5.2.0 after purging the message queue, STOMP clients.

          deathemperor Loc Truong added a comment -

           I can reproduce this in 5.2.0. It happens to me because there are too many receivers; each of them tries to receive 500 messages but the total number of messages in the queue is smaller, hence the negative number.

          yuriushakov Yuri Ushakov added a comment -

          We have the same problem with 5.2.0. A queue, single consumer, single producer. We have -1 pending now, 1616 sent, 1617 received.

          deathemperor Loc Truong added a comment -

           I can reproduce this on 5.2.0 too. I have 7 consumers getting 150 messages each. The lowest negative number I have seen is -123.

          wfrag Ivan Dubrov added a comment -

          I had the same issue today.

           We are running two AMQ brokers in a network of brokers, each one talking to the other (duplex is set to false). Under high load, after a few hours I noticed that one of our queues started growing on both nodes, several thousand pending messages in about 20 minutes (and the cursor memory usage was quite large). After I accidentally stopped the second node, the network bridge stopped on the first node and the queue size on the first node immediately became "-8".

           So in my case this is probably somehow related to instability in the network-of-brokers implementation, because if I disable the network connector the system runs stably for several days under load.

          rajdavies Rob Davies added a comment -

           Fixed for the 5.3 release.

          hubbitus Pavel added a comment -

           We have seen this issue on ActiveMQ 5.6.0.

          wschlich Wolfram Schlich added a comment -

          We also experience this currently with ActiveMQ 5.8.0.

          cp123 Sergey added a comment -

           We still have this issue in 5.10.0. Moreover, sometimes I can see the number of pending messages growing, but if I try to browse the queue, I see nothing there; i.e. these numbers are all wrong.

          cp123 Sergey added a comment -

           What I can see in JMX using Hawtio:
           Total dequeue count: 2474137
           Total enqueue count: 1795563
           Total message count: -678584

           BTW, I have a Camel route within the broker, where incoming messages are duplicated into two queues. And, at the moment, the web console displays something like:

           queue          pending   enqueued   dequeued
           algo250a        856725     906471    1552909
           algo250b        -23451     906472     929923
           highwayQueue         0          0          0

           The last one is the inbound queue; the algoxxx queues are outbound. I would expect the number of enqueued messages for the inbound queue to equal the number dequeued, but it's always zero.

          gtully Gary Tully added a comment -

           @Sergey - can you make a JUnit test case that can reproduce this? Peek at the activemq-camel module if you need to use Camel routes.

          cp123 Sergey added a comment -

           I've just created AMQ-5576, and there is a route test case there.

           Just deploy this route and send 1 message into the inbound queue using the web console; that should be enough to reproduce some of the issues with counters/sizes.

          cshannon Christopher L. Shannon added a comment -

           I noticed this issue today when doing a purge. I believe the issue is that after the messages are removed in the do/while loop inside the queue purge method, the statistics counter is reset to 0. This is a problem because there is nothing preventing messages from being added to the Queue between message removal and the resetting of the counters. This means it's possible to have new messages in the queue while the counter is set to 0, so the count will go negative when those messages are acked.

           Gary Tully, I think the fix is as simple as removing the two lines of code at lines 1267 and 1268. https://github.com/apache/activemq/blob/activemq-5.14.3/activemq-broker/src/main/java/org/apache/activemq/broker/region/Queue.java#L1267

           I don't see any reason why we should need to reset the messages cursor or the messages statistics counter to 0 at the end if the purge behaves properly, as the counter should already be properly decremented when each message is dropped. What do you think?

          gtully Gary Tully added a comment -

           Christopher L. Shannon, that is interesting. I agree, setting to zero does seem wrong. Also, I think it makes sense to synchronize some more so that both sending and dispatch are locked out.
           It seems there are two issues:
           1) the concurrent send issue
           2) removing messages that may be dispatched, with the ack then doing a double decrement

           For 1), locking is the answer.
           For 2), locking, and then either a) ensuring that an ack does not decrement if the message cannot be found, b) having purge ignore in-flight messages, or c) ensuring the stat cannot go negative.

           2b may be the cleanest but may be difficult to implement; on balance, maybe go for 2a (a rough sketch follows below). 2c would potentially hide later problems.
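           A tiny sketch of what 2a could look like (hypothetical names, not
           the actual broker code): the ack path only decrements the size
           statistic when the acked message is still known to the queue, so
           acks for already-purged messages become no-ops.

           import java.util.Map;
           import java.util.concurrent.ConcurrentHashMap;
           import java.util.concurrent.atomic.AtomicLong;

           class AckGuardSketch {
               private final Map<String, Object> liveMessages = new ConcurrentHashMap<String, Object>();
               private final AtomicLong queueSize = new AtomicLong();

               void onSend(String messageId) {
                   liveMessages.put(messageId, Boolean.TRUE);
                   queueSize.incrementAndGet();
               }

               void onPurgeOrAck(String messageId) {
                   // Remove-and-check in one step: if a purge (or an earlier ack) already
                   // removed this message, remove() returns null and the counter is untouched.
                   if (liveMessages.remove(messageId) != null) {
                       queueSize.decrementAndGet();
                   }
               }
           }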

           cshannon Christopher L. Shannon added a comment - edited

          Gary Tully, Thanks for the feedback. I was going to try for option 2a) (and possibly 2b) initially because it seems like some of that work might have already been done over the years to prevent handling acks for messages that don't exist. I will check the ack handling to verify and see what changes are needed.

          In terms of locking I thought about this but was concerned about it being a bit slow if the purge took a long time. My initial thought was to just grab a write lock on the pagedInMessagesLock for the entire purge() method and release when purge is done. This should prevent new messages from being sent to the Queue. Off the top of my head I don't remember if that is sufficient for preventing dispatch or if we need to grab another lock such as the messages lock or pagedInPendingDispatchLock.

          gtully Gary Tully added a comment -

           I don't think performance is a problem for the purge operation. An all-out destination pause is sensible to maintain consistent state. I think purge needs to take the sendLock and pagedInPendingDispatchLock. It is like purging a snapshot; in-flight sends can still be in flight.

          cshannon Christopher L. Shannon added a comment -

          Makes sense, I don't think I'll get a chance today but hopefully Monday I'll take a look at a patch for this.

          cshannon Christopher L. Shannon added a comment -

           Gary Tully, I don't think we need to do anything here other than grab the sendLock to prevent new messages from coming in while the purge is running and working its way through all the pending messages in the queue, including already dispatched messages (we can also remove the line of code that sets the count to 0, which isn't necessary).

           For messages that are already dispatched we don't seem to need to worry about the ack doing a double decrement. If the Queue has been purged and the already dispatched messages are acked later, the metrics don't double decrement because in dropMessage() there is a dropIfLive() check that will fail. This scenario is already covered by QueuePurgeTest: run testPurgeLargeQueueWithConsumer and it shows that after a purge, if the consumer acks the messages already in its prefetch, the message count doesn't go negative.

           I suppose an argument could be made that we should clear out the dispatch list in each subscription to save memory, but then if messages are acked we'd get a bunch of unmatched acknowledge warnings in the log, because assertAckMatchesDispatched() in PrefetchSubscription would throw a bunch of exceptions when the acks came in.
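           A toy model of that approach (hypothetical names, not the real
           Queue.java): send() and purge() share a read/write lock so nothing
           can be added while a purge runs, each message carries a "dropped"
           flag analogous to the dropIfLive() check so a purge and a later ack
           can never both decrement the counter, and purge no longer resets
           the statistic to zero at the end.

           import java.util.Map;
           import java.util.concurrent.ConcurrentHashMap;
           import java.util.concurrent.atomic.AtomicBoolean;
           import java.util.concurrent.atomic.AtomicLong;
           import java.util.concurrent.locks.ReentrantReadWriteLock;

           class ToyQueue {
               static final class Ref {
                   final AtomicBoolean dropped = new AtomicBoolean(false);
               }

               private final Map<String, Ref> messages = new ConcurrentHashMap<String, Ref>();
               private final AtomicLong size = new AtomicLong();
               private final ReentrantReadWriteLock sendLock = new ReentrantReadWriteLock();

               void send(String id) {
                   sendLock.readLock().lock();      // concurrent sends are fine with each other
                   try {
                       messages.put(id, new Ref());
                       size.incrementAndGet();
                   } finally {
                       sendLock.readLock().unlock();
                   }
               }

               void purge() {
                   sendLock.writeLock().lock();     // block sends for the whole purge
                   try {
                       for (Ref ref : messages.values()) {
                           dropIfLive(ref);         // decrements exactly once per message
                       }
                       messages.clear();
                       // Note: no size.set(0) here - the per-message decrements already
                       // account for everything that was removed.
                   } finally {
                       sendLock.writeLock().unlock();
                   }
               }

               void acknowledge(String id) {
                   Ref ref = messages.remove(id);
                   if (ref != null) {
                       dropIfLive(ref);             // no-op if a purge already dropped it
                   }
               }

               private void dropIfLive(Ref ref) {
                   if (ref.dropped.compareAndSet(false, true)) {
                       size.decrementAndGet();
                   }
               }
           }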

          jira-bot ASF subversion and git services added a comment -

          Commit 56bb079c8227a2beee609b205c001d66597db98a in activemq's branch refs/heads/master from Christopher L. Shannon
          [ https://git-wip-us.apache.org/repos/asf?p=activemq.git;h=56bb079 ]

          https://issues.apache.org/jira/browse/AMQ-1940

          Queue purge now acquires the sendLock to prevent new messages from
          coming in while purging. The statistics are no longer zeroed out as
          they should properly decrement as messages are removed. These changes
          should prevent the statistics from going negative.

          jira-bot ASF subversion and git services added a comment -

          Commit 1811d191afa91d15e6a37d13dbfd7fd8ef32690e in activemq's branch refs/heads/activemq-5.14.x from Christopher L. Shannon
          [ https://git-wip-us.apache.org/repos/asf?p=activemq.git;h=1811d19 ]

          https://issues.apache.org/jira/browse/AMQ-1940

          Queue purge now acquires the sendLock to prevent new messages from
          coming in while purging. The statistics are no longer zeroed out as
          they should properly decrement as messages are removed. These changes
          should prevent the statistics from going negative.

          (cherry picked from commit 56bb079c8227a2beee609b205c001d66597db98a)


            People

             • Assignee:
               cshannon Christopher L. Shannon
             • Reporter:
               vchekan Vadim Chekan
             • Votes:
               5
             • Watchers:
               13

              Dates

                • Created:
                • Updated:
                • Resolved:
