A few times over the last few years, the busiest queue in our system has suddenly started spewing "Duplicate dispatch on connection" warnings into the log, and zero messages would be processed. Googling around, the only references I could find suggested that "optimize acknowledge" might be at fault. We stopped using that feature and I thought the issue was gone, but it popped up again today.
The log messages look like this:
2015-01-05 20:25:21,310 [ActiveMQ Session Task-2341 ] WARN apache.activemq.ActiveMQMessageConsumer - Duplicate dispatch on connection: ID:perf4-flexapp-43114-1419911988672-9:1 to consumer: ID:perf4-flexapp-43114-1419911988672-9:1:1:1, ignoring (auto acking) duplicate: MessageDispatch {commandId = 0, responseRequired = false, consumerId = ID:perf4-flexapp-43114-1419911988672-9:1:1:1, destination = queue://*********, message = ActiveMQBytesMessage {commandId = -497935582, responseRequired = false, messageId = ID:perf4-nc-49600-1419436870976-1:8:1:1:3797031711, originalDestination = null, originalTransactionId = null, producerId = ID:perf4-nc-49600-1419436870976-1:8:1:1, destination = queue://*********, transactionId = null, expiration = 0, timestamp = 1420507504378, arrival = 0, brokerInTime = 1420507504527, brokerOutTime = 1420507521253, correlationId = null, replyTo = null, persistent = true, type = null, priority = 4, groupID = null, groupSequence = 0, targetConsumerId = null, compressed = false, userID = null, content = org.apache.activemq.util.ByteSequence@477f9261, marshalledProperties = null, dataStructure = null, redeliveryCounter = 0, size = 0, properties = null, readOnlyProperties = true, readOnlyBody = true, droppable = false} ActiveMQBytesMessage{ bytesOut = null, dataOut = null, dataIn = null }, redeliveryCounter = 0}
I happened to notice that the "commandId" was negative. I then looked at the messageId and, sure enough, it is very large, in the 2^31 range. I checked all my field reports for this bug, and in every case the id was just over 2^31. I dug into the ActiveMQ code and found that the BitArrayBin class doesn't handle values of that size correctly. I was about to file that as a critical bug, but found it had already been reported and fixed for 5.11.0 in another context: AMQ-5016
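For what it's worth, the symptom is consistent with a plain int-narrowing overflow. This is not the actual BitArrayBin code, just a minimal Java sketch of what happens when a 64-bit producer sequence id crosses 2^31 and gets treated as an int somewhere along the way:

```java
public class SequenceOverflowDemo {
    public static void main(String[] args) {
        // A producer sequence id just past 2^31, like the ids in my logs
        long sequenceId = 1L << 31;      // 2147483648

        // Narrowing to int wraps around to a negative value
        int narrowed = (int) sequenceId; // -2147483648

        System.out.println("long id:  " + sequenceId);
        System.out.println("as int:   " + narrowed);

        // A negative value then breaks any duplicate-tracking bookkeeping
        // that assumed the id fits in a non-negative int
        System.out.println("negative? " + (narrowed < 0)); // true
    }
}
```

That wrap-around would also explain the negative commandId in the warning above.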
In case other people have a system stable enough to generate 2^31 messages from a single producer, they should know they can add the setting "?checkForDuplicates=false" at the queue or connection level. Are there any other mitigation strategies I may not be aware of?
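For reference, my understanding is that the connection-level form goes on the broker URL, where connection options take a `jms.` prefix (host and port here are placeholders for your own broker):

```
tcp://broker-host:61616?jms.checkForDuplicates=false
```

Note this disables the client's duplicate suppression entirely, so the application has to tolerate the occasional redelivered message.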