I'm running into a bug: when I send 64-kilobyte messages via a JMS producer and retrieve them via a JMS consumer, the messages are apparently not dequeued until much later, even though the consumer is still receiving them. The dequeues seem to happen only when connection.close() or ctx.close() is called. I've concluded this is the situation because:
(A) The message number at which the queue overflows equals the queue size limit divided by the message size (i.e., all the messages are still in the queue when the overflow happens).
(B) The qpid-queue-stats program shows no dequeueing occurring.
(C) When I run a simple consumer against the 64k message producer, it receives the messages, yet no actual dequeueing occurs in the queue. The consumer's last action is to block on messageConsumer.receive(), and the messages it has read are never dequeued.
(D) When I modify the simple consumer from (C) to time out after 30 seconds (messageConsumer.receive(30000)), and it reaches the end of the program by timing out, all the dequeues occur at once.
(E) This happens even when I throttle the producer to about 50 messages per second--no dequeueing occurs until after the timeout described in (D).
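For reference, the simple consumer from (C)/(D) looks roughly like this. This is a minimal sketch, not my exact code: the broker URL and queue name are placeholders, and I'm using the plain JMS API with the Qpid client's AMQConnectionFactory.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.qpid.client.AMQConnectionFactory;

public class SimpleConsumer {
    public static void main(String[] args) throws Exception {
        // Placeholder connection URL and queue name, not my real config.
        ConnectionFactory factory = new AMQConnectionFactory(
            "amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'");
        Connection connection = factory.createConnection();
        connection.start();

        // AUTO_ACKNOWLEDGE: I would expect each message to be acknowledged
        // (and so dequeued) as receive() returns, but qpid-queue-stats
        // shows no dequeues while this loop is running.
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = session.createQueue("my.test.queue");
        MessageConsumer consumer = session.createConsumer(queue);

        Message msg;
        // Variant (D): 30-second timeout instead of blocking forever on receive().
        while ((msg = consumer.receive(30000)) != null) {
            // process msg ...
        }

        // Only here, once the connection closes, do the dequeues all show up.
        connection.close();
    }
}
```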
The net effect is that my queue fills up. Note that I do not have this problem with messages 32 kilobytes long and smaller--at those sizes, messages dequeue normally.
I tried to replicate this behavior with the Python client, but the Python client handled 64k messages without any problems.
Note that I am running against the C++ broker, and my queue size limit is 100 megabytes.