In version 5.1.0, our queue consumers occasionally stop consuming for no apparent reason.
We have a staged queue environment, and we occasionally see one queue display a negative pending message count that hangs around -x, rises gradually to -x+n, and then falls back to -x abruptly. Messages are building up and being processed in bunches, but this is hard to see because the counts are negative. We can observe the behavior in the messages leaving the system: outbound messages emerge in bunches, synchronized with the queue's pending count dropping back to -x.
This issue does not happen all of the time. It occurs about once a week, and the only way to fix it is to bounce the broker. It doesn't happen to the same queue every time, so we don't believe it is our consuming code.
Although we don't have a reproducible scenario, we have been able to debug the issue in our test environment.
We traced the problem to the cached store size in AbstractStoreCursor.
This value becomes zero or negative and prevents the cursor from retrieving more messages from the store (see AbstractStoreCursor.fillBatch()).
We have seen the size value go below -1000.
We have also forced it to fix itself by sending in n+1 messages: once the size goes above zero, the cached value is refreshed and things work normally again.
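For illustration, here is a minimal sketch of the failure mode as we understand it. This is NOT the actual ActiveMQ source; the class and method names are stand-ins that only model the behavior we observed: a stale, non-positive cached size short-circuits the fill-batch path, and fetching only resumes once enough new messages push the counter above zero.

```java
// Simplified model of the suspected bug (hypothetical names, not ActiveMQ code).
public class CursorSketch {
    // stand-in for the cached store size that we saw go negative
    private int cachedStoreSize;

    public CursorSketch(int initialSize) {
        this.cachedStoreSize = initialSize;
    }

    // models fillBatch(): the cursor only asks the store for messages
    // when the cached size claims something is pending
    public boolean fillBatch() {
        if (cachedStoreSize <= 0) {
            // cursor believes the store is empty, so no batch is fetched,
            // even though messages may actually be sitting in the store
            return false;
        }
        // ... a real cursor would recover a batch of messages here ...
        return true;
    }

    // models a new message arriving: once enough arrivals push the
    // counter above zero, fetching resumes and the backlog drains
    public void messageAdded() {
        cachedStoreSize++;
    }

    public int getCachedStoreSize() {
        return cachedStoreSize;
    }
}
```

With an initial size of -3, fillBatch() stays a no-op until a fourth message arrives, which matches the "n+1 messages unsticks it" recovery we observed.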
Unfortunately, during low-volume periods it can take hours for n+1 messages to arrive, so our message latency rises at exactly those times.
I have attached our broker config.