The current implementation of TimeSeriesMax - which backs e.g. the important 'ObservationQueueMaxLength' statistic - has an unfortunate behavior: it frequently and intermittently 'jumps back to 0', even though the queue lengths are still at their previous highs, as subsequent measurements often show (e.g. still reporting 1000 events in the observation queue).
The reason seems to be that
- the value is increased via TimeSeriesMax.recordValue() during each 1-second interval
- the value is reset to 0 via TimeSeriesMax<init>.run() every second
So essentially, every second the counter is reset to 0, and during the following second any call to recordValue() can raise it again.
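To make the mechanism concrete, here is a minimal sketch (with hypothetical names; only recordValue() is taken from the actual class) of the reset-then-record scheme described above: a once-per-second task zeroes the max, and recordValue() raises it in between. If no producer happens to call recordValue() during an interval, the reported max drops to 0 regardless of the actual queue length.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the described behavior, not the actual implementation.
class ResettingMax {
    private final AtomicLong max = new AtomicLong();

    // Called by producers whenever a queue length is observed.
    void recordValue(long value) {
        max.accumulateAndGet(value, Math::max);
    }

    // Called by the once-per-second timer: returns the max seen in the
    // last interval and resets it to 0 for the next one.
    long resetAndGet() {
        return max.getAndSet(0);
    }
}
```

With this scheme, a second in which no observation is recorded yields 0, even if the queue still holds 1000 events - which is exactly the 'jump back to 0' effect.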
In my view this is rather unfortunate: not only can it produce the mentioned 'jump to 0' behavior, but the value can also drop to some intermediate level if the largest queue does not report its length during a given second.
It sounds a bit like this was done intentionally (perhaps to keep it as inexpensive as possible) - or could it be fixed?
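One possible direction, sketched here purely as an assumption (the class name WindowedMax and the window length are mine, not from the library): keep per-second max buckets in a small ring and report the max over the last N seconds. A single quiet second then cannot zero the reading; a high value only ages out after N seconds without re-observation, at the cost of N longs of state.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Hypothetical sliding-window variant of the resetting max.
class WindowedMax {
    private final AtomicLongArray buckets; // one max bucket per second
    private volatile int current;          // index of the bucket being filled

    WindowedMax(int windowSeconds) {
        buckets = new AtomicLongArray(windowSeconds);
    }

    // CAS loop: raise the current bucket if the new value is larger.
    void recordValue(long value) {
        int i = current;
        long prev;
        do {
            prev = buckets.get(i);
        } while (prev < value && !buckets.compareAndSet(i, prev, value));
    }

    // Invoked by the once-per-second timer instead of a full reset:
    // only the oldest bucket is cleared and reused.
    void advance() {
        int next = (current + 1) % buckets.length();
        buckets.set(next, 0);
        current = next;
    }

    // Reported statistic: max over all buckets in the window.
    long windowMax() {
        long m = 0;
        for (int i = 0; i < buckets.length(); i++) {
            m = Math.max(m, buckets.get(i));
        }
        return m;
    }
}
```

This is still cheap (a few atomic operations per record/advance), so it would not obviously conflict with a goal of keeping the statistic inexpensive.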