Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 2.11.2
Description
I've encountered unexpected performance behavior when the asynchronous logging queue is full: the application becomes unavailable for an extended period of time.
I think EventRoute.ENQUEUE uses a spin lock (or a similar busy-wait construct). When many threads attempt to log at a higher rate than the queue can be drained, logging throughput drops sharply while CPU utilization skyrockets, causing instability across the system.
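To illustrate the suspected mechanism (this is a standalone sketch using a plain ArrayBlockingQueue, not the actual Log4j internals; spinEnqueue and synchronousFallback are hypothetical names): a spinning enqueue keeps every blocked logging thread burning a core retrying the offer, while a synchronous fallback does the work on the caller's thread instead of waiting.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class EnqueueSketch {
    // Busy-wait until space frees up (capped here so the demo terminates):
    // each failed offer is a wasted trip through the queue's internal locking,
    // so N blocked logging threads pin N cores while the queue stays full.
    static <T> long spinEnqueue(BlockingQueue<T> q, T item, long maxSpins) {
        long spins = 0;
        while (!q.offer(item) && spins < maxSpins) {
            spins++;
        }
        return spins;
    }

    // Alternative corresponding to EventRoute.SYNCHRONOUS: if the queue is
    // full, handle the event directly on the caller's thread instead of
    // burning CPU waiting for space. Returns whether the item was enqueued.
    static <T> boolean synchronousFallback(BlockingQueue<T> q, T item) {
        if (q.offer(item)) {
            return true;
        }
        // ...append the event synchronously here...
        return false;
    }

    public static void main(String[] args) {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);
        q.offer(0); // fill the queue
        System.out.println("wasted spins on a full queue: "
                + spinEnqueue(q, 1, 1_000_000));
        System.out.println("synchronous fallback enqueued? "
                + synchronousFallback(q, 1));
    }
}
```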
I can reproduce this in a benchmark (I will clean it up and push when I have a moment) where I get the following results:
Setup:
- Root logger has a RandomAccessFile appender with immediateFlush disabled
- AsyncLoggerContextSelector enabled for completely asynchronous logging

Results:
- 1 thread logging, any queue-full policy: ~1,400k events/second, 150% CPU
- 32 threads logging, default policy (EventRoute.ENQUEUE): 300k-400k events/second, 930% CPU
- 32 threads logging, custom policy using EventRoute.SYNCHRONOUS: ~1,200k events/second, 350% CPU
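For reproduction, the setup above corresponds roughly to a configuration like the following (file name and pattern are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
  <Appenders>
    <RandomAccessFile name="File" fileName="app.log" immediateFlush="false">
      <PatternLayout pattern="%d %p %c %m%n"/>
    </RandomAccessFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="File"/>
    </Root>
  </Loggers>
</Configuration>
```

with all loggers made asynchronous via `-Dlog4j2.contextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector`.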
Using the synchronous policy results in ~1/3 the CPU load and 3-4x throughput when the system is loaded to the point that the logging queue is filled.
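For reference, a minimal custom policy of the kind used in the third benchmark row might look like the sketch below. `EventRoute` and `AsyncQueueFullPolicy` here are simplified stand-ins mirroring the shapes of the real types in `org.apache.logging.log4j.core.async` (in the real interface the second parameter of `getRoute` is a log4j `Level`), so the file compiles on its own:

```java
// Simplified stand-ins for the log4j-core 2.x types; the real ones live
// in org.apache.logging.log4j.core.async.
enum EventRoute { ENQUEUE, SYNCHRONOUS, DISCARD }

interface AsyncQueueFullPolicy {
    // In the real interface the second parameter is a log4j Level.
    EventRoute getRoute(long backgroundThreadId, int levelIntValue);
}

// When the queue is full, append on the calling thread instead of
// spinning until space frees up.
class SynchronousQueueFullPolicy implements AsyncQueueFullPolicy {
    @Override
    public EventRoute getRoute(long backgroundThreadId, int levelIntValue) {
        return EventRoute.SYNCHRONOUS;
    }
}
```

Against the real library, the policy would implement the actual interface and be selected via the async-queue-full-policy system property (the exact property name varies by Log4j version; `com.example` here is an illustrative package), e.g. `-Dlog4j2.asyncQueueFullPolicy=com.example.SynchronousQueueFullPolicy`.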