In org.apache.logging.log4j.spi.AbstractLoggerAdapter, the getLoggersInContext method uses a synchronized block to prevent concurrent destructive access to the registry, a java.util.WeakHashMap:
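The original method looks roughly like the following. This is a simplified, self-contained sketch rather than the verbatim Log4j source: the generic class, the Object-keyed registry, and the getLoggersInContext signature here stand in for the real AbstractLoggerAdapter internals (which key the WeakHashMap by LoggerContext).

```java
import java.util.Map;
import java.util.WeakHashMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the synchronized-block pattern in AbstractLoggerAdapter.
// "Object" stands in for org.apache.logging.log4j.spi.LoggerContext.
class SynchronizedAdapter<L> {
    private final Map<Object, ConcurrentMap<String, L>> registry = new WeakHashMap<>();

    public ConcurrentMap<String, L> getLoggersInContext(final Object context) {
        synchronized (registry) { // readers and writers all serialize here
            ConcurrentMap<String, L> loggers = registry.get(context);
            if (loggers == null) {
                loggers = new ConcurrentHashMap<>();
                registry.put(context, loggers);
            }
            return loggers;
        }
    }
}
```

Note that even a pure lookup of an existing entry must acquire the monitor, which is exactly the bottleneck described below.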
However, when the loggers are already in the map, this coarse synchronized block means that two threads cannot read from the map simultaneously, which hurts performance in highly multi-threaded applications that constantly re-instantiate loggers.
In our application, we have modified this method to use a ReadWriteLock instead, allowing unlimited concurrent get() operations while taking the exclusive write lock only for put() operations, via a double-checked locking idiom:
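A sketch of the modified method follows, under the same simplifying assumptions as above (Object keys standing in for LoggerContext, a plain generic class rather than the real AbstractLoggerAdapter). The shape of the double-checked idiom is the point: read lock for the fast path, write lock plus a re-check for the miss path.

```java
import java.util.Map;
import java.util.WeakHashMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the ReadWriteLock variant with double-checked locking.
class ReadWriteAdapter<L> {
    private final Map<Object, ConcurrentMap<String, L>> registry = new WeakHashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public ConcurrentMap<String, L> getLoggersInContext(final Object context) {
        ConcurrentMap<String, L> loggers;
        lock.readLock().lock(); // any number of readers may hold this at once
        try {
            loggers = registry.get(context);
        } finally {
            lock.readLock().unlock();
        }
        if (loggers != null) {
            return loggers; // common fast path: no exclusive locking at all
        }
        lock.writeLock().lock(); // exclusive: we may mutate the WeakHashMap
        try {
            // Double-check: another thread may have populated the entry
            // between our read unlock and our write lock.
            loggers = registry.get(context);
            if (loggers == null) {
                loggers = new ConcurrentHashMap<>();
                registry.put(context, loggers);
            }
            return loggers;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

The re-check under the write lock is essential: without it, two threads missing at the same time would each install a fresh map and one thread's loggers would be silently discarded.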
The ReadWriteLock interface and the ReentrantReadWriteLock implementation have been available since Java 5. The performance gains from using read locks have so far been considerable.
Why are we constantly re-instantiating loggers instead of, for example, keeping a static final Logger in our classes? In many cases it is because the class holding the logger is a base class, which cannot use a static logger in case a different outlet has been specified for the particular derived class being instantiated. Some of these objects, for example AbstractMediaTypeExpression in the Spring framework, are constantly destroyed and reconstructed. Where reasonable for our application, we have also patched those other classes to simply use static final Loggers, but there are many of them, and it is ultimately better to solve this problem at the source.