Michael, I like your idea very much. The way we use Log4J here is through a fully configured BoneCP connection pool, which indeed already does connection checking. As you stated, the problem is that the current implementation uses only a single connection.
I think that in terms of performance it might actually be faster if you implement it so that it can request multiple connections from the pool for parallel processing. If this isn't possible, you could mitigate any performance drawbacks by using the buffer attribute and only requesting a new connection from the pool on each buffer iteration.
I'm not sure an "isPooled" flag would be needed; just close and open a connection by default and document this behaviour, with an added note that a connection pool is strongly recommended for intensive logging. I suspect that most enterprise applications, or applications generating lots of data, already use a pool.
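The "one connection per buffer iteration" idea above could look roughly like the following sketch. This is not the actual Log4J appender code; the class and interface names (`BatchingDbAppender`, `FakeConnection`) are hypothetical stand-ins, with a `Supplier` playing the role of the connection pool / `DataSource`, so the point is just that the pool is touched once per full buffer rather than once per event.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch, not the real Log4J implementation: buffer events
// and borrow a pooled connection only when a full batch is written.
class BatchingDbAppender {
    private final int bufferSize;
    private final Supplier<FakeConnection> pool; // stands in for a DataSource
    private final List<String> buffer = new ArrayList<>();

    BatchingDbAppender(int bufferSize, Supplier<FakeConnection> pool) {
        this.bufferSize = bufferSize;
        this.pool = pool;
    }

    // Buffer the event; the pool is only consulted once the buffer is full.
    void append(String event) {
        buffer.add(event);
        if (buffer.size() >= bufferSize) flush();
    }

    // Borrow one connection, write the whole batch, return the connection.
    void flush() {
        if (buffer.isEmpty()) return;
        FakeConnection con = pool.get(); // borrow from the pool
        try {
            for (String e : buffer) con.write(e);
        } finally {
            con.close(); // with a pool such as BoneCP, close() returns it
        }
        buffer.clear();
    }
}

// Test double counting how often the "pool" was asked for a connection.
class FakeConnection {
    static int borrowed = 0;
    static List<String> written = new ArrayList<>();
    FakeConnection() { borrowed++; }
    void write(String s) { written.add(s); }
    void close() { /* pooled: returning, not really closing */ }
}
```

With a buffer size of 3 and seven events, only three connections are borrowed (two full batches plus one final flush), which is the mitigation described above.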
I think there are now two approaches under discussion in this issue:
- Reconnect on exception/disconnect (as explained by Thomas Neidhart)
- Redo the connection management and open/close on each (batch of) logging event, especially in the case when a connection pool is used (as explained by Michael Kloster)
I'm more in favor of the second as a long-term solution, but as a quickfix I wrote some code doing the first. I was unable to test my code today; I hope to be able to provide the quickfix by Friday (my application needs it badly).
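For reference, the first approach (reconnect on exception, as Thomas Neidhart described) can be sketched like this. Again the names (`ReconnectingWriter`, `Conn`) are illustrative, not my actual quickfix code; the `Supplier` stands in for whatever re-establishes the JDBC connection, and the idea is simply: if a write fails, assume the connection is stale, reopen, and retry the event once.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch of the "reconnect on exception/disconnect" quickfix:
// on a failed write, drop the (presumably dead) connection and retry once
// on a fresh one.
class ReconnectingWriter {
    private final Supplier<Conn> connector; // stands in for opening a JDBC connection
    private Conn con;

    ReconnectingWriter(Supplier<Conn> connector) {
        this.connector = connector;
        this.con = connector.get();
    }

    void write(String event) {
        try {
            con.send(event);
        } catch (RuntimeException broken) {
            // Connection looks dead: reconnect and retry this event once.
            con.close();
            con = connector.get();
            con.send(event);
        }
    }
}

// Test double that can simulate a dropped connection on the next send.
class Conn {
    boolean failNext;
    List<String> sent = new ArrayList<>();
    void send(String s) {
        if (failNext) { failNext = false; throw new RuntimeException("connection lost"); }
        sent.add(s);
    }
    void close() {}
}
```

A real version would of course catch `SQLException` rather than a `RuntimeException`, and would need to decide how many retries are acceptable before dropping the event.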