| Summary: | Add "Flush on Level" capability to FileAppender | | |
|---|---|---|---|
| Product: | Log4j - Now in Jira | Reporter: | Simon Dorrat <sdorrat> |
| Component: | Appender | Assignee: | log4j-dev <log4j-dev> |
| Status: | RESOLVED FIXED | | |
| Severity: | enhancement | CC: | ferenc.toth |
| Priority: | P5 | | |
| Version: | 1.2 | | |
| Target Milestone: | --- | | |
| Hardware: | All | | |
| OS: | All | | |
| Attachments: | Performance test results, Java test harness | | |
Description

Simon Dorrat, 2004-04-28 03:20:14 UTC
Hi Simon, have you measured the performance improvements? I'd be interested in the results with 5 threads performing simultaneous logging, of which 80% is DEBUG and 20% INFO. Running the tests on both Windows and Linux would be even better.

Performance tests I did previously on Windows XP showed the new method was up to 20 times faster. However, this was with 1000 writes per flush. I will retest with the ratio you suggested (much more realistic) and with more than one thread.

This is where having Appender be an interface would be ideal, because then one could 'decorate' one appender with another object to provide support code such as a flushing function, sort of like how the InputStream classes work. Let's assume Appender is an interface. One could then do something like this in Java code (configuration code would of course be more complex, but certainly possible):

Appender a = new FlushingAppender(new FileAppender(.....));

In this example the FlushingAppender implements the Appender interface but for the most part delegates to the internal (file) appender, except for the actual append methods, where it might buffer events based on some strategy of the implementor's choosing. That way we could mix and match strategies. This is roughly how AsyncAppender works, but I think having Appender as an interface is neater. Large change, though.

Paul, Appender is an interface! Would this be able to be configured via properties/XML config files? (Is this what you referred to as "more complex config code"?) Using a decorator seems like a useful approach, as it would not only allow many such features to be mixed and matched, but also potentially allow them to be applied to any appender. However, since this LevelFlushedFileAppender flushes by calling qw.flush() from the WriterAppender class, it could only be applied to classes that are descendants of that class.
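The decorator idea above can be sketched in isolation. This is a minimal, self-contained illustration only: `Appender` here is a one-method stand-in for the real `org.apache.log4j.Appender` interface (which takes a `LoggingEvent`), and `ListAppender` stands in for `FileAppender`, so the sketch compiles without log4j on the classpath.

```java
import java.util.ArrayList;
import java.util.List;

public class DecoratorSketch {
    // Stand-in for org.apache.log4j.Appender (real API: doAppend(LoggingEvent)).
    interface Appender {
        void doAppend(String event);
    }

    // Terminal appender; stands in for FileAppender.
    static class ListAppender implements Appender {
        final List<String> written = new ArrayList<>();
        public void doAppend(String event) { written.add(event); }
    }

    // The decorator: buffers events and forwards them in batches.
    static class FlushingAppender implements Appender {
        private final Appender delegate;
        private final int bufferSize;
        private final List<String> buffer = new ArrayList<>();

        FlushingAppender(Appender delegate, int bufferSize) {
            this.delegate = delegate;
            this.bufferSize = bufferSize;
        }

        public void doAppend(String event) {
            buffer.add(event);
            if (buffer.size() >= bufferSize) flush();
        }

        void flush() {
            // Forward every buffered event to the wrapped appender, then clear.
            for (String e : buffer) delegate.doAppend(e);
            buffer.clear();
        }
    }

    public static int demo() {
        ListAppender file = new ListAppender();
        Appender a = new FlushingAppender(file, 3);
        a.doAppend("one");
        a.doAppend("two");
        int before = file.written.size();   // nothing forwarded yet
        a.doAppend("three");                // third event triggers the batch
        return file.written.size() - before;
    }

    public static void main(String[] args) {
        System.out.println("events forwarded on flush: " + demo());
    }
}
```

Because the decorator only sees the `Appender` interface, the same wrapper would work around any appender, which is exactly the mix-and-match property argued for above.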
To apply it to other appenders would require a change to the interface, and each appender class would have to support a "flush" method, which seems to defeat the purpose of the decorator pattern. What do you think?

You are of course correct... When I look in the Hierarchy view within Eclipse, I keep seeing AppenderSkeleton and forget that there is actually an interface. Silly me, particularly as a log4j-dev member... 8-|

Anyway, I was thinking that the appender being wrapped need not extend WriterAppender. The wrapping appender (FlushingAppender in our example) is free to determine when to 'flush', which really means 'take any currently buffered events and send them to the wrapped appender'. This is a more generic flushing solution, but it kind of duplicates the work that WriterAppender would do. I guess you could have a FlushingWriterAppender wrapper that is designed to wrap a WriterAppender only, and rather than storing its own buffer of events, it always writes the event to the internal WriterAppender but waits until its internal strategy determines that a flush should occur, and only at that point calls writerAppender.flush(). Does that make sense?

It does make sense. In the first approach, however (the wrapper buffering events and then sending them all at once to the wrapped appender upon flush), you lose most of the advantage of buffering, because the wrapped appender will still execute all events one by one when 'flushed'; i.e., for a file writer it would still be one disk I/O per event. The second strategy was the one I implemented (though via inheritance rather than as a decorator). Not convinced either way yet. Will get the performance testing done tonight or tomorrow on the current implementation.

Yup, this is where the (smarter) FlushingWriterAppender would not actually buffer any events, just write them to the wrapped WriterAppender and periodically call writerAppender.flush() as it deems appropriate.
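The second strategy above can be sketched as follows. Again this is a stand-in, not the real log4j classes: `Writer` here plays the role of WriterAppender's underlying quiet writer, and `FlushingWriterAppender` is the hypothetical wrapper name used in the discussion. The point is that events are always forwarded immediately (so nothing is double-buffered) and only the expensive `flush()` call is rationed.

```java
public class FlushEveryN {
    // Stand-in for the writer inside WriterAppender (qw in the real class).
    static class Writer {
        int flushes = 0;
        void write(String s) { /* buffered by a BufferedWriter / the OS in reality */ }
        void flush() { flushes++; }
    }

    static class FlushingWriterAppender {
        private final Writer writer;
        private final int eventsPerFlush;
        private int pending = 0;

        FlushingWriterAppender(Writer writer, int eventsPerFlush) {
            this.writer = writer;
            this.eventsPerFlush = eventsPerFlush;
        }

        void append(String event) {
            writer.write(event);            // always forwarded, never held back
            if (++pending >= eventsPerFlush) {
                writer.flush();             // one flush per N events, not per event
                pending = 0;
            }
        }
    }

    public static int demo() {
        Writer w = new Writer();
        FlushingWriterAppender a = new FlushingWriterAppender(w, 50);
        for (int i = 0; i < 200; i++) a.append("event " + i);
        return w.flushes;   // 200 events at 50 per flush
    }

    public static void main(String[] args) {
        System.out.println("flushes for 200 events: " + demo());
    }
}
```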
So via the decorator approach you could have a generic flushing appender (akin, really, to the AsyncAppender) and one optimised for WriterAppenders. With the decorator approach you could then do something like:

Appender a = new FlushingAppender(new FlushingWriterAppender(new WriterAppender(...), 50), 500);

Let's say that in this example:

* FlushingAppender: 500 = number of ms to buffer events before flushing to its internal appender (FlushingWriterAppender)
* FlushingWriterAppender: 50 = number of event objects to write before calling writerAppender.flush()

Having said all of this, though, one does need to think about the risk of some events not getting written. While you may have a JDK hook to listen for a shutdown, some applications won't always exit quite so gracefully. And this is usually when you MOST want to know what was going on...

This is why I use the LevelFlushed approach (i.e. flush on a level equal to or higher than x). If the error is trapped prior to shutdown, then the log will be flushed when an ERROR event is logged. If the error is not trapped, then some low-level messages will not be flushed. But with the existing approach you wouldn't have low-level logging enabled anyway (due to the performance hit), so there is no difference. That's the rationale behind this: it enables you to leave a lower logging level on permanently than you otherwise would, so you get the advantage of seeing the low-level info in all cases bar catastrophic failure.

Would it make sense to use something like the new ExpressionFilter in this case, instead of only supporting triggering based on Level?

Not sure what the ExpressionFilter is. Where can I find info on it? I could certainly imagine that there are a number of different approaches to triggering the flush that people may want to take, so a flexible framework has merit.
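The LevelFlushed rationale above can be made concrete with a small sketch. The integer levels mirror the ordering of log4j's Level class (DEBUG < INFO < WARN < ERROR), but everything else here is a simplified stand-in, not the real API: with the suggested 80% DEBUG / 20% INFO workload, 100 events cost a single flush instead of 100.

```java
public class LevelFlushSketch {
    // Ordered like org.apache.log4j.Level's internal int values.
    static final int DEBUG = 10000, INFO = 20000, WARN = 30000, ERROR = 40000;

    static class Writer {
        int flushes = 0;
        void write(String s) { /* stays in the buffer until flush() */ }
        void flush() { flushes++; }
    }

    static class LevelFlushedAppender {
        private final Writer writer;
        private final int flushLevel;

        LevelFlushedAppender(Writer writer, int flushLevel) {
            this.writer = writer;
            this.flushLevel = flushLevel;
        }

        void append(int level, String msg) {
            writer.write(msg);
            // Flush only for events at or above the configured threshold.
            if (level >= flushLevel) writer.flush();
        }
    }

    public static int demo() {
        Writer w = new Writer();
        LevelFlushedAppender a = new LevelFlushedAppender(w, ERROR);
        for (int i = 0; i < 80; i++) a.append(DEBUG, "detail " + i);   // no flush
        for (int i = 0; i < 19; i++) a.append(INFO, "progress " + i);  // no flush
        a.append(ERROR, "something broke");                            // flush now
        return w.flushes;
    }

    public static void main(String[] args) {
        System.out.println("flushes for 100 events: " + demo());
    }
}
```

An untrapped crash still loses the unflushed DEBUG tail, exactly as the comment concedes, but no more than a configuration that had DEBUG disabled outright.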
See ExpressionFilter here: http://cvs.apache.org/viewcvs.cgi/logging-log4j/src/java/org/apache/log4j/varia/ExpressionFilter.java?rev=1.3&view=auto and ExpressionRule here: http://cvs.apache.org/viewcvs.cgi/logging-log4j/src/java/org/apache/log4j/rule/ExpressionRule.java?rev=1.6&view=auto

The ExpressionFilter is an example of how an ExpressionRule can be used to allow or prevent events from being processed by an appender (and an ExpressionRule could be used similarly in a flush trigger). Here is an example of how an ExpressionFilter is used to filter events:

```xml
<appender name="STDOUT" class="org.apache.log4j.ConsoleAppender">
  <layout class="org.apache.log4j.SimpleLayout"/>
  <filter class="org.apache.log4j.varia.ExpressionFilter">
    <param name="Expression" value="level > INFO" />
    <param name="AcceptOnMatch" value="true"/>
    <param name="ConvertInFixToPostFix" value="true"/>
  </filter>
  <filter class="org.apache.log4j.varia.DenyAllFilter"/>
</appender>
```

Created attachment 11527 [details]
Performance test results
Created attachment 11528 [details]
Java test harness
Performance test results attached as requested, along with the Java test harness. The improvement was roughly 25%. System load, differing threading models, and disk caching between XP and Linux made individual tests hard to compare, but they consistently showed a performance improvement.

Hi all, I totally agree with the flushing being tied to the level of the event. This totally makes sense and would be good to have in the default log4j. Like Simon, I also use a shutdown hook in my application to address the issue, i.e. automatically flush on VM exit. The only way I found to flush was to close all the appenders. This is fine since I am stopping the application anyway, but there should also be a very simple way to flush all the logs globally without closing the appenders. Imagine I want to flush all the logs via some admin interface in order to examine them, but I don't want to close them only to reopen them seconds later (so as not to lose events and trigger error messages while they are recreated). What are my options in this case? So flushing should be present in each appender, which could delegate to sub-appenders if required. Olivier DUPUY

I think what should be supported is to enhance WriterAppender so that subclasses can specify the flushing behavior. I don't think it's generally a good idea to code in the policy itself, but it seems fair to allow users to easily extend existing classes to get the behavior they want. I thought of adding an interface, such as "FlushPolicy", but it seemed like overkill.
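The shutdown-hook workaround mentioned above can be sketched like this. Note the hedge: log4j 1.2 has no public flush-all API, which is precisely the gap being complained about, so `flushAll` and `FlushableAppender` here are hypothetical names, not real log4j calls. Only `Runtime.getRuntime().addShutdownHook` is real JDK API.

```java
import java.util.Arrays;
import java.util.List;

public class ShutdownFlush {
    // Hypothetical: a flush capability the real Appender interface lacks.
    public interface FlushableAppender {
        void flush();
    }

    // Flush every appender without closing any of them.
    public static void flushAll(List<FlushableAppender> appenders) {
        for (FlushableAppender a : appenders) a.flush();
    }

    public static void main(String[] args) {
        List<FlushableAppender> appenders = Arrays.asList(
            () -> System.out.println("appender 1 flushed"),
            () -> System.out.println("appender 2 flushed"));
        // Runs on normal VM exit; does NOT run on a hard crash or kill -9,
        // which is why the level-triggered flush is still needed.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> flushAll(appenders)));
        System.out.println("exiting");
    }
}
```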
I propose the following change:

```diff
Index: C:/src/workspace/log4j/src/java/org/apache/log4j/WriterAppender.java
===================================================================
--- C:/src/workspace/log4j/src/java/org/apache/log4j/WriterAppender.java  (revision 500611)
+++ C:/src/workspace/log4j/src/java/org/apache/log4j/WriterAppender.java  (working copy)
@@ -372,7 +372,7 @@
       }
     }

-    if (this.immediateFlush) {
+    if (shouldFlush(event)) {
       this.qw.flush();
     }
   }
@@ -426,6 +426,15 @@
       }
     }
   }
+
+  /**
+   * Returns true if an event should be flushed immediately.
+   * This method returns the value of {@link #immediateFlush}.
+   * Subclasses override this behavior to flush only when certain events occur.
+   */
+  protected boolean shouldFlush(LoggingEvent event) {
+    return this.immediateFlush;
+  }
 }
```

In response to comment #15: adding a separate API to "flush" appenders really is a separate issue that should be raised in a different bug.

*** Bug 43061 has been marked as a duplicate of this bug. ***

I really hope that you will integrate this in 2.0, as in the comment on my RFE below. See my follow-up comments there for a general way to flush whatever the type of logger or appender: https://issues.apache.org/bugzilla/show_bug.cgi?id=41670 It is really important for production systems to have a decent flush mechanism with no performance penalty associated with ongoing flushing.

Reset assignee.

Committed Elias's recommendation in rev 824003.
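The committed hook is used by subclassing. The sketch below shows the override pattern against a minimal stand-in base class (not the real `org.apache.log4j.WriterAppender`, and with an int level in place of `LoggingEvent`): the default `shouldFlush` preserves the old `immediateFlush` behaviour, while the subclass flushes only on ERROR or worse.

```java
public class ShouldFlushSketch {
    static final int INFO = 20000, ERROR = 40000;

    // Stand-in for WriterAppender, reduced to the part the patch touches.
    static class WriterAppenderStub {
        boolean immediateFlush = true;
        int flushes = 0;

        void append(int level) {
            // ... format and write the event ...
            if (shouldFlush(level)) flushes++;   // this.qw.flush() in the real class
        }

        // Mirrors the patch: default is the old immediateFlush flag.
        protected boolean shouldFlush(int level) {
            return immediateFlush;
        }
    }

    // A user subclass: flush only when the event is severe enough.
    static class ErrorFlushedAppender extends WriterAppenderStub {
        @Override
        protected boolean shouldFlush(int level) {
            return level >= ERROR;
        }
    }

    public static int demo() {
        ErrorFlushedAppender a = new ErrorFlushedAppender();
        a.append(INFO);    // stays buffered
        a.append(INFO);    // stays buffered
        a.append(ERROR);   // flushed
        return a.flushes;
    }

    public static void main(String[] args) {
        System.out.println("flushes: " + demo());
    }
}
```

This keeps the policy out of WriterAppender itself, as argued above: the base class exposes only the decision point, and each deployment encodes its own trigger.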