LOG4J2-151: Please facilitate subclassing Logger and LoggerContext (in org.apache.logging.log4j.core)

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0-beta3
    • Fix Version/s: 2.0-beta5
    • Component/s: Core
    • Labels: None

      Description

      I would like to create a custom logger, while reusing the org.apache.logging.log4j.core.Logger functionality.

      The following two changes would make subclassing possible:

      • change visibility of method Logger$PrivateConfig#logEvent(LogEvent) (line 265) from protected to public
      • change visibility of method LoggerContext#newInstance(LoggerContext, String) (line 310) from private to protected

      My use case is that I want to create an asynchronous Logger for low latency logging.
      This custom logger hands off control to a separate thread as early as possible. In my case, AsynchAppender is not a good match for my requirements, as with that approach (a) the logging call still needs to flow down the hierarchy to the appender, doing synchronization and creating objects at various points on the way, and (b) when serializing the LogEvent, the getSource() method is always called, which is expensive.
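      To make the use case concrete, here is a minimal sketch of how the two changes above could be used, assuming they are applied. AsyncLoggerContext and the inner AsyncLogger are hypothetical names, and the signatures follow the beta3 lines cited above; this is an illustration, not the attached patch:

          import org.apache.logging.log4j.core.Logger;
          import org.apache.logging.log4j.core.LoggerContext;

          // Hypothetical sketch: relies on LoggerContext#newInstance(LoggerContext, String)
          // becoming protected, as proposed above.
          public class AsyncLoggerContext extends LoggerContext {
              public AsyncLoggerContext(String name) {
                  super(name);
              }

              @Override
              protected Logger newInstance(LoggerContext ctx, String name) {
                  return new AsyncLogger(ctx, name);
              }

              // Logger subclass that could hand each event off to a consumer thread
              // as early as possible, instead of walking the appender hierarchy on
              // the calling thread.
              private static class AsyncLogger extends Logger {
                  AsyncLogger(LoggerContext ctx, String name) {
                      super(ctx, name); // constructor signature assumed from the beta3 sources
                  }
              }
          }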

      1. FastLog4j.zip
        551 kB
        Remko Popma
      2. LOG4J2-151-patch-Logger.txt
        0.7 kB
        Remko Popma
      3. LOG4J2-151-patch-LoggerContext.txt
        0.7 kB
        Remko Popma
      4. FastLog4j-v2-for-beta4.zip
        226 kB
        Remko Popma

          Activity

          Ralph Goers added a comment -

          I'd love to see the code you are going to use to create the asynchronous logger. I'm wondering how you are going to handle the ThreadContext and call stack without "serializing" LogEvent. (Note that the LogEvent really isn't serialized in the AsyncAppender - it is just converted into a LogEventProxy, which causes all the fields to be populated.) It also means that the application can't ever be informed that logging has failed - which is certainly acceptable in many use cases.

          Remko Popma added a comment (edited) -

          I'm planning to use the LMAX Disruptor library (http://lmax-exchange.github.com/disruptor/).
          To pass data to the thread that does the actual logging, I use a similar approach to LogEventProxy, with one difference: the Disruptor pre-allocates the event objects, to avoid creating a new event instance for every call to Logger.log() on the publisher (application) side. (The Log4jLogEvent instance is created on the event handling thread, on the consumer side.)
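          For illustration, a rough self-contained sketch of this hand-off pattern. It is written against the current LMAX Disruptor 3.x DSL rather than the exact version in the attachment, and RingBufferLogEvent here is a hypothetical mutable carrier class, not a Log4j class:

              import com.lmax.disruptor.RingBuffer;
              import com.lmax.disruptor.dsl.Disruptor;
              import java.util.concurrent.Executors;

              public class HandOffSketch {
                  // Mutable carrier; the Disruptor pre-allocates one per ring buffer slot.
                  static class RingBufferLogEvent {
                      String loggerName;
                      String message;
                      long timeMillis;
                  }

                  public static void main(String[] args) {
                      Disruptor<RingBufferLogEvent> disruptor = new Disruptor<>(
                              RingBufferLogEvent::new,           // pre-allocates all slots up front
                              262144,                            // ring buffer size (power of two)
                              Executors.defaultThreadFactory());
                      disruptor.handleEventsWith((event, sequence, endOfBatch) -> {
                          // consumer side: this is where the real Log4jLogEvent would be
                          // created and handed to the appenders
                          System.out.println(event.timeMillis + " " + event.message);
                      });
                      RingBuffer<RingBufferLogEvent> ringBuffer = disruptor.start();

                      // publisher (application) side: claim a slot, fill it, publish -
                      // no new event object is allocated per log call
                      long seq = ringBuffer.next();
                      try {
                          RingBufferLogEvent event = ringBuffer.get(seq);
                          event.loggerName = "com.example";
                          event.message = "hello";
                          event.timeMillis = System.currentTimeMillis();
                      } finally {
                          ringBuffer.publish(seq);
                      }
                      disruptor.shutdown();
                  }
              }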

          I do pass on the context map using (ThreadContext.isEmpty() ? null : ThreadContext.getImmutableContext())
          and may pass on the stack too with (ThreadContext.getDepth() == 0 ? null : ThreadContext.cloneStack())
          because these are fairly cheap.
          However, not calculating Log4jLogEvent.location is a big performance win.

          This is still a work in progress, but preliminary tests show it is possible to get to 450 nanos per call to Logger.log.
          I may be able to get it down to 150 nanos by always passing a null context map and stack and using a custom clock implementation instead of System.currentTimeMillis().

          (Side note: In my application I don't need the location or the stack so I can save time by leaving out this information unconditionally.
          For a more general solution it would be nice if the Logger could somehow find out whether any of the appenders has a layout that uses the stack, context map or location. The logger could then include this information only when necessary.)

          Remko Popma added a comment -

          Ralph,
          You were right that the ThreadContext and call stack (location) were tricky when creating an asynchronous logger.

          For the stack trace problem, I proposed to make an API change to the Layout interface in LOG4J2-153.
          I realize this impacts quite a few classes, but the performance benefits are large and I don't see another way to do it generically.

          The ThreadContext is a different problem, but I believe significant performance gains can be made here as well.
          In my custom logger I modified ThreadContext to use a copy-on-write mechanism, and this brought copying the ThreadContext down from 200 to 20 nanos.
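          A bare-bones sketch of the copy-on-write idea (modern Java, illustrative names, not the actual patch written up in LOG4J2-154): writers pay for a map copy, so reads and snapshots become a plain ThreadLocal read:

              import java.util.Collections;
              import java.util.HashMap;
              import java.util.Map;

              public final class CowThreadContextMap {
                  private static final ThreadLocal<Map<String, String>> MAP =
                          ThreadLocal.withInitial(Collections::emptyMap);

                  public static void put(String key, String value) {
                      // copy on write: mutate a private copy, then publish it
                      Map<String, String> copy = new HashMap<>(MAP.get());
                      copy.put(key, value);
                      MAP.set(Collections.unmodifiableMap(copy));
                  }

                  // the equivalent of getImmutableContext() is now just a read; no
                  // defensive copy is needed because published maps are never mutated
                  public static Map<String, String> getImmutableContext() {
                      return MAP.get();
                  }
              }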

          I made a patch for ThreadContext based on the beta3 source, but I just checked out trunk and found it has changed a lot.
          Instead, I wrote up the idea in LOG4J2-154; I hope it is still useful.

          I shaved off an additional 300 nanos by using a custom clock (essentially a class that updates a volatile long field to the value of System.currentTimeMillis in a background thread once every millisecond). Getting the value of that volatile long field is much cheaper than calling System.currentTimeMillis, but you lose some precision. The time increases in increments of 10 milliseconds on Solaris, and 16 milliseconds on Windows. So there is a trade-off here. I'm not sure if it is worth offering this as a standard option in log4j. "useFastButCoarseClock=true", anyone?
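          A minimal sketch of such a clock, under the assumptions just described (the class name is illustrative):

              import java.util.concurrent.TimeUnit;

              public final class CoarseClock {
                  private static volatile long now = System.currentTimeMillis();

                  static {
                      Thread updater = new Thread(() -> {
                          while (true) {
                              now = System.currentTimeMillis();
                              try {
                                  TimeUnit.MILLISECONDS.sleep(1); // refresh about once per millisecond
                              } catch (InterruptedException e) {
                                  Thread.currentThread().interrupt();
                                  return;
                              }
                          }
                      }, "coarse-clock");
                      updater.setDaemon(true); // don't keep the JVM alive for the clock
                      updater.start();
                  }

                  // reading a volatile long is much cheaper than System.currentTimeMillis(),
                  // at the cost of the 10-16 ms granularity noted above
                  public static long currentTimeMillis() {
                      return now;
                  }
              }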

          Finally, using a RandomAccessFile is much faster than a BufferedOutputStream on the platforms that I tested on (Solaris, Linux and Windows XP). To give you an idea, logging 100,000 events of 500 bytes (after warmup of 10 x 100,000 events) took ~8700 nanos per line with the standard BufferedOutputStream, and ~5200 nanos per line with a RandomAccessFileAppender in my environment.
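          If it helps reproduce the comparison, here is a self-contained timing sketch with the same 500-byte/100,000-event parameters. This is not the attached appender code (which also handles locking and immediateFlush); file names and the 256 KB batch size are illustrative:

              import java.io.BufferedOutputStream;
              import java.io.FileOutputStream;
              import java.io.RandomAccessFile;

              public class WritePathSketch {
                  public static void main(String[] args) throws Exception {
                      byte[] line = new byte[500]; // one 500-byte log line
                      int count = 100_000;

                      long t0 = System.nanoTime();
                      try (BufferedOutputStream out =
                              new BufferedOutputStream(new FileOutputStream("buffered.log"))) {
                          for (int i = 0; i < count; i++) {
                              out.write(line);
                          }
                      }
                      long bufferedNanos = System.nanoTime() - t0;

                      t0 = System.nanoTime();
                      try (RandomAccessFile file = new RandomAccessFile("raf.log", "rw")) {
                          byte[] batch = new byte[256 * 1024]; // batch in our own buffer and
                          int pos = 0;                         // flush per batch, like the appender
                          for (int i = 0; i < count; i++) {
                              if (pos + line.length > batch.length) {
                                  file.write(batch, 0, pos);
                                  pos = 0;
                              }
                              System.arraycopy(line, 0, batch, pos, line.length);
                              pos += line.length;
                          }
                          file.write(batch, 0, pos); // final flush
                      }
                      long rafNanos = System.nanoTime() - t0;

                      System.out.printf("buffered: %d ns/line, raf: %d ns/line%n",
                              bufferedNanos / count, rafNanos / count);
                  }
              }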

          Let me know if you want more detail.
          Best regards,
          Remko

          Ralph Goers added a comment -

          Now I am starting to wish we were using git. I think it would be great to have what you are doing be an integral part of Log4j. I just don't know if it should be a separate module or something controlled by configuration or something else. I'd really need to see it to know. Do you have a zip of what you have done so far that you could attach? Or perhaps you have an account at github?

          Remko Popma added a comment -

          Ok, let me send you something this weekend.

          Remko Popma added a comment -

          Ralph,

          I attached FastLog4j.zip.
          It contains an Eclipse project with source and binaries.
          The top directory has a readme.txt file and two scripts to run performance tests. Let me know if you need details on anything.

          Best regards,
          Remko Popma

          Remko Popma added a comment -

          Ralph,

          The performance test results in the readme.txt in the attachment were done on my Windows laptop at home.
          I also ran the same tests on an enterprise server. Here are the results for comparison:

          Results of the performance tests, measured on Solaris 10 64-bit, JDK 1.7.0_06,
          8-core Xeon X5570 CPU @ 2.93 GHz with hyperthreading switched on.

          Same test as before:
          The measured time is the average time per log event, when logging 100,000 events
          with a 500-byte message and one MDC key-value pair.

          SL = Standard (synchronous) Logger
          AL = Async Logger
          SA = Standard File Appender with BufferedIO=true
          FA = Fast File Appender
          IF = ImmediateFlush=true

          SL + SA: 5067 nanos (average of 5311, 5511, 4928, 4792, 4794)
          SL + FA: 3990 nanos (average of 4089, 4117, 4015, 4058, 3671)
          AL + SA: 156 nanos (average of 152, 154, 157, 151, 166)
          AL + FA: 150 nanos (average of 152, 152, 151, 146, 152)

          SL + SA + IF: 8290 nanos (average of 8475, 8525, 8606, 7756, 7992)
          SL + FA + IF: 8112 nanos (average of 7741, 8596, 7760, 7918, 8549)
          AL + SA + IF: 158 nanos (average of 170, 160, 152, 153, 155)
          AL + FA + IF: 156 nanos (average of 165, 162, 153, 154, 149)

          Observations:

          • Some interesting differences between Windows and Solaris, especially for immediateFlush=true
          • In most cases, RandomAccessFile is quite a bit faster than BufferedOutputStream
          • If users need their log events flushed to disk, AsyncLogger + FastFileAppender can give a 40x to 55x performance improvement

          Ralph Goers added a comment -

          I haven't had a chance to look at the attachment yet, but I'm pretty sure that you aren't really measuring the time to log the events but rather the time before control is returned back to the application. Simply delegating the logging to another thread isn't likely to improve the throughput of logging. Buffering or Immediate flush is unlikely to have any measurable effect with an asynchronous logger since the I/O is not on the thread being measured, which is consistent with the results above.

          It would be interesting to include the AsynchAppender in the results above.

          Remko Popma added a comment (edited) -

          Ralph, you are right, I am measuring the time before control is returned to the application.
          But that is exactly the selling point, because that is what log4j users care about. (At least I hope it is not just me.)
          150 nanos, with the semantics of immediateFlush (because FastFileAppender flushes to disk at the end of each batch)!
          That would put Log4j2 in a whole different performance category than Log4j 1.x or Logback, wouldn't you agree?

          As long as logging throughput is high enough that the buffers don't fill up, any slowness in the actual logging does not impact the application, and the actual speed of writing events to disk is less important.

          Wrapping either FileAppender or FastFileAppender in an AsyncAppender actually makes things slower, at least on Solaris.

          For reference, below are the numbers for the same tests with AsyncAppender, measured on Solaris 10 64-bit, JDK 1.7.0_06,
          8-core Xeon X5570 CPU @ 2.93 GHz with hyperthreading switched on.

          SL = Standard (synchronous) Logger
          AA/SA = AsynchAppender with Standard File Appender with BufferedIO=true
          AA/FA = AsynchAppender with Fast File Appender
          default buffer = 128
          large buffer = 262144 (same size as the ring buffer used in the Disruptor)
          IF = ImmediateFlush=true

          SL + (default buffer AA/SA): 17168 nanos (average of 18480, 13778, 17573, 17730, 18279)
          SL + (default buffer AA/FA): 16467 nanos (average of 16991, 17143, 17319, 13646, 17238)

          SL + (large buffer AA/SA): 17541 nanos (average of 17734, 17852, 17756, 16918, 17445)
          SL + (large buffer AA/FA): 17117 nanos (average of 17136, 17522, 16680, 17147, 17103)

          SL + (default buffer AA/SA) + IF: 16703 nanos (average of 17783, 17863, 12868, 18265, 16736)
          SL + (default buffer AA/FA) + IF: 13983 nanos (average of 18063, 11346, 13634, 13081, 13791)

          SL + (large buffer AA/SA) + IF: 17860 nanos (average of 17821, 17664, 17906, 18070, 17840)
          SL + (large buffer AA/FA) + IF: 17641 nanos (average of 17739, 17794, 16974, 17871, 17827)

          If you want to verify these results in your environment, you can unzip the attachment and
          run the AsyncLoggerPerfTest.bat and SyncLoggerPerfTest.bat scripts.
          They should work out of the box. Each script runs a single test with the log4j2.xml config in the bin/ folder.

          Remko Popma added a comment (edited) -

          Ralph,

          About logging throughput:
          I don't think it makes sense to look at logging throughput in isolation. What matters is application throughput and latency, and how logging latency contributes to that.

          You are right in one sense that with asynchronous logging, there is a queue, and when that queue is full, the logger becomes essentially synchronous and logging latency will become equal to the appender latency.

          A clear case of when logging latency (not throughput) is crucial is during activity peaks. Asynchronous logging can help prevent or dampen latency spikes during bursts of events.

          Let me illustrate with an example application:

          • on average, 5,000 events/second (on avg., a new event every 200 microseconds)
          • during bursts, 20,000 events/second (on avg., a new event every 50 microseconds)
          • bursts usually last up to 10 seconds
          • the business logic takes 30 microseconds to process an event (without logging)

          Now we want to add logging, let's say 10 Logger.log calls per event.

          • With synchronous logging, Logger.log takes 4 microseconds, 4 x 10 = 40 micros/event
          • With AsyncLogger, Logger.log takes 150 nanoseconds, 0.150 x 10 = 1.5 micros/event

          So the total latency of processing one event is:
          business logic latency + logging latency =
          30 + 40 = 70 micros/event with synchronous logging, and
          30 + 1.5 = 31.5 micros/event with asynchronous logging.

          The problem is that with synchronous logging we would fall behind during bursts.
          In our example, a 10 second burst causes a latency spike of 4 seconds.

          (Calculation: During a burst, events come in at a rate of 20,000 events/sec (50 micros per event).
          We need 70 micros/event, so we would fall behind 20 micros per event.
          20,000 events/sec x 20 micros delay/event = 400 millis delay per second.)

          With AsyncLogger, the application would not experience any delay until the logger queue is full. Once the queue is full, the application experiences the same delay as with synchronous logging.
          To avoid these latency spikes, you configure your queue for the expected burst duration. In our example we may want space for ~2,000,000 LogEvents (20,000 events/sec x 10 log calls/event x 10 sec burst = 2,000,000).

          Of course, AsyncLogger is not a magic bullet; if bursts last longer than your queue size can accommodate, you will eventually end up with synchronous logging. But AsyncLogger can act like a buffer to dampen the latency spikes that your application would otherwise have during bursts.

          I think activity peaks are common for many applications, and this is one case where AsyncLogger can add value.

          Remko Popma added a comment (edited) -

          One more comment on the merits of async logging, and then I'll stop, I promise!

          Delegating logging to a separate thread does improve the throughput of the application.
          If the application spends half its time logging, then using async logging will double the application's throughput.

          (Sorry for the many comments and for taking so much of your time; I hope you still find this valuable.)

          Ralph Goers added a comment -

          You don't have to sell me on the merits of asynchronous logging. I fully intend to review your patch. I have just been very busy with other things.

          Remko Popma added a comment -

          The attached patches are based on trunk.

          Remko Popma added a comment -

          Ralph,

          The last attachment FastLog4j-v2-for-beta4.zip is based on current trunk. It contains the code for AsyncLogger, FastFileAppender and supporting code as well as implementations for JIRA tickets LOG4J2-151, LOG4J2-154 and LOG4J2-157.

          Because implementations for these JIRAs are included you can run the performance tests without making any changes to trunk. You only need to supply log4j-api-2.0-beta4-SNAPSHOT.jar and log4j-core-2.0-beta4-SNAPSHOT.jar generated from trunk.

          Best regards,
          Remko

          Remko Popma added a comment -

          Ralph,

          would you mind if I created a separate JIRA, "Create asynchronous Logger for low-latency logging", and moved the FastLog4j-v2-for-beta4.zip attachment to it? That way this JIRA can be purely about "facilitate subclassing", and the new JIRA can be used to discuss AsyncLogger details.

          -Remko

          Ralph Goers added a comment -

          That is fine. Even though you provided one patch I am reviewing the pieces one at a time.

          Ralph Goers added a comment -

          Patch applied in revision 1463078 from LOG4J2-163. Please verify and close.

          Remko Popma added a comment -

          Verified as complete.


            People

            • Assignee: Ralph Goers
            • Reporter: Remko Popma
            • Votes: 0
            • Watchers: 2
