A memory-mapped file appender may have better performance than the ByteBuffer + RandomAccessFile combination used by the RandomAccessFileAppender.
- The drawback is that the file needs to be pre-allocated, and only a region up to the file size can be mapped into memory. When the end of the mapped region is reached, the appender needs to extend the file and re-map.
- Remapping is expensive (likely in the single-digit millisecond range; this needs to be measured). For low-latency apps a latency spike of that size may be unacceptable, so careful tuning is required.
- Memory usage: if re-mapping happens too often the performance benefit is lost, so the memory-mapped buffer needs to be fairly large, which costs memory.
- At roll-over and at shutdown the file should be truncated to just after the last written byte; otherwise the user is left with a log file that ends in a tail of zero bytes (garbage).
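The life cycle described in the bullets above (pre-allocate by mapping, write until the region is exhausted, re-map, truncate at shutdown) can be sketched as follows. This is a minimal illustration, not the appender's actual implementation; the class name, region size, and event payload are made up for the example. Note that truncating while a mapping is still open works on Linux but fails on Windows, where the mapping would have to be released first (the JDK has no public unmap API).

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the map / extend / re-map / truncate life cycle.
public class MappedAppenderSketch {
    static final int REGION_SIZE = 1 << 20; // 1 MiB mapped region (tunable)

    public static void main(String[] args) throws IOException {
        Path path = Files.createTempFile("mmap-appender", ".log");
        try (RandomAccessFile raf = new RandomAccessFile(path.toFile(), "rw")) {
            FileChannel channel = raf.getChannel();
            long mappedAt = 0;
            // map() pre-allocates: it extends the file to mappedAt + REGION_SIZE.
            MappedByteBuffer buf =
                    channel.map(FileChannel.MapMode.READ_WRITE, mappedAt, REGION_SIZE);

            byte[] event = "hello, mapped world\n".getBytes();
            for (int i = 0; i < 100_000; i++) {
                if (buf.remaining() < event.length) {
                    // End of region reached: re-map the next region
                    // (this is the expensive step the notes warn about).
                    mappedAt += buf.position();
                    buf = channel.map(FileChannel.MapMode.READ_WRITE, mappedAt, REGION_SIZE);
                }
                buf.put(event); // a plain memory write, no system call
            }

            // At roll-over/shutdown: truncate to just after the last written byte
            // so the file does not end in a tail of zero bytes.
            long logicalEnd = mappedAt + buf.position();
            buf.force(); // flush dirty pages to disk
            channel.truncate(logicalEnd);
            System.out.println(channel.size() == logicalEnd);
        }
        Files.deleteIfExists(path);
    }
}
```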
Measured on a Solaris box, the difference between writing to disk (with RandomAccessFile.write(bytes)) and putting data into a MappedByteBuffer is roughly 20x: around 600 ns for a ByteBuffer put versus around 12-15 microseconds for a RandomAccessFile.write call.
(Of course different hardware and OS may give different results...)
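To check the numbers on other hardware, a rough comparison harness might look like the following. This is an assumption-laden micro-benchmark sketch (no JIT warmup control, arbitrary 256-byte payload, fixed iteration count); use a proper benchmark harness such as JMH for real measurements.

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

// Rough harness for reproducing the RandomAccessFile.write vs.
// MappedByteBuffer.put comparison on your own hardware and OS.
public class WriteLatencyCompare {
    public static void main(String[] args) throws Exception {
        final int iterations = 100_000;
        byte[] payload = new byte[256]; // hypothetical log event size

        Path rafPath = Files.createTempFile("raf", ".log");
        try (RandomAccessFile raf = new RandomAccessFile(rafPath.toFile(), "rw")) {
            long t0 = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                raf.seek(0);
                raf.write(payload); // write(2) system call per iteration
            }
            System.out.printf("RandomAccessFile.write: %d ns/op%n",
                    (System.nanoTime() - t0) / iterations);
        }

        Path mapPath = Files.createTempFile("map", ".log");
        try (RandomAccessFile raf = new RandomAccessFile(mapPath.toFile(), "rw")) {
            MappedByteBuffer buf =
                    raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, payload.length);
            long t0 = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                buf.position(0);
                buf.put(payload); // memory write only; the OS flushes pages later
            }
            System.out.printf("MappedByteBuffer.put:   %d ns/op%n",
                    (System.nanoTime() - t0) / iterations);
        }
        Files.deleteIfExists(rafPath);
        Files.deleteIfExists(mapPath);
    }
}
```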
The difference is most visible when immediateFlush is set to true, which is only recommended when async loggers/appenders are not used. With immediateFlush=false, the large buffer used by the RandomAccessFileAppender means the disk is touched only rarely anyway.
So a MemoryMappedFileAppender is most useful in synchronous logging scenarios: you get the speed of writing to memory, yet the data is available to the OS almost immediately, because a mapped write goes straight to the OS page cache.
In case of an application crash, the OS ensures that all data already in the buffer will still be written to disk. In case of an OS crash, the data most recently added to the buffer may be lost.
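When OS-crash durability matters for a particular event, the dirty pages can be flushed on demand with MappedByteBuffer.force(). A minimal sketch (file name and payload are illustrative only):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

// Demonstrates forcing a mapped buffer's dirty pages to disk.
public class ForceOnDemand {
    public static void main(String[] args) throws Exception {
        Path path = Files.createTempFile("force", ".log");
        try (RandomAccessFile raf = new RandomAccessFile(path.toFile(), "rw")) {
            MappedByteBuffer buf =
                    raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.put("critical event\n".getBytes());
            // Surviving an application crash needs nothing extra: the OS owns
            // the dirty pages. Surviving an OS crash or power loss requires
            // forcing them out (msync(2) under the hood on POSIX systems):
            buf.force();
            System.out.println("forced");
        }
        Files.deleteIfExists(path);
    }
}
```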
Because this appender by nature occupies a fair amount of memory, it is most suitable for applications running on server-class hardware with plenty of memory available.