Details
Description
Journal scalability with a replicated pair has degraded due to:
- a semantic change in journal sync that caused the Netty event loop on the backup to wait for each journal operation to hit the disk - see https://issues.apache.org/jira/browse/ARTEMIS-2837
- a semantic change in NettyConnection::write when called from within the Netty event loop: it now writes and flushes buffers immediately, whereas it previously delayed the write by re-offering it to the event loop - see https://issues.apache.org/jira/browse/ARTEMIS-2205 (in particular https://github.com/apache/activemq-artemis/commit/a40a459f8c536a10a0dccae6e522ec38f09dd544#diff-3477fe0d8138d589ef33feeea2ecd28eL377-L392)
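The effect of the second change can be illustrated with a minimal, Netty-free model (class and method names here are illustrative, not the actual Artemis or Netty APIs): writing-and-flushing per packet costs one flush each, while re-offering the flush to the event loop lets several writes queued in the same batch share a single flush.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical model of the two write strategies discussed in ARTEMIS-2205.
public class WriteSemantics {
    final Deque<Runnable> eventLoopTasks = new ArrayDeque<>(); // mock event loop queue
    final List<String> pendingBuffers = new ArrayList<>();
    int flushCount = 0;

    // Post-change behavior: write and flush immediately, one flush per packet.
    void writeAndFlush(String buf) {
        pendingBuffers.add(buf);
        flush();
    }

    // Pre-change behavior: buffer the write and re-offer the flush to the
    // event loop; all writes queued before the first task runs share one flush.
    void writeDeferred(String buf) {
        pendingBuffers.add(buf);
        eventLoopTasks.add(this::flushIfPending);
    }

    void flushIfPending() {
        if (!pendingBuffers.isEmpty()) flush();
    }

    void flush() {
        pendingBuffers.clear(); // stands in for a single network flush
        flushCount++;
    }

    void runEventLoop() {
        Runnable task;
        while ((task = eventLoopTasks.poll()) != null) task.run();
    }

    public static void main(String[] args) {
        WriteSemantics immediate = new WriteSemantics();
        for (int i = 0; i < 3; i++) immediate.writeAndFlush("packet-" + i);
        System.out.println("immediate flushes: " + immediate.flushCount); // 3

        WriteSemantics deferred = new WriteSemantics();
        for (int i = 0; i < 3; i++) deferred.writeDeferred("packet-" + i);
        deferred.runEventLoop();
        System.out.println("deferred flushes: " + deferred.flushCount);   // 1
    }
}
```

In the deferred case the three flush tasks all sit behind the three writes, so the first task drains every pending buffer and the remaining tasks find nothing to flush.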
The former issue has been solved by reverting the changes and reimplementing them without introducing any semantic change.
The latter needs some more explanation to be understood:
- ReplicationEndpoint is responsible for handling the packets sent by the live broker
- Netty delivers incoming packets to ReplicationEndpoint in batches
- after each processed packet coming from the live broker (which will likely append something to the journal), a replication response packet needs to be sent back from the backup to the live broker: in the original behavior (< 2.7.0) the responses were not flushed to the connection until the whole batch of packets had been processed, causing the journal to append records in bursts and amortizing the full cost of waking the I/O thread responsible for appending data to the journal.
To emulate the original "bursty" behavior, while making the batching more explicit (and tunable), the fix is to:
- use Netty's ChannelInboundHandler::channelReadComplete event to flush each batch of packet responses, as before
- [OPTIONAL] implement a new append executor on the journal to further reduce the cost of waking the appending thread
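The channelReadComplete-based fix can be sketched with a minimal model (again Netty-free; the class and method names below are illustrative stand-ins, not the actual Artemis classes): responses produced while reading a batch are buffered, and a single flush happens on the event that marks the end of the read batch.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the fix: buffer one response per replicated packet and
// flush them all together when the read batch completes, restoring the
// pre-2.7.0 bursty shape of journal appends and network flushes.
public class BatchedResponses {
    final List<String> pendingResponses = new ArrayList<>();
    int flushCount = 0;

    // Called once per replicated packet (analogue of Netty's channelRead):
    // produce the response but do not flush it yet.
    void onPacket(String packet) {
        pendingResponses.add("response-to-" + packet);
    }

    // Called once per read batch (analogue of Netty's channelReadComplete):
    // send all buffered responses back to the live broker in one flush.
    void onReadComplete() {
        if (pendingResponses.isEmpty()) return;
        pendingResponses.clear(); // stands in for a single network flush
        flushCount++;
    }

    public static void main(String[] args) {
        BatchedResponses endpoint = new BatchedResponses();
        for (int i = 0; i < 8; i++) endpoint.onPacket("packet-" + i);
        endpoint.onReadComplete();
        System.out.println("flushes for 8 packets: " + endpoint.flushCount); // 1
    }
}
```

Making the batch boundary an explicit event is also what allows the batching to be exposed as a tunable, as proposed in ARTEMIS-3282.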
Attachments
Issue Links
- fixes: ARTEMIS-2852 Huge performance decrease between versions 2.2.0 and 2.13.0 (Closed)
- relates to: ARTEMIS-3282 Expose Replication response batching tuning (Open)