Details
- Type: Improvement
- Status: Open
- Priority: Normal
- Resolution: Unresolved
Description
While profiling C* with multiple profilers, I've consistently seen a significant amount of time spent calculating MD5 digests.
Stack Trace                                                                                          Sample Count  Percentage(%)
sun.security.provider.MD5.implCompress(byte[], int)                                                  264           1.566
sun.security.provider.DigestBase.implCompressMultiBlock(byte[], int, int)                            200           1.187
sun.security.provider.DigestBase.engineUpdate(byte[], int, int)                                      200           1.187
java.security.MessageDigestSpi.engineUpdate(ByteBuffer)                                              200           1.187
java.security.MessageDigest$Delegate.engineUpdate(ByteBuffer)                                        200           1.187
java.security.MessageDigest.update(ByteBuffer)                                                       200           1.187
org.apache.cassandra.db.Column.updateDigest(MessageDigest)                                           193           1.145
org.apache.cassandra.db.ColumnFamily.updateDigest(MessageDigest)                                     193           1.145
org.apache.cassandra.db.ColumnFamily.digest(ColumnFamily)                                            193           1.145
org.apache.cassandra.service.RowDigestResolver.resolve()                                             106           0.629
org.apache.cassandra.service.RowDigestResolver.resolve()                                             106           0.629
org.apache.cassandra.service.ReadCallback.get()                                                      88            0.522
org.apache.cassandra.service.AbstractReadExecutor.get()                                              88            0.522
org.apache.cassandra.service.StorageProxy.fetchRows(List, ConsistencyLevel)                          88            0.522
org.apache.cassandra.service.StorageProxy.read(List, ConsistencyLevel)                               88            0.522
org.apache.cassandra.service.pager.SliceQueryPager.queryNextPage(int, ConsistencyLevel, boolean)     88            0.522
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(int)                                 88            0.522
org.apache.cassandra.service.pager.SliceQueryPager.fetchPage(int)                                    88            0.522
org.apache.cassandra.cql3.statements.SelectStatement.execute(QueryState, QueryOptions)               88            0.522
org.apache.cassandra.cql3.statements.SelectStatement.execute(QueryState, QueryOptions)               88            0.522
org.apache.cassandra.cql3.QueryProcessor.processStatement(CQLStatement, QueryState, QueryOptions)    88            0.522
org.apache.cassandra.cql3.QueryProcessor.process(String, QueryState, QueryOptions)                   88            0.522
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryState)                             88            0.522
org.apache.cassandra.transport.Message$Dispatcher.messageReceived(ChannelHandlerContext, MessageEvent)  88         0.522
org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(ChannelHandlerContext, ChannelEvent)  88       0.522
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline$DefaultChannelHandlerContext, ChannelEvent)  88  0.522
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(ChannelEvent)  88         0.522
org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun()                               88            0.522
org.jboss.netty.handler.execution.ChannelEventRunnable.run()                                         88            0.522
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker)                         88            0.522
java.util.concurrent.ThreadPoolExecutor$Worker.run()                                                 88            0.522
java.lang.Thread.run()                                                                               88            0.522
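The trace shows the MD5 work sitting on the digest-read path (ColumnFamily.updateDigest feeding RowDigestResolver.resolve). As a simplified sketch of why that is hot (this is illustrative pseudocode in Java, not actual Cassandra code; the class and method names here are hypothetical): for a read at consistency above ONE, one replica returns the full row while the others return only a hash of theirs, and the coordinator hashes the data response to compare against the replica digests, triggering read repair on mismatch.

```java
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of a digest read, NOT Cassandra's actual implementation.
// It shows why every digest-compared read pays for hashing the row.
public class DigestReadSketch {

    // Hash a row's serialized form; in Cassandra this is the MD5 work
    // visible in the profile above (ColumnFamily.updateDigest).
    public static byte[] digest(byte[] serializedRow) throws Exception {
        return MessageDigest.getInstance("MD5").digest(serializedRow);
    }

    // Returns true if every replica's digest matches a digest recomputed
    // from the data response, i.e. no read repair is needed (roughly the
    // role RowDigestResolver.resolve() plays in the trace).
    public static boolean resolve(byte[] dataRow, List<byte[]> replicaDigests) throws Exception {
        byte[] expected = digest(dataRow);
        for (byte[] d : replicaDigests) {
            if (!Arrays.equals(expected, d)) {
                return false; // digest mismatch: coordinator must repair
            }
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        byte[] row = "k1:v1".getBytes();
        System.out.println("replicas consistent: "
                + resolve(row, List.of(digest(row), digest(row))));
    }
}
```

Note the coordinator hashes the full data response on every digest-compared read, which is why per-byte digest cost shows up so prominently under read-heavy workloads.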
Pending CASSANDRA-13291, it would be pretty easy to:
- Switch the hashing implementation from MD5 to an alternative such as adler128 or murmur3_128 (though certainly not limited to those), and do some profiling to compare the net improvement in latencies and CPU usage.
- Since we can't switch the algorithm away from MD5 without breaking things, we could rev the MessagingService protocol version, as we already do for changes like switching compression from Snappy to LZ4; the new hashing implementation would take effect once all peers in the cluster have been upgraded and support the new MessagingService version.
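To make the first bullet concrete, a minimal JDK-only micro-comparison might look like the following. This is a hedged sketch, not the benchmark harness the ticket proposes: Adler32 (from java.util.zip) stands in for the cheaper non-cryptographic hash family, since neither adler128 nor murmur3_128 ships in the JDK (murmur3_128 would come from Guava's Hasher per CASSANDRA-13291), and the class name DigestComparison is made up for illustration.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.zip.Adler32;

// Rough per-digest cost comparison: MD5 via MessageDigest vs. a cheap
// JDK checksum (Adler32) as a stand-in for the faster hashes discussed
// in this ticket. Not a rigorous benchmark (no warmup/JMH), just an
// illustration of the order-of-magnitude gap.
public class DigestComparison {

    // Hex-encoded MD5 of the input, roughly what the digest path computes.
    public static String md5Hex(byte[] input) throws NoSuchAlgorithmException {
        byte[] out = MessageDigest.getInstance("MD5").digest(input);
        StringBuilder sb = new StringBuilder();
        for (byte b : out) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Adler32 checksum of the input (stand-in for a cheaper hash).
    public static long adler32(byte[] input) {
        Adler32 a = new Adler32();
        a.update(input, 0, input.length);
        return a.getValue();
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = new byte[4096];
        for (int i = 0; i < payload.length; i++) payload[i] = (byte) i;
        int iterations = 100_000;

        MessageDigest md5 = MessageDigest.getInstance("MD5");
        long t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) md5.digest(payload); // digest() resets for reuse
        long md5Nanos = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        for (int i = 0; i < iterations; i++) adler32(payload);
        long adlerNanos = System.nanoTime() - t1;

        System.out.printf("MD5:     %d ms%n", md5Nanos / 1_000_000);
        System.out.printf("Adler32: %d ms%n", adlerNanos / 1_000_000);
    }
}
```

A real comparison for the ticket would run under a proper harness (e.g. JMH) and measure the end-to-end effect on read latency, not just raw digest throughput.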
Attachments
Issue Links
- is blocked by
  - CASSANDRA-13291 Replace usages of MessageDigest with Guava's Hasher (Resolved)
- is related to
  - CASSANDRA-14611 Quorum reads of wide partitions are expensive due to inefficient hashing in AbstractCell (Open)