Details
- Type: Bug
- Status: Resolved
- Priority: Normal
- Resolution: Cannot Reproduce
- Environment: C* 1.2.6, Ubuntu 12.04.2 LTS, java version "1.7.0_25" (oracle)
- Severity: Normal
Description
With Cassandra 1.2.6 configured to use vnodes and running on
SSD/10GigE instances (EC2 hi1.4xlarge), we are noticing quite poor
streaming performance while bootstrapping a new node: a maximum
streaming rate of approximately 5-6 MB/sec per C* instance sending
streams. Between the same nodes we can rsync the same sstables and
observe at least 115 MB/sec, so we don't believe there is a hardware
limitation.
With Cassandra 1.1.12 we observe higher streaming throughputs, even on
slower hardware. Currently this means that adding or replacing nodes in
our 1.2.6 ring takes hours, even with relatively small storage loads.
Streaming throughput on all nodes involved was set to 200+ MB/sec. The
nodes were operating at an average CPU usage of 23%, with spikes up
to a maximum of 45%.
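For reference, the streaming cap can also be raised at runtime via nodetool rather than by editing the yaml and restarting; a sketch, assuming a 1.2-era nodetool and a local node (host and value here are illustrative, not the ones used in our tests):

```shell
# Raise the per-node streaming throughput cap at runtime (illustrative value).
# This adjusts the same limit as stream_throughput_outbound_megabits_per_sec
# in cassandra.yaml; 0 disables the cap entirely.
nodetool -h 127.0.0.1 setstreamthroughput 400

# Check what the node was started with (path assumes a package install):
grep stream_throughput_outbound_megabits_per_sec /etc/cassandra/cassandra.yaml
```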
We are using the Oracle JVM 1.7.0_25 and have JNA installed. Our heap
is 10G with a 2G new generation.
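For completeness, a heap of that shape is normally pinned in cassandra-env.sh rather than left to Cassandra's auto-sizing; a minimal sketch of the fragment matching the sizes above (values as stated, file location per a standard install):

```shell
# cassandra-env.sh fragment: fixed 10G heap with a 2G new generation.
# Setting MAX_HEAP_SIZE disables the automatic heap calculation;
# HEAP_NEWSIZE must then be set explicitly alongside it.
MAX_HEAP_SIZE="10G"
HEAP_NEWSIZE="2G"
```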
Our cassandra.yaml changes are the following (aside from
directory/cluster names):
>>>
+num_tokens: 256
-partitioner: org.apache.cassandra.dht.Murmur3Partitioner
+partitioner: org.apache.cassandra.dht.RandomPartitioner
-concurrent_reads: 32
-concurrent_writes: 32
+concurrent_reads: 128
+concurrent_writes: 128
-rpc_server_type: sync
+rpc_server_type: hsha
-compaction_throughput_mb_per_sec: 16
+compaction_throughput_mb_per_sec: 256
-read_request_timeout_in_ms: 10000
+read_request_timeout_in_ms: 6000
-endpoint_snitch: SimpleSnitch
+endpoint_snitch: Ec2Snitch
-internode_compression: all
+internode_compression: none
<<<
We originally tried with internode compression on, but disabled it to ensure it was not adding overhead.