Description
I am running Cassandra on a Debian machine with 256MB of memory for a stress test (the small heap makes the problem easier to reproduce).
These are my settings, apart from the defaults:
MAX_HEAP_SIZE="192M"
HEAP_NEWSIZE="16M"
commitlog_segment_size_in_mb: 4
flush_largest_memtables_at: 0.5 (memory is very tight, so flush earlier)
concurrent_reads: 16
concurrent_writes: 8
memtable_total_space_in_mb: 64
commitlog_total_space_in_mb: 4
memtable_flush_queue_size: 6
in_memory_compaction_limit_in_mb: 4
concurrent_compactors: 1
stream_throughput_outbound_megabits_per_sec: 400
rpc_timeout_in_ms: 60000
And this is my schema:
create keyspace PT
with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
and strategy_options = [
];
use PT;
create column family cheque
with comparator = UTF8Type
and key_validation_class = UTF8Type
and default_validation_class = UTF8Type
and column_metadata = [
];
I tried to insert a 50KB file per record using Hector 1.1.0.
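For reference, my loader looks roughly like the sketch below (this is a minimal illustration, not my exact code; the cluster name, host, row keys, column name and the dummy byte array are placeholders):
import me.prettyprint.cassandra.serializers.BytesArraySerializer;
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class ChequeLoader {
    public static void main(String[] args) {
        // Connect to the local node and get a mutator for the PT keyspace.
        Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "localhost:9160");
        Keyspace keyspace = HFactory.createKeyspace("PT", cluster);
        Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());

        byte[] fileBytes = new byte[50 * 1024]; // stands in for the ~50KB file content
        for (int i = 0; i < 1000000; i++) {
            // One row per record, a single "data" column holding the file bytes.
            mutator.insert("cheque-" + i, "cheque",
                HFactory.createColumn("data", fileBytes, StringSerializer.get(), BytesArraySerializer.get()));
        }
    }
}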
I did not configure any swap, since that is not recommended.
GC keeps running and reports that 0.80 of the heap is used; the figure keeps climbing, eventually reaches 0.99, and of course an OOM follows.
So my question is:
Could the server block incoming inserts once heap usage reaches 0.95? Is that feasible?
I know Hector will retry when a TimeoutException occurs, so implementing blocking on the server seems better than leaving throttling as a problem for the client.
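To illustrate what I have in mind (purely an illustration on my side, not actual Cassandra code; the class and method names are made up), the write path could check heap usage before accepting a mutation:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapGuard {
    // Purely illustrative: report when the heap is nearly full, so the write
    // path could block or delay incoming mutations instead of letting the JVM OOM.
    static boolean heapNearlyFull() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return (double) heap.getUsed() / heap.getMax() > 0.95;
    }
}
If the server blocked on this condition, Hector would just see a TimeoutException and retry, which is acceptable to me.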
Sorry for my poor English; I am a complete Cassandra newbie, so this request may not be valid.
Thanks!