Cassandra / CASSANDRA-4507

Can Cassandra block requests when it is super busy?


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Normal
    • Resolution: Duplicate
    • Environment: Debian squeeze 32-bit

    Description

      I have a Debian machine with 256MB of memory for the stress test (it makes the problem easier to reproduce).
      These are my settings, apart from the defaults:

      MAX_HEAP_SIZE="192M"
      HEAP_NEWSIZE="16M"
      commitlog_segment_size_in_mb: 4
      flush_largest_memtables_at: 0.5 (too little memory, so flush earlier...)
      concurrent_reads: 16
      concurrent_writes: 8
      memtable_total_space_in_mb: 64
      commitlog_total_space_in_mb: 4
      memtable_flush_queue_size: 6
      in_memory_compaction_limit_in_mb: 4
      concurrent_compactors: 1
      stream_throughput_outbound_megabits_per_sec: 400
      rpc_timeout_in_ms: 60000
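The emergency-flush threshold implied by these settings can be worked out directly; a minimal sketch of the arithmetic (variable names are mine, not Cassandra's), assuming flush_largest_memtables_at is applied against the 192 MB max heap:

```java
public class FlushMath {
    public static void main(String[] args) {
        long maxHeapMb = 192;      // MAX_HEAP_SIZE above
        double flushAt = 0.5;      // flush_largest_memtables_at
        long memtableSpaceMb = 64; // memtable_total_space_in_mb

        // Heap usage at which the largest memtables start being flushed
        long flushTriggerMb = (long) (maxHeapMb * flushAt);
        System.out.println("Flush triggers near " + flushTriggerMb + " MB of heap used");
        System.out.println("Memtables alone may hold up to " + memtableSpaceMb + " MB");
    }
}
```

So memtables can legitimately occupy 64 MB of a 96 MB trigger budget, leaving little slack for everything else on a heap this small.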

      And this is my schema:

      create keyspace PT
      with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
      and strategy_options = [{replication_factor:1}];

      use PT;

      create column family cheque
      with comparator = UTF8Type
      and key_validation_class = UTF8Type
      and default_validation_class = UTF8Type
      and column_metadata = [

      {column_name: acct_no, validation_class: UTF8Type, index_name: cheque_acct_no_idx, index_type: KEYS},
      {column_name: date, validation_class: UTF8Type, index_name: cheque_date_idx, index_type: KEYS},
      {column_name: bank, validation_class: UTF8Type, index_name: cheque_bank_idx, index_type: KEYS},
      {column_name: amount, validation_class: LongType, index_name: cheque_amout_idx, index_type: KEYS},
      {column_name: receipt, validation_class: UTF8Type, index_name: cheque_receipt_idx, index_type: KEYS},
      {column_name: create_timestamp, validation_class: LongType, index_name: cheque_create_timestamp_idx, index_type: KEYS},
      {column_name: image, validation_class: BytesType}

      ];
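With those KEYS indexes in place, the indexed columns support equality lookups from cassandra-cli; a hypothetical example (the account number is made up):

```
use PT;
get cheque where acct_no = '1234567890';
```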

      I tried to insert a 50KB file per record using Hector 1.1.0.
      I did not set up any swap, as it is not recommended.

      The GC runs and reports 0.80 of the heap used, and so on; the number finally reaches 0.99 and of course an OOM happens.

      So my question is the following:
      Can the server block incoming inserts when heap usage reaches 0.95? Is that feasible?
      I know Hector retries when a TimeoutException happens, so it would be good to implement a blocking feature on the server instead of throttling in the client.
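The behaviour asked for here amounts to an admission check against heap usage before accepting a write; a minimal illustration using the standard MemoryMXBean (the class name and threshold are mine, not anything Cassandra ships):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapGate {
    // Hypothetical cut-off: refuse new work above this heap fraction
    private static final double THRESHOLD = 0.95;
    private static final MemoryMXBean MEM = ManagementFactory.getMemoryMXBean();

    // Fraction of the max heap currently in use (0.0 if max is undefined)
    static double heapFraction() {
        MemoryUsage u = MEM.getHeapMemoryUsage();
        return u.getMax() > 0 ? (double) u.getUsed() / u.getMax() : 0.0;
    }

    // Admission check a request handler could consult before queuing an insert
    static boolean admit() {
        return heapFraction() < THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(admit() ? "accepted" : "rejected");
    }
}
```

On a freshly started JVM this prints "accepted"; a server applying such a gate would instead block or reject once heapFraction() crossed the threshold, which is the behaviour the report is asking for.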

      Sorry for my poor English; I am a complete Cassandra newbie, so my wish may not be valid.

      Thanks!


          People

            Assignee: Unassigned
            Reporter: Tommy Cheng (tomcheng76)
