Details
- Type: Bug
- Status: Resolved
- Priority: Normal
- Resolution: Duplicate
- Environment: Pre-prod
Description
Hi,
We are ingesting 6 million records every 15 minutes into one DTCS table and relying on Cassandra's TTL expiry to purge the data. The table schema is given below.
Issue 1: We expected that an SSTable created on day d1 would not be compacted after d1, but that is not what we observe; we do see some data being purged, though at seemingly random intervals.
Issue 2: When we run an incremental repair with "nodetool repair keyspace table -inc -pr", each SSTable is split into multiple smaller SSTables, increasing the total storage used. The behaviour is the same regardless of which node the repair is run on or how many times it is run (see the sketch below).
There are also mutation drops in the cluster.
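A rough way to see what the incremental repair does to the on-disk files (a sketch only; the data path and keyspace name below are placeholders, and sstablemetadata output fields can vary by version) is to compare the SSTable count and per-file repair metadata before and after the repair. Incremental repair's anticompaction step rewrites each affected SSTable into a repaired and an unrepaired part, which would show up as extra, smaller files with different "Repaired at" values:

# SSTable count before and after the incremental repair
nodetool cfstats keyspace.TableA | grep "SSTable count"

# Per-SSTable repair status and timestamp range
for f in /var/lib/cassandra/data/keyspace/TableA-*/*-Data.db; do
    echo "== $f"
    sstablemetadata "$f" | grep -E "Repaired at|Minimum timestamp|Maximum timestamp"
done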
Table:
CREATE TABLE TableA (
    F1 text,
    F2 int,
    createts bigint,
    stats blob,
    PRIMARY KEY ((F1, F2), createts)
) WITH CLUSTERING ORDER BY (createts DESC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND comment = ''
    AND compaction = {'min_threshold': '12', 'max_sstable_age_days': '1', 'base_time_seconds': '50', 'class': 'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND dclocal_read_repair_chance = 0.0
    AND default_time_to_live = 93600
    AND gc_grace_seconds = 3600
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';
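To put numbers on the expected purge behaviour (a sketch only; <data-file> is a placeholder for one of the table's *-Data.db files, and the grepped field names are taken from sstablemetadata output and may differ by version): with default_time_to_live = 93600 s (26 h) and gc_grace_seconds = 3600 s (1 h), everything written to an SSTable has expired and passed gc_grace roughly 27 hours after its newest write, and only then can the whole file be dropped without a compaction rewrite (and only if it does not overlap newer SSTables):

# TTL + gc_grace: 93600 s (26 h) + 3600 s (1 h) ~= 27 h until a fully expired
# SSTable is droppable as a whole; check how close a given file is to that point
sstablemetadata <data-file> | grep -E "Maximum timestamp|SSTable max local deletion time|Estimated droppable tombstones"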
Thanks
Attachments
Issue Links
- duplicates: CASSANDRA-9644 "DTCS configuration proposals for handling consequences of repairs" (Resolved)