I was wondering whether there is any logic for deciding which bloom filter should be checked first, so as to increase the probability of finding the result, rather than just minimizing the false-positive probability.
(Note: I have looked into the code, and I am not talking about "Getting BloomFilter with the lowest practical false positive probability" or "Getting smallest BloomFilter that can provide the given false positive probability rate for the given number of elements.")
Consider the following scenario:
1) In our Cassandra cluster we insert 130 million rows per day into a single column family, and in practice we cannot keep this data compacted at all times. (Loading already takes a long time, and compaction could take so long that it would affect the schedule for loading the next day's data.)
2) We insert the same row keys every day (the row keys of all 130 million rows are the same), each day under a different supercolumn.
3) So if we do not compact the data for, say, 30 days, each row key is present in 30 different sstables.
4) So in the worst case, even with a false-positive probability of 0, a single read can incur up to 30 disk accesses, most of them unnecessary.
5) Because of this, we are experiencing severely degraded read performance.
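To make the cost concrete, here is a rough sketch (hypothetical function name, not Cassandra code) of the expected disk accesses per read: every sstable that genuinely contains the key passes its bloom filter, and each sstable that does not contain the key still adds a false-positive chance.

```python
def expected_disk_accesses(tables_with_key, total_tables, fp_rate):
    """Expected number of sstables touched per read: every sstable that
    really contains the key passes its bloom filter (a true positive),
    and each remaining sstable adds a false-positive chance."""
    return tables_with_key + (total_tables - tables_with_key) * fp_rate

# 30 uncompacted sstables that all contain the row key: even a perfect
# bloom filter (fp_rate = 0) cannot avoid 30 candidate disk accesses.
print(expected_disk_accesses(30, 30, 0.0))
```

This is why lowering the false-positive rate alone does not help in this scenario: the bloom filters are answering correctly, so the order in which sstables are tried is the only remaining lever.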
1) We could order the bloom filters so that the filter of the sstable that most recently served a read successfully gets higher priority than the others.
In other words, check first the bloom filter of the most recently accessed sstable that actually returned the requested columns (an MRU approach, since the probability of finding the result in the MRU sstable is higher). This way we can reduce disk accesses.
2) The point is that we should have some logic for ordering bloom filters to improve read performance in cases where sstables have not yet been compacted.