Apache Cassandra / CASSANDRA-11920

bloom_filter_fp_chance needs to be validated up front


Details

    Priority: Low

    Description

      Hi,

      I was doing some benchmarking on bloom_filter_fp_chance values. Everything worked fine for the values .01 (the default for STCS), .001, and .0001. But when I set bloom_filter_fp_chance = .00001, I observed the following behaviour:

      1). Reads and writes looked normal from cqlsh.
      2). SSTables are never created.
      3). It just creates two files (*-Data.db and *-Index.db) of size 0 KB.
      4). nodetool flush does not work and produces the following exception:

      java.lang.UnsupportedOperationException: Unable to satisfy 1.0E-5 with 20 buckets per element
      at org.apache.cassandra.utils.BloomCalculations.computeBloomSpec(BloomCalculations.java:150) .....

      I checked the BloomCalculations class, and the following lines are responsible for this exception:

      if (maxFalsePosProb < probs[maxBucketsPerElement][maxK])
      {
          throw new UnsupportedOperationException(String.format("Unable to satisfy %s with %s buckets per element",
                                                                maxFalsePosProb, maxBucketsPerElement));
      }

      From the code it looks like a hard-coded validation (unless we can change the number of buckets).
      So, if this validation is hard-coded, why is it even allowed to set a value of bloom_filter_fp_chance that can prevent SSTable generation?

      Please correct this issue.
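
      For reference, with the 20-buckets-per-element cap the smallest achievable false-positive probability is roughly (1/2)^(20 * ln 2) ≈ 6.7e-5, which is why 1e-5 cannot be satisfied. Below is a minimal, self-contained sketch (not the attached patch) of how such a check could run up front when the table option is set, rather than at flush time. The class name FpChanceValidator, the MAX_BUCKETS_PER_ELEMENT constant, and the closed-form approximation are illustrative assumptions; the real BloomCalculations uses a precomputed probability table.

      public final class FpChanceValidator {
          // Cap corresponding to the "20 buckets per element" in the exception above.
          private static final int MAX_BUCKETS_PER_ELEMENT = 20;

          // Smallest false-positive probability reachable with the given
          // buckets-per-element budget, using the standard approximation
          // fp ~= (1 - e^(-k/bpe))^k with the optimal k = bpe * ln 2.
          // (Assumption: Cassandra's BloomCalculations uses a lookup table instead.)
          static double minSupportedFpChance(int bucketsPerElement) {
              double k = bucketsPerElement * Math.log(2);
              return Math.pow(1 - Math.exp(-k / bucketsPerElement), k);
          }

          // Reject impossible settings when bloom_filter_fp_chance is set,
          // instead of failing later in nodetool flush.
          static void validate(double fpChance) {
              double min = minSupportedFpChance(MAX_BUCKETS_PER_ELEMENT);
              if (fpChance < min) {
                  throw new IllegalArgumentException(String.format(
                      "bloom_filter_fp_chance %s cannot be satisfied with %s buckets per element"
                      + " (minimum supported value is %s)",
                      fpChance, MAX_BUCKETS_PER_ELEMENT, min));
              }
          }

          public static void main(String[] args) {
              validate(0.0001);           // fine: above the ~6.7e-5 floor
              try {
                  validate(0.00001);      // rejected up front instead of at flush time
              } catch (IllegalArgumentException e) {
                  System.out.println(e.getMessage());
              }
          }
      }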

      Attachments

        1. 11920-3.0.txt (3 kB, Arindam Gupta)


          People

            arindamg Arindam Gupta
            adarsh0007@gmail.com ADARSH KUMAR
            Arindam Gupta
            Tom Hobbs
            ADARSH KUMAR
            Votes: 0
            Watchers: 4
