Details
- Type: Improvement
- Status: Resolved
- Priority: Normal
- Resolution: Fixed
Description
Just spotted that we allocate potentially large amounts of garbage on bloom filter lookups: we allocate a new long[] for each hash() call, and another to store the bucket indexes we visit, in a manner that guarantees the arrays land on the heap. With a lot of sstables and many requests, this could easily amount to hundreds of megabytes of young-gen churn per second.
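A minimal sketch of the pattern being described, using hypothetical names rather than Cassandra's actual classes: the first variant returns a fresh long[] from every hash call, so each lookup produces garbage; the second writes the hashes into a reusable per-thread scratch buffer, so the hot path allocates nothing.

```java
import java.util.BitSet;

// Toy bloom filter illustrating per-lookup allocation vs. a reusable
// scratch buffer. Names and hash function are illustrative only.
public class BloomSketch {
    private final BitSet bits = new BitSet(1 << 16);
    private final int hashCount = 5;

    // Allocating variant: a fresh long[] per call, guaranteed heap garbage.
    long[] hashAllocating(byte[] key) {
        long h1 = 0, h2 = 0;
        for (byte b : key) { h1 = h1 * 31 + b; h2 = h2 * 37 + b; } // toy hash
        return new long[] { h1, h2 };                              // new array every lookup
    }

    // Garbage-free variant: hashes are written into a caller-supplied buffer.
    void hashInto(byte[] key, long[] out) {
        long h1 = 0, h2 = 0;
        for (byte b : key) { h1 = h1 * 31 + b; h2 = h2 * 37 + b; }
        out[0] = h1;
        out[1] = h2;
    }

    // One scratch long[2] per thread, reused across all lookups.
    private static final ThreadLocal<long[]> SCRATCH =
            ThreadLocal.withInitial(() -> new long[2]);

    public void add(byte[] key) {
        long[] h = SCRATCH.get();
        hashInto(key, h);
        for (int i = 0; i < hashCount; i++)
            bits.set((int) Math.floorMod(h[0] + i * h[1], (long) bits.size()));
    }

    public boolean isPresent(byte[] key) {
        long[] h = SCRATCH.get();          // no allocation on the query path
        hashInto(key, h);
        for (int i = 0; i < hashCount; i++)
            if (!bits.get((int) Math.floorMod(h[0] + i * h[1], (long) bits.size())))
                return false;
        return true;                       // maybe-present (false positives possible)
    }

    public static void main(String[] args) {
        BloomSketch f = new BloomSketch();
        f.add("row-key-1".getBytes());
        System.out.println(f.isPresent("row-key-1".getBytes())); // true: added keys always hit
    }
}
```

With many sstables, each read consults one filter per sstable, so eliminating the two small arrays per lookup removes allocation that scales with both sstable count and request rate.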