
CASSANDRA-6345: Endpoint cache invalidation causes CPU spike (on vnode rings?)


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Normal
    • Resolution: Fixed
    • Fix Version/s: 1.2.13, 2.0.4
    • Component/s: None
    • Labels: None
    • Environment: 30 nodes total, 2 DCs
      Cassandra 1.2.11
      vnodes enabled (256 per node)
    • Severity: Normal

    Description

      We've observed that events which invalidate the endpoint cache in AbstractReplicationStrategy (keyspace updates, adding/removing nodes, etc.) result in several seconds of thundering-herd behavior across the entire cluster.

      A thread dump shows over a hundred threads (I stopped counting at that point) with a backtrace like this:

      at java.net.Inet4Address.getAddress(Inet4Address.java:288)
      at org.apache.cassandra.locator.TokenMetadata$1.compare(TokenMetadata.java:106)
      at org.apache.cassandra.locator.TokenMetadata$1.compare(TokenMetadata.java:103)
      at java.util.TreeMap.getEntryUsingComparator(TreeMap.java:351)
      at java.util.TreeMap.getEntry(TreeMap.java:322)
      at java.util.TreeMap.get(TreeMap.java:255)
      at com.google.common.collect.AbstractMultimap.put(AbstractMultimap.java:200)
      at com.google.common.collect.AbstractSetMultimap.put(AbstractSetMultimap.java:117)
      at com.google.common.collect.TreeMultimap.put(TreeMultimap.java:74)
      at com.google.common.collect.AbstractMultimap.putAll(AbstractMultimap.java:273)
      at com.google.common.collect.TreeMultimap.putAll(TreeMultimap.java:74)
      at org.apache.cassandra.utils.SortedBiMultiValMap.create(SortedBiMultiValMap.java:60)
      at org.apache.cassandra.locator.TokenMetadata.cloneOnlyTokenMap(TokenMetadata.java:598)
      at org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:104)
      at org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2671)
      at org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:375)

      It looks like most of the cost is in TokenMetadata.cloneOnlyTokenMap(), which AbstractReplicationStrategy.getNaturalEndpoints() calls on every endpoint cache miss. Since the clone copies the entire token map, the cost grows with the number of tokens in the ring, so this should only really hurt clusters with large token counts; it's probably a vnodes-only issue.
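
      For context, the miss path looks roughly like this (a simplified sketch reconstructed from the backtrace above, not the exact 1.2 source; helper names like firstToken, getCachedEndpoints, calculateNaturalEndpoints, and cacheEndpoint are paraphrased):

      public ArrayList<InetAddress> getNaturalEndpoints(RingPosition searchPosition)
      {
          Token searchToken = searchPosition.getToken();
          Token keyToken = TokenMetadata.firstToken(tokenMetadata.sortedTokens(), searchToken);
          ArrayList<InetAddress> endpoints = getCachedEndpoints(keyToken);
          if (endpoints == null)
          {
              // Every miss pays for a full clone of the token map: with 256 vnodes
              // on each of 30 nodes, that's ~7680 token entries copied into a fresh
              // TreeMultimap, comparing InetAddress bytes along the way (the hot
              // frames in the thread dump).
              TokenMetadata tokenMetadataClone = tokenMetadata.cloneOnlyTokenMap();
              endpoints = new ArrayList<InetAddress>(calculateNaturalEndpoints(searchToken, tokenMetadataClone));
              cacheEndpoint(keyToken, endpoints);
          }
          return new ArrayList<InetAddress>(endpoints);
      }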

      Proposal: in AbstractReplicationStrategy.getNaturalEndpoints(), cache the cloned TokenMetadata instance returned by TokenMetadata.cloneOnlyTokenMap(), wrap access to it in a lock to prevent stampedes, and clear it in clearEndpointCache(). Thoughts?
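
      Roughly what I'm picturing (an illustrative sketch only, not a patch; cachedTokenMap, cloneLock, and cachedEndpoints are placeholder names):

      private volatile TokenMetadata cachedTokenMap;
      private final Object cloneLock = new Object();

      private TokenMetadata getCachedTokenMapClone()
      {
          TokenMetadata tm = cachedTokenMap;
          if (tm == null)
          {
              // Lock so concurrent misses don't all clone at once (the thundering
              // herd seen in the thread dump above); only the first thread pays
              // for cloneOnlyTokenMap(), the rest reuse its result.
              synchronized (cloneLock)
              {
                  tm = cachedTokenMap;
                  if (tm == null)
                      cachedTokenMap = tm = tokenMetadata.cloneOnlyTokenMap();
              }
          }
          return tm;
      }

      public void clearEndpointCache()
      {
          cachedEndpoints.clear();
          cachedTokenMap = null;  // force a fresh clone against the updated ring on the next miss
      }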

      Attachments

        1. 6345.txt
          3 kB
          Jonathan Ellis
        2. 6345-rbranson.txt
          5 kB
          Rick Branson
        3. 6345-rbranson-v2.txt
          10 kB
          Rick Branson
        4. 6345-v2.txt
          3 kB
          Jonathan Ellis
        5. 6345-v3.txt
          4 kB
          Jonathan Ellis
        6. 6345-v4.txt
          9 kB
          Jonathan Ellis
        7. 6345-v5.txt
          8 kB
          Jonathan Ellis
        8. half-way-thru-6345-rbranson-patch-applied.png
          31 kB
          Rick Branson

            People

              Assignee: Jonathan Ellis (jbellis)
              Reporter: Rick Branson (rbranson)
              Votes: 2
              Watchers: 5
