Cassandra / CASSANDRA-6405

When making heavy use of counters, neighbor nodes occasionally enter a spiral of constant memory consumption

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Fix Version/s: 2.1 beta2
    • Component/s: None
    • Labels:
      None
    • Environment:

      RF of 3, 15 nodes.
      Sun Java 7 (also occurred in OpenJDK 6, and Sun Java 6).
      Xmx of 8G.
      No row cache.

      Description

      We're randomly running into an interesting issue on our ring. When making use of counters, we'll occasionally have 3 nodes (always neighbors) suddenly start filling up memory, CMSing, filling up again, and repeating. This pattern goes on for 5-20 minutes. Nearly all requests to the nodes time out during this period. Restarting one, two, or all three of the nodes does not resolve the spiral; after a restart the three nodes immediately start hogging memory again and CMSing constantly.

      When the issue resolves itself, all 3 nodes immediately get better. Sometimes it recurs in bursts, where the nodes will thrash for 20 minutes, be fine for 5, thrash for another 20, and repeat that cycle a few times.

      Cassandra logs nothing unusual during this period, other than the constant dropped read requests and the constant CMS runs. I have analyzed the log files prior to multiple distinct instances of this issue and have found no preceding events associated with it.

      I have verified that our apps are not performing any unusual number or type of requests during this time.

      This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.

      The way I've narrowed this down to counters is a bit naive: the issue started happening when we began making use of counter columns, and went away after we rolled back their use. I've repeated this attempted rollout on each version now, and it consistently rears its head every time. I should note the incident does seem to happen less frequently on 1.2.11 than on the previous versions.

      This incident has been consistent across multiple different types of hardware, as well as major kernel version changes (2.6 all the way to 3.2). The OS is operating normally during the event.

      I managed to get an hprof dump while the issue was happening in the wild. There is something notable in the class instance counts as reported by jhat. Here are the top 5 counts for this one node:

      5967846 instances of class org.apache.cassandra.db.CounterColumn 
      1247525 instances of class com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
      1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
      1246648 instances of class com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
      1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
      

      Is it normal or expected for CounterColumn to have that number of instances?

      The data model for how we use counters is as follows: between 50 and 20,000 counter columns per key. We currently have around 3 million keys total, but this issue also replicated when we only had a few thousand keys total. The average column count is around 1k, and the 90th percentile is 18k. New columns are added regularly, and columns are incremented regularly. No column or key deletions occur. We probably have 1-5k "hot" keys at any given time, spread across the entire ring. The R:W ratio is typically around 50:1. This is the only CF we're using counters on at this time. CF details are as follows:

          ColumnFamily: CommentTree
            Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
            Default column value validator: org.apache.cassandra.db.marshal.CounterColumnType
            Cells sorted by: org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
            GC grace seconds: 864000
            Compaction min/max thresholds: 4/32
            Read repair chance: 0.01
            DC Local Read repair chance: 0.0
            Populate IO Cache on flush: false
            Replicate on write: true
            Caching: KEYS_ONLY
            Bloom Filter FP chance: default
            Built indexes: []
            Compaction Strategy: org.apache.cassandra.db.compaction.LeveledCompactionStrategy
            Compaction Strategy Options:
              sstable_size_in_mb: 160
      
      
      
                      Column Family: CommentTree
                      SSTable count: 30
                      SSTables in each level: [1, 10, 19, 0, 0, 0, 0, 0, 0]
                      Space used (live): 4656930594
                      Space used (total): 4677221791
                      SSTable Compression Ratio: 0.0
                      Number of Keys (estimate): 679680
                      Memtable Columns Count: 8289
                      Memtable Data Size: 2639908
                      Memtable Switch Count: 5769
                      Read Count: 185479324
                      Read Latency: 1.786 ms.
                      Write Count: 5377562
                      Write Latency: 0.026 ms.
                      Pending Tasks: 0
                      Bloom Filter False Positives: 2914204
                      Bloom Filter False Ratio: 0.56403
                      Bloom Filter Space Used: 523952
                      Compacted row minimum size: 30
                      Compacted row maximum size: 4866323
                      Compacted row mean size: 7742
                      Average live cells per slice (last five minutes): 39.0
                      Average tombstones per slice (last five minutes): 0.0
      
      

      Please let me know if I can provide any further information. I can provide the hprof if desired, however it is 3GB so I'll need to provide it outside of JIRA.

      Attachments

      1. threaddump.txt (589 kB), Jason Harvey


          Activity

          alienth Jason Harvey added a comment -

          Just did some analysis under normal conditions. Typically, our nodes have less than 200k instances of org.apache.cassandra.db.CounterColumn. During this issue we had nearly 6 million instances, as shown above.

          alienth Jason Harvey added a comment -

          I have verified that an instance which exhibited the high instance count of CounterColumn classes returned to a lower count (from 5.5m to 180k) after the issue resolved itself, without a restart.

          jbellis Jonathan Ellis added a comment -

          It sounds like compaction tbh.

          alienth Jason Harvey added a comment -

          Jonathan Ellis There are no compaction tasks pending during the incident. Additionally, on an earlier occurrence I disabled compaction on the CF to no avail.

          Would a compaction pileup account for the huge number of class instances? I also find it somewhat unlikely that a compaction issue would appear on 3 nodes simultaneously, and then immediately resolve on 3 nodes simultaneously.

          alienth Jason Harvey added a comment -

          We're experiencing this issue right this moment (in fact, reddit is down as a result). Compactionstats on the three nodes that are spiking is as follows:

          pending tasks: 0
          Active compaction remaining time : n/a

          mishail Mikhail Stepura added a comment - - edited

          Jason Harvey can you take a thread dump when the issue happens?

          alienth Jason Harvey added a comment -

          Thread dump during incident.

          alienth Jason Harvey added a comment -

          Mikhail Stepura I have just attached a thread dump to this issue.

          Thanks!

          alienth Jason Harvey added a comment -

          Also, GC details after a CMS occurred (immediately followed by another CMS):

           Heap
            par new generation   total 276480K, used 31300K [0x00000005fae00000, 0x000000060da00000, 0x000000060da00000)
             eden space 245760K,   0% used [0x00000005fae00000, 0x00000005fae913e8, 0x0000000609e00000)
             from space 30720K, 100% used [0x000000060bc00000, 0x000000060da00000, 0x000000060da00000)
             to   space 30720K,   0% used [0x0000000609e00000, 0x0000000609e00000, 0x000000060bc00000)
            concurrent mark-sweep generation total 8081408K, used 1319539K [0x000000060da00000, 0x00000007fae00000, 0x00000007fae00000)
            concurrent-mark-sweep perm gen total 41060K, used 24529K [0x00000007fae00000, 0x00000007fd619000, 0x0000000800000000)
          
          mishail Mikhail Stepura added a comment -

          Jason Harvey How big is your key_cache_size_in_mb? And I assume you have compaction_preheat_key_cache: true, right?

          alienth Jason Harvey added a comment -

          Mikhail Stepura 100M currently. preheat is turned on.

          alienth Jason Harvey added a comment -

          I should note, Brandon Williams took a peek at the heap dump, and it was unfortunately caught just after a CMS, so it doesn't tell us much. I've been unable to get a heap dump from when the memory is full; despite the thing constantly CMSing, every dump I've taken shows what the heap looked like just after a CMS.

          The only solid clue still remaining is that instance count of CounterColumn.

          slebresne Sylvain Lebresne added a comment -

          In and of itself, having lots of instances of CounterColumn is not abnormal when doing lots of counter operations, as this is the class allocated for each counter value while inserting/reading them. If you do lots of normal operations, you'll similarly see a lot of Column objects allocated. The behavior you are seeing is not particularly normal, but having very many CounterColumn objects is not a definitive sign of a problem.

          That being said, do you do insertions at CL.ONE? If so, counters are kind of a time bomb in the sense that the read that is done as part of replication happens after we've answered the client. That means that if you insert too fast, replication tasks will pile up behind the scenes, and those tasks will hold memory that cannot be GCed. In particular, one thing to look at is the replicate_on_write stage in JMX: if pendingTasks are accumulating, that's likely your problem (you're inserting faster than your cluster can actually handle), in which case the basic solution consists in rate limiting the insertions so pending tasks don't pile up.
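
          For anyone who wants to watch that stage directly, here is a minimal JMX polling sketch. It assumes the default JMX port 7199 and the 1.2-era thread-pool MBean name org.apache.cassandra.request:type=ReplicateOnWriteStage with PendingTasks/ActiveCount attributes; verify both against your own deployment (nodetool tpstats reports the same numbers).

          import javax.management.MBeanServerConnection;
          import javax.management.ObjectName;
          import javax.management.remote.JMXConnector;
          import javax.management.remote.JMXConnectorFactory;
          import javax.management.remote.JMXServiceURL;

          public class ReplicateOnWriteWatcher
          {
              public static void main(String[] args) throws Exception
              {
                  String host = args.length > 0 ? args[0] : "localhost";
                  JMXServiceURL url = new JMXServiceURL(
                          "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
                  JMXConnector connector = JMXConnectorFactory.connect(url);
                  try
                  {
                      MBeanServerConnection mbs = connector.getMBeanServerConnection();
                      // Assumed MBean name for the ReplicateOnWrite thread pool in 1.2.
                      ObjectName stage = new ObjectName(
                              "org.apache.cassandra.request:type=ReplicateOnWriteStage");
                      // A steadily growing PendingTasks value would point at
                      // replicate-on-write backing up behind the scenes.
                      while (true)
                      {
                          System.out.println("pending=" + mbs.getAttribute(stage, "PendingTasks")
                                             + " active=" + mbs.getAttribute(stage, "ActiveCount"));
                          Thread.sleep(5000);
                      }
                  }
                  finally
                  {
                      connector.close();
                  }
              }
          }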

          alienth Jason Harvey added a comment - - edited

          Sylvain Lebresne When the issue is occurring, we have no pending ReplicateOnWrite threads. All pending threads are either reads or writes.

          When the crazy CounterColumn instance counts are reached, the number of reads/writes occurring on the table is drastically reduced, and they're all moving extremely slowly.

          If the CounterColumn instance count was legitimate, wouldn't we expect a huge number of reads/writes to be occurring, rather than a small few? Even during peak hours, we don't do more than 150 reads / 10 writes a second per cassandra node. When this issue occurs, that drops down to 2-3 reads and writes a second.

          Additionally, we allow up to 128 read threads concurrently. Most of the counter column rows have around 1k columns, with the 95th percentile having 18k columns. Even if every single read thread was dedicated to reading our largest countercolumn row (which they're not), that accounts for a maximum of ~2-3m counter columns being concurrently accessed.

          slebresne Sylvain Lebresne added a comment -

          If the CounterColumn instance count was legitimate, wouldn't we expect a huge number of reads/writes to be occurring, rather than a small few?

          I expressed myself badly. I'm not saying the exact number is normal, and you are definitely reaching a bad situation. I was merely saying that it's likely a consequence, not a cause, and that it unfortunately does not narrow down what the cause may be a whole lot. I'm not suggesting to ignore that information, though.

          Now, sorry to insist, but are you doing CL.ONE inserts?

          Because the fact is, if you do, we know that replicate on write tasks may easily pile up behind the scenes, which would hold CounterColumn objects in memory and might explain why neighboring nodes are affected together.

          Granted, the attached thread dump doesn't show a whole lot of activity on the ReplicateOnWriteStage, and the absence of pending tasks on that stage would suggest it's not the problem. Nonetheless, it's the best lead I have to offer so far.

          alienth Jason Harvey added a comment - - edited

          Sylvain Lebresne Whoops, thought I included that. We are doing QUORUM writes.

          Just dug through all of our logs and verified that we have never seen a pending count on ReplicateOnWriteStage during these incidents on any server. (Not only verified in thread dumps, but via periodic tpstats dumps).

          alienth Jason Harvey added a comment -

          I should also note a few other things I've tried on nodes experiencing this issue.

          • Wiping the keycache with a restart.
          • Disabling the keycache.
          • Disabling thrift.
          • Adjusting read thread concurrency down to 32 and up to 256.

          All of these attempts resulted in no change on the affected nodes. They continued to operate in the manner described above until they randomly got better. I have tried all of the above methods with a restart on a single server, as well as a restart on all three nodes.

          alienth Jason Harvey added a comment -

          Just had a thought. One thing I haven't tried is disabling hinted handoff on the affected nodes. When this issue is occurring, the constant CMSs result in a bunch of piled up hints. Perhaps something triggers this behaviour, and the hints keep it rolling until all hints have been handed off?

          Bit of a stretch, but I'm grasping for anything at this point. I'll try this next time to see if it changes the behaviour at all.

          alienth Jason Harvey added a comment -

          Happened again a few more times today, taking the site down.

          Pausing hinted handoff resulted in no change on the affected nodes.

          I've also verified that there is no abnormal number of requests via the org.apache.cassandra.metrics:type=ClientRequest mbeans.

          alienth Jason Harvey added a comment -

          We've started abandoning the use of the counter columns. This issue has taken the site down for several hours in the past few days, so I could not allow it to continue.

          Unfortunately this also means I won't have any place to reproduce this for continued troubleshooting.

          iamaleksey Aleksey Yeschenko added a comment -

          Jason Harvey As Sylvain said, counters currently cause a lot of allocations. I can't say for sure whether what I'm going to describe is the cause of your issue, but it definitely contributes to it.

          One issue is that all the counter shards of a counter are stored in a single cell, as a blob with sorted tuples (see the CounterContext class). And when we reconcile two counter cells, we have to allocate a third cell, large enough to hold the merged context. So unlike regular cells, where reconcile simply picks one of the two cells, reconcile for counter columns creates one more. This doesn't just affect reads; it also affects writes (to the memtable, including replication writes).
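
          To make the allocation pattern concrete, here is an illustrative sketch only (not Cassandra's actual CounterContext code; the Shard fields and the merge helper are made-up names): merging two shard lists sorted by counter id always produces a freshly allocated third list big enough for the union, so every counter reconcile creates a new cell-sized object.

          import java.util.ArrayList;
          import java.util.List;

          final class Shard implements Comparable<Shard>
          {
              final long counterId; // which replica wrote this shard
              final long clock;     // shard version
              final long value;     // this shard's contribution to the total

              Shard(long counterId, long clock, long value)
              {
                  this.counterId = counterId;
                  this.clock = clock;
                  this.value = value;
              }

              public int compareTo(Shard other)
              {
                  return Long.compare(counterId, other.counterId);
              }
          }

          final class ContextMerge
          {
              // Merge two contexts sorted by counterId, keeping the higher-clock shard
              // when both sides know the same replica. Note the fresh output list:
              // neither input can be reused, so every merge is a new allocation.
              static List<Shard> merge(List<Shard> left, List<Shard> right)
              {
                  List<Shard> merged = new ArrayList<Shard>(left.size() + right.size());
                  int i = 0, j = 0;
                  while (i < left.size() && j < right.size())
                  {
                      int cmp = left.get(i).compareTo(right.get(j));
                      if (cmp < 0)
                          merged.add(left.get(i++));
                      else if (cmp > 0)
                          merged.add(right.get(j++));
                      else
                      {
                          merged.add(left.get(i).clock >= right.get(j).clock ? left.get(i) : right.get(j));
                          i++;
                          j++;
                      }
                  }
                  while (i < left.size())
                      merged.add(left.get(i++));
                  while (j < right.size())
                      merged.add(right.get(j++));
                  return merged;
              }
          }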

          Another issue is that when we replicate the counter, we read, and then send, the whole thing to the neighbouring nodes, not just the value local to the leader node, which makes issue #1 worse.

          We are aware of it all, and will fix it in 2.1, with CASSANDRA-6506. The second issue is/will be fixed as part of CASSANDRA-6504.

          Please note that while it was possible to partially deal with #2 before, there was no way to make CASSANDRA-6506 happen because of the supercolumns. However, with CASSANDRA-3237 resolved in 2.0, it is now possible, and I'm currently working on that ticket.

          alienth Jason Harvey added a comment -

          Aleksey Yeschenko Thanks for the details on those issues.

          It definitely feels as though there is an allocation leak, since we go from ~200k allocations, up to 6 million when the issue is happening, and then immediately back down to ~200k when it goes away. Obviously very hard to determine exactly why that is :/

          Is there any way to empirically determine if the issues you described are a contributing factor here?

          iamaleksey Aleksey Yeschenko added a comment -

          Jason Harvey No built-in metrics come to mind. Sylvain Lebresne any ideas?

          slebresne Sylvain Lebresne added a comment -

          Nothing coming to mind, no, not by default at least. I suppose it wouldn't be too hard to add some instrumentation to count the number of times CounterColumn.reconcile() is called and see if the issue is linked to a sudden increase in those calls. That being said, that would still not tell us why there is a sudden increase in those calls... It's still mysterious to me why nodes would suddenly start allocating counters like crazy.
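
          A minimal sketch of that kind of instrumentation, assuming a patched build; the ReconcileCounter class and its mark() method are hypothetical, not part of Cassandra. Adding a call to ReconcileCounter.mark() at the top of CounterColumn.reconcile() would log the call rate every ten seconds, which could then be lined up against the instance-count spikes.

          import java.util.concurrent.atomic.AtomicLong;

          public final class ReconcileCounter
          {
              private static final AtomicLong CALLS = new AtomicLong();

              static
              {
                  Thread reporter = new Thread(new Runnable()
                  {
                      public void run()
                      {
                          while (true)
                          {
                              try
                              {
                                  Thread.sleep(10000);
                              }
                              catch (InterruptedException e)
                              {
                                  return;
                              }
                              // A sudden jump here should line up with the CounterColumn instance spike.
                              System.out.println("reconcile calls (last 10s): " + CALLS.getAndSet(0));
                          }
                      }
                  }, "reconcile-counter-reporter");
                  reporter.setDaemon(true);
                  reporter.start();
              }

              // Call this at the top of CounterColumn.reconcile() in the patched build.
              public static void mark()
              {
                  CALLS.incrementAndGet();
              }

              private ReconcileCounter()
              {
              }
          }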

          jbellis Jonathan Ellis added a comment -

          when we reconcile two counter cells, we have to allocate a third cell, large enough to hold the merged context. So unlike regular cells, where reconcile simply picks one of the two cells, reconcile for counter columns creates one more. This doesn't just affect reads, it also affects writes (to the memtable, including replication writes).

          Contention within a counter (as multiple writers race to merge cells) makes this worse, because you will also get this allocation for failed merges (that is, merges that lost the CAS race) that need to retry.
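
          An illustrative sketch of that retry cost (not the actual memtable code; the byte-array concatenation merely stands in for a context merge): every CAS attempt builds a fresh merged value, so a writer that loses the race throws that allocation away and allocates again on the retry.

          import java.util.concurrent.atomic.AtomicReference;

          final class ContendedCell
          {
              // The byte[] stands in for a serialized counter context.
              private final AtomicReference<byte[]> context = new AtomicReference<byte[]>(new byte[0]);

              void apply(byte[] incoming)
              {
                  while (true)
                  {
                      byte[] existing = context.get();
                      // A merged context is allocated on every attempt, successful or not.
                      byte[] merged = new byte[existing.length + incoming.length];
                      System.arraycopy(existing, 0, merged, 0, existing.length);
                      System.arraycopy(incoming, 0, merged, existing.length, incoming.length);
                      if (context.compareAndSet(existing, merged))
                          return;
                      // Lost the race to a concurrent writer: the merged array above is now
                      // garbage, and the retry will allocate another one.
                  }
              }
          }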

          jbellis Jonathan Ellis added a comment -

          Closing as a duplicate of CASSANDRA-6506. There's no reasonable way to fix this in earlier C* versions.

          iamaleksey Aleksey Yeschenko added a comment -

          CASSANDRA-6506 has been delayed until 3.0, but this issue is now actually resolved in 2.1 by the combination of the new memtable code and various counters++ commits (including, but not limited to, part of CASSANDRA-6506 and CASSANDRA-6953).


            People

            • Assignee: Unassigned
            • Reporter: alienth Jason Harvey
            • Votes: 0
            • Watchers: 9
