Details
- Type: Bug
- Status: Resolved
- Priority: Urgent
- Resolution: Duplicate
- Environment:
  - Cassandra 3.9
  - Oracle JDK 1.8.0_112 and 1.8.0_131
  - Kernel 4.9.43-17.38.amzn1.x86_64 and 3.14.35-28.38.amzn1.x86_64
- Severity: Critical
Description
We are getting segfaults on a production Cassandra cluster, apparently caused by Memtable flushes to disk.
Current thread (0x000000000cd77920): JavaThread "PerDiskMemtableFlushWriter_0:140" daemon [_thread_in_Java, id=28952, stack(0x00007f8b7aa53000,0x00007f8b7aa94000)]
Stack
Stack: [0x00007f8b7aa53000,0x00007f8b7aa94000], sp=0x00007f8b7aa924a0, free space=253k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 21889 C2 org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(Lorg/apache/cassandra/db/rows/UnfilteredRowIterator;)Lorg/apache/cassandra/db/RowIndexEntry; (361 bytes) @ 0x00007f8e9fcf75ac [0x00007f8e9fcf42c0+0x32ec]
J 22464 C2 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents()V (383 bytes) @ 0x00007f8e9f17b988 [0x00007f8e9f17b5c0+0x3c8]
j  org.apache.cassandra.db.Memtable$FlushRunnable.call()Lorg/apache/cassandra/io/sstable/SSTableMultiWriter;+1
j  org.apache.cassandra.db.Memtable$FlushRunnable.call()Ljava/lang/Object;+1
J 18865 C2 java.util.concurrent.FutureTask.run()V (126 bytes) @ 0x00007f8e9d3c9540 [0x00007f8e9d3c93a0+0x1a0]
J 21832 C2 java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V (225 bytes) @ 0x00007f8e9f16856c [0x00007f8e9f168400+0x16c]
J 6720 C1 java.util.concurrent.ThreadPoolExecutor$Worker.run()V (9 bytes) @ 0x00007f8e9def73c4 [0x00007f8e9def72c0+0x104]
J 22079 C2 java.lang.Thread.run()V (17 bytes) @ 0x00007f8e9e67c4ac [0x00007f8e9e67c460+0x4c]
v  ~StubRoutines::call_stub
V  [libjvm.so+0x691d16]  JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0x1056
V  [libjvm.so+0x692221]  JavaCalls::call_virtual(JavaValue*, KlassHandle, Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x321
V  [libjvm.so+0x6926c7]  JavaCalls::call_virtual(JavaValue*, Handle, KlassHandle, Symbol*, Symbol*, Thread*)+0x47
V  [libjvm.so+0x72da50]  thread_entry(JavaThread*, Thread*)+0xa0
V  [libjvm.so+0xa76833]  JavaThread::thread_main_inner()+0x103
V  [libjvm.so+0xa7697c]  JavaThread::run()+0x11c
V  [libjvm.so+0x927568]  java_start(Thread*)+0x108
C  [libpthread.so.0+0x7de5]  start_thread+0xc5
For further details, we attached:
- JVM error file with all details
- cassandra config file (we are using offheap_buffers as memtable_allocation_method)
- some lines printed in debug.log around the time the JVM error file was created and the process died
Reproducing the issue
So far we have been unable to reproduce it. It happens once or twice a week, on individual nodes, during both high-load and low-load periods. We have also seen that when we replace EC2 instances and bootstrap new ones, the compactions that run on the source nodes before streaming starts sometimes trigger this on more than one node at once, leaving us with 2 out of 3 replicas down and UnavailableExceptions in the cluster.
This issue might be related to CASSANDRA-12590 (Segfault reading secondary index), even though this one occurs on the write path. Can someone confirm whether the two issues could be related?
Specifics of our scenario:
- Cassandra 3.9 on Amazon Linux (prior to this we were running Cassandra 2.0.9, and there is no record of this happening there, although I was not working on Cassandra at the time)
- 12 x i3.2xlarge EC2 instances (8 core, 64GB RAM)
- a total of 176 keyspaces (there is a per-customer pattern)
- Some keyspaces have a single table, while others have 2 or 5 tables
- There is a table that uses standard Secondary Indexes ("emailindex" on the "user_info" table); a sketch of this schema follows this list
- It happens on both Oracle JDK 1.8.0_112 and 1.8.0_131
- It happens in both kernel 4.9.43-17.38.amzn1.x86_64 and 3.14.35-28.38.amzn1.x86_64
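For reference, the indexed table mentioned above looks roughly like the sketch below. Only the table name ("user_info") and index name ("emailindex") are taken from our schema; the column names and types are placeholders for illustration.

-- Simplified sketch of the indexed table; real column names/types differ
CREATE TABLE user_info (
    user_id uuid PRIMARY KEY,   -- placeholder partition key
    email   text,               -- placeholder for the column covered by the index
    name    text                -- other columns omitted
);

-- Standard (non-SASI) secondary index, as referenced above
CREATE INDEX emailindex ON user_info (email);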
Possible workarounds/solutions that we have in mind (yet to be validated)
- switching to heap_buffers (in case offheap_buffers triggers the bug), although we still have to measure the performance degradation under that configuration.
- removing secondary indexes in favour of Materialized Views for this specific case (see the sketch after this list), although we are also concerned that MVs may introduce new issues of their own in our current Cassandra 3.9
- upgrading to 3.11.1 is an option, but we are trying to keep it as a last resort, given that the cost of migrating is high and we have no guarantee that new bugs affecting node availability will not be introduced.
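To illustrate the second workaround, a materialized view replacing "emailindex" could look roughly like the following, reusing the placeholder columns from the sketch above (the real schema differs):

-- Hypothetical MV providing lookups by email without a secondary index
CREATE MATERIALIZED VIEW user_info_by_email AS
    SELECT user_id, email, name
    FROM user_info
    WHERE email IS NOT NULL AND user_id IS NOT NULL
    PRIMARY KEY (email, user_id);

Queries that currently go through the index (e.g. SELECT * FROM user_info WHERE email = ?) would instead read from user_info_by_email, at the cost of the extra write amplification and the MV caveats in 3.x that we mentioned.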
Attachments
Issue Links
- duplicates: CASSANDRA-12651 "Failure in SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex" (Resolved)