Cassandra / CASSANDRA-7759

DROP TABLE makes C* unreachable


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Normal
    • Resolution: Cannot Reproduce
    • Fix Version/s: None
    • Component/s: None
    • Labels:
      None
    • Environment:

cqlsh of C* 2.1.0-rc5 with one or two node(s).

    • Severity:
      Normal
    • Since Version:

      Description

After a DROP KEYSPACE or DROP TABLE command from cqlsh, I often get the following output:

      errors={}, last_host=127.0.0.1
      

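      A minimal cqlsh sequence of the kind that triggers the failure might look like the following (the keyspace and table names are only illustrative, loosely based on the paths in the stack trace below; they are not from the original session):

      ```cql
      -- Create a keyspace and table, write a row, then drop the table.
      -- The DROP statement is the step that intermittently fails as described.
      CREATE KEYSPACE metrics WITH replication =
        {'class': 'SimpleStrategy', 'replication_factor': 1};
      CREATE TABLE metrics.run (id uuid PRIMARY KEY, probe ascii);
      INSERT INTO metrics.run (id, probe) VALUES (uuid(), 'p1');
      DROP TABLE metrics.run;  -- sometimes reports: errors={}, last_host=127.0.0.1
      ```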
      The application (which uses the Java Driver 2.1.0-rc1) can then no longer access the node, and I get the following stack traces in the system.log file:

      ERROR [MemtableFlushWriter:5] 2014-08-13 11:26:07,577 CassandraDaemon.java:166 - Exception in thread Thread[MemtableFlushWriter:5,5,main]
      java.lang.RuntimeException: Last written key DecoratedKey(28149890686391545-8456361251720325, 30313034353666392d666464362d343539362d393537372d653539336430333138396437) >= current key DecoratedKey(108b5f3f-fc06-4a0d-99f1-3c6484b32e04, 31303862356633662d666330362d346130642d393966312d336336343834623332653034) writing into ./../data/data/metrics/run-a3c1fe80216911e49fd1ab9c3338a2ff/metrics-run.run_probe-tmp-ka-2-Data.db
      	at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:215) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:351) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:314) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.jar:na]
      	at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_45]
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_45]
      	at java.lang.Thread.run(Thread.java:744) ~[na:1.7.0_45]
      

      I also get the following error (perhaps unrelated):

      ERROR [MemtableFlushWriter:6] 2014-08-13 11:26:08,827 CassandraDaemon.java:166 - Exception in thread Thread[MemtableFlushWriter:6,5,main]
      org.apache.cassandra.serializers.MarshalException: Invalid byte for ascii: -50
      	at org.apache.cassandra.serializers.AsciiSerializer.validate(AsciiSerializer.java:39) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at org.apache.cassandra.db.marshal.AbstractType.getString(AbstractType.java:78) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at org.apache.cassandra.dht.LocalToken.toString(LocalToken.java:39) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at java.lang.String.valueOf(String.java:2854) ~[na:1.7.0_45]
      	at java.lang.StringBuilder.append(StringBuilder.java:128) ~[na:1.7.0_45]
      	at org.apache.cassandra.db.DecoratedKey.toString(DecoratedKey.java:118) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at java.lang.String.valueOf(String.java:2854) ~[na:1.7.0_45]
      	at java.lang.StringBuilder.append(StringBuilder.java:128) ~[na:1.7.0_45]
      	at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:215) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:351) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:314) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.jar:na]
      	at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054) ~[apache-cassandra-2.1.0-rc5.jar:2.1.0-rc5]
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_45]
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_45]
      	at java.lang.Thread.run(Thread.java:744) ~[na:1.7.0_45]
      

      The issue does not always occur, but I can reproduce it about 50% of the time. I already experienced it with C* 2.1.0-rc2 (perhaps earlier versions too), but I remember that the issue did not occur with a C* 2.1.0 beta version (beta1 or beta2).

      Related to http://mail-archives.apache.org/mod_mbox/cassandra-user/201407.mbox/%3CCAABB5w_x8PAYjyRTX68wqePR3Ajxn7Zo-nLuK0xk9O9+4wCkhA@mail.gmail.com%3E
      (and perhaps also to http://mail-archives.apache.org/mod_mbox/cassandra-user/201407.mbox/%3CCAEDUwd2iRX54UyexyPJ2eUEYLcMmtWLXPh9R3FMPjBvpGuX=LA@mail.gmail.com%3E )

        Attachments

          Activity

            People

            • Assignee:
      Unassigned
              Reporter:
              flarcher Fabrice Larcher

              Dates

              • Created:
                Updated:
                Resolved:
