Details
- Type: Bug
- Priority: Normal
- Status: Resolved
- Resolution: Duplicate
Description
The dtest sstablesplit_test.py completes compaction and sstablesplit successfully, but the test fails on ERROR entries in node1.log.
Full trace when reproduced locally:
DEBUG [CompactionExecutor:2] 2014-08-06 12:25:43,133 CompactionManager.java:221 - Checking system.compactions_in_progress
DEBUG [CompactionExecutor:2] 2014-08-06 12:25:43,134 SizeTieredCompactionStrategy.java:95 - Compaction buckets are [[SSTableReader(path='/home/mshuler/git/cassandra/./bin/../data/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13-Data.db'), SSTableReader(path='/home/mshuler/git/cassandra/./bin/../data/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-12-Data.db')]]
DEBUG [CompactionExecutor:2] 2014-08-06 12:25:43,134 CompactionManager.java:232 - No tasks available
ERROR [CompactionExecutor:1] 2014-08-06 12:25:43,142 CassandraDaemon.java:166 - Exception in thread Thread[CompactionExecutor:1,1,main]
org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction interrupted: Compaction@0698db80-1d8e-11e4-984d-c10aaefcf248(Keyspace1, Standard1, 138156251/262144900)bytes
	at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:174) ~[main/:na]
	at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) ~[main/:na]
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[main/:na]
	at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74) ~[main/:na]
	at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) ~[main/:na]
	at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235) ~[main/:na]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_65]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_65]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_65]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65]
	at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
INFO [CompactionExecutor:1] 2014-08-06 12:25:43,143 CompactionManager.java:1159 - Compaction interrupted: Compaction@0698db80-1d8e-11e4-984d-c10aaefcf248(Keyspace1, Standard1, 138156251/262144900)bytes
DEBUG [CompactionExecutor:1] 2014-08-06 12:25:43,144 CompactionManager.java:1160 - Full interruption stack trace:
org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction interrupted: Compaction@0698db80-1d8e-11e4-984d-c10aaefcf248(Keyspace1, Standard1, 138156251/262144900)bytes
	at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:174) ~[main/:na]
	at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) ~[main/:na]
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[main/:na]
	at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74) ~[main/:na]
	at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) ~[main/:na]
	at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235) ~[main/:na]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_65]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_65]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_65]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65]
	at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
DEBUG [RMI TCP Connection(4)-127.0.0.1] 2014-08-06 12:25:43,211 ColumnFamilyStore.java:2499 - Compactions successfully cancelled
INFO [CompactionExecutor:2] 2014-08-06 12:25:43,214 ColumnFamilyStore.java:857 - Enqueuing flush of compactions_in_progress: 1882 (0%) on-heap, 0 (0%) off-heap
DEBUG [CompactionExecutor:2] 2014-08-06 12:25:43,215 ColumnFamilyStore.java:182 - scheduling flush in 3600000 ms
DEBUG log attached from the local reproduction - system.log.18.zip
dtest stdout log attached - dtest_sstablesplit_test-debugstdout.txt
dtest node1 INFO log attached - dtest_sstablesplit_test-node1.log.txt
Shell repro script:
#!/bin/sh
REPODIR=$HOME/git/cassandra
cd $REPODIR
sed -i 's/root level="INFO"/root level="DEBUG"/' conf/logback.xml

start_cstar() {
    echo "$( date +%T ) - Starting Cassandra.."
    ./bin/cassandra >/dev/null
    sleep 10
}

stop_cstar() {
    echo "$( date +%T ) - Stopping Cassandra.."
    pkill -f CassandraDaemon
    sleep 5
}

compact() {
    echo "$( date +%T ) - Compacting.."
    ./bin/nodetool flush
    ./bin/nodetool compact
    ./bin/nodetool flush
}

split() {
    stop_cstar
    DBFILE=$( ls data/data/Keyspace1/Standard1-*/Keyspace1-Standard1-ka-*-Data.db )
    echo "$( date +%T ) - Splitting $DBFILE"
    ./tools/bin/sstablesplit $DBFILE
    start_cstar
}

start_cstar
echo "$( date +%T ) - Writing data.."
if ./tools/bin/cassandra-stress write n=1000000 -rate threads=25 >/dev/null 2>&1; then
    echo "$( date +%T ) - Done writing data.."
    compact
    split
    compact
    split
    echo "$( date +%T ) - Reading data.."
    if ./tools/bin/cassandra-stress read n=1000000 -rate threads=25 >/dev/null 2>&1; then
        echo "$( date +%T ) - Done reading data.."
    else
        echo "Something went wrong reading.."
    fi
else
    echo "Something went wrong writing.."
fi
stop_cstar
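The script writes data with cassandra-stress, then runs two compact/split rounds before reading the data back; split() stops the node first, since sstablesplit operates on the SSTable files offline, and restarts it afterwards.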
Since the compaction actually appears to complete successfully, is this error simply informational, or is it a problem that needs to be addressed? That is my concern: if the error can be ignored as expected behavior, then we can account for it in testing.
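If it is safe to ignore, one way to account for it in the dtest would be to whitelist this specific error before failing on node1.log ERROR entries. A minimal sketch of that idea follows; it scans the log file directly, and the file path, helper name, and exact patterns are illustrative assumptions, not the dtest framework's actual API:

import re

# Patterns for ERROR entries we consider expected/benign (assumed, not exhaustive).
EXPECTED_ERROR_PATTERNS = [
    re.compile(r"CompactionInterruptedException"),
    re.compile(r"Compaction interrupted:"),
]

def unexpected_errors(log_path):
    """Return ERROR lines from a node log that do not match a known-benign pattern."""
    unexpected = []
    with open(log_path) as f:
        for line in f:
            if "ERROR" not in line:
                continue
            if any(p.search(line) for p in EXPECTED_ERROR_PATTERNS):
                continue  # expected noise from cancelling in-flight compactions
            unexpected.append(line.rstrip())
    return unexpected

if __name__ == "__main__":
    # "node1.log" is a hypothetical path to the node's log in the test environment.
    errors = unexpected_errors("node1.log")
    assert not errors, "Unexpected ERROR entries:\n" + "\n".join(errors)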
Attachments
- system.log.18.zip
- dtest_sstablesplit_test-debugstdout.txt
- dtest_sstablesplit_test-node1.log.txt
Issue Links
- duplicates: CASSANDRA-7694 "Expected Compaction Interruption is logged as ERROR" (Resolved)