Cassandra / CASSANDRA-10909

NPE in ActiveRepairService


Details

    • Priority: Normal

    Description

      NPE after starting multiple incremental repairs

      INFO  [Thread-62] 2015-12-21 11:40:53,742  RepairRunnable.java:125 - Starting repair command #1, repairing keyspace keyspace1 with repair options (parallelism: parallel, primary range: false, incremental: true, job threads: 1, ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges: 2)
      INFO  [Thread-62] 2015-12-21 11:40:53,813  RepairSession.java:237 - [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] new session: will sync /10.200.177.32, /10.200.177.33 on range [(10,-9223372036854775808]] for keyspace1.[counter1, standard1]
      INFO  [Repair#1:1] 2015-12-21 11:40:53,853  RepairJob.java:100 - [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] requesting merkle trees for counter1 (to [/10.200.177.33, /10.200.177.32])
      INFO  [Repair#1:1] 2015-12-21 11:40:53,853  RepairJob.java:174 - [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Requesting merkle trees for counter1 (to [/10.200.177.33, /10.200.177.32])
      INFO  [Thread-62] 2015-12-21 11:40:53,854  RepairSession.java:237 - [repair #b1449fe0-a7d7-11e5-b568-f565b837eb0d] new session: will sync /10.200.177.32, /10.200.177.31 on range [(0,10]] for keyspace1.[counter1, standard1]
      INFO  [AntiEntropyStage:1] 2015-12-21 11:40:53,896  RepairSession.java:181 - [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Received merkle tree for counter1 from /10.200.177.32
      INFO  [AntiEntropyStage:1] 2015-12-21 11:40:53,906  RepairSession.java:181 - [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Received merkle tree for counter1 from /10.200.177.33
      INFO  [Repair#1:1] 2015-12-21 11:40:53,906  RepairJob.java:100 - [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] requesting merkle trees for standard1 (to [/10.200.177.33, /10.200.177.32])
      INFO  [Repair#1:1] 2015-12-21 11:40:53,906  RepairJob.java:174 - [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Requesting merkle trees for standard1 (to [/10.200.177.33, /10.200.177.32])
      INFO  [RepairJobTask:2] 2015-12-21 11:40:53,910  SyncTask.java:66 - [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Endpoints /10.200.177.33 and /10.200.177.32 are consistent for counter1
      INFO  [RepairJobTask:1] 2015-12-21 11:40:53,910  RepairJob.java:145 - [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] counter1 is fully synced
      INFO  [AntiEntropyStage:1] 2015-12-21 11:40:54,823  Validator.java:272 - [repair #b17a2ed0-a7d7-11e5-ada8-8304f5629908] Sending completed merkle tree to /10.200.177.33 for keyspace1.counter1
      ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,104  CompactionManager.java:1065 - Cannot start multiple repair sessions over the same sstables
      ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,105  Validator.java:259 - Failed creating a merkle tree for [repair #b17a2ed0-a7d7-11e5-ada8-8304f5629908 on keyspace1/standard1, [(10,-9223372036854775808]]], /10.200.177.33 (see log for details)
      ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,110  CassandraDaemon.java:195 - Exception in thread Thread[ValidationExecutor:3,1,main]
      java.lang.RuntimeException: Cannot start multiple repair sessions over the same sstables
      	at org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1066) ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
      	at org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80) ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
      	at org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:679) ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_40]
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_40]
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_40]
      	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
      ERROR [AntiEntropyStage:1] 2015-12-21 11:40:55,174  RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
      INFO  [CompactionExecutor:3] 2015-12-21 11:40:55,175  CompactionManager.java:489 - Starting anticompaction for keyspace1.counter1 on 0/[] sstables
      INFO  [CompactionExecutor:3] 2015-12-21 11:40:55,176  CompactionManager.java:547 - Completed anticompaction successfully
      ERROR [AntiEntropyStage:1] 2015-12-21 11:40:55,179  CassandraDaemon.java:195 - Exception in thread Thread[AntiEntropyStage:1,5,main]
      java.lang.RuntimeException: java.lang.NullPointerException
      	at org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164) ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
      	at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_40]
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_40]
      	at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_40]
      Caused by: java.lang.NullPointerException: null
      	at org.apache.cassandra.service.ActiveRepairService$ParentRepairSession.getAndReferenceSSTables(ActiveRepairService.java:452) ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
      	at org.apache.cassandra.service.ActiveRepairService.doAntiCompaction(ActiveRepairService.java:379) ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
      	at org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:136) ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
      	... 4 common frames omitted
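
      What the trace suggests (a hypothetical sketch below, not Cassandra's actual code): the validation failure makes RepairMessageVerbHandler remove the parent repair session, and the anticompaction message that follows for the same parent id then dereferences a null lookup in ParentRepairSession.getAndReferenceSSTables. The Java sketch reproduces that pattern with placeholder types; every name not appearing in the stack trace above is illustrative only.

      import java.util.Collections;
      import java.util.Set;
      import java.util.UUID;
      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.ConcurrentMap;

      // Hypothetical sketch of the suspected failure pattern (placeholder types,
      // not Cassandra's actual code): the parent repair session is dropped from
      // the registry when validation fails, and a later anticompaction request
      // for the same id uses the null lookup result without a check.
      public class ParentSessionNpeSketch
      {
          static class ParentRepairSession
          {
              final Set<String> sstables;
              ParentRepairSession(Set<String> sstables) { this.sstables = sstables; }
          }

          // stands in for the parent-session registry held by ActiveRepairService
          static final ConcurrentMap<UUID, ParentRepairSession> parentSessions = new ConcurrentHashMap<>();

          // error path seen in the log: "Got error, removing parent repair session"
          static void removeParentSessionOnError(UUID parentId)
          {
              parentSessions.remove(parentId);
          }

          // anticompaction path: the looked-up session is used without a null check
          static Set<String> getAndReferenceSSTables(UUID parentId)
          {
              ParentRepairSession session = parentSessions.get(parentId); // null once removed
              return session.sstables;                                    // NullPointerException here
          }

          public static void main(String[] args)
          {
              UUID parentId = UUID.randomUUID();
              parentSessions.put(parentId, new ParentRepairSession(Collections.singleton("keyspace1/standard1")));

              // "Cannot start multiple repair sessions over the same sstables" triggers the removal ...
              removeParentSessionOnError(parentId);
              // ... and the follow-up anticompaction request for the removed session then NPEs
              getAndReferenceSSTables(parentId);
          }
      }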
      


          People

            Assignee: marcuse Marcus Eriksson
            Reporter: eduard.tudenhoefner Eduard Tudenhoefner
            Authors: Marcus Eriksson
            Reviewers: Carl Yeksigian
            Votes: 0
            Watchers: 3
