[2016-03-23 01:04:56,724] TRACE Controller 3 epoch 40 received response {error_code=11} for a request sent to broker Node(5, core05.tec1.tivo.com, 9092) (state.change.logger) [2016-03-23 01:04:56,724] TRACE Controller 3 epoch 40 received response {error_code=11,partitions=[]} for a request sent to broker Node(4, core04.tec1.tivo.com, 9092) (state.change.logger) [2016-03-23 01:04:56,724] TRACE Controller 3 epoch 40 received response {error_code=0,partitions=[{topic=tec1.ono_dev1.thedvr.thedvrRecordingStorage,partition=5,error_code=0}]} for a request sent to broker Node(2, core02.tec1.tivo.com, 9092) (state.change.logger) [2016-03-23 01:04:56,724] TRACE Controller 3 epoch 40 received response {error_code=11} for a request sent to broker Node(2, core02.tec1.tivo.com, 9092) (state.change.logger) [2016-03-23 01:04:56,728] TRACE Broker 3 truncated logs and checkpointed recovery boundaries for partition [tec1.ono_dev1.thedvr.thedvrRecordingStorage,5] as part of become-follower request with correlation id 396 from controller 3 epoch 40 (state.change.logger) [2016-03-23 01:04:56,774] TRACE Controller 3 epoch 40 received response {error_code=11,partitions=[]} for a request sent to broker Node(2, core02.tec1.tivo.com, 9092) (state.change.logger) [2016-03-23 01:04:56,774] TRACE Controller 3 epoch 40 received response {error_code=11} for a request sent to broker Node(2, core02.tec1.tivo.com, 9092) (state.change.logger) [2016-03-23 01:04:56,774] TRACE Controller 3 epoch 40 received response {error_code=11} for a request sent to broker Node(4, core04.tec1.tivo.com, 9092) (state.change.logger) [2016-03-23 01:04:56,783] TRACE Broker 3 started fetcher to new leader as part of become-follower request from controller 3 epoch 40 with correlation id 396 for partition [tec1.ono_dev1.thedvr.thedvrRecordingStorage,5] (state.change.logger) [2016-03-23 01:04:56,783] TRACE Broker 3 completed LeaderAndIsr request correlationId 396 from controller 3 epoch 40 for the become-follower transition for partition [tec1.ono_dev1.thedvr.thedvrRecordingStorage,5] (state.change.logger) [2016-03-23 01:04:56,794] TRACE Controller 3 epoch 40 received response {error_code=0,partitions=[{topic=tec1.ono_dev1.thedvr.thedvrRecordingStorage,partition=5,error_code=0}]} for a request sent to broker Node(3, core03.tec1.tivo.com, 9092) (state.change.logger) [2016-03-23 01:04:56,972] TRACE Broker 3 cached leader info (LeaderAndIsrInfo:(Leader:4,ISR:5,1,2,3,4,LeaderEpoch:82,ControllerEpoch:40),ReplicationFactor:5),AllReplicas:5,1,2,3,4) for partition [tec1.ono_dev1.thedvr.thedvrRecordingStorage,5] in response to UpdateMetadata request sent by controller 3 epoch 40 with correlation id 397 (state.change.logger) [2016-03-23 01:04:56,972] TRACE Controller 3 epoch 40 received response {error_code=0} for a request sent to broker Node(3, core03.tec1.tivo.com, 9092) (state.change.logger) [2016-03-23 01:04:57,029] TRACE Controller 3 epoch 40 received response {error_code=11,partitions=[]} for a request sent to broker Node(1, core01.tec1.tivo.com, 9092) (state.change.logger) [2016-03-23 01:04:57,033] TRACE Broker 3 cached leader info (LeaderAndIsrInfo:(Leader:4,ISR:1,2,4,LeaderEpoch:291,ControllerEpoch:40),ReplicationFactor:3),AllReplicas:4,1,2) for partition [tec1.en2.frontend.eventListLog,57] in response to UpdateMetadata request sent by controller 3 epoch 40 with correlation id 398 (state.change.logger) [2016-03-23 01:04:57,034] TRACE Controller 3 epoch 40 received response {error_code=0} for a request sent to broker Node(3, core03.tec1.tivo.com, 9092) (state.change.logger) [2016-03-23 01:04:57,499] TRACE Controller 3 epoch 40 received response {error_code=11} for a request sent to broker Node(1, core01.tec1.tivo.com, 9092) (state.change.logger) [2016-03-23 01:05:23,748] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.bclab1.bodydata.bodyconfig,45] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,748] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.bclab1.bodydata.bodyconfig,45] due to: aborted leader election for partition [tec1.bclab1.bodydata.bodyconfig,45] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,748] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.bclab1.bodydata.bodyconfig,45] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.bclab1.bodydata.bodyconfig,45] due to: aborted leader election for partition [tec1.bclab1.bodydata.bodyconfig,45] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.bclab1.bodydata.bodyconfig,45] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,749] TRACE Controller 3 epoch 40 started leader election for partition [debug.simdev1.haproxy,0] (state.change.logger) [2016-03-23 01:05:23,753] ERROR Controller 3 epoch 40 aborted leader election for partition [debug.simdev1.haproxy,0] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,753] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [debug.simdev1.haproxy,0] due to: aborted leader election for partition [debug.simdev1.haproxy,0] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,753] ERROR Controller 3 epoch 40 initiated state change for partition [debug.simdev1.haproxy,0] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [debug.simdev1.haproxy,0] due to: aborted leader election for partition [debug.simdev1.haproxy,0] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [debug.simdev1.haproxy,0] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,753] TRACE Controller 3 epoch 40 started leader election for partition [tec1.simdev1.thedvr.thedvrMedia,2] (state.change.logger) [2016-03-23 01:05:23,754] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.simdev1.thedvr.thedvrMedia,2] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,754] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.simdev1.thedvr.thedvrMedia,2] due to: aborted leader election for partition [tec1.simdev1.thedvr.thedvrMedia,2] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,754] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.simdev1.thedvr.thedvrMedia,2] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.simdev1.thedvr.thedvrMedia,2] due to: aborted leader election for partition [tec1.simdev1.thedvr.thedvrMedia,2] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.simdev1.thedvr.thedvrMedia,2] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,754] TRACE Controller 3 epoch 40 started leader election for partition [tec1.ono_dev1.bodydata.bodyconfig-llc-anon,54] (state.change.logger) [2016-03-23 01:05:23,758] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.ono_dev1.bodydata.bodyconfig-llc-anon,54] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,758] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.ono_dev1.bodydata.bodyconfig-llc-anon,54] due to: aborted leader election for partition [tec1.ono_dev1.bodydata.bodyconfig-llc-anon,54] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,758] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.ono_dev1.bodydata.bodyconfig-llc-anon,54] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.ono_dev1.bodydata.bodyconfig-llc-anon,54] due to: aborted leader election for partition [tec1.ono_dev1.bodydata.bodyconfig-llc-anon,54] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.ono_dev1.bodydata.bodyconfig-llc-anon,54] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,758] TRACE Controller 3 epoch 40 started leader election for partition [tec1.en2.bodydata.recordings,25] (state.change.logger) [2016-03-23 01:05:23,759] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.en2.bodydata.recordings,25] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,759] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.en2.bodydata.recordings,25] due to: aborted leader election for partition [tec1.en2.bodydata.recordings,25] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,759] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.en2.bodydata.recordings,25] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.en2.bodydata.recordings,25] due to: aborted leader election for partition [tec1.en2.bodydata.recordings,25] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.en2.bodydata.recordings,25] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,759] TRACE Controller 3 epoch 40 started leader election for partition [tec1.ono_qe1.frontend.bodyConfigStore,1] (state.change.logger) [2016-03-23 01:05:23,765] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.ono_qe1.frontend.bodyConfigStore,1] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,765] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.ono_qe1.frontend.bodyConfigStore,1] due to: aborted leader election for partition [tec1.ono_qe1.frontend.bodyConfigStore,1] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,765] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.ono_qe1.frontend.bodyConfigStore,1] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.ono_qe1.frontend.bodyConfigStore,1] due to: aborted leader election for partition [tec1.ono_qe1.frontend.bodyConfigStore,1] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.ono_qe1.frontend.bodyConfigStore,1] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,765] TRACE Controller 3 epoch 40 started leader election for partition [bodyconfig-store-changelog,36] (state.change.logger) [2016-03-23 01:05:23,766] ERROR Controller 3 epoch 40 aborted leader election for partition [bodyconfig-store-changelog,36] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,766] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [bodyconfig-store-changelog,36] due to: aborted leader election for partition [bodyconfig-store-changelog,36] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,766] ERROR Controller 3 epoch 40 initiated state change for partition [bodyconfig-store-changelog,36] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [bodyconfig-store-changelog,36] due to: aborted leader election for partition [bodyconfig-store-changelog,36] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [bodyconfig-store-changelog,36] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,766] TRACE Controller 3 epoch 40 started leader election for partition [bodyconfig-store-changelog,46] (state.change.logger) [2016-03-23 01:05:23,770] ERROR Controller 3 epoch 40 aborted leader election for partition [bodyconfig-store-changelog,46] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,770] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [bodyconfig-store-changelog,46] due to: aborted leader election for partition [bodyconfig-store-changelog,46] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,770] ERROR Controller 3 epoch 40 initiated state change for partition [bodyconfig-store-changelog,46] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [bodyconfig-store-changelog,46] due to: aborted leader election for partition [bodyconfig-store-changelog,46] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [bodyconfig-store-changelog,46] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,771] TRACE Controller 3 epoch 40 started leader election for partition [tec1.simdev1.thedvr.thedvrRecordingStorage,3] (state.change.logger) [2016-03-23 01:05:23,771] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.simdev1.thedvr.thedvrRecordingStorage,3] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,772] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.simdev1.thedvr.thedvrRecordingStorage,3] due to: aborted leader election for partition [tec1.simdev1.thedvr.thedvrRecordingStorage,3] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,772] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.simdev1.thedvr.thedvrRecordingStorage,3] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.simdev1.thedvr.thedvrRecordingStorage,3] due to: aborted leader election for partition [tec1.simdev1.thedvr.thedvrRecordingStorage,3] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.simdev1.thedvr.thedvrRecordingStorage,3] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,772] TRACE Controller 3 epoch 40 started leader election for partition [tec1.usqe1.livelog,31] (state.change.logger) [2016-03-23 01:05:23,775] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.usqe1.livelog,31] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,775] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.usqe1.livelog,31] due to: aborted leader election for partition [tec1.usqe1.livelog,31] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,775] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.usqe1.livelog,31] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.usqe1.livelog,31] due to: aborted leader election for partition [tec1.usqe1.livelog,31] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.usqe1.livelog,31] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,777] TRACE Controller 3 epoch 40 started leader election for partition [tec1.us_engr.frontend.eventListLog,46] (state.change.logger) [2016-03-23 01:05:23,777] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.us_engr.frontend.eventListLog,46] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,777] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.us_engr.frontend.eventListLog,46] due to: aborted leader election for partition [tec1.us_engr.frontend.eventListLog,46] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,777] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.us_engr.frontend.eventListLog,46] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.us_engr.frontend.eventListLog,46] due to: aborted leader election for partition [tec1.us_engr.frontend.eventListLog,46] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.us_engr.frontend.eventListLog,46] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,778] TRACE Controller 3 epoch 40 started leader election for partition [tec1.usqe1.frontend.eventListLog,20] (state.change.logger) [2016-03-23 01:05:23,781] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.usqe1.frontend.eventListLog,20] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,781] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.usqe1.frontend.eventListLog,20] due to: aborted leader election for partition [tec1.usqe1.frontend.eventListLog,20] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,781] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.usqe1.frontend.eventListLog,20] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.usqe1.frontend.eventListLog,20] due to: aborted leader election for partition [tec1.usqe1.frontend.eventListLog,20] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.usqe1.frontend.eventListLog,20] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,781] TRACE Controller 3 epoch 40 started leader election for partition [tec1.us_engr.bodydata.recordings,17] (state.change.logger) [2016-03-23 01:05:23,781] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.us_engr.bodydata.recordings,17] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,781] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.us_engr.bodydata.recordings,17] due to: aborted leader election for partition [tec1.us_engr.bodydata.recordings,17] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,782] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.us_engr.bodydata.recordings,17] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.us_engr.bodydata.recordings,17] due to: aborted leader election for partition [tec1.us_engr.bodydata.recordings,17] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.us_engr.bodydata.recordings,17] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,782] TRACE Controller 3 epoch 40 started leader election for partition [bodystate-store-changelog,50] (state.change.logger) [2016-03-23 01:05:23,787] ERROR Controller 3 epoch 40 aborted leader election for partition [bodystate-store-changelog,50] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,787] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [bodystate-store-changelog,50] due to: aborted leader election for partition [bodystate-store-changelog,50] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,787] ERROR Controller 3 epoch 40 initiated state change for partition [bodystate-store-changelog,50] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [bodystate-store-changelog,50] due to: aborted leader election for partition [bodystate-store-changelog,50] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [bodystate-store-changelog,50] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,787] TRACE Controller 3 epoch 40 started leader election for partition [debug.feedbrowser.text2speech,0] (state.change.logger) [2016-03-23 01:05:23,788] ERROR Controller 3 epoch 40 aborted leader election for partition [debug.feedbrowser.text2speech,0] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,788] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [debug.feedbrowser.text2speech,0] due to: aborted leader election for partition [debug.feedbrowser.text2speech,0] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,788] ERROR Controller 3 epoch 40 initiated state change for partition [debug.feedbrowser.text2speech,0] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [debug.feedbrowser.text2speech,0] due to: aborted leader election for partition [debug.feedbrowser.text2speech,0] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [debug.feedbrowser.text2speech,0] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,788] TRACE Controller 3 epoch 40 started leader election for partition [__consumer_offsets,21] (state.change.logger) [2016-03-23 01:05:23,796] ERROR Controller 3 epoch 40 aborted leader election for partition [__consumer_offsets,21] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,796] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [__consumer_offsets,21] due to: aborted leader election for partition [__consumer_offsets,21] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,796] ERROR Controller 3 epoch 40 initiated state change for partition [__consumer_offsets,21] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [__consumer_offsets,21] due to: aborted leader election for partition [__consumer_offsets,21] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [__consumer_offsets,21] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,797] TRACE Controller 3 epoch 40 started leader election for partition [debug.ono_dev1.storagemind,3] (state.change.logger) [2016-03-23 01:05:23,798] ERROR Controller 3 epoch 40 aborted leader election for partition [debug.ono_dev1.storagemind,3] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,798] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [debug.ono_dev1.storagemind,3] due to: aborted leader election for partition [debug.ono_dev1.storagemind,3] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,798] ERROR Controller 3 epoch 40 initiated state change for partition [debug.ono_dev1.storagemind,3] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [debug.ono_dev1.storagemind,3] due to: aborted leader election for partition [debug.ono_dev1.storagemind,3] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [debug.ono_dev1.storagemind,3] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,798] TRACE Controller 3 epoch 40 started leader election for partition [bodyconfig-store-changelog,41] (state.change.logger) [2016-03-23 01:05:23,803] ERROR Controller 3 epoch 40 aborted leader election for partition [bodyconfig-store-changelog,41] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,803] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [bodyconfig-store-changelog,41] due to: aborted leader election for partition [bodyconfig-store-changelog,41] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,803] ERROR Controller 3 epoch 40 initiated state change for partition [bodyconfig-store-changelog,41] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [bodyconfig-store-changelog,41] due to: aborted leader election for partition [bodyconfig-store-changelog,41] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [bodyconfig-store-changelog,41] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,803] TRACE Controller 3 epoch 40 started leader election for partition [tec1.usqe1.bodydata.bodyconfig-llc-anon,40] (state.change.logger) [2016-03-23 01:05:23,804] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.usqe1.bodydata.bodyconfig-llc-anon,40] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,804] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.usqe1.bodydata.bodyconfig-llc-anon,40] due to: aborted leader election for partition [tec1.usqe1.bodydata.bodyconfig-llc-anon,40] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,804] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.usqe1.bodydata.bodyconfig-llc-anon,40] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.usqe1.bodydata.bodyconfig-llc-anon,40] due to: aborted leader election for partition [tec1.usqe1.bodydata.bodyconfig-llc-anon,40] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.usqe1.bodydata.bodyconfig-llc-anon,40] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,804] TRACE Controller 3 epoch 40 started leader election for partition [tec1.simdev1.bodydata.recordings,21] (state.change.logger) [2016-03-23 01:05:23,811] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.simdev1.bodydata.recordings,21] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,811] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.simdev1.bodydata.recordings,21] due to: aborted leader election for partition [tec1.simdev1.bodydata.recordings,21] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,811] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.simdev1.bodydata.recordings,21] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.simdev1.bodydata.recordings,21] due to: aborted leader election for partition [tec1.simdev1.bodydata.recordings,21] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.simdev1.bodydata.recordings,21] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,813] TRACE Controller 3 epoch 40 started leader election for partition [tec1.us_engr.frontend.eventListLog,55] (state.change.logger) [2016-03-23 01:05:23,814] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.us_engr.frontend.eventListLog,55] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,814] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.us_engr.frontend.eventListLog,55] due to: aborted leader election for partition [tec1.us_engr.frontend.eventListLog,55] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,814] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.us_engr.frontend.eventListLog,55] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.us_engr.frontend.eventListLog,55] due to: aborted leader election for partition [tec1.us_engr.frontend.eventListLog,55] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.us_engr.frontend.eventListLog,55] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,815] TRACE Controller 3 epoch 40 started leader election for partition [tec1.us_engr.bodydata.recordings,27] (state.change.logger) [2016-03-23 01:05:23,820] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.us_engr.bodydata.recordings,27] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,820] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.us_engr.bodydata.recordings,27] due to: aborted leader election for partition [tec1.us_engr.bodydata.recordings,27] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,820] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.us_engr.bodydata.recordings,27] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.us_engr.bodydata.recordings,27] due to: aborted leader election for partition [tec1.us_engr.bodydata.recordings,27] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.us_engr.bodydata.recordings,27] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,820] TRACE Controller 3 epoch 40 started leader election for partition [tec1.usqe1.livelog-cooked,18] (state.change.logger) [2016-03-23 01:05:23,822] ERROR Controller 3 epoch 40 aborted leader election for partition [tec1.usqe1.livelog-cooked,18] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,822] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [tec1.usqe1.livelog-cooked,18] due to: aborted leader election for partition [tec1.usqe1.livelog-cooked,18] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,822] ERROR Controller 3 epoch 40 initiated state change for partition [tec1.usqe1.livelog-cooked,18] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [tec1.usqe1.livelog-cooked,18] due to: aborted leader election for partition [tec1.usqe1.livelog-cooked,18] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [tec1.usqe1.livelog-cooked,18] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more [2016-03-23 01:05:23,822] TRACE Controller 3 epoch 40 started leader election for partition [debug.simdev1.yarn-master,4] (state.change.logger) [2016-03-23 01:05:23,826] ERROR Controller 3 epoch 40 aborted leader election for partition [debug.simdev1.yarn-master,4] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. (state.change.logger) [2016-03-23 01:05:23,826] ERROR Controller 3 epoch 40 encountered error while electing leader for partition [debug.simdev1.yarn-master,4] due to: aborted leader election for partition [debug.simdev1.yarn-master,4] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. (state.change.logger) [2016-03-23 01:05:23,826] ERROR Controller 3 epoch 40 initiated state change for partition [debug.simdev1.yarn-master,4] from OnlinePartition to OnlinePartition failed (state.change.logger) kafka.common.StateChangeFailedException: encountered error while electing leader for partition [debug.simdev1.yarn-master,4] due to: aborted leader election for partition [debug.simdev1.yarn-master,4] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41.. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368) at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146) at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145) at scala.collection.immutable.Set$Set1.foreach(Set.scala:79) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145) at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:662) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1225) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1220) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1217) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1215) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1215) at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1194) at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221) at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428) at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1194) at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:344) at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [debug.simdev1.yarn-master,4] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 3 went through a soft failure and another controller was elected with epoch 41. at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:342) ... 32 more