2017-05-06T18:43:35,785 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit() @742 - stream-thread [StreamThread-1] Committing all tasks because the commit interval 1000ms has elapsed
2017-05-06T18:43:35,909 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.commitOne() @780 - stream-thread [StreamThread-1] Committing task StreamTask 0_21
2017-05-06T18:43:35,909 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.StreamTask.run() @70 - task [0_21] Committing its state
2017-05-06T18:43:35,909 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush() @319 - task [0_21] Flushing all stores registered in the state manager
2017-05-06T18:43:35,909 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.RecordCollectorImpl.flush() @125 - task [0_21] Flushing producer
2017-05-06T18:43:35,909 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.commitOne() @780 - stream-thread [StreamThread-1] Committing task StreamTask 0_6
2017-05-06T18:43:35,909 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.StreamTask.run() @70 - task [0_6] Committing its state
2017-05-06T18:43:35,909 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush() @319 - task [0_6] Flushing all stores registered in the state manager
2017-05-06T18:43:35,910 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.RecordCollectorImpl.flush() @125 - task [0_6] Flushing producer
2017-05-06T18:43:35,910 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.commitOne() @780 - stream-thread [StreamThread-1] Committing task StreamTask 0_38
2017-05-06T18:43:35,910 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.StreamTask.run() @70 - task [0_38] Committing its state
2017-05-06T18:43:35,910 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush() @319 - task [0_38] Flushing all stores registered in the state manager
2017-05-06T18:43:35,910 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.RecordCollectorImpl.flush() @125 - task [0_38] Flushing producer
2017-05-06T18:43:35,910 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.commitOne() @780 - stream-thread [StreamThread-1] Committing task StreamTask 0_12
2017-05-06T18:43:35,910 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.StreamTask.run() @70 - task [0_12] Committing its state
2017-05-06T18:43:35,910 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush() @319 - task [0_12] Flushing all stores registered in the state manager
2017-05-06T18:43:35,910 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.RecordCollectorImpl.flush() @125 - task [0_12] Flushing producer
2017-05-06T18:43:39,595 DEBUG kafka-coordinator-heartbeat-thread | hades org.apache.kafka.clients.consumer.internals.AbstractCoordinator.handle() @726 - Attempt to heartbeat failed for group hades since it is rebalancing.
2017-05-06T18:43:40,350 DEBUG kafka-coordinator-heartbeat-thread | hades org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendHeartbeatRequest() @704 - Sending Heartbeat request for group hades to coordinator 10.210.200.144:9092 (id: 2147483644 rack: null)
2017-05-06T18:43:40,737 DEBUG kafka-coordinator-heartbeat-thread | hades org.apache.kafka.clients.consumer.internals.AbstractCoordinator.handle() @726 - Attempt to heartbeat failed for group hades since it is rebalancing.
2017-05-06T18:43:40,519 INFO StreamThread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare() @393 - Revoking previously assigned partitions [poseidonIncidentFeed-21, poseidonIncidentFeed-38, poseidonIncidentFeed-6, poseidonIncidentFeed-12] for group hades
2017-05-06T18:43:43,441 DEBUG kafka-coordinator-heartbeat-thread | hades org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendHeartbeatRequest() @704 - Sending Heartbeat request for group hades to coordinator 10.210.200.144:9092 (id: 2147483644 rack: null)
2017-05-06T18:43:43,542 DEBUG kafka-coordinator-heartbeat-thread | hades org.apache.kafka.clients.consumer.internals.AbstractCoordinator.handle() @726 - Attempt to heartbeat failed for group hades since it is rebalancing.
2017-05-06T18:43:45,488 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.onPartitionsRevoked() @254 - stream-thread [StreamThread-1] partitions [[poseidonIncidentFeed-21, poseidonIncidentFeed-38, poseidonIncidentFeed-6, poseidonIncidentFeed-12]] revoked at the beginning of consumer rebalance.
2017-05-06T18:43:46,446 DEBUG kafka-coordinator-heartbeat-thread | hades org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendHeartbeatRequest() @704 - Sending Heartbeat request for group hades to coordinator 10.210.200.144:9092 (id: 2147483644 rack: null)
2017-05-06T18:43:46,547 DEBUG kafka-coordinator-heartbeat-thread | hades org.apache.kafka.clients.consumer.internals.AbstractCoordinator.handle() @726 - Attempt to heartbeat failed for group hades since it is rebalancing.
2017-05-06T18:43:49,509 DEBUG kafka-coordinator-heartbeat-thread | hades org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendHeartbeatRequest() @704 - Sending Heartbeat request for group hades to coordinator 10.210.200.144:9092 (id: 2147483644 rack: null)
2017-05-06T18:43:49,612 DEBUG kafka-coordinator-heartbeat-thread | hades org.apache.kafka.clients.consumer.internals.AbstractCoordinator.handle() @726 - Attempt to heartbeat failed for group hades since it is rebalancing.
2017-05-06T18:43:45,667 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.suspendTasksAndState() @471 - stream-thread [StreamThread-1] suspendTasksAndState: suspending all active tasks [[0_21, 0_6, 0_38, 0_12]] and standby tasks [[]]
2017-05-06T18:44:15,796 INFO kafka-coordinator-heartbeat-thread | hades org.apache.kafka.clients.consumer.internals.AbstractCoordinator.coordinatorDead() @618 - Marking the coordinator 10.210.200.144:9092 (id: 2147483644 rack: null) dead for group hades
2017-05-06T18:44:16,160 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.apply() @1061 - stream-thread [StreamThread-1] Closing a task's topology 0_21
2017-05-06T18:44:18,744 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.stateMessageTopic-process
2017-05-06T18:44:18,744 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name process
2017-05-06T18:44:18,745 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.stateMessageProcessor-process
2017-05-06T18:44:18,745 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.sink-process
2017-05-06T18:44:18,745 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.stateMessageTopic-process
2017-05-06T18:44:18,745 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.stateMessageProcessor-process
2017-05-06T18:44:18,745 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.sink-process
2017-05-06T18:44:18,745 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.stateMessageTopic-process
2017-05-06T18:44:18,745 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.stateMessageProcessor-process
2017-05-06T18:44:18,746 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.sink-process
2017-05-06T18:44:18,746 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.stateMessageTopic-process
2017-05-06T18:44:18,746 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.stateMessageProcessor-process
2017-05-06T18:44:18,746 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.sink-process
2017-05-06T18:44:18,746 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.stateMessageTopic-punctuate
2017-05-06T18:44:18,746 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name punctuate
2017-05-06T18:44:18,747 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.stateMessageProcessor-punctuate
2017-05-06T18:44:18,747 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.sink-punctuate
2017-05-06T18:44:18,747 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.stateMessageTopic-punctuate
2017-05-06T18:44:18,747 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.stateMessageProcessor-punctuate
2017-05-06T18:44:18,747 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.sink-punctuate
2017-05-06T18:44:18,747 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.stateMessageTopic-punctuate
2017-05-06T18:44:18,747 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.stateMessageProcessor-punctuate
2017-05-06T18:44:18,748 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.sink-punctuate
2017-05-06T18:44:18,748 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.stateMessageTopic-punctuate
2017-05-06T18:44:18,748 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.stateMessageProcessor-punctuate
2017-05-06T18:44:18,748 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.sink-punctuate
2017-05-06T18:44:18,748 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.stateMessageTopic-forward
2017-05-06T18:44:18,748 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name forward
2017-05-06T18:44:18,748 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.stateMessageProcessor-forward
2017-05-06T18:44:18,749 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.sink-forward
2017-05-06T18:44:18,749 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.stateMessageTopic-forward
2017-05-06T18:44:18,749 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.stateMessageProcessor-forward
2017-05-06T18:44:18,749 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.sink-forward
2017-05-06T18:44:18,749 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.stateMessageTopic-forward
2017-05-06T18:44:18,749 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.stateMessageProcessor-forward
2017-05-06T18:44:18,749 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.sink-forward
2017-05-06T18:44:18,749 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.stateMessageTopic-forward
2017-05-06T18:44:18,749 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.stateMessageProcessor-forward
2017-05-06T18:44:18,750 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.sink-forward
2017-05-06T18:44:18,750 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.stateMessageTopic-create
2017-05-06T18:44:18,750 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name create
2017-05-06T18:44:18,750 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.stateMessageProcessor-create
2017-05-06T18:44:18,750 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.sink-create
2017-05-06T18:44:18,750 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.stateMessageTopic-create
2017-05-06T18:44:18,750 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.stateMessageProcessor-create
2017-05-06T18:44:18,751 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.sink-create
2017-05-06T18:44:18,751 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.stateMessageTopic-create
2017-05-06T18:44:18,751 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.stateMessageProcessor-create
2017-05-06T18:44:18,751 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.sink-create
2017-05-06T18:44:18,751 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.stateMessageTopic-create
2017-05-06T18:44:18,751 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.stateMessageProcessor-create
2017-05-06T18:44:18,751 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.sink-create
2017-05-06T18:44:18,752 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.stateMessageTopic-destroy
2017-05-06T18:44:18,752 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name destroy
2017-05-06T18:44:18,752 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.stateMessageProcessor-destroy
2017-05-06T18:44:18,752 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_21.sink-destroy
2017-05-06T18:44:18,752 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.stateMessageTopic-destroy
2017-05-06T18:44:18,752 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.stateMessageProcessor-destroy
2017-05-06T18:44:18,753 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_6.sink-destroy
2017-05-06T18:44:18,753 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.stateMessageTopic-destroy
2017-05-06T18:44:18,753 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.stateMessageProcessor-destroy
2017-05-06T18:44:18,753 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_12.sink-destroy
2017-05-06T18:44:18,753 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.stateMessageTopic-destroy
2017-05-06T18:44:18,753 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.stateMessageProcessor-destroy
2017-05-06T18:44:18,753 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.removeSensor() @368 - Removed sensor with name task.0_38.sink-destroy
2017-05-06T18:44:18,754 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.apply() @1061 - stream-thread [StreamThread-1] Closing a task's topology 0_6
2017-05-06T18:44:18,754 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.apply() @1061 - stream-thread [StreamThread-1] Closing a task's topology 0_38
2017-05-06T18:44:18,754 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.apply() @1061 - stream-thread [StreamThread-1] Closing a task's topology 0_12
2017-05-06T18:44:18,754 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.apply() @544 - stream-thread [StreamThread-1] Flushing state stores of task 0_21
2017-05-06T18:44:18,754 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush() @319 - task [0_21] Flushing all stores registered in the state manager
2017-05-06T18:44:18,754 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.RecordCollectorImpl.flush() @125 - task [0_21] Flushing producer
2017-05-06T18:44:18,826 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.apply() @544 - stream-thread [StreamThread-1] Flushing state stores of task 0_6
2017-05-06T18:44:18,826 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush() @319 - task [0_6] Flushing all stores registered in the state manager
2017-05-06T18:44:18,826 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.RecordCollectorImpl.flush() @125 - task [0_6] Flushing producer
2017-05-06T18:44:18,923 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.apply() @544 - stream-thread [StreamThread-1] Flushing state stores of task 0_38
2017-05-06T18:44:18,923 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush() @319 - task [0_38] Flushing all stores registered in the state manager
2017-05-06T18:44:18,923 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.RecordCollectorImpl.flush() @125 - task [0_38] Flushing producer
2017-05-06T18:44:18,923 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.apply() @544 - stream-thread [StreamThread-1] Flushing state stores of task 0_12
2017-05-06T18:44:18,923 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush() @319 - task [0_12] Flushing all stores registered in the state manager
2017-05-06T18:44:18,923 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.RecordCollectorImpl.flush() @125 - task [0_12] Flushing producer
2017-05-06T18:44:18,924 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.apply() @534 - stream-thread [StreamThread-1] Committing consumer offsets of task 0_21
2017-05-06T18:44:18,924 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.apply() @534 - stream-thread [StreamThread-1] Committing consumer offsets of task 0_6
2017-05-06T18:44:18,924 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.apply() @534 - stream-thread [StreamThread-1] Committing consumer offsets of task 0_38
2017-05-06T18:44:18,924 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.apply() @534 - stream-thread [StreamThread-1] Committing consumer offsets of task 0_12
2017-05-06T18:44:18,924 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.KafkaConsumer.unsubscribe() @899 - Unsubscribed all topics or patterns and assigned partitions
2017-05-06T18:44:18,924 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.updateSuspendedTasks() @1017 - stream-thread [StreamThread-1] Updating suspended tasks to contain active tasks [[0_21, 0_6, 0_38, 0_12]]
2017-05-06T18:44:18,924 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.removeStreamTasks() @1024 - stream-thread [StreamThread-1] Removing all active tasks [[0_21, 0_6, 0_38, 0_12]]
2017-05-06T18:44:18,924 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.removeStandbyTasks() @1039 - stream-thread [StreamThread-1] Removing all standby tasks [[]]
2017-05-06T18:44:19,591 DEBUG kafka-coordinator-heartbeat-thread | hades org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendGroupCoordinatorRequest() @548 - Sending GroupCoordinator request for group hades to broker 10.210.201.129:9092 (id: 2 rack: null)
2017-05-06T18:44:21,706 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onSuccess() @559 - Received GroupCoordinator response ClientResponse(receivedTimeMs=1494096261706, latencyMs=2115, disconnected=false, requestHeader={api_key=10,api_version=0,correlation_id=907657,client_id=hades-StreamThread-1-consumer}, responseBody={error_code=0,coordinator={node_id=3,host=10.210.200.144,port=9092}}) for group hades
2017-05-06T18:44:24,453 INFO StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onSuccess() @573 - Discovered coordinator 10.210.200.144:9092 (id: 2147483644 rack: null) for group hades.
2017-05-06T18:44:25,690 INFO StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest() @407 - (Re-)joining group hades
2017-05-06T18:44:34,265 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.subscription() @238 - stream-thread [StreamThread-1] found [poseidonIncidentFeed] topics possibly matching regex
2017-05-06T18:44:35,706 DEBUG StreamThread-1 org.apache.kafka.streams.processor.TopologyBuilder.updateSubscriptions() @1339 - stream-thread [StreamThread-1] updating builder with SubscriptionUpdates{updatedTopicSubscriptions=[poseidonIncidentFeed]} topic(s) with possible matching regex subscription(s)
2017-05-06T18:44:35,707 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest() @415 - Sending JoinGroup ((type: JoinGroupRequest, groupId=hades, sessionTimeout=10000, rebalanceTimeout=300000, memberId=hades-StreamThread-1-consumer-0f3b71eb-620f-4207-9d5f-21c69f77def1, protocolType=consumer, groupProtocols=org.apache.kafka.common.requests.JoinGroupRequest$ProtocolMetadata@392760cc)) to coordinator 10.210.200.144:9092 (id: 2147483644 rack: null)
2017-05-06T18:44:36,684 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.handle() @452 - Attempt to join group hades failed due to unknown member id.
2017-05-06T18:44:38,241 INFO StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest() @407 - (Re-)joining group hades
2017-05-06T18:44:42,990 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.subscription() @238 - stream-thread [StreamThread-1] found [poseidonIncidentFeed] topics possibly matching regex
2017-05-06T18:44:43,247 DEBUG StreamThread-1 org.apache.kafka.streams.processor.TopologyBuilder.updateSubscriptions() @1339 - stream-thread [StreamThread-1] updating builder with SubscriptionUpdates{updatedTopicSubscriptions=[poseidonIncidentFeed]} topic(s) with possible matching regex subscription(s)
2017-05-06T18:44:43,247 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest() @415 - Sending JoinGroup ((type: JoinGroupRequest, groupId=hades, sessionTimeout=10000, rebalanceTimeout=300000, memberId=, protocolType=consumer, groupProtocols=org.apache.kafka.common.requests.JoinGroupRequest$ProtocolMetadata@16bddfc9)) to coordinator 10.210.200.144:9092 (id: 2147483644 rack: null)
2017-05-06T18:44:45,418 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.handle() @425 - Received successful JoinGroup response for group hades: {error_code=0,generation_id=203,group_protocol=stream,leader_id=hades-StreamThread-1-consumer-ca64d5a4-5ef1-4680-b730-8e4a9c1bd2fa,member_id=hades-StreamThread-1-consumer-ca64d5a4-5ef1-4680-b730-8e4a9c1bd2fa,members=[{member_id=hades-StreamThread-1-consumer-ca64d5a4-5ef1-4680-b730-8e4a9c1bd2fa,member_metadata=java.nio.HeapByteBuffer[pos=0 lim=96 cap=96]}]}
2017-05-06T18:45:21,668 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.maybeUpdate() @751 - Sending metadata request (type=MetadataRequest, topics=) to node 3
2017-05-06T18:45:31,362 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.Metadata.update() @244 - Updated cluster metadata version 729 to Cluster(id = 0cLZL_ZaSdu3qosp7VVvoA, nodes = [10.210.200.144:9092 (id: 3 rack: null), 10.210.200.171:9092 (id: 1 rack: null), 10.210.201.129:9092 (id: 2 rack: null)], partitions = [])
2017-05-06T18:45:21,736 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment() @336 - Performing assignment for group hades using strategy stream with subscriptions {hades-StreamThread-1-consumer-ca64d5a4-5ef1-4680-b730-8e4a9c1bd2fa=Subscription(topics=[poseidonIncidentFeed])}
2017-05-06T18:50:35,919 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.maybeUpdate() @751 - Sending metadata request (type=MetadataRequest, topics=) to node 3
2017-05-06T21:08:38,816 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign() @290 - stream-thread [StreamThread-1] Constructed client metadata {8d510649-ddf3-41f1-86c9-c5b3c2c7a1b5=ClientMetadata{hostInfo=null, consumers=[hades-StreamThread-1-consumer-ca64d5a4-5ef1-4680-b730-8e4a9c1bd2fa], state=[activeTasks: ([]) assignedTasks: ([]) prevActiveTasks: ([0_21, 0_6, 0_38, 0_12]) prevAssignedTasks: ([0_21, 0_6, 0_38, 0_12]) capacity: 1.0 cost: 0.0]}} from the member subscriptions.
2017-05-07T00:01:54,379 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.prepareTopic() @596 - stream-thread [StreamThread-1] Starting to validate internal topics in partition assignor.
2017-05-07T00:01:54,379 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.prepareTopic() @630 - stream-thread [StreamThread-1] Completed validating internal topics in partition assignor
2017-05-07T00:01:54,380 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign() @376 - stream-thread [StreamThread-1] Created repartition topics [] from the parsed topology.
2017-05-06T20:54:07,189 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.common.network.Selector.pollSelectionKeys() @375 - Connection with 10.210.200.144/10.210.200.144 disconnected
java.io.EOFException
	at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:169) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:150) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:355) [kafka-clients-0.10.2.0-TEST.jar!/:?]
	at org.apache.kafka.common.network.Selector.poll(Selector.java:303) [kafka-clients-0.10.2.0-TEST.jar!/:?]
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349) [kafka-clients-0.10.2.0-TEST.jar!/:?]
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:225) [kafka-clients-0.10.2.0-TEST.jar!/:?]
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:126) [kafka-clients-0.10.2.0-TEST.jar!/:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
2017-05-07T00:01:55,100 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.prepareTopic() @596 - stream-thread [StreamThread-1] Starting to validate internal topics in partition assignor.
2017-05-07T00:01:55,637 DEBUG StreamThread-1 org.apache.kafka.clients.Metadata.update() @244 - Updated cluster metadata version 1 to Cluster(id = null, nodes = [10.210.200.144:9092 (id: -3 rack: null), 10.210.201.129:9092 (id: -2 rack: null), 10.210.200.171:9092 (id: -1 rack: null)], partitions = [])
2017-05-07T00:01:55,637 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.initiateConnect() @627 - Initiating connection to node -3 at 10.210.200.144:9092.
2017-05-07T00:01:56,295 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.handleDisconnections() @570 - Node 3 disconnected.
2017-05-07T00:01:56,295 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.initiateConnect() @627 - Initiating connection to node -2 at 10.210.201.129:9092.
2017-05-07T00:01:56,295 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.maybeUpdate() @767 - Initialize connection to node 1 for sending metadata request
2017-05-07T00:01:56,295 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.initiateConnect() @627 - Initiating connection to node 1 at 10.210.200.171:9092.
2017-05-07T00:01:56,296 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.initiateConnect() @627 - Initiating connection to node -1 at 10.210.200.171:9092.
2017-05-07T00:01:56,297 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.sensor() @335 - Added sensor with name node--3.bytes-sent
2017-05-07T00:01:56,855 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.sensor() @335 - Added sensor with name node--3.bytes-received
2017-05-07T00:01:56,856 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.sensor() @335 - Added sensor with name node--3.latency
2017-05-07T00:01:57,699 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.common.network.Selector.pollSelectionKeys() @339 - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1
2017-05-07T00:01:57,699 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.handleConnections() @590 - Completed connection to node 1. Fetching API versions.
2017-05-07T00:01:57,699 DEBUG StreamThread-1 org.apache.kafka.common.network.Selector.pollSelectionKeys() @339 - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -3
2017-05-07T00:01:57,699 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.handleInitiateApiVersionRequests() @603 - Initiating API versions fetch from node 1.
2017-05-07T00:01:57,699 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.sensor() @335 - Added sensor with name node--2.bytes-sent
2017-05-07T00:01:57,700 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.maybeUpdate() @767 - Initialize connection to node 2 for sending metadata request
2017-05-07T00:01:57,700 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.initiateConnect() @627 - Initiating connection to node 2 at 10.210.201.129:9092.
2017-05-07T00:01:57,700 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.sensor() @335 - Added sensor with name node--2.bytes-received
2017-05-07T00:01:57,700 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.sensor() @335 - Added sensor with name node--2.latency
2017-05-07T00:01:57,901 DEBUG StreamThread-1 org.apache.kafka.common.network.Selector.pollSelectionKeys() @339 - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -2
2017-05-07T00:01:57,902 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleConnections() @590 - Completed connection to node -3. Fetching API versions.
2017-05-07T00:01:57,902 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleConnections() @590 - Completed connection to node -2. Fetching API versions.
2017-05-07T00:01:57,903 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleInitiateApiVersionRequests() @603 - Initiating API versions fetch from node -2.
2017-05-07T00:01:57,903 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleInitiateApiVersionRequests() @603 - Initiating API versions fetch from node -3.
2017-05-07T00:01:59,018 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.sensor() @335 - Added sensor with name node--1.bytes-sent
2017-05-07T00:01:59,019 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.sensor() @335 - Added sensor with name node--1.bytes-received
2017-05-07T00:01:59,317 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.sensor() @335 - Added sensor with name node--1.latency
2017-05-07T00:02:00,089 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.common.network.Selector.pollSelectionKeys() @339 - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2
2017-05-07T00:02:00,089 DEBUG StreamThread-1 org.apache.kafka.common.network.Selector.pollSelectionKeys() @339 - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
2017-05-07T00:02:00,089 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleConnections() @590 - Completed connection to node -1. Fetching API versions.
2017-05-07T00:02:00,089 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleInitiateApiVersionRequests() @603 - Initiating API versions fetch from node -1.
2017-05-07T00:02:00,089 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.handleApiVersionsResponse() @558 - Recorded API versions for node 1: (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 to 1 [usable: 1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 [usable: 0], JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0])
2017-05-07T00:02:00,090 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.handleConnections() @590 - Completed connection to node 2. Fetching API versions.
2017-05-07T00:02:00,090 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.handleInitiateApiVersionRequests() @603 - Initiating API versions fetch from node 2.
2017-05-07T00:02:00,090 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.maybeUpdate() @751 - Sending metadata request (type=MetadataRequest, topics=) to node 1
2017-05-07T00:02:00,903 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleApiVersionsResponse() @558 - Recorded API versions for node -2: (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 to 1 [usable: 1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 [usable: 0], JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0])
2017-05-07T00:02:00,904 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.Metadata.update() @244 - Updated cluster metadata version 730 to Cluster(id = 0cLZL_ZaSdu3qosp7VVvoA, nodes = [10.210.200.171:9092 (id: 1 rack: null), 10.210.201.129:9092 (id: 2 rack: null), 10.210.200.144:9092 (id: 3 rack: null)], partitions = [])
2017-05-07T00:02:00,904 DEBUG kafka-producer-network-thread | hades-StreamThread-1-producer org.apache.kafka.clients.NetworkClient.handleApiVersionsResponse() @558 - Recorded API versions for node 2: (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 to 1 [usable: 1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 [usable: 0], JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0])
2017-05-07T00:02:00,904 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleApiVersionsResponse() @558 - Recorded API versions for node -3: (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 to 1 [usable: 1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 [usable: 0], JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0])
2017-05-07T00:02:00,905 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.maybeUpdate() @751 - Sending metadata request (type=MetadataRequest, topics=) to node -3
2017-05-07T00:02:00,905 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleApiVersionsResponse() @558 - Recorded API versions for node -1: (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 to 1 [usable: 1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 [usable: 0], JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0])
2017-05-07T00:02:00,906 DEBUG StreamThread-1 org.apache.kafka.clients.Metadata.update() @244 - Updated cluster metadata version 2 to Cluster(id = 0cLZL_ZaSdu3qosp7VVvoA, nodes = [10.210.201.129:9092 (id: 2 rack: null), 10.210.200.144:9092 (id: 3 rack: null), 10.210.200.171:9092 (id: 1 rack: null)], partitions = [])
2017-05-07T00:02:00,919 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.initiateConnect() @627 - Initiating connection to node 1 at 10.210.200.171:9092.
2017-05-07T00:02:00,922 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.sensor() @335 - Added sensor with name node-1.bytes-sent
2017-05-07T00:02:00,922 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.sensor() @335 - Added sensor with name node-1.bytes-received
2017-05-07T00:02:00,922 DEBUG StreamThread-1 org.apache.kafka.common.metrics.Metrics.sensor() @335 - Added sensor with name node-1.latency
2017-05-07T00:02:00,924 DEBUG StreamThread-1 org.apache.kafka.common.network.Selector.pollSelectionKeys() @339 - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1
2017-05-07T00:02:00,924 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleConnections() @590 - Completed connection to node 1. Fetching API versions.
2017-05-07T00:02:00,925 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleInitiateApiVersionRequests() @603 - Initiating API versions fetch from node 1.
2017-05-07T00:02:00,926 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleApiVersionsResponse() @558 - Recorded API versions for node 1: (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 to 1 [usable: 1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 [usable: 0], JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0])
2017-05-07T00:02:01,161 DEBUG StreamThread-1 org.apache.kafka.clients.Metadata.update() @244 - Updated cluster metadata version 1 to Cluster(id = null, nodes = [10.210.201.129:9092 (id: -2 rack: null), 10.210.200.144:9092 (id: -3 rack: null), 10.210.200.171:9092 (id: -1 rack: null)], partitions = [])
2017-05-07T00:02:01,268 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.prepareTopic() @630 - stream-thread [StreamThread-1] Completed validating internal topics in partition assignor
2017-05-07T00:02:01,268 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign() @451 - stream-thread [StreamThread-1] Created state changelog topics {hades-hadesHistoryStore-changelog=org.apache.kafka.streams.processor.internals.StreamPartitionAssignor$InternalTopicMetadata@52d4c821} from the parsed topology.
2017-05-07T00:02:01,334 DEBUG StreamThread-1 org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign() @461 - stream-thread [StreamThread-1] Assigning tasks [0_0, 0_1, 0_2, 0_3, 0_4, 0_5, 0_6, 0_7, 0_8, 0_9, 0_10, 0_11, 0_12, 0_13, 0_14, 0_15, 0_16, 0_17, 0_18, 0_19, 0_20, 0_21, 0_22, 0_23, 0_24, 0_25, 0_26, 0_27, 0_28, 0_29, 0_30, 0_31, 0_32, 0_33, 0_34, 0_35, 0_36, 0_37, 0_38, 0_39] to clients {8d510649-ddf3-41f1-86c9-c5b3c2c7a1b5=[activeTasks: ([]) assignedTasks: ([]) prevActiveTasks: ([0_21, 0_6, 0_38, 0_12]) prevAssignedTasks: ([0_21, 0_6, 0_38, 0_12]) capacity: 1.0 cost: 0.0]} with number of replicas 0
2017-05-07T00:02:02,996 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign() @466 - stream-thread [StreamThread-1] Assigned tasks to clients as {8d510649-ddf3-41f1-86c9-c5b3c2c7a1b5=[activeTasks: ([0_0, 0_1, 0_2, 0_3, 0_4, 0_5, 0_6, 0_7, 0_8, 0_9, 0_10, 0_11, 0_12, 0_13, 0_14, 0_15, 0_16, 0_17, 0_18, 0_19, 0_20, 0_21, 0_22, 0_23, 0_24, 0_25, 0_26, 0_27, 0_28, 0_29, 0_30, 0_31, 0_32, 0_33, 0_34, 0_35, 0_36, 0_37, 0_38, 0_39]) assignedTasks: ([0_0, 0_1, 0_2, 0_3, 0_4, 0_5, 0_6, 0_7, 0_8, 0_9, 0_10, 0_11, 0_12, 0_13, 0_14, 0_15, 0_16, 0_17, 0_18, 0_19, 0_20, 0_21, 0_22, 0_23, 0_24, 0_25, 0_26, 0_27, 0_28, 0_29, 0_30, 0_31, 0_32, 0_33, 0_34, 0_35, 0_36, 0_37, 0_38, 0_39]) prevActiveTasks: ([]) prevAssignedTasks: ([]) capacity: 1.0 cost: 18.4]}.
2017-05-07T00:02:03,290 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment() @375 - Finished assignment for group hades: {hades-StreamThread-1-consumer-ca64d5a4-5ef1-4680-b730-8e4a9c1bd2fa=Assignment(partitions=[poseidonIncidentFeed-0, poseidonIncidentFeed-1, poseidonIncidentFeed-2, poseidonIncidentFeed-3, poseidonIncidentFeed-4, poseidonIncidentFeed-5, poseidonIncidentFeed-6, poseidonIncidentFeed-7, poseidonIncidentFeed-8, poseidonIncidentFeed-9, poseidonIncidentFeed-10, poseidonIncidentFeed-11, poseidonIncidentFeed-12, poseidonIncidentFeed-13, poseidonIncidentFeed-14, poseidonIncidentFeed-15, poseidonIncidentFeed-16, poseidonIncidentFeed-17, poseidonIncidentFeed-18, poseidonIncidentFeed-19, poseidonIncidentFeed-20, poseidonIncidentFeed-21, poseidonIncidentFeed-22, poseidonIncidentFeed-23, poseidonIncidentFeed-24, poseidonIncidentFeed-25, poseidonIncidentFeed-26, poseidonIncidentFeed-27, poseidonIncidentFeed-28, poseidonIncidentFeed-29, poseidonIncidentFeed-30, poseidonIncidentFeed-31, poseidonIncidentFeed-32, poseidonIncidentFeed-33, poseidonIncidentFeed-34, poseidonIncidentFeed-35, poseidonIncidentFeed-36, poseidonIncidentFeed-37, poseidonIncidentFeed-38, poseidonIncidentFeed-39])}
2017-05-07T00:02:03,291 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader() @493 - Sending leader SyncGroup for group hades to coordinator 10.210.200.144:9092 (id: 2147483644 rack: null): (type=SyncGroupRequest, groupId=hades, generationId=203, memberId=hades-StreamThread-1-consumer-ca64d5a4-5ef1-4680-b730-8e4a9c1bd2fa, groupAssignment=hades-StreamThread-1-consumer-ca64d5a4-5ef1-4680-b730-8e4a9c1bd2fa)
2017-05-07T00:02:03,291 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.maybeUpdate() @751 - Sending metadata request (type=MetadataRequest, topics=) to node 1
2017-05-07T00:02:03,293 DEBUG StreamThread-1 org.apache.kafka.common.network.Selector.pollSelectionKeys() @375 - Connection with 10.210.200.144/10.210.200.144 disconnected
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:169) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:150) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:355) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:303) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:172) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:334) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582) [kafka-streams-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368) [kafka-streams-0.10.2.0-TEST.jar!/:?]
2017-05-07T00:02:03,295 DEBUG StreamThread-1 org.apache.kafka.common.network.Selector.pollSelectionKeys() @375 - Connection with 10.210.200.171/10.210.200.171 disconnected
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:169) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:150) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:355) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:303) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:172) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:334) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582) [kafka-streams-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368) [kafka-streams-0.10.2.0-TEST.jar!/:?]
2017-05-07T00:02:03,298 DEBUG StreamThread-1 org.apache.kafka.common.network.Selector.pollSelectionKeys() @375 - Connection with trdkafbb02.dev.williamhill.plc/10.210.201.129 disconnected
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:169) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:150) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:355) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:303) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:172) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:334) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582) [kafka-streams-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368) [kafka-streams-0.10.2.0-TEST.jar!/:?]
2017-05-07T00:02:03,299 DEBUG StreamThread-1 org.apache.kafka.common.network.Selector.pollSelectionKeys() @375 - Connection with 10.210.200.144/10.210.200.144 disconnected
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:169) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:150) ~[kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:355) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:303) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:172) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:334) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995) [kafka-clients-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582) [kafka-streams-0.10.2.0-TEST.jar!/:?]
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368) [kafka-streams-0.10.2.0-TEST.jar!/:?]
2017-05-07T00:02:03,300 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleDisconnections() @570 - Node 2147483644 disconnected.
2017-05-07T00:02:03,300 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleDisconnections() @570 - Node 1 disconnected.
2017-05-07T00:02:03,300 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleDisconnections() @570 - Node 2 disconnected.
2017-05-07T00:02:03,301 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleDisconnections() @570 - Node 3 disconnected.
2017-05-07T00:02:03,301 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.fireCompletion() @487 - Cancelled SYNC_GROUP request {api_key=14,api_version=0,correlation_id=907660,client_id=hades-StreamThread-1-consumer} with correlation id 907660 due to node 2147483644 being disconnected
2017-05-07T00:02:03,301 INFO StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.coordinatorDead() @618 - Marking the coordinator 10.210.200.144:9092 (id: 2147483644 rack: null) dead for group hades
2017-05-07T00:02:03,402 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.Fetcher.sendFetches() @180 - Sending fetch for partitions [poseidonIncidentFeed-12, poseidonIncidentFeed-21, poseidonIncidentFeed-6] to broker 10.210.200.171:9092 (id: 1 rack: null)
2017-05-07T00:02:03,402 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.Fetcher.sendFetches() @180 - Sending fetch for partitions [poseidonIncidentFeed-38] to broker 10.210.200.144:9092 (id: 3 rack: null)
2017-05-07T00:02:03,403 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.initiateConnect() @627 - Initiating connection to node 1 at 10.210.200.171:9092.
2017-05-07T00:02:03,403 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.initiateConnect() @627 - Initiating connection to node 3 at 10.210.200.144:9092.
2017-05-07T00:02:03,406 DEBUG StreamThread-1 org.apache.kafka.common.network.Selector.pollSelectionKeys() @339 - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1
2017-05-07T00:02:03,406 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleConnections() @590 - Completed connection to node 1. Fetching API versions.
2017-05-07T00:02:03,406 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleInitiateApiVersionRequests() @603 - Initiating API versions fetch from node 1.
2017-05-07T00:02:03,407 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit() @742 - stream-thread [StreamThread-1] Committing all tasks because the commit interval 1000ms has elapsed
2017-05-07T00:02:03,483 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendGroupCoordinatorRequest() @548 - Sending GroupCoordinator request for group hades to broker 10.210.200.144:9092 (id: 3 rack: null)
2017-05-07T00:02:03,624 DEBUG StreamThread-1 org.apache.kafka.common.network.Selector.pollSelectionKeys() @339 - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 3
2017-05-07T00:02:03,625 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleConnections() @590 - Completed connection to node 3. Fetching API versions.
2017-05-07T00:02:03,625 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleInitiateApiVersionRequests() @603 - Initiating API versions fetch from node 3.
2017-05-07T00:02:03,625 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.maybeUpdate() @767 - Initialize connection to node 2 for sending metadata request
2017-05-07T00:02:03,625 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.initiateConnect() @627 - Initiating connection to node 2 at 10.210.201.129:9092.
2017-05-07T00:02:04,869 DEBUG StreamThread-1 org.apache.kafka.common.network.Selector.pollSelectionKeys() @339 - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2
2017-05-07T00:02:04,870 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleApiVersionsResponse() @558 - Recorded API versions for node 1: (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 to 1 [usable: 1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 [usable: 0], JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0])
2017-05-07T00:02:04,871 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleConnections() @590 - Completed connection to node 2. Fetching API versions.
2017-05-07T00:02:04,871 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleInitiateApiVersionRequests() @603 - Initiating API versions fetch from node 2.
2017-05-07T00:02:04,871 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.maybeUpdate() @751 - Sending metadata request (type=MetadataRequest, topics=) to node 1
2017-05-07T00:02:04,873 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleApiVersionsResponse() @558 - Recorded API versions for node 3: (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 to 1 [usable: 1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 [usable: 0], JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0])
2017-05-07T00:02:04,874 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleApiVersionsResponse() @558 - Recorded API versions for node 2: (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 to 1 [usable: 1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 [usable: 0], JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0])
2017-05-07T00:02:04,880 DEBUG StreamThread-1 org.apache.kafka.clients.Metadata.update() @244 - Updated cluster metadata version 688 to Cluster(id = 0cLZL_ZaSdu3qosp7VVvoA, nodes = [10.210.200.171:9092 (id: 1 rack: null), 10.210.201.129:9092 (id: 2 rack: null), 10.210.200.144:9092 (id: 3 rack: null)], partitions = [Partition(topic = poseidonIncidentFeed, partition = 24, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 22, leader = 2, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 28, leader = 2, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 26, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 16, leader = 2, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 14, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 20, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 18, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 39, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 8, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 37, leader = 2, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 6, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 12, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 10, leader = 2, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 31, leader = 2, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 0, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 29, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 35, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 4, leader = 2, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 33, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 2, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 23, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 21, leader = 1, replicas = [1,2,3], isr = [1,3,2]), Partition(topic = poseidonIncidentFeed, partition = 27, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 25, leader = 2, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 15, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 13, leader = 2, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 19, leader = 2, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 17, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 7, leader = 2, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 5, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 38, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 11, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 9, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 32, leader = 3, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 30, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 3, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 36, leader = 1, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 1, leader = 2, replicas = [1,2,3], isr = [1,2,3]), Partition(topic = poseidonIncidentFeed, partition = 34, leader = 2, replicas = [1,2,3], isr = [1,2,3])])
2017-05-07T00:02:05,379 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onSuccess() @559 - Received GroupCoordinator response ClientResponse(receivedTimeMs=1494115325379, latencyMs=1896, disconnected=false, requestHeader={api_key=10,api_version=0,correlation_id=907665,client_id=hades-StreamThread-1-consumer}, responseBody={error_code=0,coordinator={node_id=3,host=10.210.200.144,port=9092}}) for group hades
2017-05-07T00:02:05,380 INFO StreamThread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onSuccess() @573 - Discovered coordinator 10.210.200.144:9092 (id: 2147483644 rack: null) for group hades.
2017-05-07T00:02:05,380 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.initiateConnect() @627 - Initiating connection to node 2147483644 at 10.210.200.144:9092.
2017-05-07T00:02:05,381 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.Fetcher.sendFetches() @180 - Sending fetch for partitions [poseidonIncidentFeed-38] to broker 10.210.200.144:9092 (id: 3 rack: null)
2017-05-07T00:02:05,384 DEBUG StreamThread-1 org.apache.kafka.common.network.Selector.pollSelectionKeys() @339 - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2147483644
2017-05-07T00:02:05,385 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleConnections() @590 - Completed connection to node 2147483644. Fetching API versions.
2017-05-07T00:02:05,385 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleInitiateApiVersionRequests() @603 - Initiating API versions fetch from node 2147483644.
2017-05-07T00:02:05,385 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit() @742 - stream-thread [StreamThread-1] Committing all tasks because the commit interval 1000ms has elapsed 2017-05-07T00:02:05,385 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.Fetcher.sendFetches() @180 - Sending fetch for partitions [poseidonIncidentFeed-12, poseidonIncidentFeed-21, poseidonIncidentFeed-6] to broker 10.210.200.171:9092 (id: 1 rack: null) 2017-05-07T00:02:05,388 DEBUG StreamThread-1 org.apache.kafka.clients.NetworkClient.handleApiVersionsResponse() @558 - Recorded API versions for node 2147483644: (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 3 [usable: 3], Offsets(2): 0 to 1 [usable: 1], Metadata(3): 0 to 2 [usable: 2], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], OffsetFetch(9): 0 to 2 [usable: 2], GroupCoordinator(10): 0 [usable: 0], JoinGroup(11): 0 to 1 [usable: 1], Heartbeat(12): 0 [usable: 0], LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], CreateTopics(19): 0 to 1 [usable: 1], DeleteTopics(20): 0 [usable: 0]) 2017-05-07T00:02:05,899 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.Fetcher.sendFetches() @180 - Sending fetch for partitions [poseidonIncidentFeed-12, poseidonIncidentFeed-21, poseidonIncidentFeed-6] to broker 10.210.200.171:9092 (id: 1 rack: null) 2017-05-07T00:02:05,900 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.Fetcher.sendFetches() @180 - Sending fetch for partitions [poseidonIncidentFeed-38] to broker 10.210.200.144:9092 (id: 3 rack: null) 2017-05-07T00:02:06,401 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit() @742 - stream-thread 
[StreamThread-1] Committing all tasks because the commit interval 1000ms has elapsed 2017-05-07T00:02:06,402 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.Fetcher.sendFetches() @180 - Sending fetch for partitions [poseidonIncidentFeed-12, poseidonIncidentFeed-21, poseidonIncidentFeed-6] to broker 10.210.200.171:9092 (id: 1 rack: null) 2017-05-07T00:02:06,414 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.Fetcher.sendFetches() @180 - Sending fetch for partitions [poseidonIncidentFeed-38] to broker 10.210.200.144:9092 (id: 3 rack: null) 2017-05-07T00:02:06,906 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.Fetcher.sendFetches() @180 - Sending fetch for partitions [poseidonIncidentFeed-12, poseidonIncidentFeed-21, poseidonIncidentFeed-6] to broker 10.210.200.171:9092 (id: 1 rack: null) 2017-05-07T00:02:06,917 DEBUG StreamThread-1 org.apache.kafka.clients.consumer.internals.Fetcher.sendFetches() @180 - Sending fetch for partitions [poseidonIncidentFeed-38] to broker 10.210.200.144:9092 (id: 3 rack: null) 2017-05-07T00:02:07,404 INFO StreamThread-1 org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit() @742 - stream-thread [StreamThread-1] Committing all tasks because the commit interval 1000ms has elapsed