15:55:44 [DEBUG] Logging$class.debug - Accepted connection from /127.0.0.1:64676 on /127.0.0.1:9092 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (Logging.scala:54)
15:55:44 [DEBUG] Logging$class.debug - Processor 0 listening to new connection from /127.0.0.1:64676 (Logging.scala:54)
15:55:44 [INFO ] Logging$class.info - [BrokerChangeListener on Controller 1]: Newly added brokers: 1, deleted brokers: , all live brokers: 1 (Logging.scala:70)
15:55:44 [DEBUG] Logging$class.debug - [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (Logging.scala:54)
15:55:44 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=0,api_version=2,correlation_id=0,client_id=} -- {acks=0,timeout=0,topic_data=[]} (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=18,api_version=0,correlation_id=1,client_id=consumer-1} -- {} (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(0,127.0.0.1:9092-127.0.0.1:64676,Session(User:ANONYMOUS,/127.0.0.1),null,1491400544109,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=18,api_version=0,correlation_id=1,client_id=consumer-1} -- {} from connection 127.0.0.1:9092-127.0.0.1:64676;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:44 [INFO ] Logging$class.info - [Controller 1]: New broker startup callback for 1 (Logging.scala:70)
15:55:44 [INFO ] Logging$class.info - [Controller-1-to-broker-1-send-thread], Starting (Logging.scala:70)
15:55:44 [INFO ] Logging$class.info - [Controller-1-to-broker-1-send-thread], Controller 1 connected to ISI050.utenze.BANKIT.IT:9092 (id: 1 rack: null) for sending state change requests (Logging.scala:70)
15:55:44 [DEBUG] Logging$class.debug - Accepted connection from /10.36.240.33:64679 on /10.36.240.33:9092 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (Logging.scala:54)
15:55:44 [DEBUG] Logging$class.debug - Processor 1 listening to new connection from /10.36.240.33:64679 (Logging.scala:54)
15:55:44 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=6,api_version=3,correlation_id=0,client_id=1} -- {controller_id=1,controller_epoch=1,partition_states=[],live_brokers=[{id=1,end_points=[{port=9092,host=ISI050.utenze.BANKIT.IT,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]} (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(1,10.36.240.33:9092-10.36.240.33:64679,Session(User:ANONYMOUS,/10.36.240.33),null,1491400544164,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,127.0.0.1:9092-127.0.0.1:64676,Session(User:ANONYMOUS,/127.0.0.1),null,1491400544109,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@617779bb,SendAction) (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=6,api_version=3,correlation_id=0,client_id=1} -- {controller_id=1,controller_epoch=1,partition_states=[],live_brokers=[{id=1,end_points=[{port=9092,host=ISI050.utenze.BANKIT.IT,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]} from connection 10.36.240.33:9092-10.36.240.33:64679;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(1,Request(1,10.36.240.33:9092-10.36.240.33:64679,Session(User:ANONYMOUS,/10.36.240.33),null,1491400544164,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@49d386c4,SendAction) (Logging.scala:36)
15:55:44 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=18,api_version=0,correlation_id=1,client_id=consumer-1} -- {} from connection 127.0.0.1:9092-127.0.0.1:64676;totalTime:64,requestQueueTime:7,localTime:56,remoteTime:0,responseQueueTime:3,sendTime:3,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:44 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=3,api_version=2,correlation_id=2,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:44 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=6,api_version=3,correlation_id=0,client_id=1} -- {controller_id=1,controller_epoch=1,partition_states=[],live_brokers=[{id=1,end_points=[{port=9092,host=ISI050.utenze.BANKIT.IT,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]} from connection 10.36.240.33:9092-10.36.240.33:64679;totalTime:34,requestQueueTime:3,localTime:29,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:44 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(0,127.0.0.1:9092-127.0.0.1:64676,Session(User:ANONYMOUS,/127.0.0.1),null,1491400544267,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=2,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 127.0.0.1:9092-127.0.0.1:64676;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:44 [INFO ] Logging$class.info - Topic creation {"version":1,"partitions":{"0":[1]}} (Logging.scala:70)
15:55:44 [DEBUG] Logging$class.debug - Updated path /brokers/topics/testOutputTopic with {"version":1,"partitions":{"0":[1]}} for replica assignment (Logging.scala:54)
15:55:44 [INFO ] Logging$class.info - [KafkaApi-1] Auto creation of topic testOutputTopic with 1 partitions and replication factor 1 is successful (Logging.scala:70)
15:55:44 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@69419955 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 2 to client consumer-1 (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,127.0.0.1:9092-127.0.0.1:64676,Session(User:ANONYMOUS,/127.0.0.1),null,1491400544267,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@d25c9c,SendAction) (Logging.scala:36)
15:55:44 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=2,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 127.0.0.1:9092-127.0.0.1:64676;totalTime:240,requestQueueTime:8,localTime:230,remoteTime:0,responseQueueTime:0,sendTime:2,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:44 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=10,api_version=0,correlation_id=0,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(0,127.0.0.1:9092-127.0.0.1:64676,Session(User:ANONYMOUS,/127.0.0.1),null,1491400544507,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=0,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 127.0.0.1:9092-127.0.0.1:64676;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:44 [DEBUG] Logging$class.debug - [TopicChangeListener on Controller 1]: Topic change listener fired for path /brokers/topics with children testOutputTopic (Logging.scala:54)
15:55:44 [DEBUG] Logging$class.debug - Replicas assigned to topic [testOutputTopic], partition [0] are [List(1)] (Logging.scala:54)
15:55:44 [INFO ] Logging$class.info - [TopicChangeListener on Controller 1]: New topics: [Set(testOutputTopic)], deleted topics: [Set()], new partition replica assignment [Map([testOutputTopic,0] -> List(1))] (Logging.scala:70)
15:55:44 [INFO ] Logging$class.info - [Controller 1]: New topic creation callback for [testOutputTopic,0] (Logging.scala:70)
15:55:44 [INFO ] Logging$class.info - [Controller 1]: New partition creation callback for [testOutputTopic,0] (Logging.scala:70)
15:55:44 [INFO ] Logging$class.info - [Partition state machine on Controller 1]: Invoking state change to NewPartition for partitions [testOutputTopic,0] (Logging.scala:70)
15:55:44 [INFO ] Logging$class.info - [Replica state machine on controller 1]: Invoking state change to NewReplica for replicas [Topic=testOutputTopic,Partition=0,Replica=1] (Logging.scala:70)
15:55:44 [INFO ] Logging$class.info - [Partition state machine on Controller 1]: Invoking state change to OnlinePartition for partitions [testOutputTopic,0] (Logging.scala:70)
15:55:44 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [testOutputTopic,0] are: [List(1)] (Logging.scala:54)
15:55:44 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [testOutputTopic,0] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:44 [INFO ] Logging$class.info - Topic creation {"version":1,"partitions":{"45":[1],"34":[1],"12":[1],"8":[1],"19":[1],"23":[1],"4":[1],"40":[1],"15":[1],"11":[1],"9":[1],"44":[1],"33":[1],"22":[1],"26":[1],"37":[1],"13":[1],"46":[1],"24":[1],"35":[1],"16":[1],"5":[1],"10":[1],"48":[1],"21":[1],"43":[1],"32":[1],"49":[1],"6":[1],"36":[1],"1":[1],"39":[1],"17":[1],"25":[1],"14":[1],"47":[1],"31":[1],"42":[1],"0":[1],"20":[1],"27":[1],"2":[1],"38":[1],"18":[1],"30":[1],"7":[1],"29":[1],"41":[1],"3":[1],"28":[1]}} (Logging.scala:70)
15:55:44 [DEBUG] Logging$class.debug - Updated path /brokers/topics/__consumer_offsets with {"version":1,"partitions":{"45":[1],"34":[1],"12":[1],"8":[1],"19":[1],"23":[1],"4":[1],"40":[1],"15":[1],"11":[1],"9":[1],"44":[1],"33":[1],"22":[1],"26":[1],"37":[1],"13":[1],"46":[1],"24":[1],"35":[1],"16":[1],"5":[1],"10":[1],"48":[1],"21":[1],"43":[1],"32":[1],"49":[1],"6":[1],"36":[1],"1":[1],"39":[1],"17":[1],"25":[1],"14":[1],"47":[1],"31":[1],"42":[1],"0":[1],"20":[1],"27":[1],"2":[1],"38":[1],"18":[1],"30":[1],"7":[1],"29":[1],"41":[1],"3":[1],"28":[1]}} for replica assignment (Logging.scala:54)
15:55:44 [INFO ] Logging$class.info - [KafkaApi-1] Auto creation of topic __consumer_offsets with 50 partitions and replication factor 1 is successful (Logging.scala:70)
15:55:44 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 0 to client consumer-1. (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,127.0.0.1:9092-127.0.0.1:64676,Session(User:ANONYMOUS,/127.0.0.1),null,1491400544507,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@447610ed,SendAction) (Logging.scala:36)
15:55:44 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=0,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 127.0.0.1:9092-127.0.0.1:64676;totalTime:366,requestQueueTime:2,localTime:362,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:44 [DEBUG] Logging$class.debug - Accepted connection from /10.36.240.33:64680 on /10.36.240.33:9092 and assigned it to processor 2, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (Logging.scala:54)
15:55:44 [DEBUG] Logging$class.debug - Processor 2 listening to new connection from /10.36.240.33:64680 (Logging.scala:54)
15:55:44 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=18,api_version=0,correlation_id=3,client_id=consumer-1} -- {} (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400544892,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=18,api_version=0,correlation_id=3,client_id=consumer-1} -- {} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400544892,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@22578fc5,SendAction) (Logging.scala:36)
15:55:44 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=18,api_version=0,correlation_id=3,client_id=consumer-1} -- {} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:4,requestQueueTime:1,localTime:2,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:44 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=4,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400544898,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=4,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:44 [INFO ] Logging$class.info - [Replica state machine on controller 1]: Invoking state change to OnlineReplica for replicas [Topic=testOutputTopic,Partition=0,Replica=1] (Logging.scala:70)
15:55:44 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=4,api_version=0,correlation_id=1,client_id=1} -- {controller_id=1,controller_epoch=1,partition_states=[{topic=testOutputTopic,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_leaders=[{id=1,host=ISI050.utenze.BANKIT.IT,port=9092}]} (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(1,10.36.240.33:9092-10.36.240.33:64679,Session(User:ANONYMOUS,/10.36.240.33),null,1491400544985,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:44 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=4,api_version=0,correlation_id=1,client_id=1} -- {controller_id=1,controller_epoch=1,partition_states=[{topic=testOutputTopic,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_leaders=[{id=1,host=ISI050.utenze.BANKIT.IT,port=9092}]} from connection 10.36.240.33:9092-10.36.240.33:64679;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:45 [INFO ] Logging$class.info - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions testOutputTopic-0 (Logging.scala:70)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@5d473ab1 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 4 to client consumer-1 (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400544898,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@675515eb,SendAction) (Logging.scala:36)
15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=4,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:136,requestQueueTime:1,localTime:134,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:45 [DEBUG] Logging$class.debug - [TopicChangeListener on Controller 1]: Topic change listener fired for path /brokers/topics with children testOutputTopic,__consumer_offsets (Logging.scala:54)
15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=5,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545037,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=5,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [45] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [34] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [12] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [8] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [19] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [23] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [4] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [40] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [15] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [11] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [9] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [44] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [33] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [22] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [26] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [37] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [13] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [46] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [24] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [35] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [16] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [5] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [10] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [48] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [21] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [43] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [32] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [49] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [6] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [36] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [1] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [39] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [17] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [25] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [14] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [47] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [31] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [42] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [0] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [20] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [27] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [2] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [38] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [18] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [30] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [7] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [29] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [41] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [3] are [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - Replicas assigned to topic [__consumer_offsets], partition [28] are [List(1)] (Logging.scala:54)
15:55:45 [INFO ] Logging$class.info - [TopicChangeListener on Controller 1]: New topics: [Set(__consumer_offsets)], deleted topics: [Set()], new partition replica assignment [Map([__consumer_offsets,19] -> List(1), [__consumer_offsets,30] -> List(1), [__consumer_offsets,47] -> List(1), [__consumer_offsets,29] -> List(1), [__consumer_offsets,41] -> List(1), [__consumer_offsets,39] -> List(1), [__consumer_offsets,10] -> List(1), [__consumer_offsets,17] -> List(1), [__consumer_offsets,14] -> List(1), [__consumer_offsets,40] -> List(1), [__consumer_offsets,18] -> List(1), [__consumer_offsets,26] -> List(1), [__consumer_offsets,0] -> List(1), [__consumer_offsets,24] -> List(1), [__consumer_offsets,33] -> List(1), [__consumer_offsets,20] -> List(1), [__consumer_offsets,21] -> List(1), [__consumer_offsets,3] -> List(1), [__consumer_offsets,5] -> List(1), [__consumer_offsets,22] -> List(1), [__consumer_offsets,12] -> List(1), [__consumer_offsets,8] -> List(1), [__consumer_offsets,23] -> List(1), [__consumer_offsets,15] -> List(1), [__consumer_offsets,48] -> List(1), [__consumer_offsets,11] -> List(1), [__consumer_offsets,13] -> List(1), [__consumer_offsets,49] -> List(1), [__consumer_offsets,6] -> List(1), [__consumer_offsets,28] -> List(1), [__consumer_offsets,4] -> List(1), [__consumer_offsets,37] -> List(1), [__consumer_offsets,31] -> List(1), [__consumer_offsets,44] -> List(1), [__consumer_offsets,42] -> List(1), [__consumer_offsets,34] -> List(1), [__consumer_offsets,46] -> List(1), [__consumer_offsets,25] -> List(1), [__consumer_offsets,45] -> List(1), [__consumer_offsets,27] -> List(1), [__consumer_offsets,32] -> List(1), [__consumer_offsets,43] -> List(1), [__consumer_offsets,36] -> List(1), [__consumer_offsets,35] -> List(1), [__consumer_offsets,7] -> List(1), [__consumer_offsets,9] -> List(1), [__consumer_offsets,38] -> List(1), [__consumer_offsets,1] -> List(1), [__consumer_offsets,16] -> List(1), [__consumer_offsets,2] -> List(1))] (Logging.scala:70)
15:55:45 [INFO ] Logging$class.info - [Controller 1]: New topic creation callback for [__consumer_offsets,19],[__consumer_offsets,30],[__consumer_offsets,47],[__consumer_offsets,29],[__consumer_offsets,41],[__consumer_offsets,39],[__consumer_offsets,10],[__consumer_offsets,17],[__consumer_offsets,14],[__consumer_offsets,40],[__consumer_offsets,18],[__consumer_offsets,26],[__consumer_offsets,0],[__consumer_offsets,24],[__consumer_offsets,33],[__consumer_offsets,20],[__consumer_offsets,21],[__consumer_offsets,3],[__consumer_offsets,5],[__consumer_offsets,22],[__consumer_offsets,12],[__consumer_offsets,8],[__consumer_offsets,23],[__consumer_offsets,15],[__consumer_offsets,48],[__consumer_offsets,11],[__consumer_offsets,13],[__consumer_offsets,49],[__consumer_offsets,6],[__consumer_offsets,28],[__consumer_offsets,4],[__consumer_offsets,37],[__consumer_offsets,31],[__consumer_offsets,44],[__consumer_offsets,42],[__consumer_offsets,34],[__consumer_offsets,46],[__consumer_offsets,25],[__consumer_offsets,45],[__consumer_offsets,27],[__consumer_offsets,32],[__consumer_offsets,43],[__consumer_offsets,36],[__consumer_offsets,35],[__consumer_offsets,7],[__consumer_offsets,9],[__consumer_offsets,38],[__consumer_offsets,1],[__consumer_offsets,16],[__consumer_offsets,2] (Logging.scala:70)
15:55:45 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\testOutputTopic-0\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 5 to client consumer-1. (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545037,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@3088cff8,SendAction) (Logging.scala:36)
15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=5,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:113,requestQueueTime:3,localTime:109,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:45 [INFO ] Logging$class.info - [Controller 1]: New partition creation callback for [__consumer_offsets,19],[__consumer_offsets,30],[__consumer_offsets,47],[__consumer_offsets,29],[__consumer_offsets,41],[__consumer_offsets,39],[__consumer_offsets,10],[__consumer_offsets,17],[__consumer_offsets,14],[__consumer_offsets,40],[__consumer_offsets,18],[__consumer_offsets,26],[__consumer_offsets,0],[__consumer_offsets,24],[__consumer_offsets,33],[__consumer_offsets,20],[__consumer_offsets,21],[__consumer_offsets,3],[__consumer_offsets,5],[__consumer_offsets,22],[__consumer_offsets,12],[__consumer_offsets,8],[__consumer_offsets,23],[__consumer_offsets,15],[__consumer_offsets,48],[__consumer_offsets,11],[__consumer_offsets,13],[__consumer_offsets,49],[__consumer_offsets,6],[__consumer_offsets,28],[__consumer_offsets,4],[__consumer_offsets,37],[__consumer_offsets,31],[__consumer_offsets,44],[__consumer_offsets,42],[__consumer_offsets,34],[__consumer_offsets,46],[__consumer_offsets,25],[__consumer_offsets,45],[__consumer_offsets,27],[__consumer_offsets,32],[__consumer_offsets,43],[__consumer_offsets,36],[__consumer_offsets,35],[__consumer_offsets,7],[__consumer_offsets,9],[__consumer_offsets,38],[__consumer_offsets,1],[__consumer_offsets,16],[__consumer_offsets,2] (Logging.scala:70)
15:55:45 [INFO ] Logging$class.info - [Partition state machine on Controller 1]: Invoking state change to NewPartition for partitions [__consumer_offsets,19],[__consumer_offsets,30],[__consumer_offsets,47],[__consumer_offsets,29],[__consumer_offsets,41],[__consumer_offsets,39],[__consumer_offsets,10],[__consumer_offsets,17],[__consumer_offsets,14],[__consumer_offsets,40],[__consumer_offsets,18],[__consumer_offsets,26],[__consumer_offsets,0],[__consumer_offsets,24],[__consumer_offsets,33],[__consumer_offsets,20],[__consumer_offsets,21],[__consumer_offsets,3],[__consumer_offsets,5],[__consumer_offsets,22],[__consumer_offsets,12],[__consumer_offsets,8],[__consumer_offsets,23],[__consumer_offsets,15],[__consumer_offsets,48],[__consumer_offsets,11],[__consumer_offsets,13],[__consumer_offsets,49],[__consumer_offsets,6],[__consumer_offsets,28],[__consumer_offsets,4],[__consumer_offsets,37],[__consumer_offsets,31],[__consumer_offsets,44],[__consumer_offsets,42],[__consumer_offsets,34],[__consumer_offsets,46],[__consumer_offsets,25],[__consumer_offsets,45],[__consumer_offsets,27],[__consumer_offsets,32],[__consumer_offsets,43],[__consumer_offsets,36],[__consumer_offsets,35],[__consumer_offsets,7],[__consumer_offsets,9],[__consumer_offsets,38],[__consumer_offsets,1],[__consumer_offsets,16],[__consumer_offsets,2] (Logging.scala:70)
15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=6,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545159,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=6,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:45 [INFO ] Logging$class.info - [Replica state machine on controller 1]: Invoking state change to NewReplica for replicas [Topic=__consumer_offsets,Partition=28,Replica=1],[Topic=__consumer_offsets,Partition=48,Replica=1],[Topic=__consumer_offsets,Partition=5,Replica=1],[Topic=__consumer_offsets,Partition=21,Replica=1],[Topic=__consumer_offsets,Partition=2,Replica=1],[Topic=__consumer_offsets,Partition=18,Replica=1],[Topic=__consumer_offsets,Partition=23,Replica=1],[Topic=__consumer_offsets,Partition=9,Replica=1],[Topic=__consumer_offsets,Partition=39,Replica=1],[Topic=__consumer_offsets,Partition=31,Replica=1],[Topic=__consumer_offsets,Partition=19,Replica=1],[Topic=__consumer_offsets,Partition=10,Replica=1],[Topic=__consumer_offsets,Partition=22,Replica=1],[Topic=__consumer_offsets,Partition=43,Replica=1],[Topic=__consumer_offsets,Partition=40,Replica=1],[Topic=__consumer_offsets,Partition=27,Replica=1],[Topic=__consumer_offsets,Partition=6,Replica=1],[Topic=__consumer_offsets,Partition=1,Replica=1],[Topic=__consumer_offsets,Partition=47,Replica=1],[Topic=__consumer_offsets,Partition=30,Replica=1],[Topic=__consumer_offsets,Partition=42,Replica=1],[Topic=__consumer_offsets,Partition=41,Replica=1],[Topic=__consumer_offsets,Partition=3,Replica=1],[Topic=__consumer_offsets,Partition=13,Replica=1],[Topic=__consumer_offsets,Partition=4,Replica=1],[Topic=__consumer_offsets,Partition=16,Replica=1],[Topic=__consumer_offsets,Partition=46,Replica=1],[Topic=__consumer_offsets,Partition=49,Replica=1],[Topic=__consumer_offsets,Partition=14,Replica=1],[Topic=__consumer_offsets,Partition=45,Replica=1],[Topic=__consumer_offsets,Partition=37,Replica=1],[Topic=__consumer_offsets,Partition=29,Replica=1],[Topic=__consumer_offsets,Partition=20,Replica=1],[To
pic=__consumer_offsets,Partition=8,Replica=1],[Topic=__consumer_offsets,Partition=38,Replica=1],[Topic=__consumer_offsets,Partition=7,Replica=1],[Topic=__consumer_offsets,Partition=0,Replica=1],[Topic=__consumer_offsets,Partition=34,Replica=1],[Topic=__consumer_offsets,Partition=33,Replica=1],[Topic=__consumer_offsets,Partition=26,Replica=1],[Topic=__consumer_offsets,Partition=44,Replica=1],[Topic=__consumer_offsets,Partition=32,Replica=1],[Topic=__consumer_offsets,Partition=25,Replica=1],[Topic=__consumer_offsets,Partition=11,Replica=1],[Topic=__consumer_offsets,Partition=36,Replica=1],[Topic=__consumer_offsets,Partition=12,Replica=1],[Topic=__consumer_offsets,Partition=35,Replica=1],[Topic=__consumer_offsets,Partition=15,Replica=1],[Topic=__consumer_offsets,Partition=17,Replica=1],[Topic=__consumer_offsets,Partition=24,Replica=1] (Logging.scala:70) 15:55:45 [INFO ] Logging$class.info - Completed load of log testOutputTopic-0 with 1 log segments and log end offset 0 in 121 ms (Logging.scala:70) 15:55:45 [INFO ] Logging$class.info - Created log for partition [testOutputTopic,0] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
(Logging.scala:70) 15:55:45 [INFO ] Logging$class.info - Partition [testOutputTopic,0] on broker 1: No checkpointed highwatermark is found for partition testOutputTopic-0 (Logging.scala:70) 15:55:45 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log testOutputTopic-0 of length 0 bytes (Logging.scala:36) 15:55:45 [DEBUG] Logging$class.debug - Partition [testOutputTopic,0] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [testOutputTopic,0]. All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:45 [DEBUG] Logging$class.debug - Scheduling task highwatermark-checkpoint with initial delay 0 ms and period 5000 ms. (Logging.scala:54) 15:55:45 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@38f8fe11 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 6 to client consumer-1 (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(1,Request(1,10.36.240.33:9092-10.36.240.33:64679,Session(User:ANONYMOUS,/10.36.240.33),null,1491400544985,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@280614d6,SendAction) (Logging.scala:36) 15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=4,api_version=0,correlation_id=1,client_id=1} -- {controller_id=1,controller_epoch=1,partition_states=[{topic=testOutputTopic,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_leaders=[{id=1,host=ISI050.utenze.BANKIT.IT,port=9092}]} from connection 
10.36.240.33:9092-10.36.240.33:64679;totalTime:266,requestQueueTime:1,localTime:263,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:45 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=6,api_version=3,correlation_id=2,client_id=1} -- {controller_id=1,controller_epoch=1,partition_states=[{topic=testOutputTopic,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_brokers=[{id=1,end_points=[{port=9092,host=ISI050.utenze.BANKIT.IT,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]} (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(1,10.36.240.33:9092-10.36.240.33:64679,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545253,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=6,api_version=3,correlation_id=2,client_id=1} -- {controller_id=1,controller_epoch=1,partition_states=[{topic=testOutputTopic,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_brokers=[{id=1,end_points=[{port=9092,host=ISI050.utenze.BANKIT.IT,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]} from connection 10.36.240.33:9092-10.36.240.33:64679;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545159,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@56f8d0a6,SendAction) (Logging.scala:36) 15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed 
request:{api_key=3,api_version=2,correlation_id=6,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:124,requestQueueTime:1,localTime:95,remoteTime:0,responseQueueTime:2,sendTime:27,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(1,Request(1,10.36.240.33:9092-10.36.240.33:64679,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545253,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@82c104b,SendAction) (Logging.scala:36) 15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=6,api_version=3,correlation_id=2,client_id=1} -- {controller_id=1,controller_epoch=1,partition_states=[{topic=testOutputTopic,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_brokers=[{id=1,end_points=[{port=9092,host=ISI050.utenze.BANKIT.IT,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]} from connection 10.36.240.33:9092-10.36.240.33:64679;totalTime:32,requestQueueTime:2,localTime:13,remoteTime:0,responseQueueTime:14,sendTime:4,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=7,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545291,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=7,client_id=consumer-1} -- 
{group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - Completed execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 7 to client consumer-1. (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545291,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@48cf541b,SendAction) (Logging.scala:36) 15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=7,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:18,requestQueueTime:1,localTime:16,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:45 [INFO ] Logging$class.info - [Partition state machine on Controller 1]: Invoking state change to OnlinePartition for partitions 
[__consumer_offsets,19],[__consumer_offsets,30],[__consumer_offsets,47],[__consumer_offsets,29],[__consumer_offsets,41],[__consumer_offsets,39],[__consumer_offsets,10],[__consumer_offsets,17],[__consumer_offsets,14],[__consumer_offsets,40],[__consumer_offsets,18],[__consumer_offsets,26],[__consumer_offsets,0],[__consumer_offsets,24],[__consumer_offsets,33],[__consumer_offsets,20],[__consumer_offsets,21],[__consumer_offsets,3],[__consumer_offsets,5],[__consumer_offsets,22],[__consumer_offsets,12],[__consumer_offsets,8],[__consumer_offsets,23],[__consumer_offsets,15],[__consumer_offsets,48],[__consumer_offsets,11],[__consumer_offsets,13],[__consumer_offsets,49],[__consumer_offsets,6],[__consumer_offsets,28],[__consumer_offsets,4],[__consumer_offsets,37],[__consumer_offsets,31],[__consumer_offsets,44],[__consumer_offsets,42],[__consumer_offsets,34],[__consumer_offsets,46],[__consumer_offsets,25],[__consumer_offsets,45],[__consumer_offsets,27],[__consumer_offsets,32],[__consumer_offsets,43],[__consumer_offsets,36],[__consumer_offsets,35],[__consumer_offsets,7],[__consumer_offsets,9],[__consumer_offsets,38],[__consumer_offsets,1],[__consumer_offsets,16],[__consumer_offsets,2] (Logging.scala:70) 15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,19] are: [List(1)] (Logging.scala:54) 15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,19] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54) 15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=8,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request 
Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545383,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=8,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@1957d419 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 8 to client consumer-1 (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545383,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@77aeaa4,SendAction) (Logging.scala:36) 15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=8,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:6,requestQueueTime:1,localTime:4,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=9,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545393,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:45 
[TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=9,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 9 to client consumer-1. (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545393,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@717b12bc,SendAction) (Logging.scala:36) 15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=9,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:84,requestQueueTime:0,localTime:83,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=10,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545490,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=10,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS 
(Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@664e910d and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 10 to client consumer-1 (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545490,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@619691d5,SendAction) (Logging.scala:36) 15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=10,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=11,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545494,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=11,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition 
[__consumer_offsets,30] are: [List(1)] (Logging.scala:54) 15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,30] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 11 to client consumer-1. (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545494,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@53cb7e70,SendAction) (Logging.scala:36) 15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=11,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:27,requestQueueTime:1,localTime:25,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,47] are: [List(1)] (Logging.scala:54) 15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,47] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54) 15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=12,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request 
Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545594,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=12,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@af13e40 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 12 to client consumer-1 (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545594,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@3349167,SendAction) (Logging.scala:36) 15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=12,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=13,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36) 15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545598,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=13,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,29] are: [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,29] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 13 to client consumer-1. (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545598,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@2af39d4f,SendAction) (Logging.scala:36)
15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=13,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:41,requestQueueTime:1,localTime:39,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,41] are: [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,41] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=14,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545698,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=14,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@4eb84984 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 14 to client consumer-1 (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545698,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@7f3ea9fd,SendAction) (Logging.scala:36)
15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=14,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:2,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=15,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545702,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=15,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 15 to client consumer-1. (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545702,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@770e6094,SendAction) (Logging.scala:36)
15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,39] are: [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,39] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=15,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:53,requestQueueTime:0,localTime:52,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=16,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545801,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=16,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@12eb49cc and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 16 to client consumer-1 (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545801,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@509678bb,SendAction) (Logging.scala:36)
15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=16,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:8,requestQueueTime:7,localTime:0,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=17,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545811,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=17,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,10] are: [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,10] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 17 to client consumer-1. (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545811,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@66b98e03,SendAction) (Logging.scala:36)
15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=17,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:32,requestQueueTime:0,localTime:32,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,17] are: [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,17] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=18,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545910,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=18,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@351a8ba and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 18 to client consumer-1 (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545910,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6c42b268,SendAction) (Logging.scala:36)
15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=18,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:4,requestQueueTime:1,localTime:2,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:45 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=19,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545915,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=19,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,14] are: [List(1)] (Logging.scala:54)
15:55:45 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,14] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:45 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 19 to client consumer-1. (Logging.scala:36)
15:55:45 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400545915,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@684c65b4,SendAction) (Logging.scala:36)
15:55:45 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=19,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:52,requestQueueTime:1,localTime:50,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,40] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,40] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=20,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546014,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=20,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@73d7a99b and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 20 to client consumer-1 (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546014,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@bfc8ba6,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=20,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:4,requestQueueTime:1,localTime:2,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=21,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546019,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=21,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,18] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,18] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 21 to client consumer-1. (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546019,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@219fd54c,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=21,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:49,requestQueueTime:0,localTime:48,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=22,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546119,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=22,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@116d6a67 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 22 to client consumer-1 (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546119,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@58b7e2e5,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=22,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:2,requestQueueTime:0,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=23,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546122,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=23,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,26] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,26] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 23 to client consumer-1. (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546122,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@535d4d82,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=23,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:33,requestQueueTime:0,localTime:32,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,0] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,0] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=24,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546221,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=24,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@5cd9fd11 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 24 to client consumer-1 (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546221,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@3c134df4,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=24,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=25,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546225,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=25,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,24] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,24] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 25 to client consumer-1. (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546225,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6379ca8f,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=25,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:52,requestQueueTime:0,localTime:51,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,33] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,33] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=26,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546325,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=26,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@688973b4 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 26 to client consumer-1 (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546325,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@43eaccf,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=26,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:0,localTime:2,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=27,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546333,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=27,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,20] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,20] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 27 to client consumer-1. (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546333,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@3d36d097,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=27,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:64,requestQueueTime:0,localTime:63,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=28,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546431,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=28,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@3bbd48a2 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 28 to client consumer-1 (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546431,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@1bec5628,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=28,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:4,requestQueueTime:1,localTime:2,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=29,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546438,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=29,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 29 to client consumer-1. (Logging.scala:36)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,21] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,21] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546438,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@366cca68,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=29,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:97,requestQueueTime:0,localTime:96,remoteTime:0,responseQueueTime:2,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=30,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546538,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=30,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@7d104991 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 30 to client consumer-1 (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546538,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@474550be,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=30,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:4,requestQueueTime:0,localTime:3,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=31,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546543,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=31,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,3] are: [List(1)] (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 31 to client consumer-1. (Logging.scala:36)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,3] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546543,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@4cfc92a9,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=31,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:65,requestQueueTime:2,localTime:62,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=32,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546642,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=32,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@118c098 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 32 to client consumer-1 (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546642,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@35d8c008,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=32,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=33,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546646,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=33,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,5] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,5] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 33 to client consumer-1. (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546646,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@65376351,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=33,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:31,requestQueueTime:0,localTime:30,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,22] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,22] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=34,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546746,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=34,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@436c16e and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 34 to client consumer-1 (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546746,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@81a0fe9,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=34,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=35,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546750,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=35,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,12] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,12] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 35 to client consumer-1. (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546750,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@205bad99,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=35,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:49,requestQueueTime:0,localTime:48,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,8] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,8] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=36,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546850,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=36,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@6cb6fa7d and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 36 to client consumer-1 (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546850,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@53f0e11a,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=36,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:4,requestQueueTime:1,localTime:2,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=37,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546856,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=37,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,23] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,23] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 37 to client consumer-1. (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546856,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@d033dde,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=37,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:65,requestQueueTime:0,localTime:64,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=38,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546954,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=38,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@69330a93 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 38 to client consumer-1 (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546954,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@553e1ed0,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=38,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:46 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=39,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546958,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=39,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,15] are: [List(1)] (Logging.scala:54)
15:55:46 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,15] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:46 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 39 to client consumer-1. (Logging.scala:36)
15:55:46 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400546958,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@2874468b,SendAction) (Logging.scala:36)
15:55:46 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=39,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:38,requestQueueTime:0,localTime:37,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,48] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,48] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=40,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547057,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=40,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@483fc9e9 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 40 to client consumer-1 (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547057,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@7cd88aae,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=40,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:2,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=41,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547060,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=41,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,11] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,11] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 41 to client consumer-1. (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547060,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6821a526,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=41,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:46,requestQueueTime:0,localTime:46,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,13] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,13] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=42,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547159,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=42,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@777c8a66 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 42 to client consumer-1 (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547159,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@4d4cfe6c,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=42,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=43,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547164,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=43,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,49] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,49] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 43 to client consumer-1. (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547164,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@4786b950,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=43,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:54,requestQueueTime:0,localTime:53,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,6] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,6] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=44,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547262,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=44,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@95530 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 44 to client consumer-1 (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547262,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@7fb3c6f4,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=44,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=45,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547266,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=45,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,28] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,28] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 45 to client consumer-1. (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547266,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@1ade44b1,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=45,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:57,requestQueueTime:0,localTime:57,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=46,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547371,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=46,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@47da5419 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 46 to client consumer-1 (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547371,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@30cb7618,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=46,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:4,requestQueueTime:2,localTime:2,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=47,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547377,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=47,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,4] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,4] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 47 to client consumer-1. (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547377,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@42f50568,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=47,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:55,requestQueueTime:1,localTime:53,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,37] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,37] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=48,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547475,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=48,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@4a230eee and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 48 to client consumer-1 (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547475,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@2310efa4,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=48,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=49,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547479,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=49,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,31] are: [List(1)] (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 49 to client consumer-1. (Logging.scala:36)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,31] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547479,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@4b83dcbb,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=49,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:64,requestQueueTime:1,localTime:62,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=50,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547578,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=50,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@46025fa2 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 50 to client consumer-1 (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547578,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@20931865,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=50,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=51,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547583,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=51,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,44] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,44] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 51 to client consumer-1. (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547583,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@4aa7875a,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=51,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:65,requestQueueTime:1,localTime:64,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=52,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547681,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=52,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@554cc6d0 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 52 to client consumer-1 (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547681,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6e6e3c65,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=52,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=53,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,42] are: [List(1)] (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547686,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,42] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=53,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 53 to client consumer-1. (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547686,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@3b5d5317,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=53,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:46,requestQueueTime:1,localTime:44,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,34] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,34] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=54,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547786,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=54,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@450bcb55 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 54 to client consumer-1 (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547786,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@3b53429,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=54,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:4,requestQueueTime:0,localTime:3,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=55,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547792,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=55,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,46] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,46] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 55 to client consumer-1. (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547792,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@75deee52,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=55,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:67,requestQueueTime:1,localTime:66,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,25] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,25] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=56,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547891,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=56,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@600aa7e7 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 56 to client consumer-1 (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547891,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6a4c7998,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=56,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=57,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547894,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=57,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,45] are: [List(1)] (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 57 to client consumer-1. (Logging.scala:36)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,45] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547894,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@3b585a2d,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=57,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:57,requestQueueTime:1,localTime:55,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,27] are: [List(1)] (Logging.scala:54)
15:55:47 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,27] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=58,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547995,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=58,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@792fc2eb and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 58 to client consumer-1 (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547995,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@11662513,SendAction) (Logging.scala:36)
15:55:47 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=58,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:0,localTime:2,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:47 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=59,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:47 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547999,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
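The exchanges above repeat a fixed pattern: consumer-1 alternates Metadata (api_key=3) and GroupCoordinator (api_key=10) requests for group testOutputTopic roughly every 100 ms, and each GroupCoordinator response carries error_code=15, which in the Kafka protocol is COORDINATOR_NOT_AVAILABLE (formerly GROUP_COORDINATOR_NOT_AVAILABLE), while the controller is still initializing the __consumer_offsets partitions. A minimal sketch of that client-side retry loop, with the broker stubbed out (the names below are illustrative, not Kafka client APIs):

```java
// Hypothetical sketch of the retry loop visible in the log: keep re-sending
// the coordinator lookup until the broker stops answering with error 15.
import java.util.function.IntSupplier;

public class CoordinatorRetrySketch {
    static final int GROUP_COORDINATOR_NOT_AVAILABLE = 15; // error_code=15 in the log
    static final int NONE = 0;                             // success

    // Polls the broker (stubbed as an IntSupplier returning a protocol error
    // code) until it stops answering with error 15 or maxAttempts is reached.
    // Returns the attempt number that succeeded, or -1 if it never did.
    static int findCoordinator(IntSupplier broker, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            int errorCode = broker.getAsInt();
            if (errorCode != GROUP_COORDINATOR_NOT_AVAILABLE) {
                return attempt; // coordinator resolved (or a different error)
            }
            // a real client backs off here (retry.backoff.ms, 100 ms by
            // default), matching the ~100 ms spacing of the requests above
        }
        return -1; // still unavailable after maxAttempts
    }

    public static void main(String[] args) {
        // Stub broker: unavailable for the first 9 polls, then ready,
        // mimicking the run of consecutive error_code=15 responses in the log.
        int[] calls = {0};
        IntSupplier broker =
            () -> (++calls[0] < 10) ? GROUP_COORDINATOR_NOT_AVAILABLE : NONE;
        System.out.println(findCoordinator(broker, 100)); // prints 10
    }
}
```

The loop resolves on its own once the __consumer_offsets partitions come up, which is why real consumers simply keep polling instead of failing on error 15.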
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=59,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,32] are: [List(1)] (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 59 to client consumer-1. (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,32] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400547999,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@1348e513,SendAction) (Logging.scala:36)
15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=59,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:41,requestQueueTime:0,localTime:40,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,43] are: [List(1)] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,43] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=60,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548098,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=60,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@22746761 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 60 to client consumer-1 (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548098,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@1ac1de5b,SendAction) (Logging.scala:36)
15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=60,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:48 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-expiration'. (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=61,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Evaluating ISR list of partitions to see which replicas can be removed from the ISR (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548104,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=61,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-expiration'. (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,36] are: [List(1)] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,36] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 61 to client consumer-1. (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548104,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6331661c,SendAction) (Logging.scala:36)
15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=61,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:39,requestQueueTime:1,localTime:38,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,35] are: [List(1)] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,35] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=62,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548203,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=62,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@534e5aff and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 62 to client consumer-1 (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548203,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@43df99d4,SendAction) (Logging.scala:36)
15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=62,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=63,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548207,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=63,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,7] are: [List(1)] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,7] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 63 to client consumer-1. (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548207,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@17eb9cc1,SendAction) (Logging.scala:36)
15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=63,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:56,requestQueueTime:0,localTime:55,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,9] are: [List(1)] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,9] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=64,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548306,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=64,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@29b4798f and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 64 to client consumer-1 (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548306,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@3ccbb61,SendAction) (Logging.scala:36)
15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=64,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:2,requestQueueTime:1,localTime:0,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=65,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548309,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=65,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,38] are: [List(1)] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,38] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 65 to client consumer-1. (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548309,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6b252c82,SendAction) (Logging.scala:36)
15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=65,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:56,requestQueueTime:0,localTime:56,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=66,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548408,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:48 
[TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=66,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@6c72260b and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 66 to client consumer-1 (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548408,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@540b090b,SendAction) (Logging.scala:36) 15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=66,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:2,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=67,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548411,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=67,client_id=consumer-1} -- {group_id=testOutputTopic} from 
connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,1] are: [List(1)] (Logging.scala:54) 15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,1] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 67 to client consumer-1. (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548411,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@1fcd9adf,SendAction) (Logging.scala:36) 15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=67,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:110,requestQueueTime:1,localTime:108,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=68,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548521,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - 
[KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=68,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@3c2d43aa and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 68 to client consumer-1 (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548521,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@793c2df4,SendAction) (Logging.scala:36) 15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=68,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=69,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548525,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=69,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 
10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,16] are: [List(1)] (Logging.scala:54) 15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,16] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54) 15:55:48 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'partition-rebalance-thread'. (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 69 to client consumer-1. (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548525,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@3b3f5f6d,SendAction) (Logging.scala:36) 15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=69,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:49,requestQueueTime:0,localTime:48,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Live assigned replicas for partition [__consumer_offsets,2] are: [List(1)] (Logging.scala:54) 15:55:48 [DEBUG] Logging$class.debug - [Partition state machine on Controller 1]: Initializing leader and isr for partition [__consumer_offsets,2] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) (Logging.scala:54) 15:55:48 
[TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=70,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548624,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=70,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@3be0434d and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 70 to client consumer-1 (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548624,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@37704a50,SendAction) (Logging.scala:36) 15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=70,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:2,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=71,client_id=consumer-1} -- {group_id=testOutputTopic} 
(Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548628,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=71,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=4,api_version=0,correlation_id=3,client_id=1} -- {controller_id=1,controller_epoch=1,partition_states=[{topic=__consumer_offsets,partition=49,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=38,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=27,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=16,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=8,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=19,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=2,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=13,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=24,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=46,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=35,controller_epo
ch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=5,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=43,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=32,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=21,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=10,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=37,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=48,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=18,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=40,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=29,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=7,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=45,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=34,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=23,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=26,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=15,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=4,controller_epoch=1,leader=1,
leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=42,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=20,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=31,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=9,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=12,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=1,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=17,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=28,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=6,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=39,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=44,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=47,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=36,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=3,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=14,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=25,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=30,controller_epoch=1,leader=1,leader_epoch=0,
isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=41,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=22,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=33,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=11,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_leaders=[{id=1,host=ISI050.utenze.BANKIT.IT,port=9092}]} (Logging.scala:36) 15:55:48 [INFO ] Logging$class.info - [Replica state machine on controller 1]: Invoking state change to OnlineReplica for replicas [Topic=__consumer_offsets,Partition=28,Replica=1],[Topic=__consumer_offsets,Partition=48,Replica=1],[Topic=__consumer_offsets,Partition=5,Replica=1],[Topic=__consumer_offsets,Partition=21,Replica=1],[Topic=__consumer_offsets,Partition=2,Replica=1],[Topic=__consumer_offsets,Partition=18,Replica=1],[Topic=__consumer_offsets,Partition=23,Replica=1],[Topic=__consumer_offsets,Partition=9,Replica=1],[Topic=__consumer_offsets,Partition=39,Replica=1],[Topic=__consumer_offsets,Partition=31,Replica=1],[Topic=__consumer_offsets,Partition=19,Replica=1],[Topic=__consumer_offsets,Partition=10,Replica=1],[Topic=__consumer_offsets,Partition=22,Replica=1],[Topic=__consumer_offsets,Partition=43,Replica=1],[Topic=__consumer_offsets,Partition=40,Replica=1],[Topic=__consumer_offsets,Partition=27,Replica=1],[Topic=__consumer_offsets,Partition=6,Replica=1],[Topic=__consumer_offsets,Partition=1,Replica=1],[Topic=__consumer_offsets,Partition=47,Replica=1],[Topic=__consumer_offsets,Partition=30,Replica=1],[Topic=__consumer_offsets,Partition=42,Replica=1],[Topic=__consumer_offsets,Partition=41,Replica=1],[Topic=__consumer_offsets,Partition=3,Replica=1],[Topic=__consumer_offset
s,Partition=13,Replica=1],[Topic=__consumer_offsets,Partition=4,Replica=1],[Topic=__consumer_offsets,Partition=16,Replica=1],[Topic=__consumer_offsets,Partition=46,Replica=1],[Topic=__consumer_offsets,Partition=49,Replica=1],[Topic=__consumer_offsets,Partition=14,Replica=1],[Topic=__consumer_offsets,Partition=45,Replica=1],[Topic=__consumer_offsets,Partition=37,Replica=1],[Topic=__consumer_offsets,Partition=29,Replica=1],[Topic=__consumer_offsets,Partition=20,Replica=1],[Topic=__consumer_offsets,Partition=8,Replica=1],[Topic=__consumer_offsets,Partition=38,Replica=1],[Topic=__consumer_offsets,Partition=7,Replica=1],[Topic=__consumer_offsets,Partition=0,Replica=1],[Topic=__consumer_offsets,Partition=34,Replica=1],[Topic=__consumer_offsets,Partition=33,Replica=1],[Topic=__consumer_offsets,Partition=26,Replica=1],[Topic=__consumer_offsets,Partition=44,Replica=1],[Topic=__consumer_offsets,Partition=32,Replica=1],[Topic=__consumer_offsets,Partition=25,Replica=1],[Topic=__consumer_offsets,Partition=11,Replica=1],[Topic=__consumer_offsets,Partition=36,Replica=1],[Topic=__consumer_offsets,Partition=12,Replica=1],[Topic=__consumer_offsets,Partition=35,Replica=1],[Topic=__consumer_offsets,Partition=15,Replica=1],[Topic=__consumer_offsets,Partition=17,Replica=1],[Topic=__consumer_offsets,Partition=24,Replica=1] (Logging.scala:70) 15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(1,10.36.240.33:9092-10.36.240.33:64679,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548653,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=4,api_version=0,correlation_id=3,client_id=1} -- 
{controller_id=1,controller_epoch=1,partition_states=[{topic=__consumer_offsets,partition=49,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=38,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=27,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=16,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=8,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=19,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=2,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=13,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=24,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=46,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=35,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=5,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=43,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=32,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=21,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=10,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=37,controller_epoch=1,leader=1,leader_epo
ch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=48,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=18,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=40,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=29,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=7,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=45,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=34,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=23,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=26,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=15,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=4,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=42,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=20,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=31,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=9,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=12,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=1,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk
_version=0,replicas=[1]},{topic=__consumer_offsets,partition=17,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=28,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=6,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=39,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=44,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=47,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=36,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=3,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=14,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=25,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=30,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=41,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=22,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=33,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=11,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_leaders=[{id=1,host=ISI050.utenze.BANKIT.IT,port=9092}]} from connection 
10.36.240.33:9092-10.36.240.33:64679;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [Controller 1]: checking need to trigger partition rebalance (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 71 to client consumer-1. (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548628,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@44983503,SendAction) (Logging.scala:36)
15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=71,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:48,requestQueueTime:1,localTime:46,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:48 [DEBUG] Logging$class.debug - [Controller 1]: preferred replicas by broker Map(1 -> Map([__consumer_offsets,19] -> List(1), [__consumer_offsets,30] -> List(1), [__consumer_offsets,47] -> List(1), [__consumer_offsets,29] -> List(1), [__consumer_offsets,41] -> List(1), [__consumer_offsets,39] -> List(1), [__consumer_offsets,10] -> List(1), [__consumer_offsets,17] -> List(1), [__consumer_offsets,14] -> List(1), [__consumer_offsets,40] -> List(1), [__consumer_offsets,18] -> List(1), [__consumer_offsets,0] -> List(1), [__consumer_offsets,26] -> List(1), [__consumer_offsets,24] -> List(1), [__consumer_offsets,33] -> List(1), [__consumer_offsets,20] -> List(1), [__consumer_offsets,21] -> List(1), [__consumer_offsets,3] -> List(1), [__consumer_offsets,5] -> List(1), [__consumer_offsets,22] -> List(1), [__consumer_offsets,12] -> List(1), [__consumer_offsets,8] -> List(1), [__consumer_offsets,23] -> List(1), [__consumer_offsets,15] -> List(1), [__consumer_offsets,48] -> List(1), [__consumer_offsets,11] -> List(1), [__consumer_offsets,13] -> List(1), [__consumer_offsets,49] -> List(1), [__consumer_offsets,6] -> List(1), [__consumer_offsets,28] -> List(1), [__consumer_offsets,4] -> List(1), [__consumer_offsets,37] -> List(1), [__consumer_offsets,31] -> List(1), [__consumer_offsets,44] -> List(1), [__consumer_offsets,42] -> List(1), [__consumer_offsets,34] -> List(1), [__consumer_offsets,46] -> List(1), [__consumer_offsets,25] -> List(1), [__consumer_offsets,45] -> List(1), [__consumer_offsets,27] -> List(1), [testOutputTopic,0] -> List(1), [__consumer_offsets,32] -> List(1), [__consumer_offsets,43] -> List(1), [__consumer_offsets,36] -> List(1), [__consumer_offsets,35] -> List(1), [__consumer_offsets,7] -> List(1), [__consumer_offsets,9] -> List(1), [__consumer_offsets,38] -> List(1), [__consumer_offsets,1] -> List(1), [__consumer_offsets,16] -> List(1), [__consumer_offsets,2] -> List(1))) (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - [Controller 1]: topics not in preferred replica Map() (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - [Controller 1]: leader imbalance ratio for broker 1 is 0,000000 (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Completed execution of scheduled task 'partition-rebalance-thread'. (Logging.scala:36)
15:55:48 [INFO ] Logging$class.info - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (Logging.scala:70)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-0\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-0 with 1 log segments and log end offset 0 in 6 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,0] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,0] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-0 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-0 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,0] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,0]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=72,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548726,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=72,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@53a9c9a8 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 72 to client consumer-1 (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548726,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@5c53f3e9,SendAction) (Logging.scala:36)
15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=72,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=73,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548731,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=73,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-29\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-29 with 1 log segments and log end offset 0 in 6 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,29] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,29] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-29 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-29 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,29] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,29]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 73 to client consumer-1. (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548731,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@1d560b39,SendAction) (Logging.scala:36)
15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=73,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:10,requestQueueTime:0,localTime:10,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-48\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-48 with 1 log segments and log end offset 0 in 5 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,48] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,48] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-48 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-48 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,48] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,48]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-10\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-10 with 1 log segments and log end offset 0 in 5 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,10] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,10] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-10 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-10 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,10] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,10]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-45\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-45 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,45] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,45] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-45 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-45 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,45] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,45]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-26\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-26 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,26] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,26] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-26 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-26 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,26] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,26]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-7\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-7 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,7] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,7] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-7 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-7 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,7] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,7]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-42\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-42 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,42] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,42] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-42 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-42 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,42] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,42]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-4\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-4 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,4] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,4] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-4 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-4 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,4] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,4]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-23\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-23 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,23] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,23] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-23 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-23 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,23] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,23]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=74,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548830,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=74,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@3f11086c and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 74 to client consumer-1 (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-1\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548830,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@28c1f009,SendAction) (Logging.scala:36)
15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=74,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:4,requestQueueTime:1,localTime:2,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=75,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-1 with 1 log segments and log end offset 0 in 6 ms (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548835,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=75,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,1] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,1] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-1 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-1 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,1] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,1]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 75 to client consumer-1. (Logging.scala:36)
15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548835,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@2df4e83f,SendAction) (Logging.scala:36)
15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=75,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:9,requestQueueTime:0,localTime:8,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-20\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-20 with 1 log segments and log end offset 0 in 5 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,20] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,20] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-20 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-20 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,20] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,20]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-39\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-39 with 1 log segments and log end offset 0 in 3 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,39] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,39] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-39 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-39 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,39] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,39]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-17\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-17 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,17] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,17] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-17 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-17 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,17] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,17]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-36\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-36 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,36] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,36] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-36 (Logging.scala:70)
15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-36 of length 0 bytes (Logging.scala:36)
15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,36] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,36]. 
All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-14\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-14 with 1 log segments and log end offset 0 in 5 ms (Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,14] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,14] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-14 (Logging.scala:70) 15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-14 of length 0 bytes (Logging.scala:36) 15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,14] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,14]. 
All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-33\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-33 with 1 log segments and log end offset 0 in 5 ms (Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,33] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,33] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-33 (Logging.scala:70) 15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-33 of length 0 bytes (Logging.scala:36) 15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,33] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,33]. 
All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-49\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-49 with 1 log segments and log end offset 0 in 6 ms (Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,49] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,49] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-49 (Logging.scala:70) 15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-49 of length 0 bytes (Logging.scala:36) 15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,49] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,49]. 
All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-11\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-11 with 1 log segments and log end offset 0 in 7 ms (Logging.scala:70) 15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=76,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548934,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=76,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,11] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, 
message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@3a5925c0 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 76 to client consumer-1 (Logging.scala:36) 15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,11] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-11 (Logging.scala:70) 15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548934,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6efb9ad3,SendAction) (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-11 of length 0 bytes (Logging.scala:36) 15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,11] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,11]. 
All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=76,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:5,requestQueueTime:2,localTime:2,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:48 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=77,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548940,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=77,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-30\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:48 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 77 to client consumer-1. 
(Logging.scala:36) 15:55:48 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548940,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@43d94bf6,SendAction) (Logging.scala:36) 15:55:48 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=77,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:12,requestQueueTime:1,localTime:10,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-30 with 1 log segments and log end offset 0 in 8 ms (Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,30] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
(Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,30] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-30 (Logging.scala:70) 15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-30 of length 0 bytes (Logging.scala:36) 15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,30] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,30]. All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-46\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-46 with 1 log segments and log end offset 0 in 5 ms (Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,46] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
(Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,46] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-46 (Logging.scala:70) 15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-46 of length 0 bytes (Logging.scala:36) 15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,46] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,46]. All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-27\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-27 with 1 log segments and log end offset 0 in 6 ms (Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,27] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
(Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,27] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-27 (Logging.scala:70) 15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-27 of length 0 bytes (Logging.scala:36) 15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,27] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,27]. All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-8\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:48 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-8 with 1 log segments and log end offset 0 in 5 ms (Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,8] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
(Logging.scala:70) 15:55:48 [INFO ] Logging$class.info - Partition [__consumer_offsets,8] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-8 (Logging.scala:70) 15:55:48 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-8 of length 0 bytes (Logging.scala:36) 15:55:48 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,8] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,8]. All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:48 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-24\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-24 with 1 log segments and log end offset 0 in 5 ms (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,24] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
(Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,24] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-24 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-24 of length 0 bytes (Logging.scala:36) 15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,24] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,24]. All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-43\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-43 with 1 log segments and log end offset 0 in 6 ms (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,43] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
(Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,43] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-43 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-43 of length 0 bytes (Logging.scala:36) 15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,43] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,43]. All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-5\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-5 with 1 log segments and log end offset 0 in 6 ms (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,5] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
(Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,5] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-5 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-5 of length 0 bytes (Logging.scala:36) 15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,5] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,5]. All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-21\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=78,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549039,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=78,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@72f8aec5 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 78 to client consumer-1 (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Socket server received 
response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549039,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@64bfaa3c,SendAction) (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-21 with 1 log segments and log end offset 0 in 6 ms (Logging.scala:70)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=78,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:4,requestQueueTime:1,localTime:2,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,21] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=79,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,21] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-21 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549045,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=79,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-21 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,21] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,21]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-2\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 79 to client consumer-1. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549045,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6e801384,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=79,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:13,requestQueueTime:0,localTime:12,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-2 with 1 log segments and log end offset 0 in 5 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,2] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,2] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-2 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-2 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,2] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,2]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-40\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-40 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,40] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,40] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-40 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-40 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,40] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,40]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-37\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-37 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,37] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,37] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-37 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-37 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,37] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,37]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-18\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-18 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,18] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,18] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-18 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-18 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,18] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,18]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-34\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-34 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,34] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,34] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-34 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-34 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,34] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,34]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-15\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-15 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,15] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,15] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-15 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-15 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,15] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,15]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-12\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-12 with 1 log segments and log end offset 0 in 7 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,12] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,12] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-12 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-12 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,12] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,12]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-31\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-31 with 1 log segments and log end offset 0 in 6 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,31] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,31] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-31 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-31 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,31] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,31]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-9\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=80,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-9 with 1 log segments and log end offset 0 in 6 ms (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549144,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=80,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@6b708271 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 80 to client consumer-1 (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,9] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549144,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@5c9f3185,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=80,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:2,requestQueueTime:0,localTime:2,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,9] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-9 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-9 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,9] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,9]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=81,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549148,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=81,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 81 to client consumer-1. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549148,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@5510471b,SendAction) (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-47\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=81,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:7,requestQueueTime:0,localTime:6,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-47 with 1 log segments and log end offset 0 in 5 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,47] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,47] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-47 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-47 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,47] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,47]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-19\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-19 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,19] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,19] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-19 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-19 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,19] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,19]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-28\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-28 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,28] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,28] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-28 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-28 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,28] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,28]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-38\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-38 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,38] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,38] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-38 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-38 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,38] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,38]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-35\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-35 with 1 log segments and log end offset 0 in 5 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,35] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,35] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-35 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-35 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,35] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,35]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-44\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-44 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,44] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,44] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-44 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-44 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,44] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,44]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-6\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-6 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,6] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,6] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-6 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-6 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,6] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,6]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-25\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-25 with 1 log segments and log end offset 0 in 3 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,25] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,25] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-25 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-25 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,25] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,25].
All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-16\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-16 with 1 log segments and log end offset 0 in 3 ms (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,16] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,16] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-16 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-16 of length 0 bytes (Logging.scala:36) 15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,16] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,16]. 
All leo's are 0 [0 : 0] (Logging.scala:54) 15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-22\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54) 15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-22 with 1 log segments and log end offset 0 in 4 ms (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,22] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,22] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-22 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-22 of length 0 bytes (Logging.scala:36) 15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,22] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,22]. 
All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=82,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549247,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=82,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@7f506fe1 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 82 to client consumer-1 (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549247,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@4ce7e9cf,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=82,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:3,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=83,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-41\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549251,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=83,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-41 with 1 log segments and log end offset 0 in 8 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,41] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,41] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-41 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-41 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,41] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,41]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-32\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-32 with 1 log segments and log end offset 0 in 10 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,32] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,32] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-32 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-32 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,32] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,32]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=15,coordinator={node_id=-1,host=,port=-1}} for correlation id 83 to client consumer-1. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549251,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@2295cfe4,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=83,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:50,requestQueueTime:1,localTime:49,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-3\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-3 with 1 log segments and log end offset 0 in 13 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,3] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,3] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-3 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-3 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,3] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,3]. All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Loaded index file D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka\__consumer_offsets-13\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 (Logging.scala:54)
15:55:49 [INFO ] Logging$class.info - Completed load of log __consumer_offsets-13 with 1 log segments and log end offset 0 in 16 ms (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Created log for partition [__consumer_offsets,13] in D:\Apps\NetBeans\workspace\SettlementEngine\.\target\tmp\kafka with properties {compression.type -> producer, message.format.version -> 0.10.2-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - Partition [__consumer_offsets,13] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-13 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Reading 1 bytes from offset 0 in log __consumer_offsets-13 of length 0 bytes (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,13] on broker 1: Skipping update high watermark since Old hw 0 [0 : 0] is larger than new hw 0 [0 : 0] for partition [__consumer_offsets,13].
All leo's are 0 [0 : 0] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-22 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-25 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-28 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-31 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-34 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-37 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-40 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-43 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-46 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-49 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-41 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-44 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-47 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-1 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-4 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-7 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-10 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-13 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-16 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-19 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-2 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-5 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-8 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-11 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-14 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-17 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-20 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-23 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-26 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-29 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-32 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-35 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-38 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-0 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-3 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-6 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-9 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-12 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-15 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-18 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-21 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-24 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-27 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-30 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-33 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-36 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-39 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-42 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-45 with initial delay 0 ms and period -1 ms. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Scheduling task __consumer_offsets-48 with initial delay 0 ms and period -1 ms.
(Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(1,Request(1,10.36.240.33:9092-10.36.240.33:64679,Session(User:ANONYMOUS,/10.36.240.33),null,1491400548653,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@5d9c422c,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=4,api_version=0,correlation_id=3,client_id=1} -- {controller_id=1,controller_epoch=1,partition_states=[{topic=__consumer_offsets,partition=49,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=38,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=27,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=16,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=8,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=19,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=2,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=13,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=24,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=46,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=35,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=5,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=43,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=32,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=21,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=10,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=37,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=48,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=18,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=40,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=29,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=7,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=45,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=34,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=23,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=26,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=15,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=4,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=42,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=20,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=31,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=9,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=12,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=1,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=17,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=28,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=6,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=39,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=44,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=47,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=36,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=3,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=14,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=25,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=30,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=41,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=22,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=33,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=11,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_leaders=[{id=1,host=ISI050.utenze.BANKIT.IT,port=9092}]} from connection 10.36.240.33:9092-10.36.240.33:64679;totalTime:691,requestQueueTime:5,localTime:685,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=3,api_version=2,correlation_id=84,client_id=consumer-1} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=6,api_version=3,correlation_id=4,client_id=1} --
{controller_id=1,controller_epoch=1,partition_states=[{topic=__consumer_offsets,partition=49,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=38,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=27,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=16,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=8,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=19,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=2,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=13,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=24,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=46,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=35,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=5,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=43,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=32,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=21,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=10,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=37,controller_epoch=1,leader=1,leader_epo
ch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=48,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=18,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=40,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=29,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=7,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=45,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=34,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=23,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=26,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=15,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=4,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=42,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=20,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=31,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=9,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=12,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=1,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk
_version=0,replicas=[1]},{topic=__consumer_offsets,partition=17,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=28,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=6,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=39,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=44,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=47,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=36,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=3,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=14,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=25,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=30,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=41,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=22,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=33,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=11,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_brokers=[{id=1,end_points=[{port=9092,host=ISI050.utenze.BANKIT.IT,listener_name=PLAINTEXT,security_
protocol_type=0}],rack=null}]} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-22'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-22 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(1,10.36.240.33:9092-10.36.240.33:64679,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549351,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=6,api_version=3,correlation_id=4,client_id=1} -- {controller_id=1,controller_epoch=1,partition_states=[{topic=__consumer_offsets,partition=49,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=38,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=27,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=16,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=8,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=19,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=2,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=13,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=24,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=46,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partitio
n=35,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=5,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=43,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=32,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=21,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=10,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=37,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=48,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=18,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=40,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=29,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=7,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=45,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=34,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=23,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=26,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=15,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=4,controlle
r_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=42,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=20,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=31,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=9,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=12,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=1,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=17,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=28,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=6,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=39,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=44,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=47,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=36,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=3,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=14,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=25,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=30,controller_epoch=1,leade
r=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=41,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=22,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=33,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=11,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_brokers=[{id=1,end_points=[{port=9092,host=ISI050.utenze.BANKIT.IT,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]} from connection 10.36.240.33:9092-10.36.240.33:64679;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(1,Request(1,10.36.240.33:9092-10.36.240.33:64679,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549351,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@75d77b3d,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=6,api_version=3,correlation_id=4,client_id=1} --
{controller_id=1,controller_epoch=1,partition_states=[{topic=__consumer_offsets,partition=49,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=38,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=27,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=16,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=8,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=19,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=2,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=13,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=24,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=46,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=35,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=5,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=43,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=32,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=21,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=10,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=37,controller_epoch=1,leader=1,leader_epo
ch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=48,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=18,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=40,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=29,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=7,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=45,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=34,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=23,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=26,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=15,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=4,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=42,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=20,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=31,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=9,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=12,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=1,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk
_version=0,replicas=[1]},{topic=__consumer_offsets,partition=17,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=28,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=6,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=39,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=44,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=47,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=36,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=3,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=14,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=25,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=30,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=41,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=22,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=33,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=11,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]},{topic=__consumer_offsets,partition=0,controller_epoch=1,leader=1,leader_epoch=0,isr=[1],zk_version=0,replicas=[1]}],live_brokers=[{id=1,end_points=[{port=9092,host=ISI050.utenze.BANKIT.IT,listener_name=PLAINTEXT,security_
protocol_type=0}],rack=null}]} from connection 10.36.240.33:9092-10.36.240.33:64679;totalTime:12,requestQueueTime:7,localTime:5,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549351,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=84,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@d5b3dca and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 84 to client consumer-1 (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549351,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@723c12a,SendAction) (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-22 in 16 milliseconds.
(Logging.scala:70)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=84,client_id=consumer-1} -- {topics=[testOutputTopic]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:23,requestQueueTime:8,localTime:13,remoteTime:0,responseQueueTime:0,sendTime:2,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-22'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-25'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-25 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-25 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-25'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-28'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-28 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=10,api_version=0,correlation_id=85,client_id=consumer-1} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-28 in 10 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-28'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-31'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-31 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549376,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=85,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-31 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-31'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-34'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-34 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-34 in 2 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-34'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-37'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-37 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-37 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-37'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-40'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-40 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-40 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-40'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-43'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-43 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-43 in 2 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-43'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-46'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-46 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-46 in 2 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-46'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-49'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-49 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-49 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-49'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-41'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-41 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-41 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-41'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-44'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-44 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=0,coordinator={node_id=1,host=ISI050.utenze.BANKIT.IT,port=9092}} for correlation id 85 to client consumer-1. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-44 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-44'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-47'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-47 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-47 in 10 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-47'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-1'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-1 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549376,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@50dc664a,SendAction) (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-1 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-1'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-4'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-4 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-4 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-4'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-7'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-7 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-7 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=85,client_id=consumer-1} -- {group_id=testOutputTopic} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:42,requestQueueTime:12,localTime:17,remoteTime:0,responseQueueTime:13,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-7'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-10'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-10 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-10 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-10'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-13'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-13 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-13 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-13'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-16'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-16 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-16 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-16'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-19'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-19 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-19 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-19'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-2'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-2 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-2 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-2'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-5'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-5 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-5 in 0 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-5'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-8'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-8 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-8 in 1 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-8'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-11'. (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-11 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-11 in 10 milliseconds. (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-11'. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-14'.
(Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-14 (Logging.scala:70) 15:55:49 [DEBUG] Logging$class.debug - Accepted connection from /10.36.240.33:64681 on /10.36.240.33:9092 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (Logging.scala:54) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-14 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-14'. (Logging.scala:36) 15:55:49 [DEBUG] Logging$class.debug - Processor 0 listening to new connection from /10.36.240.33:64681 (Logging.scala:54) 15:55:49 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=18,api_version=0,correlation_id=87,client_id=consumer-1} -- {} (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-17'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(0,10.36.240.33:9092-10.36.240.33:64681,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549447,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-17 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=18,api_version=0,correlation_id=87,client_id=consumer-1} -- {} from connection 10.36.240.33:9092-10.36.240.33:64681;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-17 in 1 milliseconds. 
(Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,10.36.240.33:9092-10.36.240.33:64681,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549447,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6cd4271b,SendAction) (Logging.scala:36) 15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=18,api_version=0,correlation_id=87,client_id=consumer-1} -- {} from connection 10.36.240.33:9092-10.36.240.33:64681;totalTime:3,requestQueueTime:0,localTime:2,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-17'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-20'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-20 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-20 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-20'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-23'. 
(Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-23 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=11,api_version=1,correlation_id=86,client_id=consumer-1} -- {group_id=testOutputTopic,session_timeout=180000,rebalance_timeout=600000,member_id=,protocol_type=consumer,group_protocols=[{protocol_name=range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=27 cap=27]}]} (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-23 in 3 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-23'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-26'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-26 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(0,10.36.240.33:9092-10.36.240.33:64681,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549453,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=11,api_version=1,correlation_id=86,client_id=consumer-1} -- {group_id=testOutputTopic,session_timeout=180000,rebalance_timeout=600000,member_id=,protocol_type=consumer,group_protocols=[{protocol_name=range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=27 cap=27]}]} from connection 10.36.240.33:9092-10.36.240.33:64681;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from 
__consumer_offsets-26 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-26'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-29'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-29 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-29 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-29'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-32'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-32 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-32 in 2 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-32'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-35'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-35 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-35 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-35'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-38'. 
(Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-38 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-38 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-38'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-0'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-0 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-0 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-0'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-3'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-3 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-3 in 105 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-3'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-6'. 
(Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-6 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-6 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-6'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-9'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-9 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [GroupCoordinator 1]: Preparing to restabilize group testOutputTopic with old generation 0 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-9 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-9'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-12'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-12 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-12 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-12'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-15'. 
(Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-15 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-15 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-15'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-18'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-18 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-18 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-18'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-21'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-21 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-21 in 0 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-21'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-24'. 
(Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-24 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-24 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-24'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-27'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-27 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-27 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-27'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-30'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-30 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-30 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-30'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-33'. 
(Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-33 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-33 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-33'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-36'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-36 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-36 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-36'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [GroupCoordinator 1]: Stabilized group testOutputTopic generation 1 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-39'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-39 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending join group response {error_code=0,generation_id=1,group_protocol=range,leader_id=consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61,member_id=consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61,members=[{member_id=consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61,member_metadata=java.nio.HeapByteBuffer[pos=0 lim=27 cap=27]}]} for correlation id 86 to client consumer-1. 
(Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-39 in 1 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-39'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-42'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-42 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,10.36.240.33:9092-10.36.240.33:64681,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549453,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@18d1b314,SendAction) (Logging.scala:36) 15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=11,api_version=1,correlation_id=86,client_id=consumer-1} -- {group_id=testOutputTopic,session_timeout=180000,rebalance_timeout=600000,member_id=,protocol_type=consumer,group_protocols=[{protocol_name=range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=27 cap=27]}]} from connection 10.36.240.33:9092-10.36.240.33:64681;totalTime:151,requestQueueTime:2,localTime:145,remoteTime:0,responseQueueTime:1,sendTime:3,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-42 in 3 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-42'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-45'. 
(Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-45 (Logging.scala:70) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-45 in 2 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-45'. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=14,api_version=0,correlation_id=88,client_id=consumer-1} -- {group_id=testOutputTopic,generation_id=1,member_id=consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61,group_assignment=[{member_id=consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=35 cap=35]}]} (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Beginning execution of scheduled task '__consumer_offsets-48'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-48 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(0,10.36.240.33:9092-10.36.240.33:64681,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549608,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=14,api_version=0,correlation_id=88,client_id=consumer-1} -- {group_id=testOutputTopic,generation_id=1,member_id=consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61,group_assignment=[{member_id=consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=35 cap=35]}]} from connection 10.36.240.33:9092-10.36.240.33:64681;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [Group Metadata Manager 
on Broker 1]: Finished loading offsets from __consumer_offsets-48 in 2 milliseconds. (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - Completed execution of scheduled task '__consumer_offsets-48'. (Logging.scala:36) 15:55:49 [INFO ] Logging$class.info - [GroupCoordinator 1]: Assignment received from leader for group testOutputTopic for generation 1 (Logging.scala:70) 15:55:49 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(__consumer_offsets-40 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1179105922, CreateTime = 1491400549621, key = 19 bytes, value = 230 bytes))])] to local log (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Inserting 283 bytes at offset 0 at position 0 with largest timestamp 1491400549621 at shallow offset 0 (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Appended 283 to .\target\tmp\kafka\__consumer_offsets-40\00000000000000000000.log at offset 0 (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Appended message set to log __consumer_offsets-40 with first offset: 0, next offset: 1, and messages: [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1179105922, CreateTime = 1491400549621, key = 19 bytes, value = 230 bytes))] (Logging.scala:36) 15:55:49 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key __consumer_offsets-40 unblocked 0 fetch requests. (Logging.scala:54) 15:55:49 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition __consumer_offsets-40 to [1 [0 : 283]] (Logging.scala:36) 15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,40] on broker 1: High watermark for partition [__consumer_offsets,40] updated to 1 [0 : 283] (Logging.scala:54) 15:55:49 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key __consumer_offsets-40 unblocked 0 fetch requests. 
(Logging.scala:54) 15:55:49 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key __consumer_offsets-40 unblocked 0 producer requests. (Logging.scala:54) 15:55:49 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 283 bytes written to log __consumer_offsets-40 beginning at offset 0 and ending at offset 0 (Logging.scala:36) 15:55:49 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 20 ms (Logging.scala:54) 15:55:49 [TRACE] Logging$class.trace - Initial partition status for __consumer_offsets-40 is [acksPending: true, error: 7, startOffset: 0, requiredOffset: 1] (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Checking produce satisfaction for __consumer_offsets-40, current status [acksPending: true, error: 7, startOffset: 0, requiredOffset: 1] (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Partition [__consumer_offsets,40] on broker 1: 1 acks satisfied for __consumer_offsets-40 with acks = -1 (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,10.36.240.33:9092-10.36.240.33:64681,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549608,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@5eb6cb59,SendAction) (Logging.scala:36) 15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=14,api_version=0,correlation_id=88,client_id=consumer-1} -- {group_id=testOutputTopic,generation_id=1,member_id=consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61,group_assignment=[{member_id=consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=35 cap=35]}]} from connection 10.36.240.33:9092-10.36.240.33:64681;totalTime:63,requestQueueTime:2,localTime:60,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 
15:55:49 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=9,api_version=2,correlation_id=89,client_id=consumer-1} -- {group_id=testOutputTopic,topics=[{topic=testOutputTopic,partitions=[{partition=0}]}]} (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(0,10.36.240.33:9092-10.36.240.33:64681,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549676,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=9,api_version=2,correlation_id=89,client_id=consumer-1} -- {group_id=testOutputTopic,topics=[{topic=testOutputTopic,partitions=[{partition=0}]}]} from connection 10.36.240.33:9092-10.36.240.33:64681;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [Group Metadata Manager on Broker 1]: Getting offsets of ArrayBuffer(testOutputTopic-0) for group testOutputTopic. (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending offset fetch response {responses=[{topic=testOutputTopic,partition_responses=[{partition=0,offset=-1,metadata=,error_code=0}]}],error_code=0} for correlation id 89 to client consumer-1. 
(Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,10.36.240.33:9092-10.36.240.33:64681,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549676,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@25aa2df0,SendAction) (Logging.scala:36) 15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=9,api_version=2,correlation_id=89,client_id=consumer-1} -- {group_id=testOutputTopic,topics=[{topic=testOutputTopic,partitions=[{partition=0}]}]} from connection 10.36.240.33:9092-10.36.240.33:64681;totalTime:6,requestQueueTime:0,localTime:5,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=2,api_version=1,correlation_id=90,client_id=consumer-1} -- {replica_id=-1,topics=[{topic=testOutputTopic,partitions=[{partition=0,timestamp=-1}]}]} (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549684,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=2,api_version=1,correlation_id=90,client_id=consumer-1} -- {replica_id=-1,topics=[{topic=testOutputTopic,partitions=[{partition=0,timestamp=-1}]}]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: 
Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549684,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@2306540f,SendAction) (Logging.scala:36) 15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=2,api_version=1,correlation_id=90,client_id=consumer-1} -- {replica_id=-1,topics=[{topic=testOutputTopic,partitions=[{partition=0,timestamp=-1}]}]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:5,requestQueueTime:1,localTime:4,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=1,api_version=3,correlation_id=91,client_id=consumer-1} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=0,max_bytes=1048576}]}]} (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=13,api_version=0,correlation_id=92,client_id=consumer-1} -- {group_id=testOutputTopic,member_id=consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61} (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549699,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=1,api_version=3,correlation_id=91,client_id=consumer-1} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=0,max_bytes=1048576}]}]} from connection 10.36.240.33:9092-10.36.240.33:64680;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:49 [TRACE] 
Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(0,10.36.240.33:9092-10.36.240.33:64681,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549699,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=13,api_version=0,correlation_id=92,client_id=consumer-1} -- {group_id=testOutputTopic,member_id=consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61} from connection 10.36.240.33:9092-10.36.240.33:64681;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [GroupCoordinator 1]: Member consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61 in group testOutputTopic has failed (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [GroupCoordinator 1]: Preparing to restabilize group testOutputTopic with old generation 1 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [GroupCoordinator 1]: Group testOutputTopic with generation 2 is now empty (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(__consumer_offsets-40 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 3245306108, CreateTime = 1491400549704, key = 19 bytes, value = 24 bytes))])] to local log (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Inserting 77 bytes at offset 1 at position 283 with largest timestamp 1491400549704 at shallow offset 1 (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Fetching log segment for partition testOutputTopic-0, offset 0, partition fetch size 1048576, remaining response limit 52428800, ignoring response/partition size limits (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Appended 77 to .\target\tmp\kafka\__consumer_offsets-40\00000000000000000000.log at offset 1 (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Reading 1048576 bytes from offset 0 in log testOutputTopic-0 of length 0 bytes (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Appended message set to log __consumer_offsets-40 with first offset: 1, next offset: 2, and messages: [(offset=1,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 3245306108, CreateTime = 1491400549704, key = 19 bytes, value = 24 bytes))] (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key __consumer_offsets-40 unblocked 0 fetch requests. (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition __consumer_offsets-40 to [2 [0 : 360]] (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,40] on broker 1: High watermark for partition [__consumer_offsets,40] updated to 2 [0 : 360] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key __consumer_offsets-40 unblocked 0 fetch requests. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key __consumer_offsets-40 unblocked 0 producer requests.
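The appends to __consumer_offsets-40 above show the group coordinator persisting group state for testOutputTopic in the internal offsets topic. Which partition of that topic a group lands on is deterministic: a minimal sketch, assuming the formula used by Kafka's GroupMetadataManager (non-negative hash of the group id modulo the offsets-topic partition count; the count is governed by offsets.topic.num.partitions, 50 by default, and may be configured differently on this broker):

```java
// Sketch: mapping a consumer group id to a __consumer_offsets partition.
// Assumes the GroupMetadataManager formula; partition count is a broker config.
public class OffsetsPartition {
    static int partitionFor(String groupId, int numPartitions) {
        // Mask the sign bit (as Kafka's Utils.abs does) rather than Math.abs,
        // which would stay negative for Integer.MIN_VALUE.
        return (groupId.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Hypothetical partition count of 50 (the default) for illustration.
        System.out.println("__consumer_offsets-" + partitionFor("testOutputTopic", 50));
    }
}
```

In this capture the group's commits and metadata land in __consumer_offsets-40; the exact partition you compute depends on the broker's configured partition count.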
(Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 77 bytes written to log __consumer_offsets-40 beginning at offset 1 and ending at offset 1 (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 8 ms (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - Initial partition status for __consumer_offsets-40 is [acksPending: true, error: 7, startOffset: 1, requiredOffset: 2] (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Checking produce satisfaction for __consumer_offsets-40, current status [acksPending: true, error: 7, startOffset: 1, requiredOffset: 2] (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Partition [__consumer_offsets,40] on broker 1: 1 acks satisfied for __consumer_offsets-40 with acks = -1 (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending leave group response {error_code=0} for correlation id 92 to client consumer-1. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,10.36.240.33:9092-10.36.240.33:64681,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549699,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@680b2aa1,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=13,api_version=0,correlation_id=92,client_id=consumer-1} -- {group_id=testOutputTopic,member_id=consumer-1-aef94ec1-7293-4b3d-bcd9-ae1590077c61} from connection 10.36.240.33:9092-10.36.240.33:64681;totalTime:17,requestQueueTime:1,localTime:15,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [DEBUG] Logging$class.debug - Accepted connection from /127.0.0.1:64686 on /127.0.0.1:9092 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Processor 1 listening to new connection from /127.0.0.1:64686 (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=18,api_version=0,correlation_id=1,client_id=consumer-2} -- {} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(1,127.0.0.1:9092-127.0.0.1:64686,Session(User:ANONYMOUS,/127.0.0.1),null,1491400549744,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=18,api_version=0,correlation_id=1,client_id=consumer-2} -- {} from connection 127.0.0.1:9092-127.0.0.1:64686;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(1,Request(1,127.0.0.1:9092-127.0.0.1:64686,Session(User:ANONYMOUS,/127.0.0.1),null,1491400549744,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@450f45a4,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=18,api_version=0,correlation_id=1,client_id=consumer-2} -- {} from connection 127.0.0.1:9092-127.0.0.1:64686;totalTime:2,requestQueueTime:0,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=3,api_version=2,correlation_id=2,client_id=consumer-2} -- {topics=[testOutputTopic]} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request
Request(1,127.0.0.1:9092-127.0.0.1:64686,Session(User:ANONYMOUS,/127.0.0.1),null,1491400549747,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=2,client_id=consumer-2} -- {topics=[testOutputTopic]} from connection 127.0.0.1:9092-127.0.0.1:64686;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@455573bb and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 2 to client consumer-2 (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(1,Request(1,127.0.0.1:9092-127.0.0.1:64686,Session(User:ANONYMOUS,/127.0.0.1),null,1491400549747,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@bcfc0ab,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=2,client_id=consumer-2} -- {topics=[testOutputTopic]} from connection 127.0.0.1:9092-127.0.0.1:64686;totalTime:2,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=10,api_version=0,correlation_id=0,client_id=consumer-2} -- {group_id=testOutputTopic} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(1,127.0.0.1:9092-127.0.0.1:64686,Session(User:ANONYMOUS,/127.0.0.1),null,1491400549750,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=0,client_id=consumer-2} -- {group_id=testOutputTopic} from connection 127.0.0.1:9092-127.0.0.1:64686;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=0,coordinator={node_id=1,host=ISI050.utenze.BANKIT.IT,port=9092}} for correlation id 0 to client consumer-2. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(1,Request(1,127.0.0.1:9092-127.0.0.1:64686,Session(User:ANONYMOUS,/127.0.0.1),null,1491400549750,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@1ad37fe5,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=0,client_id=consumer-2} -- {group_id=testOutputTopic} from connection 127.0.0.1:9092-127.0.0.1:64686;totalTime:3,requestQueueTime:0,localTime:2,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [DEBUG] Logging$class.debug - Accepted connection from /10.36.240.33:64687 on /10.36.240.33:9092 and assigned it to processor 2, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Processor 2 listening to new connection from /10.36.240.33:64687 (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=18,api_version=0,correlation_id=4,client_id=consumer-2} -- {} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request
Request(2,10.36.240.33:9092-10.36.240.33:64687,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549758,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=18,api_version=0,correlation_id=4,client_id=consumer-2} -- {} from connection 10.36.240.33:9092-10.36.240.33:64687;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64687,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549758,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@58e2595,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=18,api_version=0,correlation_id=4,client_id=consumer-2} -- {} from connection 10.36.240.33:9092-10.36.240.33:64687;totalTime:2,requestQueueTime:1,localTime:0,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=11,api_version=1,correlation_id=3,client_id=consumer-2} -- {group_id=testOutputTopic,session_timeout=180000,rebalance_timeout=600000,member_id=,protocol_type=consumer,group_protocols=[{protocol_name=range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=27 cap=27]}]} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64687,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549760,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=11,api_version=1,correlation_id=3,client_id=consumer-2} -- {group_id=testOutputTopic,session_timeout=180000,rebalance_timeout=600000,member_id=,protocol_type=consumer,group_protocols=[{protocol_name=range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=27 cap=27]}]} from connection 10.36.240.33:9092-10.36.240.33:64687;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [GroupCoordinator 1]: Preparing to restabilize group testOutputTopic with old generation 2 (Logging.scala:70)
15:55:49 [INFO ] Logging$class.info - [GroupCoordinator 1]: Stabilized group testOutputTopic generation 3 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending join group response {error_code=0,generation_id=3,group_protocol=range,leader_id=consumer-2-c2e1d959-d994-4471-b304-dd9144fa1b38,member_id=consumer-2-c2e1d959-d994-4471-b304-dd9144fa1b38,members=[{member_id=consumer-2-c2e1d959-d994-4471-b304-dd9144fa1b38,member_metadata=java.nio.HeapByteBuffer[pos=0 lim=27 cap=27]}]} for correlation id 3 to client consumer-2.
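The numeric api_key values threaded through these request lines are standard Kafka protocol API keys; decoding them makes the consumer's bootstrap sequence readable (ApiVersions, Metadata, FindCoordinator, then the JoinGroup above). A small lookup for the keys seen in this capture, using the standard protocol assignments:

```java
import java.util.Map;

// Kafka protocol API keys that appear in this log, per the wire-protocol spec.
public class ApiKeys {
    static final Map<Integer, String> NAMES = Map.of(
        0, "Produce",
        1, "Fetch",
        2, "ListOffsets",
        3, "Metadata",
        9, "OffsetFetch",
        10, "FindCoordinator",  // logged as "consumer metadata" by 0.10-era brokers
        11, "JoinGroup",
        13, "LeaveGroup",
        14, "SyncGroup",
        18, "ApiVersions"
    );

    public static void main(String[] args) {
        // e.g. the request with api_key=11, correlation_id=3 above
        System.out.println(NAMES.get(11));  // prints "JoinGroup"
    }
}
```

So consumer-2's traffic reads as: ApiVersions (18), Metadata (3), FindCoordinator (10), JoinGroup (11), SyncGroup (14), OffsetFetch (9), ListOffsets (2), then repeated Fetch (1) requests.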
(Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64687,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549760,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@1439bf07,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=11,api_version=1,correlation_id=3,client_id=consumer-2} -- {group_id=testOutputTopic,session_timeout=180000,rebalance_timeout=600000,member_id=,protocol_type=consumer,group_protocols=[{protocol_name=range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=27 cap=27]}]} from connection 10.36.240.33:9092-10.36.240.33:64687;totalTime:3,requestQueueTime:1,localTime:2,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=14,api_version=0,correlation_id=5,client_id=consumer-2} -- {group_id=testOutputTopic,generation_id=3,member_id=consumer-2-c2e1d959-d994-4471-b304-dd9144fa1b38,group_assignment=[{member_id=consumer-2-c2e1d959-d994-4471-b304-dd9144fa1b38,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=35 cap=35]}]} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64687,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549764,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=14,api_version=0,correlation_id=5,client_id=consumer-2} -- {group_id=testOutputTopic,generation_id=3,member_id=consumer-2-c2e1d959-d994-4471-b304-dd9144fa1b38,group_assignment=[{member_id=consumer-2-c2e1d959-d994-4471-b304-dd9144fa1b38,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=35 cap=35]}]} from connection 10.36.240.33:9092-10.36.240.33:64687;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [INFO ] Logging$class.info - [GroupCoordinator 1]: Assignment received from leader for group testOutputTopic for generation 3 (Logging.scala:70)
15:55:49 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(__consumer_offsets-40 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1496628094, CreateTime = 1491400549765, key = 19 bytes, value = 230 bytes))])] to local log (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Inserting 283 bytes at offset 2 at position 360 with largest timestamp 1491400549765 at shallow offset 2 (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Appended 283 to .\target\tmp\kafka\__consumer_offsets-40\00000000000000000000.log at offset 2 (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Appended message set to log __consumer_offsets-40 with first offset: 2, next offset: 3, and messages: [(offset=2,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1496628094, CreateTime = 1491400549765, key = 19 bytes, value = 230 bytes))] (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key __consumer_offsets-40 unblocked 0 fetch requests.
(Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition __consumer_offsets-40 to [3 [0 : 643]] (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,40] on broker 1: High watermark for partition [__consumer_offsets,40] updated to 3 [0 : 643] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key __consumer_offsets-40 unblocked 0 fetch requests. (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key __consumer_offsets-40 unblocked 0 producer requests. (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 283 bytes written to log __consumer_offsets-40 beginning at offset 2 and ending at offset 2 (Logging.scala:36)
15:55:49 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 2 ms (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - Initial partition status for __consumer_offsets-40 is [acksPending: true, error: 7, startOffset: 2, requiredOffset: 3] (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Checking produce satisfaction for __consumer_offsets-40, current status [acksPending: true, error: 7, startOffset: 2, requiredOffset: 3] (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Partition [__consumer_offsets,40] on broker 1: 1 acks satisfied for __consumer_offsets-40 with acks = -1 (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64687,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549764,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@395f1391,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=14,api_version=0,correlation_id=5,client_id=consumer-2} -- {group_id=testOutputTopic,generation_id=3,member_id=consumer-2-c2e1d959-d994-4471-b304-dd9144fa1b38,group_assignment=[{member_id=consumer-2-c2e1d959-d994-4471-b304-dd9144fa1b38,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=35 cap=35]}]} from connection 10.36.240.33:9092-10.36.240.33:64687;totalTime:6,requestQueueTime:0,localTime:5,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=9,api_version=2,correlation_id=6,client_id=consumer-2} -- {group_id=testOutputTopic,topics=[{topic=testOutputTopic,partitions=[{partition=0}]}]} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64687,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549772,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=9,api_version=2,correlation_id=6,client_id=consumer-2} -- {group_id=testOutputTopic,topics=[{topic=testOutputTopic,partitions=[{partition=0}]}]} from connection 10.36.240.33:9092-10.36.240.33:64687;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Group Metadata Manager on Broker 1]: Getting offsets of ArrayBuffer(testOutputTopic-0) for group testOutputTopic. (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Sending offset fetch response {responses=[{topic=testOutputTopic,partition_responses=[{partition=0,offset=-1,metadata=,error_code=0}]}],error_code=0} for correlation id 6 to client consumer-2.
(Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64687,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549772,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@10fc7dea,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=9,api_version=2,correlation_id=6,client_id=consumer-2} -- {group_id=testOutputTopic,topics=[{topic=testOutputTopic,partitions=[{partition=0}]}]} from connection 10.36.240.33:9092-10.36.240.33:64687;totalTime:2,requestQueueTime:0,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [DEBUG] Logging$class.debug - Accepted connection from /10.36.240.33:64688 on /10.36.240.33:9092 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (Logging.scala:54)
15:55:49 [DEBUG] Logging$class.debug - Processor 0 listening to new connection from /10.36.240.33:64688 (Logging.scala:54)
15:55:49 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=18,api_version=0,correlation_id=8,client_id=consumer-2} -- {} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(0,10.36.240.33:9092-10.36.240.33:64688,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549778,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=18,api_version=0,correlation_id=8,client_id=consumer-2} -- {} from connection 10.36.240.33:9092-10.36.240.33:64688;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,10.36.240.33:9092-10.36.240.33:64688,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549778,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@5d100820,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=18,api_version=0,correlation_id=8,client_id=consumer-2} -- {} from connection 10.36.240.33:9092-10.36.240.33:64688;totalTime:2,requestQueueTime:0,localTime:1,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=2,api_version=1,correlation_id=7,client_id=consumer-2} -- {replica_id=-1,topics=[{topic=testOutputTopic,partitions=[{partition=0,timestamp=-1}]}]} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(0,10.36.240.33:9092-10.36.240.33:64688,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549782,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=2,api_version=1,correlation_id=7,client_id=consumer-2} -- {replica_id=-1,topics=[{topic=testOutputTopic,partitions=[{partition=0,timestamp=-1}]}]} from connection 10.36.240.33:9092-10.36.240.33:64688;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,10.36.240.33:9092-10.36.240.33:64688,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549782,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@1219f0f5,SendAction) (Logging.scala:36)
15:55:49 [TRACE] RequestChannel$Request.updateRequestMetrics -
Completed request:{api_key=2,api_version=1,correlation_id=7,client_id=consumer-2} -- {replica_id=-1,topics=[{topic=testOutputTopic,partitions=[{partition=0,timestamp=-1}]}]} from connection 10.36.240.33:9092-10.36.240.33:64688;totalTime:2,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:49 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=1,api_version=3,correlation_id=9,client_id=consumer-2} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=0,max_bytes=1048576}]}]} (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(0,10.36.240.33:9092-10.36.240.33:64688,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549786,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=1,api_version=3,correlation_id=9,client_id=consumer-2} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=0,max_bytes=1048576}]}]} from connection 10.36.240.33:9092-10.36.240.33:64688;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Fetching log segment for partition testOutputTopic-0, offset 0, partition fetch size 1048576, remaining response limit 52428800, ignoring response/partition size limits (Logging.scala:36)
15:55:49 [TRACE] Logging$class.trace - Reading 1048576 bytes from offset 0 in log testOutputTopic-0 of length 0 bytes (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Fetching log segment for partition testOutputTopic-0, offset 0, partition fetch size 1048576, remaining response limit 52428800, ignoring response/partition size limits (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - Reading 1048576 bytes from offset 0 in log testOutputTopic-0 of length 0 bytes (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Sending fetch response to client consumer-1 of 0 bytes (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64680,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549699,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.MultiSend@64aa40ed,SendAction) (Logging.scala:36)
15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=1,api_version=3,correlation_id=91,client_id=consumer-1} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=0,max_bytes=1048576}]}]} from connection 10.36.240.33:9092-10.36.240.33:64680;totalTime:539,requestQueueTime:0,localTime:25,remoteTime:505,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Fetching log segment for partition testOutputTopic-0, offset 0, partition fetch size 1048576, remaining response limit 52428800, ignoring response/partition size limits (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - Reading 1048576 bytes from offset 0 in log testOutputTopic-0 of length 0 bytes (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Sending fetch response to client consumer-2 of 0 bytes (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for
write and sending data: Response(0,Request(0,10.36.240.33:9092-10.36.240.33:64688,Session(User:ANONYMOUS,/10.36.240.33),null,1491400549786,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.MultiSend@52fffb08,SendAction) (Logging.scala:36)
15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=1,api_version=3,correlation_id=9,client_id=consumer-2} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=0,max_bytes=1048576}]}]} from connection 10.36.240.33:9092-10.36.240.33:64688;totalTime:506,requestQueueTime:1,localTime:2,remoteTime:502,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157)
15:55:50 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=1,api_version=3,correlation_id=10,client_id=consumer-2} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=0,max_bytes=1048576}]}]} (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(0,10.36.240.33:9092-10.36.240.33:64688,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550296,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=1,api_version=3,correlation_id=10,client_id=consumer-2} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=0,max_bytes=1048576}]}]} from connection 10.36.240.33:9092-10.36.240.33:64688;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Fetching log segment for partition testOutputTopic-0, offset 0, partition fetch size 1048576, remaining response limit 52428800, ignoring response/partition size limits (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - Reading 1048576 bytes from offset 0 in log testOutputTopic-0 of length 0 bytes (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - Completed execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:55:50 [DEBUG] Logging$class.debug - Accepted connection from /127.0.0.1:64691 on /127.0.0.1:9092 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (Logging.scala:54)
15:55:50 [DEBUG] Logging$class.debug - Processor 1 listening to new connection from /127.0.0.1:64691 (Logging.scala:54)
15:55:50 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=18,api_version=0,correlation_id=0,client_id=producer-1} -- {} (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(1,127.0.0.1:9092-127.0.0.1:64691,Session(User:ANONYMOUS,/127.0.0.1),null,1491400550771,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=18,api_version=0,correlation_id=0,client_id=producer-1} -- {} from connection 127.0.0.1:9092-127.0.0.1:64691;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36)
15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(1,Request(1,127.0.0.1:9092-127.0.0.1:64691,Session(User:ANONYMOUS,/127.0.0.1),null,1491400550771,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6b10cb69,SendAction)
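Each `Completed request` line breaks totalTime down into requestQueueTime, localTime, remoteTime, responseQueueTime, and sendTime. For the empty fetches above, nearly all of the time is remoteTime (502 and 505 ms), which is consistent with the request parking in fetch purgatory for the full max_wait_time=500 because the testOutputTopic-0 log is 0 bytes long and min_bytes=1 can't be satisfied. A small parser for that timing block (a sketch; field names are taken verbatim from the log lines, and the components need not always sum exactly to totalTime):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts the per-phase timings from a RequestChannel "Completed request" line.
public class RequestTimings {
    static Map<String, Integer> parse(String logLine) {
        Map<String, Integer> times = new LinkedHashMap<>();
        // Matches e.g. "totalTime:506" or "remoteTime:502"
        Matcher m = Pattern.compile("(\\w+Time):(\\d+)").matcher(logLine);
        while (m.find()) {
            times.put(m.group(1), Integer.parseInt(m.group(2)));
        }
        return times;
    }

    public static void main(String[] args) {
        // Timing block from the consumer-2 fetch (correlation_id=9) above.
        String line = "totalTime:506,requestQueueTime:1,localTime:2,remoteTime:502,"
                + "responseQueueTime:0,sendTime:1";
        Map<String, Integer> t = parse(line);
        int sum = t.get("requestQueueTime") + t.get("localTime") + t.get("remoteTime")
                + t.get("responseQueueTime") + t.get("sendTime");
        System.out.println(t.get("totalTime") + " vs " + sum);  // prints "506 vs 506"
    }
}
```

For this sample the components sum to the total; the earlier consumer-1 fetch (totalTime:539) shows they can diverge slightly, since each field is rounded independently.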
(Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=18,api_version=0,correlation_id=0,client_id=producer-1} -- {} from connection 127.0.0.1:9092-127.0.0.1:64691;totalTime:2,requestQueueTime:1,localTime:1,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=3,api_version=2,correlation_id=1,client_id=producer-1} -- {topics=[testOutputTopic]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(1,127.0.0.1:9092-127.0.0.1:64691,Session(User:ANONYMOUS,/127.0.0.1),null,1491400550774,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=1,client_id=producer-1} -- {topics=[testOutputTopic]} from connection 127.0.0.1:9092-127.0.0.1:64691;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@a648e10 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 1 to client producer-1 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(1,Request(1,127.0.0.1:9092-127.0.0.1:64691,Session(User:ANONYMOUS,/127.0.0.1),null,1491400550774,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6abdbdb9,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=1,client_id=producer-1} -- {topics=[testOutputTopic]} from connection 
127.0.0.1:9092-127.0.0.1:64691;totalTime:2,requestQueueTime:0,localTime:2,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [DEBUG] Logging$class.debug - Accepted connection from /10.36.240.33:64692 on /10.36.240.33:9092 and assigned it to processor 2, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - Processor 2 listening to new connection from /10.36.240.33:64692 (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=18,api_version=0,correlation_id=2,client_id=producer-1} -- {} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550783,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=18,api_version=0,correlation_id=2,client_id=producer-1} -- {} from connection 10.36.240.33:9092-10.36.240.33:64692;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550783,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@f507415,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=18,api_version=0,correlation_id=2,client_id=producer-1} -- {} from connection 
10.36.240.33:9092-10.36.240.33:64692;totalTime:2,requestQueueTime:1,localTime:0,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=0,api_version=2,correlation_id=3,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1879727042, CreateTime = 1491400550777, key = 10 bytes, value = 74 bytes))]}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550787,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=0,api_version=2,correlation_id=3,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1879727042, CreateTime = 1491400550777, key = 10 bytes, value = 74 bytes))]}]}]} from connection 10.36.240.33:9092-10.36.240.33:64692;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(testOutputTopic-0 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1879727042, CreateTime = 1491400550777, key = 10 bytes, value = 74 bytes))])] to local log (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Inserting 118 bytes at offset 0 at position 0 with largest timestamp 1491400550777 at shallow offset 0 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended 118 to .\target\tmp\kafka\testOutputTopic-0\00000000000000000000.log at offset 0 
(Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended message set to log testOutputTopic-0 with first offset: 0, next offset: 1, and messages: [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1879727042, CreateTime = 1491400550777, key = 10 bytes, value = 74 bytes))] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition testOutputTopic-0 to [1 [0 : 118]] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - Partition [testOutputTopic,0] on broker 1: High watermark for partition [testOutputTopic,0] updated to 1 [0 : 118] (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Fetching log segment for partition testOutputTopic-0, offset 0, partition fetch size 1048576, remaining response limit 52428800, ignoring response/partition size limits (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Reading 1048576 bytes from offset 0 in log testOutputTopic-0 of length 118 bytes (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Sending fetch response to client consumer-2 of 118 bytes (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 1 fetch requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,10.36.240.33:9092-10.36.240.33:64688,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550296,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.MultiSend@57ef8298,SendAction) (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 producer requests. 
(Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 118 bytes written to log testOutputTopic-0 beginning at offset 0 and ending at offset 0 (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 5 ms (Logging.scala:54) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=1,api_version=3,correlation_id=10,client_id=consumer-2} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=0,max_bytes=1048576}]}]} from connection 10.36.240.33:9092-10.36.240.33:64688;totalTime:501,requestQueueTime:1,localTime:1,remoteTime:496,responseQueueTime:0,sendTime:3,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550787,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@39cbe03a,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=0,api_version=2,correlation_id=3,client_id=producer-1} -- {acks=null,timeout=null,topic_data=null} from connection 10.36.240.33:9092-10.36.240.33:64692;totalTime:12,requestQueueTime:1,localTime:11,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=0,api_version=2,correlation_id=4,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 3298261155, CreateTime = 1491400550800, key = 10 
bytes, value = 94 bytes))]}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550801,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=0,api_version=2,correlation_id=4,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 3298261155, CreateTime = 1491400550800, key = 10 bytes, value = 94 bytes))]}]}]} from connection 10.36.240.33:9092-10.36.240.33:64692;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(testOutputTopic-0 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 3298261155, CreateTime = 1491400550800, key = 10 bytes, value = 94 bytes))])] to local log (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Inserting 138 bytes at offset 1 at position 118 with largest timestamp 1491400550800 at shallow offset 1 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended 138 to .\target\tmp\kafka\testOutputTopic-0\00000000000000000000.log at offset 1 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended message set to log testOutputTopic-0 with first offset: 1, next offset: 2, and messages: [(offset=1,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 3298261155, CreateTime = 1491400550800, key = 10 bytes, value = 94 bytes))] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. 
(Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition testOutputTopic-0 to [2 [0 : 256]] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - Partition [testOutputTopic,0] on broker 1: High watermark for partition [testOutputTopic,0] updated to 2 [0 : 256] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 producer requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 138 bytes written to log testOutputTopic-0 beginning at offset 1 and ending at offset 1 (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 3 ms (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550801,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@14035831,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=0,api_version=2,correlation_id=4,client_id=producer-1} -- {acks=null,timeout=null,topic_data=null} from connection 10.36.240.33:9092-10.36.240.33:64692;totalTime:5,requestQueueTime:1,localTime:3,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=0,api_version=2,correlation_id=5,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = 
NONE, crc = 2573113506, CreateTime = 1491400550806, key = 10 bytes, value = 104 bytes))]}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550807,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=0,api_version=2,correlation_id=5,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 2573113506, CreateTime = 1491400550806, key = 10 bytes, value = 104 bytes))]}]}]} from connection 10.36.240.33:9092-10.36.240.33:64692;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(testOutputTopic-0 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 2573113506, CreateTime = 1491400550806, key = 10 bytes, value = 104 bytes))])] to local log (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Inserting 148 bytes at offset 2 at position 256 with largest timestamp 1491400550806 at shallow offset 2 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended 148 to .\target\tmp\kafka\testOutputTopic-0\00000000000000000000.log at offset 2 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended message set to log testOutputTopic-0 with first offset: 2, next offset: 3, and messages: [(offset=2,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 2573113506, CreateTime = 1491400550806, key = 10 bytes, value = 104 bytes))] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. 
(Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition testOutputTopic-0 to [3 [0 : 404]] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - Partition [testOutputTopic,0] on broker 1: High watermark for partition [testOutputTopic,0] updated to 3 [0 : 404] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 producer requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 148 bytes written to log testOutputTopic-0 beginning at offset 2 and ending at offset 2 (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 2 ms (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550807,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@6ae73510,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=0,api_version=2,correlation_id=5,client_id=producer-1} -- {acks=null,timeout=null,topic_data=null} from connection 10.36.240.33:9092-10.36.240.33:64692;totalTime:6,requestQueueTime:1,localTime:4,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=0,api_version=2,correlation_id=6,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = 
NONE, crc = 1328117503, CreateTime = 1491400550814, key = 10 bytes, value = 114 bytes))]}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550815,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=0,api_version=2,correlation_id=6,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1328117503, CreateTime = 1491400550814, key = 10 bytes, value = 114 bytes))]}]}]} from connection 10.36.240.33:9092-10.36.240.33:64692;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(testOutputTopic-0 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1328117503, CreateTime = 1491400550814, key = 10 bytes, value = 114 bytes))])] to local log (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Inserting 158 bytes at offset 3 at position 404 with largest timestamp 1491400550814 at shallow offset 3 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended 158 to .\target\tmp\kafka\testOutputTopic-0\00000000000000000000.log at offset 3 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended message set to log testOutputTopic-0 with first offset: 3, next offset: 4, and messages: [(offset=3,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1328117503, CreateTime = 1491400550814, key = 10 bytes, value = 114 bytes))] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. 
(Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition testOutputTopic-0 to [4 [0 : 562]] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - Partition [testOutputTopic,0] on broker 1: High watermark for partition [testOutputTopic,0] updated to 4 [0 : 562] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 producer requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 158 bytes written to log testOutputTopic-0 beginning at offset 3 and ending at offset 3 (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 3 ms (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550815,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@661b1485,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=0,api_version=2,correlation_id=6,client_id=producer-1} -- {acks=null,timeout=null,topic_data=null} from connection 10.36.240.33:9092-10.36.240.33:64692;totalTime:6,requestQueueTime:1,localTime:4,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=0,api_version=2,correlation_id=7,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = 
NONE, crc = 3220645199, CreateTime = 1491400550822, key = 10 bytes, value = 124 bytes))]}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550823,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=0,api_version=2,correlation_id=7,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 3220645199, CreateTime = 1491400550822, key = 10 bytes, value = 124 bytes))]}]}]} from connection 10.36.240.33:9092-10.36.240.33:64692;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(testOutputTopic-0 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 3220645199, CreateTime = 1491400550822, key = 10 bytes, value = 124 bytes))])] to local log (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Inserting 168 bytes at offset 4 at position 562 with largest timestamp 1491400550822 at shallow offset 4 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended 168 to .\target\tmp\kafka\testOutputTopic-0\00000000000000000000.log at offset 4 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended message set to log testOutputTopic-0 with first offset: 4, next offset: 5, and messages: [(offset=4,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 3220645199, CreateTime = 1491400550822, key = 10 bytes, value = 124 bytes))] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. 
(Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition testOutputTopic-0 to [5 [0 : 730]] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - Partition [testOutputTopic,0] on broker 1: High watermark for partition [testOutputTopic,0] updated to 5 [0 : 730] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 producer requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 168 bytes written to log testOutputTopic-0 beginning at offset 4 and ending at offset 4 (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 2 ms (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550823,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@37e24c,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=0,api_version=2,correlation_id=7,client_id=producer-1} -- {acks=null,timeout=null,topic_data=null} from connection 10.36.240.33:9092-10.36.240.33:64692;totalTime:5,requestQueueTime:0,localTime:4,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=0,api_version=2,correlation_id=8,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, 
crc = 1694366056, CreateTime = 1491400550828, key = 10 bytes, value = 134 bytes))]}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550830,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=0,api_version=2,correlation_id=8,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1694366056, CreateTime = 1491400550828, key = 10 bytes, value = 134 bytes))]}]}]} from connection 10.36.240.33:9092-10.36.240.33:64692;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(testOutputTopic-0 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1694366056, CreateTime = 1491400550828, key = 10 bytes, value = 134 bytes))])] to local log (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Inserting 178 bytes at offset 5 at position 730 with largest timestamp 1491400550828 at shallow offset 5 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended 178 to .\target\tmp\kafka\testOutputTopic-0\00000000000000000000.log at offset 5 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended message set to log testOutputTopic-0 with first offset: 5, next offset: 6, and messages: [(offset=5,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1694366056, CreateTime = 1491400550828, key = 10 bytes, value = 134 bytes))] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. 
(Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition testOutputTopic-0 to [6 [0 : 908]] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - Partition [testOutputTopic,0] on broker 1: High watermark for partition [testOutputTopic,0] updated to 6 [0 : 908] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 producer requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 178 bytes written to log testOutputTopic-0 beginning at offset 5 and ending at offset 5 (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 3 ms (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550830,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@efbe0ab,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=0,api_version=2,correlation_id=8,client_id=producer-1} -- {acks=null,timeout=null,topic_data=null} from connection 10.36.240.33:9092-10.36.240.33:64692;totalTime:6,requestQueueTime:1,localTime:5,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=0,api_version=2,correlation_id=9,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, 
crc = 3844199027, CreateTime = 1491400550837, key = 10 bytes, value = 144 bytes))]}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550838,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=0,api_version=2,correlation_id=9,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 3844199027, CreateTime = 1491400550837, key = 10 bytes, value = 144 bytes))]}]}]} from connection 10.36.240.33:9092-10.36.240.33:64692;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(testOutputTopic-0 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 3844199027, CreateTime = 1491400550837, key = 10 bytes, value = 144 bytes))])] to local log (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Inserting 188 bytes at offset 6 at position 908 with largest timestamp 1491400550837 at shallow offset 6 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended 188 to .\target\tmp\kafka\testOutputTopic-0\00000000000000000000.log at offset 6 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended message set to log testOutputTopic-0 with first offset: 6, next offset: 7, and messages: [(offset=6,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 3844199027, CreateTime = 1491400550837, key = 10 bytes, value = 144 bytes))] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. 
(Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition testOutputTopic-0 to [7 [0 : 1096]] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - Partition [testOutputTopic,0] on broker 1: High watermark for partition [testOutputTopic,0] updated to 7 [0 : 1096] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 producer requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 188 bytes written to log testOutputTopic-0 beginning at offset 6 and ending at offset 6 (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 3 ms (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550838,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@5de70bd1,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=0,api_version=2,correlation_id=9,client_id=producer-1} -- {acks=null,timeout=null,topic_data=null} from connection 10.36.240.33:9092-10.36.240.33:64692;totalTime:5,requestQueueTime:1,localTime:4,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=0,api_version=2,correlation_id=10,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = 
NONE, crc = 86704742, CreateTime = 1491400550844, key = 10 bytes, value = 154 bytes))]}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550845,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=0,api_version=2,correlation_id=10,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 86704742, CreateTime = 1491400550844, key = 10 bytes, value = 154 bytes))]}]}]} from connection 10.36.240.33:9092-10.36.240.33:64692;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(testOutputTopic-0 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 86704742, CreateTime = 1491400550844, key = 10 bytes, value = 154 bytes))])] to local log (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Inserting 198 bytes at offset 7 at position 1096 with largest timestamp 1491400550844 at shallow offset 7 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended 198 to .\target\tmp\kafka\testOutputTopic-0\00000000000000000000.log at offset 7 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended message set to log testOutputTopic-0 with first offset: 7, next offset: 8, and messages: [(offset=7,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 86704742, CreateTime = 1491400550844, key = 10 bytes, value = 154 bytes))] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. 
(Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition testOutputTopic-0 to [8 [0 : 1294]] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - Partition [testOutputTopic,0] on broker 1: High watermark for partition [testOutputTopic,0] updated to 8 [0 : 1294] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 producer requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 198 bytes written to log testOutputTopic-0 beginning at offset 7 and ending at offset 7 (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 3 ms (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550845,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@63e693d4,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=0,api_version=2,correlation_id=10,client_id=producer-1} -- {acks=null,timeout=null,topic_data=null} from connection 10.36.240.33:9092-10.36.240.33:64692;totalTime:5,requestQueueTime:0,localTime:4,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=0,api_version=2,correlation_id=11,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = 
NONE, crc = 2210678560, CreateTime = 1491400550851, key = 10 bytes, value = 164 bytes))]}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550852,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=0,api_version=2,correlation_id=11,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 2210678560, CreateTime = 1491400550851, key = 10 bytes, value = 164 bytes))]}]}]} from connection 10.36.240.33:9092-10.36.240.33:64692;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(testOutputTopic-0 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 2210678560, CreateTime = 1491400550851, key = 10 bytes, value = 164 bytes))])] to local log (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Inserting 208 bytes at offset 8 at position 1294 with largest timestamp 1491400550851 at shallow offset 8 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended 208 to .\target\tmp\kafka\testOutputTopic-0\00000000000000000000.log at offset 8 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended message set to log testOutputTopic-0 with first offset: 8, next offset: 9, and messages: [(offset=8,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 2210678560, CreateTime = 1491400550851, key = 10 bytes, value = 164 bytes))] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. 
(Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition testOutputTopic-0 to [9 [0 : 1502]] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - Partition [testOutputTopic,0] on broker 1: High watermark for partition [testOutputTopic,0] updated to 9 [0 : 1502] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 producer requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 208 bytes written to log testOutputTopic-0 beginning at offset 8 and ending at offset 8 (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 2 ms (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550852,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@5cfc3866,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=0,api_version=2,correlation_id=11,client_id=producer-1} -- {acks=null,timeout=null,topic_data=null} from connection 10.36.240.33:9092-10.36.240.33:64692;totalTime:5,requestQueueTime:0,localTime:4,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=0,api_version=2,correlation_id=12,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = 
NONE, crc = 1303280112, CreateTime = 1491400550858, key = 10 bytes, value = 174 bytes))]}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550860,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=0,api_version=2,correlation_id=12,client_id=producer-1} -- {acks=1,timeout=30000,topic_data=[{topic=testOutputTopic,data=[{partition=0,record_set=[(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1303280112, CreateTime = 1491400550858, key = 10 bytes, value = 174 bytes))]}]}]} from connection 10.36.240.33:9092-10.36.240.33:64692;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(testOutputTopic-0 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1303280112, CreateTime = 1491400550858, key = 10 bytes, value = 174 bytes))])] to local log (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Inserting 218 bytes at offset 9 at position 1502 with largest timestamp 1491400550858 at shallow offset 9 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended 218 to .\target\tmp\kafka\testOutputTopic-0\00000000000000000000.log at offset 9 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended message set to log testOutputTopic-0 with first offset: 9, next offset: 10, and messages: [(offset=9,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 1303280112, CreateTime = 1491400550858, key = 10 bytes, value = 174 bytes))] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. 
(Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition testOutputTopic-0 to [10 [0 : 1720]] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - Partition [testOutputTopic,0] on broker 1: High watermark for partition [testOutputTopic,0] updated to 10 [0 : 1720] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 fetch requests. (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key testOutputTopic-0 unblocked 0 producer requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 218 bytes written to log testOutputTopic-0 beginning at offset 9 and ending at offset 9 (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 3 ms (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64692,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550860,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@d5375ea,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=0,api_version=2,correlation_id=12,client_id=producer-1} -- {acks=null,timeout=null,topic_data=null} from connection 10.36.240.33:9092-10.36.240.33:64692;totalTime:6,requestQueueTime:0,localTime:5,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=1,api_version=3,correlation_id=11,client_id=consumer-2} -- 
{replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=1,max_bytes=1048576}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 7 on Broker 1], Kafka request handler 7 on broker 1 handling request Request(0,10.36.240.33:9092-10.36.240.33:64688,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550869,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=1,api_version=3,correlation_id=11,client_id=consumer-2} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=1,max_bytes=1048576}]}]} from connection 10.36.240.33:9092-10.36.240.33:64688;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Fetching log segment for partition testOutputTopic-0, offset 1, partition fetch size 1048576, remaining response limit 52428800, ignoring response/partition size limits (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Reading 1048576 bytes from offset 1 in log testOutputTopic-0 of length 1720 bytes (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Sending fetch response to client consumer-2 of 1602 bytes (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,10.36.240.33:9092-10.36.240.33:64688,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550869,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.MultiSend@ae3d14d,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=1,api_version=3,correlation_id=11,client_id=consumer-2} -- 
{replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=1,max_bytes=1048576}]}]} from connection 10.36.240.33:9092-10.36.240.33:64688;totalTime:5,requestQueueTime:1,localTime:3,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 2 received request : {api_key=8,api_version=2,correlation_id=13,client_id=consumer-2} -- {group_id=testOutputTopic,group_generation_id=3,member_id=consumer-2-c2e1d959-d994-4471-b304-dd9144fa1b38,retention_time=-1,topics=[{topic=testOutputTopic,partitions=[{partition=0,offset=10,metadata=}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=1,api_version=3,correlation_id=12,client_id=consumer-2} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=10,max_bytes=1048576}]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 4 on Broker 1], Kafka request handler 4 on broker 1 handling request Request(2,10.36.240.33:9092-10.36.240.33:64687,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550881,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=8,api_version=2,correlation_id=13,client_id=consumer-2} -- {group_id=testOutputTopic,group_generation_id=3,member_id=consumer-2-c2e1d959-d994-4471-b304-dd9144fa1b38,retention_time=-1,topics=[{topic=testOutputTopic,partitions=[{partition=0,offset=10,metadata=}]}]} from connection 10.36.240.33:9092-10.36.240.33:64687;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 3 on Broker 1], Kafka request handler 3 on broker 1 handling request 
Request(0,10.36.240.33:9092-10.36.240.33:64688,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550882,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=1,api_version=3,correlation_id=12,client_id=consumer-2} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=10,max_bytes=1048576}]}]} from connection 10.36.240.33:9092-10.36.240.33:64688;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Fetching log segment for partition testOutputTopic-0, offset 10, partition fetch size 1048576, remaining response limit 52428800, ignoring response/partition size limits (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Reading 1048576 bytes from offset 10 in log testOutputTopic-0 of length 1720 bytes (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Append [Map(__consumer_offsets-40 -> [(offset=0,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 2408654222, CreateTime = 1491400550887, key = 40 bytes, value = 28 bytes))])] to local log (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Inserting 102 bytes at offset 3 at position 643 with largest timestamp 1491400550887 at shallow offset 3 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended 102 to .\target\tmp\kafka\__consumer_offsets-40\00000000000000000000.log at offset 3 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Appended message set to log __consumer_offsets-40 with first offset: 3, next offset: 4, and messages: [(offset=3,record=Record(magic = 1, attributes = 0, compression = NONE, crc = 2408654222, CreateTime = 1491400550887, key = 40 bytes, value = 28 bytes))] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key 
__consumer_offsets-40 unblocked 0 fetch requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Setting high watermark for replica 1 partition __consumer_offsets-40 to [4 [0 : 745]] (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - Partition [__consumer_offsets,40] on broker 1: High watermark for partition [__consumer_offsets,40] updated to 4 [0 : 745] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key __consumer_offsets-40 unblocked 0 fetch requests. (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Request key __consumer_offsets-40 unblocked 0 producer requests. (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: 102 bytes written to log __consumer_offsets-40 beginning at offset 3 and ending at offset 3 (Logging.scala:36) 15:55:50 [DEBUG] Logging$class.debug - [Replica Manager on Broker 1]: Produce to local log in 3 ms (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Initial partition status for __consumer_offsets-40 is [acksPending: true, error: 7, startOffset: 3, requiredOffset: 4] (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Checking produce satisfaction for __consumer_offsets-40, current status [acksPending: true, error: 7, startOffset: 3, requiredOffset: 4] (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Partition [__consumer_offsets,40] on broker 1: 1 acks satisfied for __consumer_offsets-40 with acks = -1 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(2,Request(2,10.36.240.33:9092-10.36.240.33:64687,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550881,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@680f4e12,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed 
request:{api_key=8,api_version=2,correlation_id=13,client_id=consumer-2} -- {group_id=testOutputTopic,group_generation_id=3,member_id=consumer-2-c2e1d959-d994-4471-b304-dd9144fa1b38,retention_time=-1,topics=[{topic=testOutputTopic,partitions=[{partition=0,offset=10,metadata=}]}]} from connection 10.36.240.33:9092-10.36.240.33:64687;totalTime:17,requestQueueTime:1,localTime:15,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [DEBUG] Logging$class.debug - Accepted connection from /127.0.0.1:64693 on /127.0.0.1:9092 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - Processor 0 listening to new connection from /127.0.0.1:64693 (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=18,api_version=0,correlation_id=1,client_id=consumer-3} -- {} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 5 on Broker 1], Kafka request handler 5 on broker 1 handling request Request(0,127.0.0.1:9092-127.0.0.1:64693,Session(User:ANONYMOUS,/127.0.0.1),null,1491400550901,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=18,api_version=0,correlation_id=1,client_id=consumer-3} -- {} from connection 127.0.0.1:9092-127.0.0.1:64693;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,127.0.0.1:9092-127.0.0.1:64693,Session(User:ANONYMOUS,/127.0.0.1),null,1491400550901,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@51fa2664,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - 
Completed request:{api_key=18,api_version=0,correlation_id=1,client_id=consumer-3} -- {} from connection 127.0.0.1:9092-127.0.0.1:64693;totalTime:2,requestQueueTime:1,localTime:0,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=3,api_version=2,correlation_id=2,client_id=consumer-3} -- {topics=[testOutputTopic]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 handling request Request(0,127.0.0.1:9092-127.0.0.1:64693,Session(User:ANONYMOUS,/127.0.0.1),null,1491400550904,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=3,api_version=2,correlation_id=2,client_id=consumer-3} -- {topics=[testOutputTopic]} from connection 127.0.0.1:9092-127.0.0.1:64693;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Sending topic metadata org.apache.kafka.common.requests.MetadataResponse$TopicMetadata@5c02bce5 and brokers 1 : (EndPoint(ISI050.utenze.BANKIT.IT,9092,ListenerName(PLAINTEXT),PLAINTEXT)) : null for correlation id 2 to client consumer-3 (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,127.0.0.1:9092-127.0.0.1:64693,Session(User:ANONYMOUS,/127.0.0.1),null,1491400550904,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@4587fcaf,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=3,api_version=2,correlation_id=2,client_id=consumer-3} -- {topics=[testOutputTopic]} from connection 
127.0.0.1:9092-127.0.0.1:64693;totalTime:3,requestQueueTime:0,localTime:2,remoteTime:0,responseQueueTime:1,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 0 received request : {api_key=10,api_version=0,correlation_id=0,client_id=consumer-3} -- {group_id=testOutputTopic} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 2 on Broker 1], Kafka request handler 2 on broker 1 handling request Request(0,127.0.0.1:9092-127.0.0.1:64693,Session(User:ANONYMOUS,/127.0.0.1),null,1491400550907,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=10,api_version=0,correlation_id=0,client_id=consumer-3} -- {group_id=testOutputTopic} from connection 127.0.0.1:9092-127.0.0.1:64693;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Sending consumer metadata {error_code=0,coordinator={node_id=1,host=ISI050.utenze.BANKIT.IT,port=9092}} for correlation id 0 to client consumer-3. 
(Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,127.0.0.1:9092-127.0.0.1:64693,Session(User:ANONYMOUS,/127.0.0.1),null,1491400550907,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@12451755,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=10,api_version=0,correlation_id=0,client_id=consumer-3} -- {group_id=testOutputTopic} from connection 127.0.0.1:9092-127.0.0.1:64693;totalTime:3,requestQueueTime:1,localTime:2,remoteTime:0,responseQueueTime:1,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [DEBUG] Logging$class.debug - Accepted connection from /10.36.240.33:64694 on /10.36.240.33:9092 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (Logging.scala:54) 15:55:50 [DEBUG] Logging$class.debug - Processor 1 listening to new connection from /10.36.240.33:64694 (Logging.scala:54) 15:55:50 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=18,api_version=0,correlation_id=4,client_id=consumer-3} -- {} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 6 on Broker 1], Kafka request handler 6 on broker 1 handling request Request(1,10.36.240.33:9092-10.36.240.33:64694,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550915,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=18,api_version=0,correlation_id=4,client_id=consumer-3} -- {} from connection 10.36.240.33:9092-10.36.240.33:64694;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: 
Response(1,Request(1,10.36.240.33:9092-10.36.240.33:64694,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550915,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.NetworkSend@524c0dcb,SendAction) (Logging.scala:36) 15:55:50 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=18,api_version=0,correlation_id=4,client_id=consumer-3} -- {} from connection 10.36.240.33:9092-10.36.240.33:64694;totalTime:2,requestQueueTime:0,localTime:1,remoteTime:0,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:50 [TRACE] Logging$class.trace - Processor 1 received request : {api_key=11,api_version=1,correlation_id=3,client_id=consumer-3} -- {group_id=testOutputTopic,session_timeout=180000,rebalance_timeout=600000,member_id=,protocol_type=consumer,group_protocols=[{protocol_name=range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=27 cap=27]}]} (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 handling request Request(1,10.36.240.33:9092-10.36.240.33:64694,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550918,ListenerName(PLAINTEXT),PLAINTEXT) (Logging.scala:36) 15:55:50 [TRACE] Logging$class.trace - [KafkaApi-1] Handling request:{api_key=11,api_version=1,correlation_id=3,client_id=consumer-3} -- {group_id=testOutputTopic,session_timeout=180000,rebalance_timeout=600000,member_id=,protocol_type=consumer,group_protocols=[{protocol_name=range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=27 cap=27]}]} from connection 10.36.240.33:9092-10.36.240.33:64694;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (Logging.scala:36) 15:55:50 [INFO ] Logging$class.info - [GroupCoordinator 1]: Preparing to restabilize group testOutputTopic with old generation 3 (Logging.scala:70)
NOTE: From this point on the server hangs after "Preparing to restabilize group testOutputTopic" — no subsequent "Stabilized group testOutputTopic" message ever appears in the log.
15:55:51 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Fetching log segment for partition testOutputTopic-0, offset 10, partition fetch size 1048576, remaining response limit 52428800, ignoring response/partition size limits (Logging.scala:36) 15:55:51 [TRACE] Logging$class.trace - Reading 1048576 bytes from offset 10 in log testOutputTopic-0 of length 1720 bytes (Logging.scala:36) 15:55:51 [TRACE] Logging$class.trace - [KafkaApi-1] Sending fetch response to client consumer-2 of 0 bytes (Logging.scala:36) 15:55:51 [TRACE] Logging$class.trace - Socket server received response to send, registering for write and sending data: Response(0,Request(0,10.36.240.33:9092-10.36.240.33:64688,Session(User:ANONYMOUS,/10.36.240.33),null,1491400550882,ListenerName(PLAINTEXT),PLAINTEXT),org.apache.kafka.common.network.MultiSend@3cc6e0ae,SendAction) (Logging.scala:36) 15:55:51 [TRACE] RequestChannel$Request.updateRequestMetrics - Completed request:{api_key=1,api_version=3,correlation_id=12,client_id=consumer-2} -- {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,topics=[{topic=testOutputTopic,partitions=[{partition=0,fetch_offset=10,max_bytes=1048576}]}]} from connection 10.36.240.33:9092-10.36.240.33:64688;totalTime:506,requestQueueTime:0,localTime:2,remoteTime:502,responseQueueTime:0,sendTime:1,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT (RequestChannel.scala:157) 15:55:53 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:55:53 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Evaluating ISR list of partitions to see which replicas can be removed from the ISR (Logging.scala:36) 15:55:53 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:55:53 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. 
(Logging.scala:36) 15:55:53 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:55:55 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:55:55 [TRACE] Logging$class.trace - Completed execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:55:55 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:55:55 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-13 topicPartition=__consumer_offsets-13. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-46 topicPartition=__consumer_offsets-46. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-9 topicPartition=__consumer_offsets-9. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-42 topicPartition=__consumer_offsets-42. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-21 topicPartition=__consumer_offsets-21. 
Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-17 topicPartition=__consumer_offsets-17. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-30 topicPartition=__consumer_offsets-30. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-26 topicPartition=__consumer_offsets-26. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-5 topicPartition=__consumer_offsets-5. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-38 topicPartition=__consumer_offsets-38. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-1 topicPartition=__consumer_offsets-1. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-34 topicPartition=__consumer_offsets-34. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-16 topicPartition=__consumer_offsets-16. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-45 topicPartition=__consumer_offsets-45. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-12 topicPartition=__consumer_offsets-12. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-41 topicPartition=__consumer_offsets-41. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-24 topicPartition=__consumer_offsets-24. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-20 topicPartition=__consumer_offsets-20. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-49 topicPartition=__consumer_offsets-49.
Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-0 topicPartition=__consumer_offsets-0. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-29 topicPartition=__consumer_offsets-29. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-25 topicPartition=__consumer_offsets-25. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-8 topicPartition=__consumer_offsets-8. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-37 topicPartition=__consumer_offsets-37. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-4 topicPartition=__consumer_offsets-4. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-33 topicPartition=__consumer_offsets-33. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-15 topicPartition=__consumer_offsets-15. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-48 topicPartition=__consumer_offsets-48. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-11 topicPartition=__consumer_offsets-11. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-44 topicPartition=__consumer_offsets-44. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-23 topicPartition=__consumer_offsets-23. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-19 topicPartition=__consumer_offsets-19. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-32 topicPartition=__consumer_offsets-32.
Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:57 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-28 topicPartition=__consumer_offsets-28. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-7 topicPartition=__consumer_offsets-7. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-40 topicPartition=__consumer_offsets-40. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-3 topicPartition=__consumer_offsets-3. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-36 topicPartition=__consumer_offsets-36. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-47 topicPartition=__consumer_offsets-47. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-14 topicPartition=__consumer_offsets-14. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-43 topicPartition=__consumer_offsets-43. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-10 topicPartition=__consumer_offsets-10. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-22 topicPartition=__consumer_offsets-22. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-18 topicPartition=__consumer_offsets-18. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-31 topicPartition=__consumer_offsets-31. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-27 topicPartition=__consumer_offsets-27. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-39 topicPartition=__consumer_offsets-39.
Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-6 topicPartition=__consumer_offsets-6. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-35 topicPartition=__consumer_offsets-35. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-2 topicPartition=__consumer_offsets-2. Last clean offset=None now=1491400557987 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:55:58 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-expiration'. (Logging.scala:36)
15:55:58 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Evaluating ISR list of partitions to see which replicas can be removed from the ISR (Logging.scala:36)
15:55:58 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-expiration'. (Logging.scala:36)
15:55:58 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:55:58 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:56:00 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36)
15:56:00 [TRACE] Logging$class.trace - Completed execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36)
15:56:00 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:56:00 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:56:03 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-expiration'. (Logging.scala:36)
15:56:03 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Evaluating ISR list of partitions to see which replicas can be removed from the ISR (Logging.scala:36)
15:56:03 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-expiration'. (Logging.scala:36)
15:56:03 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:56:03 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:56:05 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36)
15:56:05 [TRACE] Logging$class.trace - Completed execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36)
15:56:05 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:56:05 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:56:08 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-expiration'. (Logging.scala:36)
15:56:08 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Evaluating ISR list of partitions to see which replicas can be removed from the ISR (Logging.scala:36)
15:56:08 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-expiration'. (Logging.scala:36)
15:56:08 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:56:08 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'.
(Logging.scala:36)
15:56:10 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36)
15:56:10 [TRACE] Logging$class.trace - Completed execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36)
15:56:10 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:56:10 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36)
15:56:12 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'kafka-log-retention'. (Logging.scala:36)
15:56:12 [DEBUG] Logging$class.debug - Beginning log cleanup... (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Garbage collecting 'testOutputTopic-0' (Logging.scala:54)
15:56:12 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'kafka-log-flusher'. (Logging.scala:36)
15:56:12 [DEBUG] Logging$class.debug - Log cleanup completed. 0 files deleted in 0 seconds (Logging.scala:54)
15:56:12 [TRACE] Logging$class.trace - Completed execution of scheduled task 'kafka-log-retention'. (Logging.scala:36)
15:56:12 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'kafka-recovery-point-checkpoint'. (Logging.scala:36)
15:56:12 [DEBUG] Logging$class.debug - Checking for dirty logs to flush... (Logging.scala:54)
15:56:12 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'kafka-delete-logs'. (Logging.scala:36)
15:56:12 [TRACE] Logging$class.trace - Completed execution of scheduled task 'kafka-delete-logs'.
(Logging.scala:36)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549315 time since last flush: 23660 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548960 time since last flush: 24017 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549138 time since last flush: 23839 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548800 time since last flush: 24177 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549036 time since last flush: 23941 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on testOutputTopic flush interval 9223372036854775807 last flushed 1491400545047 time since last flush: 27930 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548863 time since last flush: 24114 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548945 time since last flush: 24032 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548779 time since last flush: 24198 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549023 time since last flush: 23955 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549182 time since last flush: 23796 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548829 time since last flush: 24149 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549091 time since last flush: 23887 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549229 time since last flush: 23749 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548768 time since last flush: 24210 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549110 time since last flush: 23868 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549247 time since last flush: 23731 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548995 time since last flush: 23983 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548843 time since last flush: 24135 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548913 time since last flush: 24065
(Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548714 time since last flush: 24265 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548730 time since last flush: 24249 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549221 time since last flush: 23758 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548984 time since last flush: 23995 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549074 time since last flush: 23905 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548810 time since last flush: 24169 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548896 time since last flush: 24083 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549101 time since last flush: 23878 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548745 time since last flush: 24234 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548928 time since last flush: 24052 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549202 time since last flush: 23778 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548819 time since last flush: 24161 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549162 time since last flush: 23818 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549264 time since last flush: 23716 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549172 time since last flush: 23808 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548788 time since last flush: 24192 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549064 time since last flush: 23916 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549292 time since last flush: 23688 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548873 time since last flush: 24107 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549152 time since last flush: 23828
(Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548883 time since last flush: 24097 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549009 time since last flush: 23971 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548757 time since last flush: 24224 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549238 time since last flush: 23743 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549083 time since last flush: 23898 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549125 time since last flush: 23856 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548972 time since last flush: 24009 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400548854 time since last flush: 24127 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549211 time since last flush: 23770 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549192 time since last flush: 23789 (Logging.scala:54)
15:56:12 [DEBUG] Logging$class.debug - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1491400549054 time since last flush: 23927 (Logging.scala:54)
15:56:12 [TRACE] Logging$class.trace - Completed execution of scheduled task 'kafka-log-flusher'. (Logging.scala:36)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-13 topicPartition=__consumer_offsets-13. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-46 topicPartition=__consumer_offsets-46. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-9 topicPartition=__consumer_offsets-9. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-42 topicPartition=__consumer_offsets-42. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-21 topicPartition=__consumer_offsets-21. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-17 topicPartition=__consumer_offsets-17.
Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-30 topicPartition=__consumer_offsets-30. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-26 topicPartition=__consumer_offsets-26. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-5 topicPartition=__consumer_offsets-5. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-38 topicPartition=__consumer_offsets-38. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-1 topicPartition=__consumer_offsets-1. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-34 topicPartition=__consumer_offsets-34. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-16 topicPartition=__consumer_offsets-16. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-45 topicPartition=__consumer_offsets-45. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-12 topicPartition=__consumer_offsets-12. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-41 topicPartition=__consumer_offsets-41. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-24 topicPartition=__consumer_offsets-24. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-20 topicPartition=__consumer_offsets-20. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-49 topicPartition=__consumer_offsets-49. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54)
15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-0 topicPartition=__consumer_offsets-0.
Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-29 topicPartition=__consumer_offsets-29. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-25 topicPartition=__consumer_offsets-25. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-8 topicPartition=__consumer_offsets-8. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-37 topicPartition=__consumer_offsets-37. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-4 topicPartition=__consumer_offsets-4. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-33 topicPartition=__consumer_offsets-33. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-15 topicPartition=__consumer_offsets-15. 
Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-48 topicPartition=__consumer_offsets-48. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-11 topicPartition=__consumer_offsets-11. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-44 topicPartition=__consumer_offsets-44. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-23 topicPartition=__consumer_offsets-23. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-19 topicPartition=__consumer_offsets-19. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-32 topicPartition=__consumer_offsets-32. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-28 topicPartition=__consumer_offsets-28. 
Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-7 topicPartition=__consumer_offsets-7. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-40 topicPartition=__consumer_offsets-40. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-3 topicPartition=__consumer_offsets-3. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-36 topicPartition=__consumer_offsets-36. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-47 topicPartition=__consumer_offsets-47. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-14 topicPartition=__consumer_offsets-14. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-43 topicPartition=__consumer_offsets-43. 
Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-10 topicPartition=__consumer_offsets-10. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-22 topicPartition=__consumer_offsets-22. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-18 topicPartition=__consumer_offsets-18. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-31 topicPartition=__consumer_offsets-31. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-27 topicPartition=__consumer_offsets-27. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-39 topicPartition=__consumer_offsets-39. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-6 topicPartition=__consumer_offsets-6. 
Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-35 topicPartition=__consumer_offsets-35. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-2 topicPartition=__consumer_offsets-2. Last clean offset=None now=1491400573004 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:13 [TRACE] Logging$class.trace - Completed execution of scheduled task 'kafka-recovery-point-checkpoint'. (Logging.scala:36) 15:56:13 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:56:13 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Evaluating ISR list of partitions to see which replicas can be removed from the ISR (Logging.scala:36) 15:56:13 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:56:13 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:13 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:15 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:56:15 [TRACE] Logging$class.trace - Completed execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:56:15 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:15 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. 
(Logging.scala:36) 15:56:18 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:56:18 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Evaluating ISR list of partitions to see which replicas can be removed from the ISR (Logging.scala:36) 15:56:18 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:56:18 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:18 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:20 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:56:20 [TRACE] Logging$class.trace - Completed execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:56:20 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:20 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:23 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:56:23 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Evaluating ISR list of partitions to see which replicas can be removed from the ISR (Logging.scala:36) 15:56:23 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:56:23 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:23 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:25 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'highwatermark-checkpoint'. 
(Logging.scala:36) 15:56:25 [TRACE] Logging$class.trace - Completed execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:56:25 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:25 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-13 topicPartition=__consumer_offsets-13. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-46 topicPartition=__consumer_offsets-46. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-9 topicPartition=__consumer_offsets-9. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-42 topicPartition=__consumer_offsets-42. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-21 topicPartition=__consumer_offsets-21. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-17 topicPartition=__consumer_offsets-17. 
Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-30 topicPartition=__consumer_offsets-30. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-26 topicPartition=__consumer_offsets-26. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-5 topicPartition=__consumer_offsets-5. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-38 topicPartition=__consumer_offsets-38. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-1 topicPartition=__consumer_offsets-1. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-34 topicPartition=__consumer_offsets-34. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-16 topicPartition=__consumer_offsets-16. 
Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-45 topicPartition=__consumer_offsets-45. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-12 topicPartition=__consumer_offsets-12. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-41 topicPartition=__consumer_offsets-41. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-24 topicPartition=__consumer_offsets-24. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-20 topicPartition=__consumer_offsets-20. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-49 topicPartition=__consumer_offsets-49. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-0 topicPartition=__consumer_offsets-0. 
Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-29 topicPartition=__consumer_offsets-29. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-25 topicPartition=__consumer_offsets-25. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-8 topicPartition=__consumer_offsets-8. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-37 topicPartition=__consumer_offsets-37. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-4 topicPartition=__consumer_offsets-4. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-33 topicPartition=__consumer_offsets-33. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-15 topicPartition=__consumer_offsets-15. 
Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-48 topicPartition=__consumer_offsets-48. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-11 topicPartition=__consumer_offsets-11. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-44 topicPartition=__consumer_offsets-44. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-23 topicPartition=__consumer_offsets-23. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-19 topicPartition=__consumer_offsets-19. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-32 topicPartition=__consumer_offsets-32. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-28 topicPartition=__consumer_offsets-28. 
Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-7 topicPartition=__consumer_offsets-7. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-40 topicPartition=__consumer_offsets-40. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-3 topicPartition=__consumer_offsets-3. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-36 topicPartition=__consumer_offsets-36. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-47 topicPartition=__consumer_offsets-47. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-14 topicPartition=__consumer_offsets-14. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-43 topicPartition=__consumer_offsets-43. 
Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-10 topicPartition=__consumer_offsets-10. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-22 topicPartition=__consumer_offsets-22. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-18 topicPartition=__consumer_offsets-18. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-31 topicPartition=__consumer_offsets-31. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-27 topicPartition=__consumer_offsets-27. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-39 topicPartition=__consumer_offsets-39. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-6 topicPartition=__consumer_offsets-6. 
Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-35 topicPartition=__consumer_offsets-35. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [DEBUG] Logging$class.debug - Finding range of cleanable offsets for log=__consumer_offsets-2 topicPartition=__consumer_offsets-2. Last clean offset=None now=1491400588016 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 (Logging.scala:54) 15:56:28 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:56:28 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Evaluating ISR list of partitions to see which replicas can be removed from the ISR (Logging.scala:36) 15:56:28 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:56:28 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:28 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:30 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:56:30 [TRACE] Logging$class.trace - Completed execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:56:30 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:30 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:33 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-expiration'. 
(Logging.scala:36) 15:56:33 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Evaluating ISR list of partitions to see which replicas can be removed from the ISR (Logging.scala:36) 15:56:33 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:56:33 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:33 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:35 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:56:35 [TRACE] Logging$class.trace - Completed execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:56:35 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:35 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:38 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:56:38 [TRACE] Logging$class.trace - [Replica Manager on Broker 1]: Evaluating ISR list of partitions to see which replicas can be removed from the ISR (Logging.scala:36) 15:56:38 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-expiration'. (Logging.scala:36) 15:56:38 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:38 [TRACE] Logging$class.trace - Completed execution of scheduled task 'isr-change-propagation'. (Logging.scala:36) 15:56:40 [TRACE] Logging$class.trace - Beginning execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36) 15:56:40 [TRACE] Logging$class.trace - Completed execution of scheduled task 'highwatermark-checkpoint'. (Logging.scala:36)