[2019-12-04 13:28:49,478] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-1_1-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-1_1-producer, transactionalId=stream-soak-test-1_1] Resetting sequence number of batch with current sequence 6227169 for partition windowed-node-counts-1 to 6227157 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:49,479] ERROR [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-1_1-producer] task [1_1] Error sending record to topic windowed-node-counts due to This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception.; No more records will be sent and no more offsets will be recorded for this task. Enable TRACE logging to view failed record key and value. (org.apache.kafka.streams.processor.internals.RecordCollectorImpl)
org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception.
[2019-12-04 13:28:49,479] WARN [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-1_1-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-1_1-producer, transactionalId=stream-soak-test-1_1] Got error produce response with correlation id 401435 on topic-partition windowed-node-counts-1, retrying (2147483646 attempts left). Error: UNKNOWN_PRODUCER_ID (org.apache.kafka.clients.producer.internals.Sender)
[2019-12-04 13:28:49,479] ERROR [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-1_1-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-1_1-producer, transactionalId=stream-soak-test-1_1] Aborting producer batches due to fatal error (org.apache.kafka.clients.producer.internals.Sender)
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
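The sequence above is the crux: an UNKNOWN_PRODUCER_ID produce error is retried, but the retry path drops the producer into a fatal ProducerFencedException. By the transactional-producer contract, a fenced producer may not abort or continue; it can only be closed and replaced, which is exactly the zombie-close path Streams takes below. A minimal sketch of that contract with the plain Java client (topic, servers, and transactional id here are illustrative, not this app's):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;

public class FencedProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        // Only one live producer per transactional.id: a newer producer registering
        // the same id bumps the epoch on the broker and fences this instance.
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-txn-id");     // illustrative
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
            producer.commitTransaction();
        } catch (ProducerFencedException fenced) {
            // Fatal by contract: no abort, no retry; close and re-create.
            producer.close();
        }
    }
}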
[2019-12-04 13:28:49,482] ERROR [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] Failed to commit stream task 1_1 due to the following error: (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
org.apache.kafka.streams.errors.StreamsException: task [1_1] Abort sending since an error caught with a previous record (key gke-k8s-sz-b1-us-central-default-pool-45hu2c39-3z07 value 1 timestamp 1575465915058) to topic windowed-node-counts due to org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception.
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:139)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:51)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:202)
    at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1310)
    at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:230)
    at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:196)
    at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:719)
    at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:687)
    at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:637)
    at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:559)
    at org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:74)
    at org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:788)
    at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
    at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549)
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:288)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception.
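For orientation: the per-task transactional ids (stream-soak-test-1_1 and friends) and the one-producer-per-task pattern throughout these logs are what Kafka Streams 2.2 sets up when exactly-once processing is enabled. A sketch of the client-side configuration the logs imply (application id, brokers, and thread count read off the log prefixes; everything else left at defaults):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class SoakTestConfigSketch {
    public static Properties props() {
        Properties props = new Properties();
        // group.id and the transactional.id prefix seen throughout the logs
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-soak-test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "172.31.26.44:9092,172.31.29.20:9092,172.31.31.132:9092");
        // EOS: one transactional producer per task, id "<application.id>-<taskId>"
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        // Log prefixes show StreamThread-1 through StreamThread-3 on this instance
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 3);
        return props;
    }
}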
[2019-12-04 13:28:49,483] ERROR [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-3_2-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-3_2-producer, transactionalId=stream-soak-test-3_2] Aborting producer batches due to fatal error (org.apache.kafka.clients.producer.internals.Sender)
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 13:28:49,483] WARN [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-3_2-producer] task [3_2] Error sending record to topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog due to Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.; No more records will be sent and no more offsets will be recorded for this task. Enable TRACE logging to view failed record key and value. (org.apache.kafka.streams.processor.internals.RecordCollectorImpl)
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 13:28:49,484] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] Failed to commit stream task 3_2 since it got migrated to another thread already. Closing it as zombie before triggering a new rebalance. (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
[2019-12-04 13:28:49,484] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-3_2-producer, transactionalId=stream-soak-test-3_2] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:49,486] ERROR [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] task [3_2] Failed to close producer due to the following error: (org.apache.kafka.streams.processor.internals.StreamTask)
org.apache.kafka.common.errors.ProducerFencedException: task [3_2] Abort sending since producer got fenced with a previous record (key k8-43 value [B@718c8043 timestamp 1575465915611) to topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog due to org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 13:28:49,487] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] Detected task 3_2 that got migrated to another thread. This implies that this thread missed a rebalance and dropped out of the consumer group. Will try to rejoin the consumer group.
Below is the detailed description of the task:
TaskId: 3_2
    ProcessorTopology:
        KSTREAM-SOURCE-0000000047:
            topics: [k8sName-id-repartition]
            children: [KSTREAM-AGGREGATE-0000000050]
        KSTREAM-AGGREGATE-0000000050:
            states: [KSTREAM-AGGREGATE-STATE-STORE-0000000049]
            children: [KTABLE-TOSTREAM-0000000051]
        KTABLE-TOSTREAM-0000000051:
            children: [KSTREAM-SINK-0000000052]
        KSTREAM-SINK-0000000052:
            topic: StaticTopicNameExtractor(k8sName-counts)
Partitions [k8sName-id-repartition-2] (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:49,487] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-consumer, groupId=stream-soak-test] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:49,487] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-consumer, groupId=stream-soak-test] Subscribed to pattern: 'k8sName-id-repartition|logs.json.kafka|logs.json.zookeeper|logs.kubernetes|logs.operator|logs.syslog|network-id-repartition|node-name-repartition|windowed-node-counts' (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:49,487] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-consumer, groupId=stream-soak-test] Connection to node 2 (ip-172-31-29-20.us-west-2.compute.internal/172.31.29.20:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-12-04 13:28:49,521] WARN [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-1_2-producer] task [1_2] Error sending record to topic stream-soak-test-logData10MinuteSuppressedCount-store-changelog due to Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.; No more records will be sent and no more offsets will be recorded for this task. Enable TRACE logging to view failed record key and value. (org.apache.kafka.streams.processor.internals.RecordCollectorImpl)
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 13:28:49,522] ERROR [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-1_2-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-1_2-producer, transactionalId=stream-soak-test-1_2] Aborting producer batches due to fatal error (org.apache.kafka.clients.producer.internals.Sender)
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 13:28:49,522] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Failed to commit stream task 1_2 since it got migrated to another thread already. Closing it as zombie before triggering a new rebalance. (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
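The task 3_2 topology printed above is the standard shape the DSL generates for a grouped aggregation over a repartition topic: source, aggregate with one state store, toStream, sink. A rough reconstruction of that fragment (a sketch only: the real aggregator, serdes, and the upstream grouping that produced k8sName-id-repartition are not visible in the logs, so count() stands in for the aggregate):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;

public class TaskGroup3Sketch {
    public static StreamsBuilder topology() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("k8sName-id-repartition")              // KSTREAM-SOURCE-0000000047
               .groupByKey()
               .count(Materialized.as("KSTREAM-AGGREGATE-STATE-STORE-0000000049")) // KSTREAM-AGGREGATE-0000000050
               .toStream()                                                    // KTABLE-TOSTREAM-0000000051
               .to("k8sName-counts");                                         // KSTREAM-SINK-0000000052
        return builder;
    }
}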
[2019-12-04 13:28:49,525] ERROR [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-1_2-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-1_2-producer, transactionalId=stream-soak-test-1_2] Uncaught error in request completion: (org.apache.kafka.clients.NetworkClient)
org.apache.kafka.common.KafkaException: TransactionalId stream-soak-test-1_2: Invalid transition attempted from state FATAL_ERROR to state ABORTABLE_ERROR
    at org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:759)
    at org.apache.kafka.clients.producer.internals.TransactionManager.transitionToAbortableError(TransactionManager.java:333)
    at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:710)
    at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:687)
    at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:637)
    at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:559)
    at org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:74)
    at org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:788)
    at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
    at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549)
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:298)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235)
    at java.lang.Thread.run(Thread.java:748)
[2019-12-04 13:28:49,525] ERROR [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-1_2-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-1_2-producer, transactionalId=stream-soak-test-1_2] Uncaught error in request completion: (org.apache.kafka.clients.NetworkClient)
org.apache.kafka.common.KafkaException: TransactionalId stream-soak-test-1_2: Invalid transition attempted from state FATAL_ERROR to state ABORTABLE_ERROR
    at org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:759)
    at org.apache.kafka.clients.producer.internals.TransactionManager.transitionToAbortableError(TransactionManager.java:333)
    at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:710)
    at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:687)
    at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:637)
    at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:559)
    at org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:74)
    at org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:788)
    at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
    at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549)
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:298)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235)
    at java.lang.Thread.run(Thread.java:748)
[2019-12-04 13:28:49,528] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-1_2-producer, transactionalId=stream-soak-test-1_2] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:49,539] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-consumer, groupId=stream-soak-test] Revoking previously assigned partitions [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2019-12-04 13:28:49,539] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] State transition from RUNNING to PARTITIONS_REVOKED (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:49,539] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-client [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f] State transition from RUNNING to REBALANCING (org.apache.kafka.streams.KafkaStreams)
[2019-12-04 13:28:49,544] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-1_1-producer, transactionalId=stream-soak-test-1_1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:50,085] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Failed to commit stream task 2_0 since it got migrated to another thread already. Closing it as zombie before triggering a new rebalance. (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
[2019-12-04 13:28:50,086] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_0-producer, transactionalId=stream-soak-test-2_0] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
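The repeated "Invalid transition attempted from state FATAL_ERROR to state ABORTABLE_ERROR" above is the client's transaction state machine rejecting a second error transition: the fencing already moved the producer to FATAL_ERROR, and a later failed batch then takes the ordinary error path and asks for ABORTABLE_ERROR, which is no longer legal; hence "Uncaught error in request completion". A simplified model of that guard (hypothetical code paraphrasing the behavior visible in the stack trace, not the actual TransactionManager source):

// Hypothetical, simplified model of the transition guard seen in the trace.
enum TxnState {
    READY, IN_TRANSACTION, ABORTABLE_ERROR, FATAL_ERROR;

    boolean canTransitionTo(TxnState target) {
        // FATAL_ERROR is terminal: once fenced, no further transition is legal,
        // so a late batch failure requesting ABORTABLE_ERROR must be rejected.
        if (this == FATAL_ERROR) return false;
        return true; // the remaining rules are elided in this sketch
    }
}

class TxnStateMachine {
    private TxnState state = TxnState.READY;

    void transitionTo(TxnState target) {
        if (!state.canTransitionTo(target)) {
            throw new IllegalStateException(
                "Invalid transition attempted from state " + state + " to state " + target);
        }
        state = target;
    }
}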
[2019-12-04 13:28:50,096] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Detected task 2_0 that got migrated to another thread. This implies that this thread missed a rebalance and dropped out of the consumer group. Will try to rejoin the consumer group.
Below is the detailed description of the task:
TaskId: 2_0
    ProcessorTopology:
        KSTREAM-SOURCE-0000000038:
            topics: [network-id-repartition]
            children: [KSTREAM-AGGREGATE-0000000041]
        KSTREAM-AGGREGATE-0000000041:
            states: [KSTREAM-AGGREGATE-STATE-STORE-0000000040]
            children: [KTABLE-TOSTREAM-0000000042]
        KTABLE-TOSTREAM-0000000042:
            children: [KSTREAM-SINK-0000000043]
        KSTREAM-SINK-0000000043:
            topic: StaticTopicNameExtractor(network-id-counts)
Partitions [network-id-repartition-0] (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:50,096] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-consumer, groupId=stream-soak-test] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:50,096] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-consumer, groupId=stream-soak-test] Subscribed to pattern: 'k8sName-id-repartition|logs.json.kafka|logs.json.zookeeper|logs.kubernetes|logs.operator|logs.syslog|network-id-repartition|node-name-repartition|windowed-node-counts' (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:50,097] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-consumer, groupId=stream-soak-test] Connection to node 1 (ip-172-31-26-44.us-west-2.compute.internal/172.31.26.44:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-12-04 13:28:50,099] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-consumer, groupId=stream-soak-test] Revoking previously assigned partitions [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2019-12-04 13:28:50,099] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] State transition from RUNNING to PARTITIONS_REVOKED (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:50,099] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:50,099] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] partition revocation took 0 ms.
    suspended active tasks: []
    suspended standby tasks: [] (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:50,099] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-consumer, groupId=stream-soak-test] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-12-04 13:28:51,532] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-consumer, groupId=stream-soak-test] Successfully joined group with generation 70 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-12-04 13:28:51,533] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-consumer, groupId=stream-soak-test] Setting newly assigned partitions: node-name-repartition-1, network-id-repartition-0, windowed-node-counts-1, windowed-node-counts-2, k8sName-id-repartition-0, node-name-repartition-2 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2019-12-04 13:28:51,533] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] State transition from PARTITIONS_REVOKED to PARTITIONS_ASSIGNED (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:51,533] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Creating producer client for task 1_1 (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:51,533] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [172.31.26.44:9092, 172.31.29.20:9092, 172.31.31.132:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_1-producer
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 100
    max.block.ms = 2147483647
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = DEBUG
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = stream-soak-test-1_1
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,534] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_1-producer, transactionalId=stream-soak-test-1_1] Instantiated a transactional producer. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:51,535] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_1-producer, transactionalId=stream-soak-test-1_1] Overriding the default acks to all since idempotence is enabled. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:51,535] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'rocksdb.stats.dump.freq' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,535] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'topic.retention.bytes' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,535] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'topic.retention.ms' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
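The dump above is the Streams-generated per-task producer configuration: idempotence on, effectively infinite retries and delivery timeout, and transactional.id = <application.id>-<taskId>. Note that acks is printed as 1 but, per the line that follows the dump, is overridden to all because idempotence is enabled. A standalone equivalent of the interesting subset (a sketch; values copied from the dump):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class PerTaskProducerSketch {
    public static KafkaProducer<byte[], byte[]> create() {
        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
              "172.31.26.44:9092,172.31.29.20:9092,172.31.31.132:9092");
        p.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "stream-soak-test-1_1"); // <app.id>-<taskId>
        p.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);   // forces acks=all at runtime
        p.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE); // 2147483647 in the dump
        p.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, Integer.MAX_VALUE);
        p.put(ProducerConfig.LINGER_MS_CONFIG, 100);
        p.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 60000);
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        return new KafkaProducer<>(p);
    }
}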
[2019-12-04 13:28:51,535] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Kafka version: 2.2.3-61c8228f3 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:51,535] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Kafka commitId: 61c8228f31479422 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:51,535] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_1-producer, transactionalId=stream-soak-test-1_1] ProducerId set to -1 with epoch -1 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:51,640] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_1-producer] Cluster ID: 0TWKzLUNRB-3tQMTjQrFyQ (org.apache.kafka.clients.Metadata)
[2019-12-04 13:28:51,641] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_1-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_1-producer, transactionalId=stream-soak-test-1_1] ProducerId set to 2001 with epoch 75 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:51,642] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Creating producer client for task 2_0 (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:51,642] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [172.31.26.44:9092, 172.31.29.20:9092, 172.31.31.132:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_0-producer
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 100
    max.block.ms = 2147483647
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = DEBUG
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = stream-soak-test-2_0
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,642] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_0-producer, transactionalId=stream-soak-test-2_0] Instantiated a transactional producer. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:51,643] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_0-producer, transactionalId=stream-soak-test-2_0] Overriding the default acks to all since idempotence is enabled. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:51,645] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'rocksdb.stats.dump.freq' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,645] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'topic.retention.bytes' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,645] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'topic.retention.ms' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,645] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Kafka version: 2.2.3-61c8228f3 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:51,645] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Kafka commitId: 61c8228f31479422 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:51,646] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_0-producer, transactionalId=stream-soak-test-2_0] ProducerId set to -1 with epoch -1 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:51,750] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_0-producer] Cluster ID: 0TWKzLUNRB-3tQMTjQrFyQ (org.apache.kafka.clients.Metadata)
[2019-12-04 13:28:51,752] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_0-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_0-producer, transactionalId=stream-soak-test-2_0] ProducerId set to 1000 with epoch 71 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:51,753] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Creating producer client for task 1_2 (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:51,753] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [172.31.26.44:9092, 172.31.29.20:9092, 172.31.31.132:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 100
    max.block.ms = 2147483647
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = DEBUG
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = stream-soak-test-1_2
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,753] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer, transactionalId=stream-soak-test-1_2] Instantiated a transactional producer. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:51,754] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer, transactionalId=stream-soak-test-1_2] Overriding the default acks to all since idempotence is enabled. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:51,754] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'rocksdb.stats.dump.freq' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,754] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'topic.retention.bytes' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,754] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'topic.retention.ms' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,754] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Kafka version: 2.2.3-61c8228f3 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:51,754] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Kafka commitId: 61c8228f31479422 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:51,755] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Error registering AppInfo mbean (org.apache.kafka.common.utils.AppInfoParser)
javax.management.InstanceAlreadyExistsException: kafka.producer:type=app-info,id=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:425)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:288)
    at org.apache.kafka.streams.processor.internals.DefaultKafkaClientSupplier.getProducer(DefaultKafkaClientSupplier.java:39)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createProducer(StreamThread.java:469)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.lambda$createTask$0(StreamThread.java:459)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:192)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:172)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:460)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:411)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:396)
    at org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks(TaskManager.java:148)
    at org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:107)
    at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:295)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:292)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:410)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:344)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:342)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1226)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1191)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1176)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:961)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:857)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:817)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:786)
[2019-12-04 13:28:51,755] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer, transactionalId=stream-soak-test-1_2] ProducerId set to -1 with epoch -1 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:51,859] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer] Cluster ID: 0TWKzLUNRB-3tQMTjQrFyQ (org.apache.kafka.clients.Metadata)
[2019-12-04 13:28:51,860] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer, transactionalId=stream-soak-test-1_2] ProducerId set to 1002 with epoch 75 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:51,861] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Creating producer client for task 3_0 (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:51,861] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [172.31.26.44:9092, 172.31.29.20:9092, 172.31.31.132:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-3_0-producer
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 100
    max.block.ms = 2147483647
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = DEBUG
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = stream-soak-test-3_0
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,861] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-3_0-producer, transactionalId=stream-soak-test-3_0] Instantiated a transactional producer. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:51,862] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-3_0-producer, transactionalId=stream-soak-test-3_0] Overriding the default acks to all since idempotence is enabled. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:51,863] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'rocksdb.stats.dump.freq' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,863] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'topic.retention.bytes' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,863] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'topic.retention.ms' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:51,863] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Kafka version: 2.2.3-61c8228f3 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:51,863] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Kafka commitId: 61c8228f31479422 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:51,863] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-3_0-producer, transactionalId=stream-soak-test-3_0] ProducerId set to -1 with epoch -1 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:51,969] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-3_0-producer] Cluster ID: 0TWKzLUNRB-3tQMTjQrFyQ (org.apache.kafka.clients.Metadata)
[2019-12-04 13:28:51,969] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-3_0-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-3_0-producer, transactionalId=stream-soak-test-3_0] ProducerId set to 1003 with epoch 70 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:51,969] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] partition assignment took 436 ms.
    current active tasks: [1_1, 2_0, 1_2, 3_0]
    current standby tasks: []
    previous active tasks: [] (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:51,971] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:52,068] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:52,276] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:52,276] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 3_0 state store KSTREAM-AGGREGATE-STATE-STORE-0000000049 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:52,277] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:52,277] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:52,278] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:52,282] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 2_0 state store KSTREAM-AGGREGATE-STATE-STORE-0000000040 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-0 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:52,318] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:52,323] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:52,325] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:52,326] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:52,330] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:53,576] ERROR [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] task [1_2] Failed to close producer due to the following error: (org.apache.kafka.streams.processor.internals.StreamTask)
org.apache.kafka.common.errors.ProducerFencedException: task [1_2] Abort sending since producer got fenced with a previous record (key gke-k8s-sz-b1-us-central-default-pool-mx1q0535-42a5\x00\x00\x01n\xD1\x11D\x00 value null timestamp null) to topic stream-soak-test-logData10MinuteSuppressedCount-store-changelog due to org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 13:28:53,708] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Detected task 1_2 that got migrated to another thread. This implies that this thread missed a rebalance and dropped out of the consumer group. Will try to rejoin the consumer group.
Below is the detailed description of the task:
TaskId: 1_2
    ProcessorTopology:
        KSTREAM-SOURCE-0000000005:
            topics: [node-name-repartition]
            children: [KSTREAM-AGGREGATE-0000000008, KSTREAM-AGGREGATE-0000000014, KSTREAM-AGGREGATE-0000000020, KSTREAM-AGGREGATE-0000000026, KSTREAM-JOIN-0000000033]
        KSTREAM-AGGREGATE-0000000008:
            states: [KSTREAM-AGGREGATE-STATE-STORE-0000000007]
            children: [KTABLE-TOSTREAM-0000000009, logData10MinuteFinalCount, logData10MinuteSuppressedCount]
        KTABLE-TOSTREAM-0000000009:
            children: [KSTREAM-MAP-0000000010]
        KSTREAM-MAP-0000000010:
            children: [KSTREAM-SINK-0000000011]
        KSTREAM-SINK-0000000011:
            topic: StaticTopicNameExtractor(windowed-node-counts)
        logData10MinuteFinalCount:
            states: [logData10MinuteFinalCount-store]
            children: [KTABLE-TOSTREAM-0000000056]
        KTABLE-TOSTREAM-0000000056:
            children: [KSTREAM-MAP-0000000057]
        KSTREAM-MAP-0000000057:
            children: [KSTREAM-SINK-0000000058]
        KSTREAM-SINK-0000000058:
            topic: StaticTopicNameExtractor(windowed-node-counts)
        logData10MinuteSuppressedCount:
            states: [logData10MinuteSuppressedCount-store]
            children: [KTABLE-TOSTREAM-0000000059]
        KTABLE-TOSTREAM-0000000059:
            children: [KSTREAM-MAP-0000000060]
        KSTREAM-MAP-0000000060:
            children: [KSTREAM-SINK-0000000061]
        KSTREAM-SINK-0000000061:
            topic: StaticTopicNameExtractor(windowed-node-counts)
        KSTREAM-AGGREGATE-0000000014:
            states: [KSTREAM-AGGREGATE-STATE-STORE-0000000013]
            children: [KTABLE-TOSTREAM-0000000015]
        KTABLE-TOSTREAM-0000000015:
            children: [KSTREAM-MAP-0000000016]
        KSTREAM-MAP-0000000016:
            children: [KSTREAM-SINK-0000000017]
        KSTREAM-SINK-0000000017:
            topic: StaticTopicNameExtractor(windowed-node-counts)
        KSTREAM-AGGREGATE-0000000020:
            states: [KSTREAM-AGGREGATE-STATE-STORE-0000000019]
            children: [KTABLE-TOSTREAM-0000000021]
        KTABLE-TOSTREAM-0000000021:
            children: [KSTREAM-MAP-0000000022]
        KSTREAM-MAP-0000000022:
            children: [KSTREAM-SINK-0000000023]
        KSTREAM-SINK-0000000023:
            topic: StaticTopicNameExtractor(windowed-node-counts)
        KSTREAM-AGGREGATE-0000000026:
            states: [KSTREAM-AGGREGATE-STATE-STORE-0000000025]
            children: [KTABLE-TOSTREAM-0000000027]
        KTABLE-TOSTREAM-0000000027:
            children: [KSTREAM-MAP-0000000028]
        KSTREAM-MAP-0000000028:
            children: [KSTREAM-SINK-0000000029]
        KSTREAM-SINK-0000000029:
            topic: StaticTopicNameExtractor(windowed-node-counts)
        KSTREAM-JOIN-0000000033:
            states: [windowed-node-counts-STATE-STORE-0000000030]
            children: [KSTREAM-SINK-0000000034]
        KSTREAM-SINK-0000000034:
            topic: StaticTopicNameExtractor(joined-counts)
        KSTREAM-SOURCE-0000000031:
            topics: [windowed-node-counts]
            children: [KTABLE-SOURCE-0000000032]
        KTABLE-SOURCE-0000000032:
            states: [windowed-node-counts-STATE-STORE-0000000030]
Partitions [windowed-node-counts-2, node-name-repartition-2] (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:53,708] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-consumer, groupId=stream-soak-test] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:53,708] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-consumer, groupId=stream-soak-test] Subscribed to pattern: 'k8sName-id-repartition|logs.json.kafka|logs.json.zookeeper|logs.kubernetes|logs.operator|logs.syslog|network-id-repartition|node-name-repartition|windowed-node-counts' (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:53,710] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-consumer, groupId=stream-soak-test] Revoking previously assigned partitions [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2019-12-04 13:28:53,710] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] State transition from RUNNING to PARTITIONS_REVOKED (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:53,710] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:53,710] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] partition revocation took 0 ms.
    suspended active tasks: []
    suspended standby tasks: [] (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:53,710] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-consumer, groupId=stream-soak-test] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-12-04 13:28:53,711] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:53,719] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:53,736] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:53,807] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:53,811] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:53,833] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
(org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:53,904] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:53,945] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:54,020] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:54,024] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:54,045] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:54,116] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:54,192] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): stream-soak-test-windowed-node-counts-STATE-STORE-0000000030-changelog-2, stream-soak-test-logData10MinuteSuppressedCount-store-changelog-2, stream-soak-test-logData10MinuteFinalCount-store-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000007-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000013-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000019-changelog-2 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:54,192] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store windowed-node-counts-STATE-STORE-0000000030 changelog stream-soak-test-windowed-node-counts-STATE-STORE-0000000030-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:54,192] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:54,192] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000013-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:54,192] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-windowed-node-counts-STATE-STORE-0000000030-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:54,192] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000007-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:54,232] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:54,236] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store logData10MinuteSuppressedCount-store changelog stream-soak-test-logData10MinuteSuppressedCount-store-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:54,236] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-logData10MinuteSuppressedCount-store-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:54,236] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-logData10MinuteFinalCount-store-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:54,236] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000019-changelog-2 to offset 26850383. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:54,236] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store logData10MinuteFinalCount-store changelog stream-soak-test-logData10MinuteFinalCount-store-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:54,236] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store KSTREAM-AGGREGATE-STATE-STORE-0000000007 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000007-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:54,279] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store KSTREAM-AGGREGATE-STATE-STORE-0000000025 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:54,341] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store KSTREAM-AGGREGATE-STATE-STORE-0000000013 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000013-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:54,389] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store KSTREAM-AGGREGATE-STATE-STORE-0000000019 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000019-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning.
(org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:54,509] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): stream-soak-test-windowed-node-counts-STATE-STORE-0000000030-changelog-2, stream-soak-test-logData10MinuteSuppressedCount-store-changelog-2, stream-soak-test-logData10MinuteFinalCount-store-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000007-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000013-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000019-changelog-2 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:54,535] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:54,545] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-consumer, groupId=stream-soak-test] Attempt to heartbeat failed since group is rebalancing (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-12-04 13:28:54,551] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-consumer, groupId=stream-soak-test] Revoking previously assigned partitions [node-name-repartition-1, network-id-repartition-0, windowed-node-counts-1, windowed-node-counts-2, k8sName-id-repartition-0, node-name-repartition-2] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2019-12-04 13:28:54,551] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] State transition from PARTITIONS_ASSIGNED to PARTITIONS_REVOKED (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:54,551] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_1-producer, transactionalId=stream-soak-test-1_1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:54,684] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:54,684] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] partition revocation took 133 ms.
	suspended active tasks: []
	suspended standby tasks: [] (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:54,685] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-consumer, groupId=stream-soak-test] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-12-04 13:28:55,623] ERROR [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] Suspending stream task 1_1 failed due to the following error: (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
org.apache.kafka.streams.errors.StreamsException: task [1_1] Abort sending since an error caught with a previous record (key gke-k8s-sz-b1-us-central-default-pool-45hu2c39-3z07 value 1 timestamp 1575465915058) to topic windowed-node-counts due to org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception.
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:139)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:51)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:202)
    at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1310)
    at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:230)
    at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:196)
    at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:719)
    at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:687)
    at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:637)
    at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:559)
    at org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:74)
    at org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:788)
    at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
    at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549)
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:288)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception.
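The two exceptions woven through this log are the crux of the failure: the broker has expired the metadata for this producerId (surfacing as UNKNOWN_PRODUCER_ID), and a newer producer registered under the same transactionalId has fenced this one. Under exactly-once semantics both are fatal to the producer instance. A minimal sketch, using Kafka's public producer API, of the transactional send cycle in which these errors surface; the method, types, and error handling are illustrative assumptions, not the Streams-internal code:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.KafkaException;
    import org.apache.kafka.common.errors.ProducerFencedException;

    // Sketch: "producer" stands for one of the per-task transactional producers
    // above (e.g. transactionalId stream-soak-test-1_1); the topic is from the log.
    static void sendOneTransactionally(KafkaProducer<byte[], byte[]> producer,
                                       byte[] key, byte[] value) {
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("windowed-node-counts", key, value));
            producer.commitTransaction();
        } catch (ProducerFencedException fatal) {
            // Unrecoverable: a newer producer owns this transactional.id (the
            // "old epoch" error above); close and let the new owner continue.
            producer.close();
        } catch (KafkaException e) {
            // Other send failures can be rolled back and the batch retried.
            producer.abortTransaction();
        }
    }

Once a producer is fenced no such rollback is possible for it, which is consistent with the task-close failures that follow.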
[2019-12-04 13:28:55,623] ERROR [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] task [1_1] Could not close task due to the following error: (org.apache.kafka.streams.processor.internals.StreamTask)
java.lang.NullPointerException
    at org.apache.kafka.streams.processor.internals.StreamTask.maybeAbortTransactionAndCloseRecordCollector(StreamTask.java:623)
    at org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:615)
    at org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:724)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.suspendTasks(AssignedTasks.java:140)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.suspend(AssignedTasks.java:97)
    at org.apache.kafka.streams.processor.internals.TaskManager.suspendTasksAndState(TaskManager.java:242)
    at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsRevoked(StreamThread.java:331)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:461)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:396)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:344)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:342)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1226)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1191)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1176)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:961)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:861)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:817)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:786)
[2019-12-04 13:28:55,720] ERROR [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] After suspending failed, closing the same stream task 1_1 failed again due to the following error: (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
java.lang.NullPointerException
    at org.apache.kafka.streams.processor.internals.StreamTask.maybeAbortTransactionAndCloseRecordCollector(StreamTask.java:623)
    at org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:615)
    at org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:724)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.suspendTasks(AssignedTasks.java:140)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.suspend(AssignedTasks.java:97)
    at org.apache.kafka.streams.processor.internals.TaskManager.suspendTasksAndState(TaskManager.java:242)
    at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsRevoked(StreamThread.java:331)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:461)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:396)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:344)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:342)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1226)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1191)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1176)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:961)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:861)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:817)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:786)
[2019-12-04 13:28:55,721] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:55,721] ERROR [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] Error caught during partition revocation, will abort the current process and re-throw at the end of rebalance: {} (org.apache.kafka.streams.processor.internals.StreamThread)
org.apache.kafka.streams.errors.StreamsException: stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] failed to suspend stream tasks
    at org.apache.kafka.streams.processor.internals.TaskManager.suspendTasksAndState(TaskManager.java:256)
    at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsRevoked(StreamThread.java:331)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:461)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:396)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:344)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:342)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1226)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1191)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1176)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:961)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:861)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:817)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:786)
Caused by: org.apache.kafka.streams.errors.StreamsException: task [1_1] Abort sending since an error caught with a previous record (key gke-k8s-sz-b1-us-central-default-pool-45hu2c39-3z07 value 1 timestamp 1575465915058) to topic windowed-node-counts due to org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed.
Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception. at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:139) at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:51) at org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:202) at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1310) at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:230) at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:196) at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:719) at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:687) at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:637) at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:559) at org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:74) at org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:788) at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109) at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:288) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception. [2019-12-04 13:28:55,721] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] partition revocation took 6182 ms. suspended active tasks: [1_1] suspended standby tasks: [] (org.apache.kafka.streams.processor.internals.StreamThread) [2019-12-04 13:28:55,721] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-consumer, groupId=stream-soak-test] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator) [2019-12-04 13:28:55,721] ERROR [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] Encountered the following unexpected Kafka exception during processing, this usually indicate Streams internal errors: (org.apache.kafka.streams.processor.internals.StreamThread) org.apache.kafka.streams.errors.StreamsException: stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] Failed to rebalance. 
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:970) at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:861) at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:817) at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:786) Caused by: org.apache.kafka.streams.errors.StreamsException: stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] failed to suspend stream tasks at org.apache.kafka.streams.processor.internals.TaskManager.suspendTasksAndState(TaskManager.java:256) at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsRevoked(StreamThread.java:331) at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:461) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:396) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:344) at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:342) at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1226) at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1191) at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1176) at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:961) ... 3 more Caused by: org.apache.kafka.streams.errors.StreamsException: task [1_1] Abort sending since an error caught with a previous record (key gke-k8s-sz-b1-us-central-default-pool-45hu2c39-3z07 value 1 timestamp 1575465915058) to topic windowed-node-counts due to org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception. 
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:139) at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:51) at org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:202) at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1310) at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:230) at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:196) at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:719) at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:687) at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:637) at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:559) at org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:74) at org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:788) at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109) at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:288) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception. 
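What follows is the other half of the exactly-once machinery: the surviving threads re-create one producer per task, each with a derived transactional id (stream-soak-test-2_0, stream-soak-test-1_2, and so on below), and the missing checkpoints seen earlier force full changelog restores. A minimal sketch, assuming the public StreamsConfig API, of the client configuration that enables this mode; the application id and bootstrap servers are taken from the log, while the helper itself is hypothetical and not the soak test's actual setup code:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public final class SoakTestEosConfig {
        // Hypothetical helper: reconstructs the EOS-related settings implied
        // by this log.
        static Properties streamsProperties() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-soak-test");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
                    "172.31.26.44:9092,172.31.29.20:9092,172.31.31.132:9092");
            // Exactly-once processing: one transactional producer per task
            // (transactional.id = "<application.id>-<taskId>") and, on restore,
            // "No checkpoint found ... with EOS turned on" means the store is
            // rebuilt from offset 0 of its changelog.
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
                    StreamsConfig.EXACTLY_ONCE);
            return props;
        }
    }

Note that the acks = 1 shown in the ProducerConfig dumps below is only the pre-override default; each producer then logs "Overriding the default acks to all since idempotence is enabled".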
[2019-12-04 13:28:55,721] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] State transition from PARTITIONS_REVOKED to PENDING_SHUTDOWN (org.apache.kafka.streams.processor.internals.StreamThread) [2019-12-04 13:28:55,721] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] Shutting down (org.apache.kafka.streams.processor.internals.StreamThread) [2019-12-04 13:28:55,722] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer) [2019-12-04 13:28:55,962] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3-consumer, groupId=stream-soak-test] Successfully joined group with generation 71 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator) [2019-12-04 13:28:55,962] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-consumer, groupId=stream-soak-test] Successfully joined group with generation 71 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator) [2019-12-04 13:28:55,963] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-consumer, groupId=stream-soak-test] Setting newly assigned partitions: network-id-repartition-0, k8sName-id-repartition-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2019-12-04 13:28:55,963] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] State transition from PARTITIONS_REVOKED to PARTITIONS_ASSIGNED (org.apache.kafka.streams.processor.internals.StreamThread) [2019-12-04 13:28:55,963] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Creating producer client for task 2_0 (org.apache.kafka.streams.processor.internals.StreamThread) [2019-12-04 13:28:55,963] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [172.31.26.44:9092, 172.31.29.20:9092, 172.31.31.132:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-2_0-producer compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 100 max.block.ms = 2147483647 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = DEBUG metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 
reconnect.backoff.ms = 50 request.timeout.ms = 305000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = stream-soak-test-2_0 value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig) [2019-12-04 13:28:55,964] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-2_0-producer, transactionalId=stream-soak-test-2_0] Instantiated a transactional producer. (org.apache.kafka.clients.producer.KafkaProducer) [2019-12-04 13:28:55,964] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-consumer, groupId=stream-soak-test] Successfully joined group with generation 71 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator) [2019-12-04 13:28:55,964] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-consumer, groupId=stream-soak-test] Setting newly assigned partitions: network-id-repartition-2, windowed-node-counts-2, node-name-repartition-2 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2019-12-04 13:28:55,964] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] State transition from PARTITIONS_REVOKED to PARTITIONS_ASSIGNED (org.apache.kafka.streams.processor.internals.StreamThread) [2019-12-04 13:28:55,964] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Creating producer client for task 1_2 (org.apache.kafka.streams.processor.internals.StreamThread) [2019-12-04 13:28:55,965] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [172.31.26.44:9092, 172.31.29.20:9092, 172.31.31.132:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = true interceptor.classes = [] key.serializer = 
class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 100 max.block.ms = 2147483647 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = DEBUG metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 305000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = stream-soak-test-1_2 value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig) [2019-12-04 13:28:55,965] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-2_0-producer, transactionalId=stream-soak-test-2_0] Overriding the default acks to all since idempotence is enabled. (org.apache.kafka.clients.producer.KafkaProducer) [2019-12-04 13:28:55,965] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer, transactionalId=stream-soak-test-1_2] Instantiated a transactional producer. (org.apache.kafka.clients.producer.KafkaProducer) [2019-12-04 13:28:55,966] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] The configuration 'rocksdb.stats.dump.freq' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig) [2019-12-04 13:28:55,966] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] The configuration 'topic.retention.bytes' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig) [2019-12-04 13:28:55,966] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] The configuration 'topic.retention.ms' was supplied but isn't a known config. 
(org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:55,966] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Kafka version: 2.2.3-61c8228f3 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:55,966] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Kafka commitId: 61c8228f31479422 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:55,967] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-2_0-producer, transactionalId=stream-soak-test-2_0] ProducerId set to -1 with epoch -1 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:55,968] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] State transition from PENDING_SHUTDOWN to DEAD (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:55,968] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] Shutdown complete (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:55,968] ERROR [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] Thread StreamsThread threadId: stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3 TaskManager MetadataState: GlobalMetadata: [] GlobalStores: [] My HostInfo: HostInfo{host='unknown', port=-1} Cluster(id = null, nodes = [], partitions = [], controller = null) Active tasks: Running: Suspended: New: Restoring: Standby tasks: Running: Suspended: New: encountered an error processing soak test (org.apache.kafka.streams.StreamsSoakTest)
org.apache.kafka.streams.errors.StreamsException: stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] Failed to rebalance.
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:970) at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:861) at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:817) at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:786) Caused by: org.apache.kafka.streams.errors.StreamsException: stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-3] failed to suspend stream tasks at org.apache.kafka.streams.processor.internals.TaskManager.suspendTasksAndState(TaskManager.java:256) at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsRevoked(StreamThread.java:331) at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:461) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:396) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:344) at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:342) at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1226) at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1191) at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1176) at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:961) ... 3 more Caused by: org.apache.kafka.streams.errors.StreamsException: task [1_1] Abort sending since an error caught with a previous record (key gke-k8s-sz-b1-us-central-default-pool-45hu2c39-3z07 value 1 timestamp 1575465915058) to topic windowed-node-counts due to org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception. 
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:139) at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:51) at org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:202) at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1310) at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:230) at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:196) at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:719) at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:687) at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:637) at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:559) at org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:74) at org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:788) at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109) at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:288) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception. [2019-12-04 13:28:55,969] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer, transactionalId=stream-soak-test-1_2] Overriding the default acks to all since idempotence is enabled. (org.apache.kafka.clients.producer.KafkaProducer) [2019-12-04 13:28:55,969] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'rocksdb.stats.dump.freq' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig) [2019-12-04 13:28:55,969] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'topic.retention.bytes' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig) [2019-12-04 13:28:55,969] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'topic.retention.ms' was supplied but isn't a known config. 
(org.apache.kafka.clients.producer.ProducerConfig) [2019-12-04 13:28:55,970] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Kafka version: 2.2.3-61c8228f3 (org.apache.kafka.common.utils.AppInfoParser) [2019-12-04 13:28:55,970] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Kafka commitId: 61c8228f31479422 (org.apache.kafka.common.utils.AppInfoParser) [2019-12-04 13:28:55,970] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Error registering AppInfo mbean (org.apache.kafka.common.utils.AppInfoParser) javax.management.InstanceAlreadyExistsException: kafka.producer:type=app-info,id=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62) at org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:425) at org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:288) at org.apache.kafka.streams.processor.internals.DefaultKafkaClientSupplier.getProducer(DefaultKafkaClientSupplier.java:39) at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createProducer(StreamThread.java:469) at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.lambda$createTask$0(StreamThread.java:459) at org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:192) at org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:172) at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:460) at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:411) at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:396) at org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks(TaskManager.java:148) at org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:107) at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:295) at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:292) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:410) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:344) at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:342) at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1226) at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1191) at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1176) at 
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:961) at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:857) at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:817) at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:786) [2019-12-04 13:28:55,970] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer, transactionalId=stream-soak-test-1_2] ProducerId set to -1 with epoch -1 (org.apache.kafka.clients.producer.internals.TransactionManager) [2019-12-04 13:28:56,066] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-2_0-producer] Cluster ID: 0TWKzLUNRB-3tQMTjQrFyQ (org.apache.kafka.clients.Metadata) [2019-12-04 13:28:56,067] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer] Cluster ID: 0TWKzLUNRB-3tQMTjQrFyQ (org.apache.kafka.clients.Metadata) [2019-12-04 13:28:56,170] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-2_0-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-2_0-producer, transactionalId=stream-soak-test-2_0] ProducerId set to 1000 with epoch 72 (org.apache.kafka.clients.producer.internals.TransactionManager) [2019-12-04 13:28:56,170] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Creating producer client for task 3_0 (org.apache.kafka.streams.processor.internals.StreamThread) [2019-12-04 13:28:56,170] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [172.31.26.44:9092, 172.31.29.20:9092, 172.31.31.132:9092] buffer.memory = 33554432 client.dns.lookup = default client.id = stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-3_0-producer compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 100 max.block.ms = 2147483647 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = DEBUG metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 305000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = 
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = stream-soak-test-3_0
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
(org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:56,170] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-3_0-producer, transactionalId=stream-soak-test-3_0] Instantiated a transactional producer. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:56,171] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-3_0-producer, transactionalId=stream-soak-test-3_0] Overriding the default acks to all since idempotence is enabled. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:56,171] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-1_2-producer, transactionalId=stream-soak-test-1_2] ProducerId set to 1002 with epoch 76 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:56,171] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] The configuration 'rocksdb.stats.dump.freq' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:56,171] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] The configuration 'topic.retention.bytes' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:56,171] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] The configuration 'topic.retention.ms' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
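Two details in this config dump matter for the rest of the log: enable.idempotence = true together with a transactional.id of the form stream-soak-test-<taskId> is what makes each task producer transactional, and it is why the client immediately overrides the configured acks=1 to acks=all and then negotiates a ProducerId/epoch with the transaction coordinator. A minimal standalone sketch of that same lifecycle, with a placeholder broker address and placeholder ids (demo-txn-id and demo-topic are not from this log):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class TxnProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-txn-id");     // like stream-soak-test-3_0
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        // acks is left at its default; the client overrides it to "all" because idempotence
        // is on, which is exactly the "Overriding the default acks to all" INFO line above.

        try (Producer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.initTransactions(); // fences older epochs; logs "ProducerId set to <pid> with epoch <epoch>"
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("demo-topic", "key".getBytes(), "value".getBytes()));
            producer.commitTransaction();
        }
    }
}

The 'rocksdb.stats.dump.freq' / 'topic.retention.*' warnings are just custom keys from the soak test's StreamsConfig being forwarded to the producer, which does not recognize them.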
[2019-12-04 13:28:56,172] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Kafka version: 2.2.3-61c8228f3 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:56,172] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Kafka commitId: 61c8228f31479422 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:56,172] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Creating producer client for task 2_2 (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:56,172] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [172.31.26.44:9092, 172.31.29.20:9092, 172.31.31.132:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_2-producer
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 100
    max.block.ms = 2147483647
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = DEBUG
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = stream-soak-test-2_2
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
(org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:56,172] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-3_0-producer, transactionalId=stream-soak-test-3_0] ProducerId set to -1 with epoch -1 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:56,172] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_2-producer, transactionalId=stream-soak-test-2_2] Instantiated a transactional producer. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:56,175] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_2-producer, transactionalId=stream-soak-test-2_2] Overriding the default acks to all since idempotence is enabled. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 13:28:56,176] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'rocksdb.stats.dump.freq' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:56,176] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'topic.retention.bytes' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:56,176] WARN [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] The configuration 'topic.retention.ms' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 13:28:56,176] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Kafka version: 2.2.3-61c8228f3 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:56,176] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Kafka commitId: 61c8228f31479422 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 13:28:56,177] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_2-producer, transactionalId=stream-soak-test-2_2] ProducerId set to -1 with epoch -1 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:56,276] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_2-producer] Cluster ID: 0TWKzLUNRB-3tQMTjQrFyQ (org.apache.kafka.clients.Metadata)
[2019-12-04 13:28:56,278] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-3_0-producer] Cluster ID: 0TWKzLUNRB-3tQMTjQrFyQ (org.apache.kafka.clients.Metadata)
[2019-12-04 13:28:56,279] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-3_0-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-3_0-producer, transactionalId=stream-soak-test-3_0] ProducerId set to 1003 with epoch 71 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:56,279] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] partition assignment took 316 ms.
    current active tasks: [2_0, 3_0]
    current standby tasks: []
    previous active tasks: []
(org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:56,281] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:56,287] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:56,379] INFO [kafka-producer-network-thread | stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_2-producer] [Producer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-2_2-producer, transactionalId=stream-soak-test-2_2] ProducerId set to 2002 with epoch 70 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 13:28:56,379] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] partition assignment took 415 ms.
    current active tasks: [1_2, 2_2]
    current standby tasks: []
    previous active tasks: []
(org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 13:28:56,382] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:56,391] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:56,421] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-restore-consumer, groupId=null] Subscribed to partition(s): stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:56,421] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] No checkpoint found for task 3_0 state store KSTREAM-AGGREGATE-STATE-STORE-0000000049 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:56,422] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
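The "No checkpoint found ... with EOS turned on. Reinitializing the task and restore its state from the beginning" lines here and below are the expected behavior under processing.guarantee = exactly_once: local state is only trusted when a checkpoint file exists, so after an unclean shutdown the store is wiped and rebuilt from its changelog. For reference, EOS is a single Streams config; a minimal sketch with a placeholder topology and state directory (not the soak-test code):

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class EosConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-soak-test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "172.31.26.44:9092");
        // exactly_once makes every task producer transactional and means a
        // missing checkpoint forces a from-scratch changelog restore
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        props.put(StreamsConfig.STATE_DIR_CONFIG, "/tmp/kafka-streams"); // placeholder state dir

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // trivial placeholder topology

        new KafkaStreams(builder.build(), props).start();
    }
}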
[2019-12-04 13:28:56,422] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:56,422] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): stream-soak-test-windowed-node-counts-STATE-STORE-0000000030-changelog-2, stream-soak-test-logData10MinuteSuppressedCount-store-changelog-2, stream-soak-test-logData10MinuteFinalCount-store-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000007-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000013-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000019-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-2 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:56,422] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store windowed-node-counts-STATE-STORE-0000000030 changelog stream-soak-test-windowed-node-counts-STATE-STORE-0000000030-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:56,423] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:56,423] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000013-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:56,423] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-windowed-node-counts-STATE-STORE-0000000030-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:56,423] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000007-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:56,423] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:56,425] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:56,425] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:56,432] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] No checkpoint found for task 2_0 state store KSTREAM-AGGREGATE-STATE-STORE-0000000040 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-0 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:56,435] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:56,436] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store logData10MinuteSuppressedCount-store changelog stream-soak-test-logData10MinuteSuppressedCount-store-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:56,436] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-logData10MinuteSuppressedCount-store-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:56,436] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-logData10MinuteFinalCount-store-changelog-2 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 13:28:56,436] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000019-changelog-2 to offset 26850383. (org.apache.kafka.clients.consumer.internals.Fetcher)
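The restore consumer has no group (groupId=null): it manually assigns the changelog partitions and seeks to the beginning. Note that the last reset above lands on offset 26850383 rather than 0, which is likely what "beginning" means once retention or compaction has removed the head of that changelog: the earliest retained offset, not literally zero. A rough standalone equivalent of one restore pass, with a placeholder broker address and a commented-out placeholder for the local store write:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class ChangelogRestoreSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        // note: no group.id, matching the "groupId=null" restore consumer in the log

        TopicPartition changelog = new TopicPartition(
                "stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog", 2);
        try (Consumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singleton(changelog));           // manual assignment, no rebalance
            consumer.seekToBeginning(Collections.singleton(changelog));  // "Resetting offset ... " to the earliest offset
            long end = consumer.endOffsets(Collections.singleton(changelog)).get(changelog);
            while (consumer.position(changelog) < end) {
                for (ConsumerRecord<byte[], byte[]> rec : consumer.poll(Duration.ofMillis(100))) {
                    // store.put(rec.key(), rec.value()); // hypothetical local-store write
                }
            }
        }
    }
}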
[2019-12-04 13:28:56,437] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store logData10MinuteFinalCount-store changelog stream-soak-test-logData10MinuteFinalCount-store-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:56,437] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store KSTREAM-AGGREGATE-STATE-STORE-0000000007 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000007-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:56,438] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store KSTREAM-AGGREGATE-STATE-STORE-0000000025 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:56,439] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store KSTREAM-AGGREGATE-STATE-STORE-0000000013 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000013-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:56,440] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 1_2 state store KSTREAM-AGGREGATE-STATE-STORE-0000000019 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000019-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 13:28:56,441] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] stream-thread [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] No checkpoint found for task 2_2 state store KSTREAM-AGGREGATE-STATE-STORE-0000000040 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-2 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
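The checkpoint these messages look for is a small plain-text file named .checkpoint in each task's state directory. To the best of my knowledge the on-disk format in this era of Kafka is a version line (0), an entry count, then one "topic partition offset" triple per line; a tiny reader sketch (the path below is hypothetical):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class CheckpointDump {
    public static void main(String[] args) throws IOException {
        // Hypothetical layout: <state.dir>/<application.id>/<taskId>/.checkpoint
        Path file = Paths.get("/tmp/kafka-streams/stream-soak-test/1_2/.checkpoint");
        List<String> lines = Files.readAllLines(file);
        int version = Integer.parseInt(lines.get(0)); // format version, 0 here
        int entries = Integer.parseInt(lines.get(1)); // number of "topic partition offset" rows
        for (int i = 2; i < 2 + entries; i++) {
            String[] parts = lines.get(i).split(" ");
            System.out.printf("changelog=%s partition=%s offset=%s%n", parts[0], parts[1], parts[2]);
        }
    }
}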
[2019-12-04 13:28:56,443] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:56,456] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:56,458] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): stream-soak-test-windowed-node-counts-STATE-STORE-0000000030-changelog-2, stream-soak-test-logData10MinuteSuppressedCount-store-changelog-2, stream-soak-test-logData10MinuteFinalCount-store-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000007-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000013-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000019-changelog-2, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-2 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:56,459] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:56,462] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:56,464] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] [Consumer clientId=stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2-restore-consumer, groupId=null] Subscribed to partition(s): stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000040-changelog-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 13:28:56,464] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:56,470] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-2] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 13:28:56,470] INFO [stream-soak-test-f4273931-0cf8-429e-8e35-cc0a8bd9075f-StreamThread-1] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
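org.apache.kafka.streams.logging.RocksDbLoggingConfig is the soak test's own class, not part of Apache Kafka; its message reads like a RocksDBConfigSetter that redirects RocksDB's info log and stats dump. A hypothetical reconstruction using the same numbers as the log line (the real class may differ):

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

// Hypothetical stand-in for the RocksDbLoggingConfig seen above:
// the values mirror "5000000 for max log size 5 for max number logs ... every 3600 seconds".
public class RocksDbLoggingConfigSketch implements RocksDBConfigSetter {
    @Override
    public void setConfig(String storeName, Options options, Map<String, Object> configs) {
        options.setMaxLogFileSize(5_000_000L);                 // max log size
        options.setKeepLogFileNum(5L);                         // max number of logs to keep
        options.setDbLogDir("/mnt/data/deploy/streams/logs");  // log dir
        options.setStatsDumpPeriodSec(3600);                   // dump stats every 3600 seconds
    }
}

A setter like this is registered with props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, RocksDbLoggingConfigSketch.class), and the stray 'rocksdb.stats.dump.freq' key the producers warned about earlier looks like the knob that feeds the 3600-second period through StreamsConfig, which the clients then report as unknown.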