[2019-12-04 07:29:46,906] ERROR [kafka-producer-network-thread | stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1-1_0-producer] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1-1_0-producer, transactionalId=stream-soak-test-1_0] Aborting producer batches due to fatal error (org.apache.kafka.clients.producer.internals.Sender)
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 07:29:46,906] WARN [kafka-producer-network-thread | stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1-1_0-producer] task [1_0] Error sending record to topic windowed-node-counts due to Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.; No more records will be sent and no more offsets will be recorded for this task. Enable TRACE logging to view failed record key and value. (org.apache.kafka.streams.processor.internals.RecordCollectorImpl)
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 07:29:46,906] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] Failed to commit stream task 1_0 since it got migrated to another thread already. Closing it as zombie before triggering a new rebalance. (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
[2019-12-04 07:29:46,911] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1-1_0-producer, transactionalId=stream-soak-test-1_0] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 07:29:51,004] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Failed to commit stream task 0_2 since it got migrated to another thread already. Closing it as zombie before triggering a new rebalance. (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
[2019-12-04 07:29:51,006] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-0_2-producer, transactionalId=stream-soak-test-0_2] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 07:29:51,008] WARN [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Detected task 0_2 that got migrated to another thread. This implies that this thread missed a rebalance and dropped out of the consumer group. Will try to rejoin the consumer group.
Below is the detailed description of the task:
>TaskId: 0_2
>> ProcessorTopology:
> KSTREAM-SOURCE-0000000000:
> topics: [logs.json.zookeeper, logs.kubernetes, logs.operator, logs.syslog, logs.json.kafka]
> children: [KSTREAM-MAPVALUES-0000000001]
> KSTREAM-MAPVALUES-0000000001:
> children: [KSTREAM-FILTER-0000000002, KSTREAM-FILTER-0000000035, KSTREAM-FILTER-0000000044, KSTREAM-FILTER-0000000053]
> KSTREAM-FILTER-0000000002:
> children: [KSTREAM-MAP-0000000003]
> KSTREAM-MAP-0000000003:
> children: [KSTREAM-SINK-0000000004]
> KSTREAM-SINK-0000000004:
> topic: StaticTopicNameExtractor(node-name-repartition)
> KSTREAM-FILTER-0000000035:
> children: [KSTREAM-KEY-SELECT-0000000036]
> KSTREAM-KEY-SELECT-0000000036:
> children: [KSTREAM-SINK-0000000037]
> KSTREAM-SINK-0000000037:
> topic: StaticTopicNameExtractor(network-id-repartition)
> KSTREAM-FILTER-0000000044:
> children: [KSTREAM-KEY-SELECT-0000000045]
> KSTREAM-KEY-SELECT-0000000045:
> children: [KSTREAM-SINK-0000000046]
> KSTREAM-SINK-0000000046:
> topic: StaticTopicNameExtractor(k8sName-id-repartition)
> KSTREAM-FILTER-0000000053:
> children: [KSTREAM-MAPVALUES-0000000054]
> KSTREAM-MAPVALUES-0000000054:
> children: [KSTREAM-SINK-0000000055]
> KSTREAM-SINK-0000000055:
> topic: StaticTopicNameExtractor(streams-soak-out)
>Partitions [logs.json.kafka-2, logs.json.zookeeper-2, logs.kubernetes-2, logs.operator-2, logs.syslog-2] (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 07:29:51,008] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-consumer, groupId=stream-soak-test] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 07:29:51,008] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-consumer, groupId=stream-soak-test] Subscribed to pattern: 'k8sName-id-repartition|logs.json.kafka|logs.json.zookeeper|logs.kubernetes|logs.operator|logs.syslog|network-id-repartition|node-name-repartition|windowed-node-counts' (org.apache.kafka.clients.consumer.KafkaConsumer)
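For context on the ProducerFencedException entries throughout this log: with transactional producers, two live producers can never share a transactional.id. When the newer one calls initTransactions(), the broker bumps the epoch for that id, and the older producer's next transactional operation fails with this exception. A minimal standalone sketch of the mechanism, using the plain kafka-clients API (the bootstrap address is a placeholder; the transactional.id and topic merely reuse names from this log):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;

public class FencingSketch {
    static KafkaProducer<String, String> transactionalProducer(String txId) {
        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        p.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, txId);
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return new KafkaProducer<>(p);
    }

    public static void main(String[] args) {
        // Same transactional.id as one of the Streams task producers in the log.
        KafkaProducer<String, String> old = transactionalProducer("stream-soak-test-1_0");
        old.initTransactions();

        // A second producer with the same id bumps the epoch on the broker...
        KafkaProducer<String, String> next = transactionalProducer("stream-soak-test-1_0");
        next.initTransactions();

        // ...so the older producer is now fenced and its transactional work fails.
        try {
            old.beginTransaction();
            old.send(new ProducerRecord<>("windowed-node-counts", "k", "v"));
            old.commitTransaction();
        } catch (ProducerFencedException fenced) {
            old.close(); // the only safe reaction: close and give up ownership
        }
        next.close();
    }
}
```

In Streams with exactly-once, the "newer producer" is the task producer started by whichever thread now owns the task, which is why the fenced thread closes the task as a zombie and rejoins the group.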
[2019-12-04 07:29:51,011] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-consumer, groupId=stream-soak-test] Revoking previously assigned partitions [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2019-12-04 07:29:51,011] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] State transition from RUNNING to PARTITIONS_REVOKED (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 07:29:51,011] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-client [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738] State transition from RUNNING to REBALANCING (org.apache.kafka.streams.KafkaStreams)
[2019-12-04 07:29:51,012] ERROR [kafka-producer-network-thread | stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-2_1-producer] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-2_1-producer, transactionalId=stream-soak-test-2_1] Aborting producer batches due to fatal error (org.apache.kafka.clients.producer.internals.Sender)
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 07:29:51,012] WARN [kafka-producer-network-thread | stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-2_1-producer] task [2_1] Error sending record to topic network-id-counts due to Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.; No more records will be sent and no more offsets will be recorded for this task. Enable TRACE logging to view failed record key and value. (org.apache.kafka.streams.processor.internals.RecordCollectorImpl)
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 07:29:51,017] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-2_1-producer, transactionalId=stream-soak-test-2_1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 07:29:51,018] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Failed to suspend stream task 2_1 since it got migrated to another thread already. Closing it as zombie and move on. (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
[2019-12-04 07:29:51,018] ERROR [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] task [2_1] Failed to close producer due to the following error: (org.apache.kafka.streams.processor.internals.StreamTask)
org.apache.kafka.common.errors.ProducerFencedException: task [2_1] Abort sending since producer got fenced with a previous record (key k8s-je-l0-us-central1 value 16 timestamp 1575444479998) to topic network-id-counts due to org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 07:29:51,022] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 07:29:51,022] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] partition revocation took 11 ms. suspended active tasks: [] suspended standby tasks: [] (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 07:29:51,022] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-consumer, groupId=stream-soak-test] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-12-04 07:29:53,408] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-consumer, groupId=stream-soak-test] Successfully joined group with generation 47 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-12-04 07:29:53,409] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-consumer, groupId=stream-soak-test] Setting newly assigned partitions: node-name-repartition-0, windowed-node-counts-0, k8sName-id-repartition-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2019-12-04 07:29:53,409] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] State transition from PARTITIONS_REVOKED to PARTITIONS_ASSIGNED (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 07:29:53,409] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Creating producer client for task 1_0 (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 07:29:53,409] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [172.31.26.44:9092, 172.31.29.20:9092, 172.31.31.132:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-1_0-producer
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 100
    max.block.ms = 2147483647
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = DEBUG
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = stream-soak-test-1_0
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 07:29:53,410] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-1_0-producer, transactionalId=stream-soak-test-1_0] Instantiated a transactional producer. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 07:29:53,410] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-1_0-producer, transactionalId=stream-soak-test-1_0] Overriding the default acks to all since idempotence is enabled. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 07:29:53,411] WARN [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] The configuration 'rocksdb.stats.dump.freq' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 07:29:53,411] WARN [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] The configuration 'topic.retention.bytes' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 07:29:53,411] WARN [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] The configuration 'topic.retention.ms' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 07:29:53,411] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Kafka version: 2.2.3-61c8228f3 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 07:29:53,411] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Kafka commitId: 61c8228f31479422 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 07:29:53,411] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-1_0-producer, transactionalId=stream-soak-test-1_0] ProducerId set to -1 with epoch -1 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 07:29:53,513] INFO [kafka-producer-network-thread | stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-1_0-producer] Cluster ID: 0TWKzLUNRB-3tQMTjQrFyQ (org.apache.kafka.clients.Metadata)
[2019-12-04 07:29:53,622] INFO [kafka-producer-network-thread | stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-1_0-producer] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-1_0-producer, transactionalId=stream-soak-test-1_0] ProducerId set to 2 with epoch 48 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 07:29:53,623] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Creating producer client for task 3_0 (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 07:29:53,623] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [172.31.26.44:9092, 172.31.29.20:9092, 172.31.31.132:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-3_0-producer
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 100
    max.block.ms = 2147483647
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = DEBUG
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = stream-soak-test-3_0
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 07:29:53,623] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-3_0-producer, transactionalId=stream-soak-test-3_0] Instantiated a transactional producer. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 07:29:53,624] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-3_0-producer, transactionalId=stream-soak-test-3_0] Overriding the default acks to all since idempotence is enabled. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 07:29:53,624] WARN [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] The configuration 'rocksdb.stats.dump.freq' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 07:29:53,624] WARN [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] The configuration 'topic.retention.bytes' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 07:29:53,624] WARN [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] The configuration 'topic.retention.ms' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-12-04 07:29:53,624] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Kafka version: 2.2.3-61c8228f3 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 07:29:53,624] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Kafka commitId: 61c8228f31479422 (org.apache.kafka.common.utils.AppInfoParser)
[2019-12-04 07:29:53,629] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-3_0-producer, transactionalId=stream-soak-test-3_0] ProducerId set to -1 with epoch -1 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 07:29:53,725] INFO [kafka-producer-network-thread | stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-3_0-producer] Cluster ID: 0TWKzLUNRB-3tQMTjQrFyQ (org.apache.kafka.clients.Metadata)
[2019-12-04 07:29:53,832] INFO [kafka-producer-network-thread | stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-3_0-producer] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-3_0-producer, transactionalId=stream-soak-test-3_0] ProducerId set to 1003 with epoch 47 (org.apache.kafka.clients.producer.internals.TransactionManager)
[2019-12-04 07:29:53,832] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] partition assignment took 423 ms. current active tasks: [1_0, 3_0] current standby tasks: [] previous active tasks: [] (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 07:29:53,834] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:53,841] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Subscribed to partition(s): stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 07:29:53,841] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] No checkpoint found for task 3_0 state store KSTREAM-AGGREGATE-STATE-STORE-0000000049 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 07:29:53,841] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 07:29:53,842] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:53,846] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 07:29:53,848] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Subscribed to partition(s): stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 07:29:53,848] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,370] ERROR [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] task [1_0] Failed to close producer due to the following error: (org.apache.kafka.streams.processor.internals.StreamTask)
org.apache.kafka.common.errors.ProducerFencedException: task [1_0] Abort sending since producer got fenced with a previous record (key gke-k8s-sz-b1-us-central-default-pool-i03fpedn-6rh0 value 1 timestamp 1575443741599) to topic windowed-node-counts due to org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 07:29:55,384] ERROR [kafka-producer-network-thread | stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2-1_1-producer] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2-1_1-producer, transactionalId=stream-soak-test-1_1] Aborting producer batches due to fatal error (org.apache.kafka.clients.producer.internals.Sender)
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 07:29:55,384] WARN [kafka-producer-network-thread | stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2-1_1-producer] task [1_1] Error sending record to topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog due to Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.; No more records will be sent and no more offsets will be recorded for this task. Enable TRACE logging to view failed record key and value. (org.apache.kafka.streams.processor.internals.RecordCollectorImpl)
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 07:29:55,401] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] Failed to process stream task 1_1 since it got migrated to another thread already. Closing it as zombie before triggering a new rebalance. (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
[2019-12-04 07:29:55,410] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2-1_1-producer, transactionalId=stream-soak-test-1_1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 07:29:55,416] ERROR [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] task [1_1] Failed to close producer due to the following error: (org.apache.kafka.streams.processor.internals.StreamTask)
org.apache.kafka.common.errors.ProducerFencedException: task [1_1] Abort sending since producer got fenced with a previous record (key gke-k8s-sz-b1-us-central-default-pool-0232896p-5182\x00\x00\x01n\xCF\xCE\xFF\x90\x00\x00\x01n\xCF\xCE\xFF\x90 value [B@6c473d9d timestamp 1575444479909) to topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog due to org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
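The two ProducerConfig dumps above (transactional.id = stream-soak-test-1_0 and stream-soak-test-3_0) follow the <application.id>-<taskId> naming that Kafka Streams uses when exactly-once processing is enabled: one transactional producer per task, each created on assignment. A minimal sketch of a configuration that yields this layout; the topology here is a stand-in, not the soak test's, and the bootstrap address is a placeholder:

```java
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class EosConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // application.id becomes the transactional.id prefix seen in the log:
        // stream-soak-test-1_0, stream-soak-test-3_0, ... (<application.id>-<taskId>)
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-soak-test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // exactly_once is what makes Streams create a transactional producer per task,
        // which is why the log shows a full ProducerConfig dump per task id.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("logs.syslog").to("streams-soak-out"); // stand-in topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```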
[2019-12-04 07:29:55,416] ERROR [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] task [1_1] Failed to close state store KSTREAM-AGGREGATE-STATE-STORE-0000000007: (org.apache.kafka.streams.processor.internals.ProcessorStateManager)
org.apache.kafka.common.errors.ProducerFencedException: task [1_1] Abort sending since producer got fenced with a previous record (key gke-k8s-sz-b1-us-central-default-pool-0232896p-5182\x00\x00\x01n\xCF\xCE\xFF\x90\x00\x00\x01n\xCF\xCE\xFF\x90 value [B@6c473d9d timestamp 1575444479909) to topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog due to org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 07:29:55,416] ERROR [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] task [1_1] Failed to close state store KSTREAM-AGGREGATE-STATE-STORE-0000000013: (org.apache.kafka.streams.processor.internals.ProcessorStateManager)
org.apache.kafka.common.errors.ProducerFencedException: task [1_1] Abort sending since producer got fenced with a previous record (key gke-k8s-sz-b1-us-central-default-pool-0232896p-5182\x00\x00\x01n\xCF\xCE\xFF\x90\x00\x00\x01n\xCF\xCE\xFF\x90 value [B@6c473d9d timestamp 1575444479909) to topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog due to org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 07:29:55,416] ERROR [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] task [1_1] Failed to close state store KSTREAM-AGGREGATE-STATE-STORE-0000000019: (org.apache.kafka.streams.processor.internals.ProcessorStateManager)
org.apache.kafka.common.errors.ProducerFencedException: task [1_1] Abort sending since producer got fenced with a previous record (key gke-k8s-sz-b1-us-central-default-pool-0232896p-5182\x00\x00\x01n\xCF\xCE\xFF\x90\x00\x00\x01n\xCF\xCE\xFF\x90 value [B@6c473d9d timestamp 1575444479909) to topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog due to org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 07:29:55,425] WARN [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] Detected task 1_0 that got migrated to another thread. This implies that this thread missed a rebalance and dropped out of the consumer group. Will try to rejoin the consumer group.
Below is the detailed description of the task:
>TaskId: 1_0
>> ProcessorTopology:
> KSTREAM-SOURCE-0000000005:
> topics: [node-name-repartition]
> children: [KSTREAM-AGGREGATE-0000000008, KSTREAM-AGGREGATE-0000000014, KSTREAM-AGGREGATE-0000000020, KSTREAM-AGGREGATE-0000000026, KSTREAM-JOIN-0000000033]
> KSTREAM-AGGREGATE-0000000008:
> states: [KSTREAM-AGGREGATE-STATE-STORE-0000000007]
> children: [KTABLE-TOSTREAM-0000000009, logData10MinuteFinalCount, logData10MinuteSuppressedCount]
> KTABLE-TOSTREAM-0000000009:
> children: [KSTREAM-MAP-0000000010]
> KSTREAM-MAP-0000000010:
> children: [KSTREAM-SINK-0000000011]
> KSTREAM-SINK-0000000011:
> topic: StaticTopicNameExtractor(windowed-node-counts)
> logData10MinuteFinalCount:
> states: [logData10MinuteFinalCount-store]
> children: [KTABLE-TOSTREAM-0000000056]
> KTABLE-TOSTREAM-0000000056:
> children: [KSTREAM-MAP-0000000057]
> KSTREAM-MAP-0000000057:
> children: [KSTREAM-SINK-0000000058]
> KSTREAM-SINK-0000000058:
> topic: StaticTopicNameExtractor(windowed-node-counts)
> logData10MinuteSuppressedCount:
> states: [logData10MinuteSuppressedCount-store]
> children: [KTABLE-TOSTREAM-0000000059]
> KTABLE-TOSTREAM-0000000059:
> children: [KSTREAM-MAP-0000000060]
> KSTREAM-MAP-0000000060:
> children: [KSTREAM-SINK-0000000061]
> KSTREAM-SINK-0000000061:
> topic: StaticTopicNameExtractor(windowed-node-counts)
> KSTREAM-AGGREGATE-0000000014:
> states: [KSTREAM-AGGREGATE-STATE-STORE-0000000013]
> children: [KTABLE-TOSTREAM-0000000015]
> KTABLE-TOSTREAM-0000000015:
> children: [KSTREAM-MAP-0000000016]
> KSTREAM-MAP-0000000016:
> children: [KSTREAM-SINK-0000000017]
> KSTREAM-SINK-0000000017:
> topic: StaticTopicNameExtractor(windowed-node-counts)
> KSTREAM-AGGREGATE-0000000020:
> states: [KSTREAM-AGGREGATE-STATE-STORE-0000000019]
> children: [KTABLE-TOSTREAM-0000000021]
> KTABLE-TOSTREAM-0000000021:
> children: [KSTREAM-MAP-0000000022]
> KSTREAM-MAP-0000000022:
> children: [KSTREAM-SINK-0000000023]
> KSTREAM-SINK-0000000023:
> topic: StaticTopicNameExtractor(windowed-node-counts)
> KSTREAM-AGGREGATE-0000000026:
> states: [KSTREAM-AGGREGATE-STATE-STORE-0000000025]
> children: [KTABLE-TOSTREAM-0000000027]
> KTABLE-TOSTREAM-0000000027:
> children: [KSTREAM-MAP-0000000028]
> KSTREAM-MAP-0000000028:
> children: [KSTREAM-SINK-0000000029]
> KSTREAM-SINK-0000000029:
> topic: StaticTopicNameExtractor(windowed-node-counts)
> KSTREAM-JOIN-0000000033:
> states: [windowed-node-counts-STATE-STORE-0000000030]
> children: [KSTREAM-SINK-0000000034]
> KSTREAM-SINK-0000000034:
> topic: StaticTopicNameExtractor(joined-counts)
> KSTREAM-SOURCE-0000000031:
> topics: [windowed-node-counts]
> children: [KTABLE-SOURCE-0000000032]
> KTABLE-SOURCE-0000000032:
> states: [windowed-node-counts-STATE-STORE-0000000030]
>Partitions [node-name-repartition-0, windowed-node-counts-0] (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 07:29:55,426] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1-consumer, groupId=stream-soak-test] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 07:29:55,426] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1-consumer, groupId=stream-soak-test] Subscribed to pattern: 'k8sName-id-repartition|logs.json.kafka|logs.json.zookeeper|logs.kubernetes|logs.operator|logs.syslog|network-id-repartition|node-name-repartition|windowed-node-counts' (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 07:29:55,429] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,429] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1-consumer, groupId=stream-soak-test] Revoking previously assigned partitions [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2019-12-04 07:29:55,429] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] State transition from RUNNING to PARTITIONS_REVOKED (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 07:29:55,436] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1-3_1-producer, transactionalId=stream-soak-test-3_1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-12-04 07:29:55,438] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] Failed to suspend stream task 3_1 since it got migrated to another thread already. Closing it as zombie and move on. (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
[2019-12-04 07:29:55,441] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 07:29:55,441] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] partition revocation took 12 ms. suspended active tasks: [] suspended standby tasks: [] (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 07:29:55,441] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-1-consumer, groupId=stream-soak-test] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-12-04 07:29:55,441] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,447] ERROR [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] task [1_1] Could not close state manager due to the following error: (org.apache.kafka.streams.processor.internals.StreamTask)
org.apache.kafka.streams.errors.ProcessorStateException: task [1_1] Failed to close state store KSTREAM-AGGREGATE-STATE-STORE-0000000007
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.close(ProcessorStateManager.java:317)
    at org.apache.kafka.streams.processor.internals.AbstractTask.closeStateManager(AbstractTask.java:250)
    at org.apache.kafka.streams.processor.internals.StreamTask.closeSuspended(StreamTask.java:681)
    at org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:731)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.closeZombieTask(AssignedTasks.java:151)
    at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:205)
    at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:425)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:910)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:817)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:786)
Caused by: org.apache.kafka.common.errors.ProducerFencedException: task [1_1] Abort sending since producer got fenced with a previous record (key gke-k8s-sz-b1-us-central-default-pool-0232896p-5182\x00\x00\x01n\xCF\xCE\xFF\x90\x00\x00\x01n\xCF\xCE\xFF\x90 value [B@6c473d9d timestamp 1575444479909) to topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog due to org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-12-04 07:29:55,447] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,450] WARN [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] Failed to close zombie stream task 1_1 due to org.apache.kafka.streams.errors.ProcessorStateException: task [1_1] Failed to close state store KSTREAM-AGGREGATE-STATE-STORE-0000000007; ignore and proceed. (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
[2019-12-04 07:29:55,450] ERROR [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] Encountered the following unexpected Kafka exception during processing, this usually indicate Streams internal errors: (org.apache.kafka.streams.processor.internals.StreamThread)
org.apache.kafka.streams.errors.ProcessorStateException: task [1_1] Failed to close state store KSTREAM-AGGREGATE-STATE-STORE-0000000007
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.close(ProcessorStateManager.java:317)
    at org.apache.kafka.streams.processor.internals.AbstractTask.closeStateManager(AbstractTask.java:250)
    at org.apache.kafka.streams.processor.internals.StreamTask.closeSuspended(StreamTask.java:681)
    at org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:731)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.closeZombieTask(AssignedTasks.java:151)
    at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:205)
    at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:425)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:910)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:817)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:786)
Caused by: org.apache.kafka.common.errors.ProducerFencedException: task [1_1] Abort sending since producer got fenced with a previous record (key gke-k8s-sz-b1-us-central-default-pool-0232896p-5182\x00\x00\x01n\xCF\xCE\xFF\x90\x00\x00\x01n\xCF\xCE\xFF\x90 value [B@6c473d9d timestamp 1575444479909) to topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog due to org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
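The WARN above ("ignore and proceed") shows the intent of the zombie-close path: when a task is closed because another thread already owns it, fencing during the close is expected and should be swallowed rather than escalated. Yet here the wrapping ProcessorStateException still reached the thread's main loop as an "unexpected Kafka exception". A hypothetical illustration of the swallow-on-zombie-close pattern; this is not the actual Streams code path, and closeZombieTask/flushStores are made-up names:

```java
import org.apache.kafka.common.errors.ProducerFencedException;

public class ZombieCloseSketch {
    // Hypothetical close path: a zombie task's stores flush through a producer
    // that has already been fenced, so fencing here is the expected outcome and
    // must not crash the thread.
    static void closeZombieTask(Runnable flushStores) {
        try {
            flushStores.run(); // may attempt changelog writes via the fenced producer
        } catch (ProducerFencedException expected) {
            // Another instance owns the transactional.id now; log and move on,
            // matching the "ignore and proceed" WARN in the log above.
            System.err.println("fenced during zombie close: " + expected.getMessage());
        }
    }

    public static void main(String[] args) {
        // Simulate a store flush that hits a fenced producer.
        closeZombieTask(() -> { throw new ProducerFencedException("old epoch"); });
    }
}
```

The log suggests the gap: the fencing error was rewrapped as a ProcessorStateException before reaching the catch that expected a ProducerFencedException, so it escaped.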
[2019-12-04 07:29:55,450] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] State transition from RUNNING to PENDING_SHUTDOWN (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 07:29:55,450] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] Shutting down (org.apache.kafka.streams.processor.internals.StreamThread)
[2019-12-04 07:29:55,450] ERROR [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] task [1_1] Could not close task due to the following error: (org.apache.kafka.streams.processor.internals.StreamTask)
java.lang.NullPointerException
    at org.apache.kafka.streams.processor.internals.StreamTask.maybeAbortTransactionAndCloseRecordCollector(StreamTask.java:623)
    at org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:615)
    at org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:724)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.close(AssignedTasks.java:341)
    at org.apache.kafka.streams.processor.internals.TaskManager.shutdown(TaskManager.java:267)
    at org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(StreamThread.java:1228)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:798)
[2019-12-04 07:29:55,484] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] task [1_1] Skipping to close non-initialized store KSTREAM-AGGREGATE-STATE-STORE-0000000025 (org.apache.kafka.streams.processor.internals.ProcessorStateManager)
[2019-12-04 07:29:55,484] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] task [1_1] Skipping to close non-initialized store windowed-node-counts-STATE-STORE-0000000030 (org.apache.kafka.streams.processor.internals.ProcessorStateManager)
[2019-12-04 07:29:55,484] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] task [1_1] Skipping to close non-initialized store logData10MinuteFinalCount-store (org.apache.kafka.streams.processor.internals.ProcessorStateManager)
[2019-12-04 07:29:55,484] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] task [1_1] Skipping to close non-initialized store logData10MinuteSuppressedCount-store (org.apache.kafka.streams.processor.internals.ProcessorStateManager)
[2019-12-04 07:29:55,485] ERROR [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] Failed while closing StreamTask 1_1 due to the following error: (org.apache.kafka.streams.processor.internals.AssignedStreamsTasks)
java.lang.NullPointerException
    at org.apache.kafka.streams.processor.internals.StreamTask.maybeAbortTransactionAndCloseRecordCollector(StreamTask.java:623)
    at org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:615)
    at org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:724)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.close(AssignedTasks.java:341)
    at org.apache.kafka.streams.processor.internals.TaskManager.shutdown(TaskManager.java:267)
    at org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(StreamThread.java:1228)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:798)
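The NullPointerException in maybeAbortTransactionAndCloseRecordCollector (the stack suggests the record collector's producer was already closed and nulled during the earlier zombie close) takes StreamThread-2 down for good. On the application side, a 2.2-era KafkaStreams instance only surfaces a thread death through an uncaught exception handler, which must be registered before start(). A sketch, with DeadThreadHook and installHandler as hypothetical names:

```java
import org.apache.kafka.streams.KafkaStreams;

class DeadThreadHook {
    // Register before streams.start(): when a StreamThread dies from a fatal
    // error like the NPE above, this handler is the app's only hook to observe
    // it and react, e.g. by closing the instance and restarting the process.
    static void installHandler(KafkaStreams streams) {
        streams.setUncaughtExceptionHandler((thread, throwable) ->
            System.err.println(thread.getName() + " died: " + throwable));
    }
}
```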
[2019-12-04 07:29:55,486] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] [Producer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2-3_2-producer, transactionalId=stream-soak-test-3_2] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer) [2019-12-04 07:29:55,490] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer) [2019-12-04 07:29:55,490] ERROR [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] Failed to close task manager due to the following error: (org.apache.kafka.streams.processor.internals.StreamThread) java.lang.NullPointerException at org.apache.kafka.streams.processor.internals.StreamTask.maybeAbortTransactionAndCloseRecordCollector(StreamTask.java:623) at org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:615) at org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:724) at org.apache.kafka.streams.processor.internals.AssignedTasks.close(AssignedTasks.java:341) at org.apache.kafka.streams.processor.internals.TaskManager.shutdown(TaskManager.java:267) at org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(StreamThread.java:1228) at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:798) [2019-12-04 07:29:55,491] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig) [2019-12-04 07:29:55,494] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] State transition from PENDING_SHUTDOWN to DEAD (org.apache.kafka.streams.processor.internals.StreamThread) [2019-12-04 07:29:55,494] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] Shutdown complete (org.apache.kafka.streams.processor.internals.StreamThread) [2019-12-04 07:29:55,494] ERROR [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2] Thread StreamsThread threadId: stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-2 TaskManager MetadataState: GlobalMetadata: [] GlobalStores: [] My HostInfo: HostInfo{host='unknown', port=-1} Cluster(id = null, nodes = [], partitions = [], controller = null) Active tasks: Running: Suspended: New: Restoring: Standby tasks: Running: Suspended: New: encountered an error processing soak test (org.apache.kafka.streams.StreamsSoakTest) org.apache.kafka.streams.errors.ProcessorStateException: task [1_1] Failed to close state store KSTREAM-AGGREGATE-STATE-STORE-0000000007 at org.apache.kafka.streams.processor.internals.ProcessorStateManager.close(ProcessorStateManager.java:317) at org.apache.kafka.streams.processor.internals.AbstractTask.closeStateManager(AbstractTask.java:250) at org.apache.kafka.streams.processor.internals.StreamTask.closeSuspended(StreamTask.java:681) at 
org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:731) at org.apache.kafka.streams.processor.internals.AssignedTasks.closeZombieTask(AssignedTasks.java:151) at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:205) at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:425) at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:910) at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:817) at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:786) Caused by: org.apache.kafka.common.errors.ProducerFencedException: task [1_1] Abort sending since producer got fenced with a previous record (key gke-k8s-sz-b1-us-central-default-pool-0232896p-5182\x00\x00\x01n\xCF\xCE\xFF\x90\x00\x00\x01n\xCF\xCE\xFF\x90 value [B@6c473d9d timestamp 1575444479909) to topic stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog due to org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker. [2019-12-04 07:29:55,497] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig) [2019-12-04 07:29:55,520] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig) [2019-12-04 07:29:55,546] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig) [2019-12-04 07:29:55,550] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig) [2019-12-04 07:29:55,573] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig) [2019-12-04 07:29:55,584] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Subscribed to partition(s): stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000019-changelog-0, stream-soak-test-logData10MinuteFinalCount-store-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000007-changelog-0, stream-soak-test-logData10MinuteSuppressedCount-store-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog-0, stream-soak-test-windowed-node-counts-STATE-STORE-0000000030-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000013-changelog-0 
[2019-12-04 07:29:55,585] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] No checkpoint found for task 1_0 state store KSTREAM-AGGREGATE-STATE-STORE-0000000019 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000019-changelog-0 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 07:29:55,585] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000019-changelog-0 to offset 13424351. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 07:29:55,585] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-logData10MinuteFinalCount-store-changelog-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 07:29:55,585] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-logData10MinuteSuppressedCount-store-changelog-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 07:29:55,612] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] No checkpoint found for task 1_0 state store logData10MinuteFinalCount-store changelog stream-soak-test-logData10MinuteFinalCount-store-changelog-0 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 07:29:55,612] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] No checkpoint found for task 1_0 state store KSTREAM-AGGREGATE-STATE-STORE-0000000007 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000007-changelog-0 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 07:29:55,612] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 07:29:55,612] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000013-changelog-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 07:29:55,612] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000007-changelog-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 07:29:55,612] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Resetting offset for partition stream-soak-test-windowed-node-counts-STATE-STORE-0000000030-changelog-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-12-04 07:29:55,627] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] No checkpoint found for task 1_0 state store logData10MinuteSuppressedCount-store changelog stream-soak-test-logData10MinuteSuppressedCount-store-changelog-0 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 07:29:55,627] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] No checkpoint found for task 1_0 state store KSTREAM-AGGREGATE-STATE-STORE-0000000025 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog-0 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 07:29:55,649] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] No checkpoint found for task 1_0 state store windowed-node-counts-STATE-STORE-0000000030 changelog stream-soak-test-windowed-node-counts-STATE-STORE-0000000030-changelog-0 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
[2019-12-04 07:29:55,660] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,665] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] stream-thread [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] No checkpoint found for task 1_0 state store KSTREAM-AGGREGATE-STATE-STORE-0000000013 changelog stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000013-changelog-0 with EOS turned on. Reinitializing the task and restore its state from the beginning. (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
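[Editor's note] The burst of "No checkpoint found ... with EOS turned on" entries is expected after an unclean shutdown: under eos-v1, Streams removes the local checkpoint file while an active task is running and only rewrites it on a clean close, so after a zombie close it cannot trust the local RocksDB state and rebuilds every store of task 1_0 from its changelog (hence the offset resets to 0, or to 13424351 where the changelog's start offset has presumably advanced through cleanup). Restores like this can take a long time; raising num.standby.replicas is the usual mitigation, and a StateRestoreListener makes the progress visible. A minimal sketch of such a listener, assuming it is registered via streams.setGlobalStateRestoreListener(...):

    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.streams.processor.StateRestoreListener;

    public class LoggingRestoreListener implements StateRestoreListener {
        @Override
        public void onRestoreStart(TopicPartition partition, String storeName,
                                   long startingOffset, long endingOffset) {
            // One call per store/partition, e.g. the eight changelogs subscribed above.
            System.out.printf("restoring %s from %s: offsets %d..%d%n",
                    storeName, partition, startingOffset, endingOffset);
        }

        @Override
        public void onBatchRestored(TopicPartition partition, String storeName,
                                    long batchEndOffset, long numRestored) {
            // Invoked after each restored batch; left empty in this sketch.
        }

        @Override
        public void onRestoreEnd(TopicPartition partition, String storeName,
                                 long totalRestored) {
            System.out.printf("finished restoring %s: %d records%n", storeName, totalRestored);
        }
    }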
[2019-12-04 07:29:55,706] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] [Consumer clientId=stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3-restore-consumer, groupId=null] Subscribed to partition(s): stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000019-changelog-0, stream-soak-test-logData10MinuteFinalCount-store-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000007-changelog-0, stream-soak-test-logData10MinuteSuppressedCount-store-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000049-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000025-changelog-0, stream-soak-test-windowed-node-counts-STATE-STORE-0000000030-changelog-0, stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-0000000013-changelog-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-12-04 07:29:55,706] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,759] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,763] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,841] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,846] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,874] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,878] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,906] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
[2019-12-04 07:29:55,910] INFO [stream-soak-test-45939848-c705-44e1-a5b9-3a6150e0c738-StreamThread-3] Using 5000000 for max log size 5 for max number logs and /mnt/data/deploy/streams/logs for log dir dropping stats every 3600 seconds (org.apache.kafka.streams.logging.RocksDbLoggingConfig)
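[Editor's note] org.apache.kafka.streams.logging.RocksDbLoggingConfig is not an Apache Kafka class; it appears to belong to the soak-test harness and logs once per state store as RocksDB is configured, which is why the same "Using 5000000 for max log size ..." line repeats while StreamThread-3 reopens the stores it took over. The numbers map one-to-one onto RocksDB's log options; a guess at the kind of RocksDBConfigSetter behind it (the class name is hypothetical, only the values come from the log):

    import java.util.Map;
    import org.apache.kafka.streams.state.RocksDBConfigSetter;
    import org.rocksdb.Options;

    public class SoakRocksDbConfig implements RocksDBConfigSetter {
        @Override
        public void setConfig(String storeName, Options options, Map<String, Object> configs) {
            options.setMaxLogFileSize(5_000_000L);                    // "5000000 for max log size"
            options.setKeepLogFileNum(5L);                            // "5 for max number logs"
            options.setDbLogDir("/mnt/data/deploy/streams/logs");     // "for log dir"
            options.setStatsDumpPeriodSec(3600);                      // "dropping stats every 3600 seconds"
        }
    }

    // Wired in via:
    // props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, SoakRocksDbConfig.class);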