[2021-06-17 15:31:35,378] INFO [main] StreamsConfig values: acceptable.recovery.lag = 10000 application.id = wordcount-lambda-example application.server = bootstrap.servers = [localhost:9092] buffered.records.per.partition = 1000 built.in.metrics.version = latest cache.max.bytes.buffering = 0 client.id = wordcount-lambda-example-client commit.interval.ms = 10000 connections.max.idle.ms = 540000 default.deserialization.exception.handler = class org.apache.kafka.streams.errors.LogAndFailExceptionHandler default.key.serde = class org.apache.kafka.common.serialization.Serdes$StringSerde default.production.exception.handler = class org.apache.kafka.streams.errors.DefaultProductionExceptionHandler default.timestamp.extractor = class org.apache.kafka.streams.processor.FailOnInvalidTimestamp default.value.serde = class org.apache.kafka.common.serialization.Serdes$StringSerde default.windowed.key.serde.inner = null default.windowed.value.serde.inner = null max.task.idle.ms = 0 max.warmup.replicas = 2 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 num.standby.replicas = 2 num.stream.threads = 4 partition.grouper = class org.apache.kafka.streams.processor.DefaultPartitionGrouper poll.ms = 100 probing.rebalance.interval.ms = 600000 processing.guarantee = at_least_once receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 replication.factor = 1 request.timeout.ms = 40000 retries = 0 retry.backoff.ms = 100 rocksdb.config.setter = null security.protocol = PLAINTEXT send.buffer.bytes = 131072 state.cleanup.delay.ms = 600000 state.dir = /var/folders/t1/_165jr6j6lv6n4d2z66hpr8w0000gn/T/confluent14481924749337076647 task.timeout.ms = 300000 topology.optimization = none upgrade.from = null window.size.ms = null windowstore.changelog.additional.retention.ms = 86400000 (org.apache.kafka.streams.StreamsConfig) [2021-06-17 15:31:35,400] WARN [main] Using an OS temp 
directory in the state.dir property can cause failures with writing the checkpoint file due to the fact that this directory can be cleared by the OS. Resolved state.dir: [/var/folders/t1/_165jr6j6lv6n4d2z66hpr8w0000gn/T/confluent14481924749337076647] (org.apache.kafka.streams.processor.internals.StateDirectory) [2021-06-17 15:31:35,538] INFO [main] No process id found on disk, got fresh process id ae1e0fef-92c1-4431-800b-8aa4ad7567fa (org.apache.kafka.streams.processor.internals.StateDirectory) [2021-06-17 15:31:35,580] INFO [main] AdminClientConfig values: bootstrap.servers = [localhost:9092] client.dns.lookup = use_all_dns_ips client.id = wordcount-lambda-example-client-admin connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null 
ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig) [2021-06-17 15:31:35,731] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,731] INFO [main] Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,731] INFO [main] Kafka startTimeMs: 1623907895729 (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,734] INFO [main] stream-client [wordcount-lambda-example-client] Kafka Streams version: 6.2.0-ccs (org.apache.kafka.streams.KafkaStreams) [2021-06-17 15:31:35,734] INFO [main] stream-client [wordcount-lambda-example-client] Kafka Streams commit ID: 1a5755cf9401c84f (org.apache.kafka.streams.KafkaStreams) [2021-06-17 15:31:35,745] INFO [main] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.offset.reset = none bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = wordcount-lambda-example-client-StreamThread-1-restore-consumer client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = null group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = false internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 1000 
metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig) [2021-06-17 15:31:35,769] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,769] INFO [main] Kafka commitId: 1a5755cf9401c84f 
(org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,769] INFO [main] Kafka startTimeMs: 1623907895769 (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,777] INFO [main] ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = wordcount-lambda-example-client-StreamThread-1-producer compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] internal.auto.downgrade.txn.commit = false key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 100 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 
ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig) [2021-06-17 15:31:35,800] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,800] INFO [main] Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,800] INFO [main] Kafka startTimeMs: 1623907895800 (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,809] INFO [main] ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = wordcount-lambda-example-client-StreamThread-1-consumer client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = wordcount-lambda-example group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = false internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 1000 
metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig) [2021-06-17 15:31:35,825] INFO [main] stream-thread [wordcount-lambda-example-client-StreamThread-1-consumer] Cooperative rebalancing enabled now 
(org.apache.kafka.streams.processor.internals.assignment.AssignorConfiguration) [2021-06-17 15:31:35,839] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,839] INFO [main] Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,839] INFO [main] Kafka startTimeMs: 1623907895839 (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,843] INFO [main] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.offset.reset = none bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = wordcount-lambda-example-client-StreamThread-2-restore-consumer client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = null group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = false internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 
sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig) [2021-06-17 15:31:35,847] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,847] INFO [main] Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,847] INFO [main] Kafka startTimeMs: 1623907895847 (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,848] INFO [main] ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = wordcount-lambda-example-client-StreamThread-2-producer compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] internal.auto.downgrade.txn.commit = false key.serializer = class 
org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 100 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class 
org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig) [2021-06-17 15:31:35,852] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,853] INFO [main] Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,853] INFO [main] Kafka startTimeMs: 1623907895852 (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,854] INFO [main] ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = wordcount-lambda-example-client-StreamThread-2-consumer client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = wordcount-lambda-example group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = false internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 
0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig) [2021-06-17 15:31:35,857] INFO [main] stream-thread [wordcount-lambda-example-client-StreamThread-2-consumer] Cooperative rebalancing enabled now (org.apache.kafka.streams.processor.internals.assignment.AssignorConfiguration) [2021-06-17 15:31:35,859] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,859] INFO [main] Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,859] INFO [main] Kafka startTimeMs: 1623907895859 (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,861] INFO [main] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.offset.reset = none bootstrap.servers = [localhost:9092] check.crcs = 
true client.dns.lookup = use_all_dns_ips client.id = wordcount-lambda-example-client-StreamThread-3-restore-consumer client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = null group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = false internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null 
ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig) [2021-06-17 15:31:35,865] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,865] INFO [main] Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,865] INFO [main] Kafka startTimeMs: 1623907895865 (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,866] INFO [main] ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = wordcount-lambda-example-client-StreamThread-3-producer compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] internal.auto.downgrade.txn.commit = false key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 100 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = 
null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig) [2021-06-17 15:31:35,870] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,870] INFO [main] Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,870] INFO [main] Kafka startTimeMs: 1623907895870 (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,872] INFO [main] ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = 
use_all_dns_ips client.id = wordcount-lambda-example-client-StreamThread-3-consumer client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = wordcount-lambda-example group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = false internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null 
ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig) [2021-06-17 15:31:35,875] INFO [main] stream-thread [wordcount-lambda-example-client-StreamThread-3-consumer] Cooperative rebalancing enabled now (org.apache.kafka.streams.processor.internals.assignment.AssignorConfiguration) [2021-06-17 15:31:35,877] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,877] INFO [main] Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,877] INFO [main] Kafka startTimeMs: 1623907895877 (org.apache.kafka.common.utils.AppInfoParser) [2021-06-17 15:31:35,878] INFO [main] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.offset.reset = none bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = wordcount-lambda-example-client-StreamThread-4-restore-consumer client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = null group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = false internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 
    max.poll.interval.ms = 300000
    max.poll.records = 1000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
 (org.apache.kafka.clients.consumer.ConsumerConfig)
[2021-06-17 15:31:35,882] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
[2021-06-17 15:31:35,882] INFO [main] Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser)
[2021-06-17 15:31:35,882] INFO [main] Kafka startTimeMs: 1623907895881 (org.apache.kafka.common.utils.AppInfoParser)
[2021-06-17 15:31:35,882] INFO [main] ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [localhost:9092]
    buffer.memory = 33554432
    client.dns.lookup = use_all_dns_ips
    client.id = wordcount-lambda-example-client-StreamThread-4-producer
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    internal.auto.downgrade.txn.commit = false
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 100
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig)
[2021-06-17 15:31:35,885] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
[2021-06-17 15:31:35,885] INFO [main] Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser)
[2021-06-17 15:31:35,886] INFO [main] Kafka startTimeMs: 1623907895885 (org.apache.kafka.common.utils.AppInfoParser)
[2021-06-17 15:31:35,887] INFO [main] ConsumerConfig values:
    allow.auto.create.topics = false
    auto.commit.interval.ms = 5000
    auto.offset.reset = earliest
    bootstrap.servers = [localhost:9092]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = wordcount-lambda-example-client-StreamThread-4-consumer
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = wordcount-lambda-example
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = false
    internal.throw.on.fetch.stable.offset.unsupported = false
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 1000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
 (org.apache.kafka.clients.consumer.ConsumerConfig)
[2021-06-17 15:31:35,889] INFO [main] stream-thread [wordcount-lambda-example-client-StreamThread-4-consumer]
Cooperative rebalancing enabled now (org.apache.kafka.streams.processor.internals.assignment.AssignorConfiguration)
[2021-06-17 15:31:35,890] INFO [main] Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
[2021-06-17 15:31:35,890] INFO [main] Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser)
[2021-06-17 15:31:35,890] INFO [main] Kafka startTimeMs: 1623907895890 (org.apache.kafka.common.utils.AppInfoParser)
[2021-06-17 15:31:35,897] INFO [main] stream-client [wordcount-lambda-example-client] State transition from CREATED to REBALANCING (org.apache.kafka.streams.KafkaStreams)
[2021-06-17 15:31:35,899] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Subscribed to topic(s): streams-plaintext-input, wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition (org.apache.kafka.clients.consumer.KafkaConsumer)
[2021-06-17 15:31:35,899] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Subscribed to topic(s): streams-plaintext-input, wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition (org.apache.kafka.clients.consumer.KafkaConsumer)
[2021-06-17 15:31:35,899] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Subscribed to topic(s): streams-plaintext-input, wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition (org.apache.kafka.clients.consumer.KafkaConsumer)
[2021-06-17 15:31:35,899] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Subscribed to topic(s): streams-plaintext-input, wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition (org.apache.kafka.clients.consumer.KafkaConsumer)
[2021-06-17 15:31:36,013] INFO [kafka-producer-network-thread | wordcount-lambda-example-client-StreamThread-3-producer] [Producer clientId=wordcount-lambda-example-client-StreamThread-3-producer] Cluster ID: yO9eJzpGRG6RCRYDz4MoIw (org.apache.kafka.clients.Metadata)
[2021-06-17 15:31:36,014] INFO [kafka-producer-network-thread | wordcount-lambda-example-client-StreamThread-2-producer] [Producer clientId=wordcount-lambda-example-client-StreamThread-2-producer] Cluster ID: yO9eJzpGRG6RCRYDz4MoIw (org.apache.kafka.clients.Metadata)
[2021-06-17 15:31:36,014] INFO [kafka-producer-network-thread | wordcount-lambda-example-client-StreamThread-4-producer] [Producer clientId=wordcount-lambda-example-client-StreamThread-4-producer] Cluster ID: yO9eJzpGRG6RCRYDz4MoIw (org.apache.kafka.clients.Metadata)
[2021-06-17 15:31:36,013] INFO [kafka-producer-network-thread | wordcount-lambda-example-client-StreamThread-1-producer] [Producer clientId=wordcount-lambda-example-client-StreamThread-1-producer] Cluster ID: yO9eJzpGRG6RCRYDz4MoIw (org.apache.kafka.clients.Metadata)
[2021-06-17 15:31:36,021] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Cluster ID: yO9eJzpGRG6RCRYDz4MoIw (org.apache.kafka.clients.Metadata)
[2021-06-17 15:31:36,021] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Cluster ID: yO9eJzpGRG6RCRYDz4MoIw (org.apache.kafka.clients.Metadata)
[2021-06-17 15:31:36,021] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Cluster ID: yO9eJzpGRG6RCRYDz4MoIw (org.apache.kafka.clients.Metadata)
[2021-06-17 15:31:36,021] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Cluster ID: yO9eJzpGRG6RCRYDz4MoIw (org.apache.kafka.clients.Metadata)
[2021-06-17 15:31:36,022] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,022] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,022] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,022] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,024] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,024] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,025] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,025] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,038] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,038] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,038] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,038] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,916] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Successfully joined group with
generation Generation{generationId=3, memberId='wordcount-lambda-example-client-StreamThread-4-consumer-d2b60749-e522-4602-9eed-ff33b0c868a1', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,916] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Successfully joined group with generation Generation{generationId=3, memberId='wordcount-lambda-example-client-StreamThread-3-consumer-7c861acb-38f0-44f3-ac83-1826329bb24c', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,916] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Successfully joined group with generation Generation{generationId=3, memberId='wordcount-lambda-example-client-StreamThread-2-consumer-2a0bbdef-8575-472c-a1ce-dc6581d3358d', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,916] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Successfully joined group with generation Generation{generationId=3, memberId='wordcount-lambda-example-client-StreamThread-1-consumer-4b620d17-81e2-4aee-a417-3c57c6afcff7', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,924] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Successfully synced group in generation Generation{generationId=3, memberId='wordcount-lambda-example-client-StreamThread-1-consumer-4b620d17-81e2-4aee-a417-3c57c6afcff7', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,924] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Successfully synced group in generation Generation{generationId=3, memberId='wordcount-lambda-example-client-StreamThread-3-consumer-7c861acb-38f0-44f3-ac83-1826329bb24c', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,924] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Successfully synced group in generation Generation{generationId=3, memberId='wordcount-lambda-example-client-StreamThread-2-consumer-2a0bbdef-8575-472c-a1ce-dc6581d3358d', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,924] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Successfully synced group in generation Generation{generationId=3, memberId='wordcount-lambda-example-client-StreamThread-4-consumer-d2b60749-e522-4602-9eed-ff33b0c868a1', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:36,925] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Updating assignment with
    Assigned partitions: []
    Current owned partitions: []
    Added partitions (assigned - owned): []
    Revoked partitions (owned - assigned): []
 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:36,925] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Updating assignment with
    Assigned partitions: []
    Current owned partitions: []
    Added partitions (assigned - owned): []
    Revoked partitions (owned - assigned): []
 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:36,925] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Updating assignment with
    Assigned partitions: []
    Current owned partitions: []
    Added partitions (assigned - owned): []
    Revoked partitions (owned - assigned): []
 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:36,925] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Notifying assignor about the new Assignment(partitions=[], userDataSize=40) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:36,925] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Updating assignment with
    Assigned partitions: []
    Current owned partitions: []
    Added partitions (assigned - owned): []
    Revoked partitions (owned - assigned): []
 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:36,925] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Notifying assignor about the new Assignment(partitions=[], userDataSize=40) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:36,925] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Notifying assignor about the new Assignment(partitions=[], userDataSize=40) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:36,926] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Notifying assignor about the new Assignment(partitions=[], userDataSize=230) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:36,927] INFO [wordcount-lambda-example-client-StreamThread-4] stream-thread [wordcount-lambda-example-client-StreamThread-4-consumer] No followup rebalance was requested, resetting the rebalance schedule. (org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor)
[2021-06-17 15:31:36,927] INFO [wordcount-lambda-example-client-StreamThread-3] stream-thread [wordcount-lambda-example-client-StreamThread-3-consumer] No followup rebalance was requested, resetting the rebalance schedule. (org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor)
[2021-06-17 15:31:36,927] INFO [wordcount-lambda-example-client-StreamThread-2] stream-thread [wordcount-lambda-example-client-StreamThread-2-consumer] Requested to schedule immediate rebalance for new tasks to be safely revoked from current owner. (org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor)
[2021-06-17 15:31:36,927] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1-consumer] Requested to schedule immediate rebalance for new tasks to be safely revoked from current owner. (org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor)
[2021-06-17 15:31:36,929] INFO [wordcount-lambda-example-client-StreamThread-3] stream-thread [wordcount-lambda-example-client-StreamThread-3] Handle new assignment with:
    New active tasks: []
    New standby tasks: []
    Existing active tasks: []
    Existing standby tasks: []
 (org.apache.kafka.streams.processor.internals.TaskManager)
[2021-06-17 15:31:36,929] INFO [wordcount-lambda-example-client-StreamThread-2] stream-thread [wordcount-lambda-example-client-StreamThread-2] Handle new assignment with:
    New active tasks: []
    New standby tasks: []
    Existing active tasks: []
    Existing standby tasks: []
 (org.apache.kafka.streams.processor.internals.TaskManager)
[2021-06-17 15:31:36,929] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1] Handle new assignment with:
    New active tasks: []
    New standby tasks: [1_0, 1_1]
    Existing active tasks: []
    Existing standby tasks: []
 (org.apache.kafka.streams.processor.internals.TaskManager)
[2021-06-17 15:31:36,929] INFO [wordcount-lambda-example-client-StreamThread-4] stream-thread [wordcount-lambda-example-client-StreamThread-4] Handle new assignment with:
    New active tasks: []
    New standby tasks: []
    Existing active tasks: []
    Existing standby tasks: []
 (org.apache.kafka.streams.processor.internals.TaskManager)
[2021-06-17 15:31:36,930] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Adding newly assigned partitions: (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:36,930] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Adding newly assigned partitions: (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:36,930] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Adding newly assigned partitions: (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:36,936] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Adding newly assigned partitions: (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:36,950] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:37,186] INFO [wordcount-lambda-example-client-StreamThread-1] Opening store KSTREAM-AGGREGATE-STATE-STORE-0000000003 in regular mode (org.apache.kafka.streams.state.internals.RocksDBTimestampedStore)
[2021-06-17 15:31:37,189] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1] standby-task [1_0] State store KSTREAM-AGGREGATE-STATE-STORE-0000000003 did not find checkpoint offset, hence would default to the starting offset at changelog wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-0 (org.apache.kafka.streams.processor.internals.ProcessorStateManager)
[2021-06-17 15:31:37,189] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1] standby-task [1_0] Initialized (org.apache.kafka.streams.processor.internals.StandbyTask)
[2021-06-17 15:31:37,212] INFO [wordcount-lambda-example-client-StreamThread-1] Opening store KSTREAM-AGGREGATE-STATE-STORE-0000000003 in regular mode (org.apache.kafka.streams.state.internals.RocksDBTimestampedStore)
[2021-06-17 15:31:37,212] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1] standby-task [1_1] State store KSTREAM-AGGREGATE-STATE-STORE-0000000003 did not find checkpoint offset, hence would default to the starting offset at changelog wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-1 (org.apache.kafka.streams.processor.internals.ProcessorStateManager)
[2021-06-17 15:31:37,212] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1] standby-task [1_1] Initialized (org.apache.kafka.streams.processor.internals.StandbyTask)
[2021-06-17 15:31:37,212] INFO [wordcount-lambda-example-client-StreamThread-1] stream-client [wordcount-lambda-example-client] State transition from REBALANCING to RUNNING (org.apache.kafka.streams.KafkaStreams)
[2021-06-17 15:31:37,214] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-1, wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
[2021-06-17 15:31:37,216] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-restore-consumer, groupId=null] Seeking to EARLIEST offset of partition wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-1 (org.apache.kafka.clients.consumer.internals.SubscriptionState)
[2021-06-17 15:31:37,216] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-restore-consumer, groupId=null] Seeking to EARLIEST offset of partition wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState)
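The "did not find checkpoint offset" and "Seeking to EARLIEST offset" lines above show what a standby task does to build its copy of the state store: with no local checkpoint, it replays the changelog topic from the beginning and applies every update to a fresh store. A toy sketch of that replay logic follows; `ChangelogRecord` and `restore_store` are illustrative names invented here, not Kafka Streams APIs.

```python
# Toy model of changelog replay: each record is an upsert (key -> value)
# or a tombstone (value is None, meaning delete the key).
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangelogRecord:
    offset: int
    key: str
    value: Optional[bytes]  # None represents a tombstone (delete)

def restore_store(changelog, checkpoint_offset=None):
    """Replay changelog records into a fresh dict. With a checkpoint,
    skip records at or below the checkpointed offset; with none,
    replay from the beginning (the 'Seeking to EARLIEST' case)."""
    store = {}
    start = -1 if checkpoint_offset is None else checkpoint_offset
    for rec in changelog:
        if rec.offset <= start:
            continue
        if rec.value is None:
            store.pop(rec.key, None)  # tombstone: remove the key
        else:
            store[rec.key] = rec.value  # upsert: latest value wins
    return store

changelog = [
    ChangelogRecord(0, "hello", b"1"),
    ChangelogRecord(1, "world", b"1"),
    ChangelogRecord(2, "hello", b"2"),
]
print(restore_store(changelog))  # {'hello': b'2', 'world': b'1'}
```

The same replay with `checkpoint_offset=1` skips the first two records, which is why a found checkpoint makes restoration cheaper than the full EARLIEST replay seen in this log.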
[2021-06-17 15:31:37,220] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:37,634] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-restore-consumer, groupId=null] Cluster ID: yO9eJzpGRG6RCRYDz4MoIw (org.apache.kafka.clients.Metadata)
[2021-06-17 15:31:38,267] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState)
[2021-06-17 15:31:38,268] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 0 rack: null)], epoch=0}}.
 (org.apache.kafka.clients.consumer.internals.SubscriptionState)
[2021-06-17 15:31:39,922] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Attempt to heartbeat failed since group is rebalancing (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:39,922] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Attempt to heartbeat failed since group is rebalancing (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:39,922] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:39,922] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:39,924] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Successfully joined group with generation Generation{generationId=4, memberId='wordcount-lambda-example-client-StreamThread-2-consumer-2a0bbdef-8575-472c-a1ce-dc6581d3358d', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:39,924] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Successfully joined group with generation Generation{generationId=4, memberId='wordcount-lambda-example-client-StreamThread-4-consumer-d2b60749-e522-4602-9eed-ff33b0c868a1', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:39,924] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Successfully joined group with generation Generation{generationId=4, memberId='wordcount-lambda-example-client-StreamThread-1-consumer-4b620d17-81e2-4aee-a417-3c57c6afcff7', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:39,924] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Successfully joined group with generation Generation{generationId=4, memberId='wordcount-lambda-example-client-StreamThread-3-consumer-7c861acb-38f0-44f3-ac83-1826329bb24c', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:39,932] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Successfully synced group in generation Generation{generationId=4, memberId='wordcount-lambda-example-client-StreamThread-1-consumer-4b620d17-81e2-4aee-a417-3c57c6afcff7', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:39,932] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Successfully synced group in generation Generation{generationId=4, memberId='wordcount-lambda-example-client-StreamThread-2-consumer-2a0bbdef-8575-472c-a1ce-dc6581d3358d', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:39,932] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Successfully synced group in generation Generation{generationId=4, memberId='wordcount-lambda-example-client-StreamThread-3-consumer-7c861acb-38f0-44f3-ac83-1826329bb24c', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:39,932] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Successfully synced group in generation Generation{generationId=4, memberId='wordcount-lambda-example-client-StreamThread-4-consumer-d2b60749-e522-4602-9eed-ff33b0c868a1', protocol='stream'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-06-17 15:31:39,932] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Updating assignment with
    Assigned partitions: [wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition-0]
    Current owned partitions: []
    Added partitions (assigned - owned): [wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition-0]
    Revoked partitions (owned - assigned): []
 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:39,933] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Notifying assignor about the new Assignment(partitions=[wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition-0], userDataSize=143) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2021-06-17 15:31:39,933] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer
clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Updating assignment with Assigned partitions: [] Current owned partitions: [] Added partitions (assigned - owned): [] Revoked partitions (owned - assigned): [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,932] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Updating assignment with Assigned partitions: [streams-plaintext-input-0] Current owned partitions: [] Added partitions (assigned - owned): [streams-plaintext-input-0] Revoked partitions (owned - assigned): [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,933] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Notifying assignor about the new Assignment(partitions=[], userDataSize=40) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,933] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Updating assignment with Assigned partitions: [] Current owned partitions: [] Added partitions (assigned - owned): [] Revoked partitions (owned - assigned): [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,933] INFO [wordcount-lambda-example-client-StreamThread-4] stream-thread [wordcount-lambda-example-client-StreamThread-4-consumer] No followup rebalance was requested, resetting the rebalance schedule. 
(org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor) [2021-06-17 15:31:39,933] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Notifying assignor about the new Assignment(partitions=[streams-plaintext-input-0], userDataSize=48) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,933] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Notifying assignor about the new Assignment(partitions=[], userDataSize=40) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,933] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1-consumer] No followup rebalance was requested, resetting the rebalance schedule. (org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor) [2021-06-17 15:31:39,934] INFO [wordcount-lambda-example-client-StreamThread-3] stream-thread [wordcount-lambda-example-client-StreamThread-3-consumer] No followup rebalance was requested, resetting the rebalance schedule. (org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor) [2021-06-17 15:31:39,933] INFO [wordcount-lambda-example-client-StreamThread-2] stream-thread [wordcount-lambda-example-client-StreamThread-2-consumer] No followup rebalance was requested, resetting the rebalance schedule. 
(org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor) [2021-06-17 15:31:39,934] INFO [wordcount-lambda-example-client-StreamThread-3] stream-thread [wordcount-lambda-example-client-StreamThread-3] Handle new assignment with: New active tasks: [] New standby tasks: [] Existing active tasks: [] Existing standby tasks: [] (org.apache.kafka.streams.processor.internals.TaskManager) [2021-06-17 15:31:39,934] INFO [wordcount-lambda-example-client-StreamThread-2] stream-thread [wordcount-lambda-example-client-StreamThread-2] Handle new assignment with: New active tasks: [0_0] New standby tasks: [] Existing active tasks: [] Existing standby tasks: [] (org.apache.kafka.streams.processor.internals.TaskManager) [2021-06-17 15:31:39,933] INFO [wordcount-lambda-example-client-StreamThread-4] stream-thread [wordcount-lambda-example-client-StreamThread-4] Handle new assignment with: New active tasks: [] New standby tasks: [] Existing active tasks: [] Existing standby tasks: [] (org.apache.kafka.streams.processor.internals.TaskManager) [2021-06-17 15:31:39,934] INFO [wordcount-lambda-example-client-StreamThread-3] [Consumer clientId=wordcount-lambda-example-client-StreamThread-3-consumer, groupId=wordcount-lambda-example] Adding newly assigned partitions: (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,934] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1] Handle new assignment with: New active tasks: [1_0] New standby tasks: [1_1] Existing active tasks: [] Existing standby tasks: [1_0, 1_1] (org.apache.kafka.streams.processor.internals.TaskManager) [2021-06-17 15:31:39,934] INFO [wordcount-lambda-example-client-StreamThread-3] stream-client [wordcount-lambda-example-client] State transition from RUNNING to REBALANCING (org.apache.kafka.streams.KafkaStreams) [2021-06-17 15:31:39,934] INFO [wordcount-lambda-example-client-StreamThread-4] [Consumer 
clientId=wordcount-lambda-example-client-StreamThread-4-consumer, groupId=wordcount-lambda-example] Adding newly assigned partitions: (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,934] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1] standby-task [1_0] Suspended running (org.apache.kafka.streams.processor.internals.StandbyTask) [2021-06-17 15:31:39,939] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-1 (org.apache.kafka.clients.consumer.KafkaConsumer) [2021-06-17 15:31:39,939] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1] standby-task [1_0] Closed clean and recycled state (org.apache.kafka.streams.processor.internals.StandbyTask) [2021-06-17 15:31:39,944] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Adding newly assigned partitions: streams-plaintext-input-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,944] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Adding newly assigned partitions: wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,946] INFO [wordcount-lambda-example-client-StreamThread-2] stream-thread [wordcount-lambda-example-client-StreamThread-2] task [0_0] Initialized (org.apache.kafka.streams.processor.internals.StreamTask) [2021-06-17 15:31:39,947] INFO 
[wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Found no committed offset for partition wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,947] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Found no committed offset for partition streams-plaintext-input-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,949] INFO [wordcount-lambda-example-client-StreamThread-2] stream-thread [wordcount-lambda-example-client-StreamThread-2] task [0_0] Restored and ready to run (org.apache.kafka.streams.processor.internals.StreamTask) [2021-06-17 15:31:39,950] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Resetting offset for partition wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 0 rack: null)], epoch=0}}. 
(org.apache.kafka.clients.consumer.internals.SubscriptionState) [2021-06-17 15:31:39,950] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Found no committed offset for partition streams-plaintext-input-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:39,952] INFO [wordcount-lambda-example-client-StreamThread-2] [Consumer clientId=wordcount-lambda-example-client-StreamThread-2-consumer, groupId=wordcount-lambda-example] Resetting offset for partition streams-plaintext-input-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState) [2021-06-17 15:31:40,031] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1] task [1_0] Initialized (org.apache.kafka.streams.processor.internals.StreamTask) [2021-06-17 15:31:40,040] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-restore-consumer, groupId=null] Subscribed to partition(s): wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-1, wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-0 (org.apache.kafka.clients.consumer.KafkaConsumer) [2021-06-17 15:31:40,040] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-restore-consumer, groupId=null] Seeking to EARLIEST offset of partition wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState) [2021-06-17 15:31:40,117] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer 
clientId=wordcount-lambda-example-client-StreamThread-1-restore-consumer, groupId=null] Resetting offset for partition wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState) [2021-06-17 15:31:40,220] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1] Finished restoring changelog wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog-0 to store KSTREAM-AGGREGATE-STATE-STORE-0000000003 with a total number of 0 records (org.apache.kafka.streams.processor.internals.StoreChangelogReader) [2021-06-17 15:31:40,222] INFO [wordcount-lambda-example-client-StreamThread-1] [Consumer clientId=wordcount-lambda-example-client-StreamThread-1-consumer, groupId=wordcount-lambda-example] Found no committed offset for partition wordcount-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000003-repartition-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2021-06-17 15:31:40,225] INFO [wordcount-lambda-example-client-StreamThread-1] stream-thread [wordcount-lambda-example-client-StreamThread-1] task [1_0] Restored and ready to run (org.apache.kafka.streams.processor.internals.StreamTask) [2021-06-17 15:31:40,225] INFO [wordcount-lambda-example-client-StreamThread-1] stream-client [wordcount-lambda-example-client] State transition from REBALANCING to RUNNING (org.apache.kafka.streams.KafkaStreams)