[info] Loading global plugins from C:\Users\Ryan\.sbt\0.13\plugins [info] Loading project definition from C:\workspace\scala\exactly-once\project [info] Set current project to exactly-once (in build file:/C:/workspace/scala/exactly-once/) 11:30:48.963 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster - Initiating embedded Kafka cluster startup 11:30:48.979 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster - Starting a ZooKeeper instance 11:30:49.115 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT 11:30:49.115 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:host.name=TRANSCOGNIFY 11:30:49.115 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.version=1.8.0_144 11:30:49.115 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.vendor=Oracle Corporation 11:30:49.115 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.home=C:\Program Files\Java\jre1.8.0_144 11:30:49.115 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.class.path=C:/Program Files (x86)/sbt/bin/sbt-launch.jar 11:30:49.115 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=C:\ProgramData\Oracle\Java\javapath;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\Users\Ryan\bin;C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\local\bin;C:\Program Files\Git\usr\bin;C:\Program Files\Git\usr\bin;C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\bin;C:\Users\Ryan\bin;C:\ProgramData\Oracle\Java\javapath;C:\Program Files (x86)\Intel\iCLS Client;C:\Program Files\Intel\iCLS Client;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0;C:\Program Files\Intel\WiFi\bin;C:\Program Files\Common Files\Intel\WirelessCommon;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit;C:\Program Files\Microsoft SQL Server\110\Tools\Binn;C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.0;C:\Program Files\Microsoft SQL Server\120\Tools\Binn;C:\Program Files (x86)\sbt\bin;C:\Program Files\Git\cmd;C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0;C:\gradle-3.1\bin;C:\apache-maven-3.3.9\bin;C:\Program Files (x86)\Skype\Phone;C:\Program Files (x86)\scala\bin;C:\Users\Ryan\Anaconda3;C:\Users\Ryan\Anaconda3\Scripts;C:\Users\Ryan\Anaconda3\Library\bin;C:\Users\Ryan\AppData\Local\Programs\Python\Python36\Scripts;C:\Users\Ryan\AppData\Local\Programs\Python\Python36;C:\Users\Ryan\AppData\Local\Microsoft\WindowsApps;C:\Users\Ryan\AppData\Roaming\Dashlane\4.6.5.21982\bin\Firefox_Extension\{442718d9-475e-452a-b3e1-fb1ee16b8e9f}\components;C:\Users\Ryan\AppData\Roaming\Dashlane\4.6.6.23032\bin\Firefox_Extension\{442718d9-475e-452a-b3e1-fb1ee16b8e9f}\components;%DASHLANE_DLL_DIR%;C:\Program Files\Git\usr\bin\vendor_perl;C:\Program Files\Git\usr\bin\core_perl;. 
11:30:49.115 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=C:\Users\Ryan\AppData\Local\Temp\ 11:30:49.115 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.compiler= 11:30:49.115 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.name=Windows 10 11:30:49.115 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64 11:30:49.115 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.version=10.0 11:30:49.116 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.name=Ryan 11:30:49.116 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.home=C:\Users\Ryan 11:30:49.116 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.dir=C:\workspace\scala\exactly-once 11:30:49.117 [pool-6-thread-1] DEBUG org.apache.zookeeper.server.persistence.FileTxnSnapLog - Opening datadir:C:\Users\Ryan\AppData\Local\Temp\kafka-2483555969984076021 snapDir:C:\Users\Ryan\AppData\Local\Temp\kafka-2942329367983523370 11:30:49.132 [pool-6-thread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Created server with tickTime 500 minSessionTimeout 1000 maxSessionTimeout 10000 datadir C:\Users\Ryan\AppData\Local\Temp\kafka-2483555969984076021\version-2 snapdir C:\Users\Ryan\AppData\Local\Temp\kafka-2942329367983523370\version-2 11:30:49.163 [pool-6-thread-1] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - binding to port /127.0.0.1:0 11:30:49.195 [pool-6-thread-1] ERROR org.apache.zookeeper.server.ZooKeeperServer - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes 11:30:49.195 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster - ZooKeeper instance is running at localhost:63309 11:30:49.217 [pool-6-thread-1] DEBUG org.I0Itec.zkclient.ZkConnection - Creating new ZookKeeper instance to connect to localhost:63309. 11:30:49.217 [ZkClient-EventThread-74-localhost:63309] INFO org.I0Itec.zkclient.ZkEventThread - Starting ZkClient event thread. 
11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=TRANSCOGNIFY 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_144 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=C:\Program Files\Java\jre1.8.0_144 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=C:/Program Files (x86)/sbt/bin/sbt-launch.jar 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=C:\ProgramData\Oracle\Java\javapath;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\Users\Ryan\bin;C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\local\bin;C:\Program Files\Git\usr\bin;C:\Program Files\Git\usr\bin;C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\bin;C:\Users\Ryan\bin;C:\ProgramData\Oracle\Java\javapath;C:\Program Files (x86)\Intel\iCLS Client;C:\Program Files\Intel\iCLS Client;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0;C:\Program Files\Intel\WiFi\bin;C:\Program Files\Common Files\Intel\WirelessCommon;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit;C:\Program Files\Microsoft SQL Server\110\Tools\Binn;C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.0;C:\Program Files\Microsoft SQL Server\120\Tools\Binn;C:\Program Files (x86)\sbt\bin;C:\Program Files\Git\cmd;C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0;C:\gradle-3.1\bin;C:\apache-maven-3.3.9\bin;C:\Program Files (x86)\Skype\Phone;C:\Program Files (x86)\scala\bin;C:\Users\Ryan\Anaconda3;C:\Users\Ryan\Anaconda3\Scripts;C:\Users\Ryan\Anaconda3\Library\bin;C:\Users\Ryan\AppData\Local\Programs\Python\Python36\Scripts;C:\Users\Ryan\AppData\Local\Programs\Python\Python36;C:\Users\Ryan\AppData\Local\Microsoft\WindowsApps;C:\Users\Ryan\AppData\Roaming\Dashlane\4.6.5.21982\bin\Firefox_Extension\{442718d9-475e-452a-b3e1-fb1ee16b8e9f}\components;C:\Users\Ryan\AppData\Roaming\Dashlane\4.6.6.23032\bin\Firefox_Extension\{442718d9-475e-452a-b3e1-fb1ee16b8e9f}\components;%DASHLANE_DLL_DIR%;C:\Program Files\Git\usr\bin\vendor_perl;C:\Program Files\Git\usr\bin\core_perl;. 
11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=C:\Users\Ryan\AppData\Local\Temp\ 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler= 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Windows 10 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=10.0 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=Ryan 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=C:\Users\Ryan 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=C:\workspace\scala\exactly-once 11:30:49.217 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost:63309 sessionTimeout=30000 watcher=org.I0Itec.zkclient.ZkClient@101118c8 11:30:49.232 [pool-6-thread-1] DEBUG org.apache.zookeeper.ClientCnxn - zookeeper.disableAutoWatchReset is false 11:30:49.248 [pool-6-thread-1] DEBUG org.I0Itec.zkclient.ZkClient - Awaiting connection to Zookeeper server 11:30:49.248 [pool-6-thread-1] INFO org.I0Itec.zkclient.ZkClient - Waiting for keeper state SyncConnected 11:30:49.248 [pool-6-thread-1-SendThread(127.0.0.1:63309)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:63309. Will not attempt to authenticate using SASL (unknown error) 11:30:49.248 [pool-6-thread-1-SendThread(127.0.0.1:63309)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to 127.0.0.1/127.0.0.1:63309, initiating session 11:30:49.248 [NIOServerCxn.Factory:/127.0.0.1:0] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:63312 11:30:49.248 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on 127.0.0.1/127.0.0.1:63309 11:30:49.248 [NIOServerCxn.Factory:/127.0.0.1:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Session establishment request from client /127.0.0.1:63312 client's lastZxid is 0x0 11:30:49.248 [NIOServerCxn.Factory:/127.0.0.1:0] INFO org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session at /127.0.0.1:63312 11:30:49.264 [SyncThread:0] INFO org.apache.zookeeper.server.persistence.FileTxnLog - Creating new log file: log.1 11:30:49.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 11:30:49.279 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 11:30:49.279 [SyncThread:0] INFO org.apache.zookeeper.server.ZooKeeperServer - Established session 0x15e7aca904b0000 with negotiated timeout 10000 for client /127.0.0.1:63312 11:30:49.279 [pool-6-thread-1-SendThread(127.0.0.1:63309)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server 127.0.0.1/127.0.0.1:63309, sessionid = 0x15e7aca904b0000, negotiated timeout = 10000 11:30:49.279 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Received event: WatchedEvent state:SyncConnected type:None path:null 11:30:49.279 
[pool-6-thread-1-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected) 11:30:49.279 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Leaving process event 11:30:49.279 [pool-6-thread-1] DEBUG org.I0Itec.zkclient.ZkClient - State is SyncConnected 11:30:49.415 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster - Starting a Kafka instance on port null ... 11:30:49.417 [pool-6-thread-1] INFO kafka.server.KafkaConfig - KafkaConfig values: advertised.host.name = null advertised.listeners = null advertised.port = null alter.config.policy.class.name = null authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.id = 0 broker.id.generation.enable = true broker.rack = null compression.type = producer connections.max.idle.ms = 600000 controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 0 group.max.session.timeout.ms = 300000 group.min.session.timeout.ms = 0 host.name = 127.0.0.1 inter.broker.listener.name = null inter.broker.protocol.version = 0.11.0-IV2 leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT listeners = null log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 2097152 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 log.dirs = null log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.format.version = 0.11.0-IV2 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = message.max.bytes = 1000000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 1440 
offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 port = 0 principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder producer.purgatory.purge.interval.requests = 1000 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 quota.producer.default = 9223372036854775807 quota.window.num = 11 quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.enabled.mechanisms = [GSSAPI] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.mechanism.inter.broker.protocol = GSSAPI security.inter.broker.protocol = PLAINTEXT socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = null ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 2 transaction.state.log.num.partitions = 3 transaction.state.log.replication.factor = 3 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.connect = localhost:63309 zookeeper.connection.timeout.ms = null zookeeper.session.timeout.ms = 6000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 11:30:49.468 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.KafkaEmbedded - Starting embedded Kafka broker (with log.dirs=C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 and ZK ensemble at localhost:63309) ... 11:30:49.499 [pool-6-thread-1] INFO kafka.server.KafkaServer - starting 11:30:49.499 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 11:30:49.499 [pool-6-thread-1] INFO kafka.server.KafkaServer - Connecting to zookeeper on localhost:63309 11:30:49.516 [pool-6-thread-1] DEBUG org.I0Itec.zkclient.ZkConnection - Creating new ZookKeeper instance to connect to localhost:63309. 11:30:49.516 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost:63309 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@16d82729 11:30:49.516 [ZkClient-EventThread-78-localhost:63309] INFO org.I0Itec.zkclient.ZkEventThread - Starting ZkClient event thread. 
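The dump above shows this single embedded broker running with the stock transaction-log settings (transaction.state.log.replication.factor = 3, transaction.state.log.min.isr = 2). With only one broker those defaults will stop the internal __transaction_state topic from being created once a transactional client shows up, so an exactly-once test typically passes overrides when it builds the cluster. A minimal sketch, assuming the two-argument EmbeddedKafkaCluster constructor that takes broker overrides (the actual test code for this run is not part of the log):

```scala
// Sketch only: broker overrides so the transaction state log can live on one broker.
// Class and constructor per the Kafka 0.11 streams test utilities; usage is assumed,
// not taken from this log.
import java.util.Properties
import org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster

val brokerConfig = new Properties()
brokerConfig.put("transaction.state.log.replication.factor", "1")
brokerConfig.put("transaction.state.log.min.isr", "1")

// One broker, with the overrides applied on top of the defaults seen in the dump above.
val cluster = new EmbeddedKafkaCluster(1, brokerConfig)
```

In the Kafka test suites this object is usually wired in as a JUnit @ClassRule so the cluster is started once per test class and torn down afterwards.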
11:30:49.517 [pool-6-thread-1] DEBUG org.I0Itec.zkclient.ZkClient - Awaiting connection to Zookeeper server
11:30:49.517 [pool-6-thread-1] INFO org.I0Itec.zkclient.ZkClient - Waiting for keeper state SyncConnected
11:30:49.517 [pool-6-thread-1-SendThread(0:0:0:0:0:0:0:1:63309)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:63309. Will not attempt to authenticate using SASL (unknown error)
11:30:52.420 [pool-6-thread-1-SendThread(0:0:0:0:0:0:0:1:63309)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
11:30:52.435 [pool-6-thread-1-SendThread(0:0:0:0:0:0:0:1:63309)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Ignoring exception during shutdown input
java.nio.channels.ClosedChannelException: null
    at sun.nio.ch.SocketChannelImpl.shutdownInput(Unknown Source)
    at sun.nio.ch.SocketAdaptor.shutdownInput(Unknown Source)
    at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:200)
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1246)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1170)
11:30:52.435 [pool-6-thread-1-SendThread(0:0:0:0:0:0:0:1:63309)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Ignoring exception during shutdown output
java.nio.channels.ClosedChannelException: null
    at sun.nio.ch.SocketChannelImpl.shutdownOutput(Unknown Source)
    at sun.nio.ch.SocketAdaptor.shutdownOutput(Unknown Source)
    at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:207)
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1246)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1170)
11:30:52.551 [pool-6-thread-1-SendThread(127.0.0.1:63309)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:63309.
Will not attempt to authenticate using SASL (unknown error) 11:30:52.551 [NIOServerCxn.Factory:/127.0.0.1:0] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:63316 11:30:52.551 [pool-6-thread-1-SendThread(127.0.0.1:63309)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to 127.0.0.1/127.0.0.1:63309, initiating session 11:30:52.551 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on 127.0.0.1/127.0.0.1:63309 11:30:52.551 [NIOServerCxn.Factory:/127.0.0.1:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Session establishment request from client /127.0.0.1:63316 client's lastZxid is 0x0 11:30:52.551 [NIOServerCxn.Factory:/127.0.0.1:0] INFO org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session at /127.0.0.1:63316 11:30:52.551 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:createSession cxid:0x0 zxid:0x2 txntype:-10 reqpath:n/a 11:30:52.551 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:createSession cxid:0x0 zxid:0x2 txntype:-10 reqpath:n/a 11:30:52.551 [SyncThread:0] INFO org.apache.zookeeper.server.ZooKeeperServer - Established session 0x15e7aca904b0001 with negotiated timeout 6000 for client /127.0.0.1:63316 11:30:52.551 [pool-6-thread-1-SendThread(127.0.0.1:63309)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server 127.0.0.1/127.0.0.1:63309, sessionid = 0x15e7aca904b0001, negotiated timeout = 6000 11:30:52.551 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Received event: WatchedEvent state:SyncConnected type:None path:null 11:30:52.551 [pool-6-thread-1-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected) 11:30:52.551 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Leaving process event 11:30:52.551 [pool-6-thread-1] DEBUG org.I0Itec.zkclient.ZkClient - State is SyncConnected 11:30:52.566 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/consumers 11:30:52.566 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/consumers 11:30:52.566 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 1,3 replyHeader:: 1,2,-101 request:: '/consumers,F response:: 11:30:52.566 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/ 11:30:52.566 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/ 11:30:52.566 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 2,3 replyHeader:: 2,2,0 request:: '/,F response:: s{0,0,0,0,0,-1,0,0,0,1,0} 11:30:52.582 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x3 zxid:0x3 txntype:1 reqpath:n/a 11:30:52.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x3 zxid:0x3 txntype:1 reqpath:n/a 11:30:52.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:52.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:52.582 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0000 after 6ms 11:30:52.582 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 3,1 replyHeader:: 3,3,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: '/consumers 11:30:52.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:52.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:52.582 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 4,3 replyHeader:: 4,3,-101 request:: '/brokers/ids,F response:: 11:30:52.582 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x5 zxid:0x4 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers 11:30:52.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x5 zxid:0x4 txntype:-1 reqpath:n/a 11:30:52.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:52.582 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 5,1 replyHeader:: 5,4,-101 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:52.598 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 11:30:52.598 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 11:30:52.598 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 6,1 replyHeader:: 6,5,0 request:: '/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers 11:30:52.598 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing 
request:: sessionid:0x15e7aca904b0001 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 11:30:52.598 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 11:30:52.598 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 7,1 replyHeader:: 7,6,0 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/ids 11:30:52.598 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:52.598 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:52.598 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 8,3 replyHeader:: 8,6,-101 request:: '/brokers/topics,F response:: 11:30:52.598 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x9 zxid:0x7 txntype:1 reqpath:n/a 11:30:52.598 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x9 zxid:0x7 txntype:1 reqpath:n/a 11:30:52.598 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 9,1 replyHeader:: 9,7,0 request:: '/brokers/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics 11:30:52.598 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0xa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:52.598 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0xa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:52.598 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 10,3 replyHeader:: 10,7,-101 request:: '/config/changes,F response:: 11:30:52.598 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xb zxid:0x8 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config 11:30:52.598 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xb zxid:0x8 txntype:-1 reqpath:n/a 11:30:52.598 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:52.598 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 11,1 replyHeader:: 11,8,-101 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:52.615 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xc zxid:0x9 txntype:1 reqpath:n/a 11:30:52.616 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xc zxid:0x9 txntype:1 reqpath:n/a 11:30:52.616 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 12,1 replyHeader:: 12,9,0 request:: '/config,,v{s{31,s{'world,'anyone}}},0 response:: '/config 11:30:52.619 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xd zxid:0xa txntype:1 reqpath:n/a 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xd zxid:0xa txntype:1 reqpath:n/a 11:30:52.620 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 13,1 replyHeader:: 13,10,0 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: '/config/changes 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0xe zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0xe zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:30:52.620 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 14,3 replyHeader:: 14,10,-101 request:: '/config/topics,F response:: 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xf zxid:0xb txntype:1 reqpath:n/a 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xf zxid:0xb txntype:1 reqpath:n/a 11:30:52.620 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 15,1 replyHeader:: 15,11,0 request:: '/config/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x10 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x10 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:30:52.620 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 16,3 replyHeader:: 16,11,-101 request:: '/config/clients,F response:: 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x11 zxid:0xc txntype:1 
reqpath:n/a 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x11 zxid:0xc txntype:1 reqpath:n/a 11:30:52.620 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 17,1 replyHeader:: 17,12,0 request:: '/config/clients,,v{s{31,s{'world,'anyone}}},0 response:: '/config/clients 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x12 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x12 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:30:52.620 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 18,3 replyHeader:: 18,12,-101 request:: '/admin/delete_topics,F response:: 11:30:52.620 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x13 zxid:0xd txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x13 zxid:0xd txntype:-1 reqpath:n/a 11:30:52.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:52.620 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 19,1 replyHeader:: 19,13,-101 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x14 zxid:0xe txntype:1 reqpath:n/a 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x14 zxid:0xe txntype:1 reqpath:n/a 11:30:52.635 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 20,1 replyHeader:: 20,14,0 request:: '/admin,,v{s{31,s{'world,'anyone}}},0 response:: '/admin 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x15 zxid:0xf txntype:1 reqpath:n/a 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x15 zxid:0xf txntype:1 reqpath:n/a 11:30:52.635 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 21,1 replyHeader:: 21,15,0 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: '/admin/delete_topics 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x16 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/seqid 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x16 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/seqid 11:30:52.635 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 22,3 replyHeader:: 22,15,-101 request:: '/brokers/seqid,F response:: 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x17 zxid:0x10 txntype:1 reqpath:n/a 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x17 zxid:0x10 txntype:1 reqpath:n/a 11:30:52.635 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 23,1 replyHeader:: 23,16,0 request:: '/brokers/seqid,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/seqid 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x18 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x18 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 11:30:52.635 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 24,3 replyHeader:: 24,16,-101 request:: '/isr_change_notification,F response:: 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x19 zxid:0x11 txntype:1 reqpath:n/a 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x19 zxid:0x11 txntype:1 reqpath:n/a 11:30:52.635 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 25,1 replyHeader:: 25,17,0 request:: '/isr_change_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/isr_change_notification 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x1a zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:30:52.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x1a zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:30:52.651 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 26,3 replyHeader:: 26,17,-101 request:: '/latest_producer_id_block,F response:: 11:30:52.651 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: 
sessionid:0x15e7aca904b0001 type:create cxid:0x1b zxid:0x12 txntype:1 reqpath:n/a 11:30:52.651 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x1b zxid:0x12 txntype:1 reqpath:n/a 11:30:52.651 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 27,1 replyHeader:: 27,18,0 request:: '/latest_producer_id_block,,v{s{31,s{'world,'anyone}}},0 response:: '/latest_producer_id_block 11:30:52.651 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 11:30:52.651 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 11:30:52.651 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 28,4 replyHeader:: 28,18,-101 request:: '/cluster/id,F response:: 11:30:52.682 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x1d zxid:0x13 txntype:-1 reqpath:n/a Error Path:/cluster Error:KeeperErrorCode = NoNode for /cluster 11:30:52.682 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1d zxid:0x13 txntype:-1 reqpath:n/a 11:30:52.682 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:52.698 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 29,1 replyHeader:: 29,19,-101 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a226d5867735161326952362d4c776a6d48463446614177227d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:52.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1e zxid:0x14 txntype:1 reqpath:n/a 11:30:52.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x1e zxid:0x14 txntype:1 reqpath:n/a 11:30:52.698 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 30,1 replyHeader:: 30,20,0 request:: '/cluster,,v{s{31,s{'world,'anyone}}},0 response:: '/cluster 11:30:52.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1f zxid:0x15 txntype:1 reqpath:n/a 11:30:52.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x1f zxid:0x15 txntype:1 reqpath:n/a 11:30:52.698 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 31,1 replyHeader:: 31,21,0 request:: 
'/cluster/id,#7b2276657273696f6e223a2231222c226964223a226d5867735161326952362d4c776a6d48463446614177227d,v{s{31,s{'world,'anyone}}},0 response:: '/cluster/id 11:30:52.714 [pool-6-thread-1] INFO kafka.server.KafkaServer - Cluster ID = mXgsQa2iR6-LwjmHF4FaAw 11:30:52.719 [pool-6-thread-1] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\meta.properties 11:30:52.767 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Fetch-delayQueue 11:30:52.767 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Produce-delayQueue 11:30:52.767 [ThrottledRequestReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper - [ThrottledRequestReaper-Fetch]: Starting 11:30:52.767 [ThrottledRequestReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper - [ThrottledRequestReaper-Produce]: Starting 11:30:52.767 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-delayQueue 11:30:52.767 [ThrottledRequestReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper - [ThrottledRequestReaper-Request]: Starting 11:30:52.767 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name exempt-Request 11:30:52.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:52.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:52.798 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 32,8 replyHeader:: 32,21,0 request:: '/brokers/topics,F response:: v{} 11:30:52.820 [pool-6-thread-1] INFO kafka.log.LogManager - Loading logs. 11:30:52.836 [pool-6-thread-1] INFO kafka.log.LogManager - Logs loading complete in 0 ms. 11:30:52.869 [pool-6-thread-1] INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms. 11:30:52.871 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-retention with initial delay 30000 ms and period 300000 ms. 11:30:52.873 [pool-6-thread-1] INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms. 11:30:52.873 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-flusher with initial delay 30000 ms and period 9223372036854775807 ms. 11:30:52.874 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-recovery-point-checkpoint with initial delay 30000 ms and period 60000 ms. 11:30:52.875 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-start-offset-checkpoint with initial delay 30000 ms and period 60000 ms. 11:30:52.876 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-delete-logs with initial delay 30000 ms and period 60000 ms. 
11:30:52.876 [pool-6-thread-1] INFO kafka.log.LogCleaner - Starting the log cleaner 11:30:52.878 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Starting 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:listener-PLAINTEXTnetworkProcessor-0 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:listener-PLAINTEXTnetworkProcessor-0 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:listener-PLAINTEXTnetworkProcessor-0 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:listener-PLAINTEXTnetworkProcessor-0 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:listener-PLAINTEXTnetworkProcessor-0 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:listener-PLAINTEXTnetworkProcessor-0 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:listener-PLAINTEXTnetworkProcessor-0 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:listener-PLAINTEXTnetworkProcessor-1 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:listener-PLAINTEXTnetworkProcessor-1 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:listener-PLAINTEXTnetworkProcessor-1 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:listener-PLAINTEXTnetworkProcessor-1 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:listener-PLAINTEXTnetworkProcessor-1 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:listener-PLAINTEXTnetworkProcessor-1 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:listener-PLAINTEXTnetworkProcessor-1 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:listener-PLAINTEXTnetworkProcessor-2 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:listener-PLAINTEXTnetworkProcessor-2 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:listener-PLAINTEXTnetworkProcessor-2 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:listener-PLAINTEXTnetworkProcessor-2 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:listener-PLAINTEXTnetworkProcessor-2 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:listener-PLAINTEXTnetworkProcessor-2 11:30:52.920 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:listener-PLAINTEXTnetworkProcessor-2 11:30:52.936 [pool-6-thread-1] INFO kafka.network.Acceptor - Awaiting socket connections on 127.0.0.1:63325. 
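At this point the broker is accepting connections on 127.0.0.1:63325, the ephemeral port picked for this run. For an exactly-once workload the producer side would use the transactional API introduced in 0.11, roughly as below; the bootstrap address is the one from this run, while the transactional id and topic name are placeholders rather than values taken from the project (a real test would read the address from the cluster helper instead of hard-coding it):

```scala
// Sketch only: a transactional producer against the embedded broker from this run.
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

val props = new Properties()
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")      // ephemeral port from this run
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "exactly-once-producer")  // hypothetical id
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")

val producer = new KafkaProducer[String, String](props, new StringSerializer, new StringSerializer)
producer.initTransactions()
producer.beginTransaction()
producer.send(new ProducerRecord("input-topic", "key", "value"))            // topic name is a placeholder
producer.commitTransaction()
producer.close()
```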
11:30:52.936 [pool-6-thread-1] INFO kafka.network.SocketServer - [Socket Server on Broker 0], Started 1 acceptor threads 11:30:52.951 [ExpirationReaper-0-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Produce]: Starting 11:30:52.951 [ExpirationReaper-0-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Fetch]: Starting 11:30:52.951 [ExpirationReaper-0-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-DeleteRecords]: Starting 11:30:52.967 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-expiration with initial delay 0 ms and period 5000 ms. 11:30:52.967 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-change-propagation with initial delay 0 ms and period 2500 ms. 11:30:53.020 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [controller-event-thread]: Starting 11:30:53.036 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x21 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:53.036 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x21 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:53.036 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 33,3 replyHeader:: 33,21,-101 request:: '/controller,T response:: 11:30:53.036 [ExpirationReaper-0-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-topic]: Starting 11:30:53.036 [controller-event-thread] DEBUG org.I0Itec.zkclient.ZkClient - Subscribed data changes for /controller 11:30:53.036 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:53.036 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:53.036 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 34,4 replyHeader:: 34,21,-101 request:: '/controller,T response:: 11:30:53.036 [ExpirationReaper-0-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Heartbeat]: Starting 11:30:53.036 [controller-event-thread] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /controller, Prefix: /controller, Suffix: 11:30:53.036 [ExpirationReaper-0-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-0-Rebalance]: Starting 11:30:53.036 [controller-event-thread] INFO kafka.utils.ZKCheckedEphemeral - Creating /controller (is it secure? 
false) 11:30:53.036 [controller-event-thread] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /controller, Prefix: /controller, Suffix: 11:30:53.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x23 zxid:0x16 txntype:1 reqpath:n/a 11:30:53.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x23 zxid:0x16 txntype:1 reqpath:n/a 11:30:53.052 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification sessionid:0x15e7aca904b0001 11:30:53.052 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/controller for sessionid 0x15e7aca904b0001 11:30:53.052 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/controller 11:30:53.052 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkEventThread - New event: ZkEvent[Data of /controller changed sent to kafka.controller.ControllerChangeListener@5dfd269c] 11:30:53.052 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Leaving process event 11:30:53.052 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:/controller serverPath:/controller finished:false header:: 35,1 replyHeader:: 35,22,0 request:: '/controller,#7b2276657273696f6e223a312c2262726f6b65726964223a302c2274696d657374616d70223a2231353035323938363438393437227d,v{s{31,s{'world,'anyone}}},1 response:: '/controller 11:30:53.052 [ZkClient-EventThread-78-localhost:63309] DEBUG org.I0Itec.zkclient.ZkEventThread - Delivering event #1 ZkEvent[Data of /controller changed sent to kafka.controller.ControllerChangeListener@5dfd269c] 11:30:53.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:53.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:53.052 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 36,3 replyHeader:: 36,22,0 request:: '/controller,T response:: s{22,22,1505298653036,1505298653036,0,0,0,98651252271546369,54,0,22} 11:30:53.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x25 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:53.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x25 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:53.052 [controller-event-thread] INFO kafka.utils.ZKCheckedEphemeral - Result of znode creation is: OK 11:30:53.052 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 37,4 replyHeader:: 37,22,0 request:: '/controller,T response:: 
#7b2276657273696f6e223a312c2262726f6b65726964223a302c2274696d657374616d70223a2231353035323938363438393437227d,s{22,22,1505298653036,1505298653036,0,0,0,98651252271546369,54,0,22} 11:30:53.052 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: 0 successfully elected as the controller 11:30:53.052 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Broker 0 starting become controller state transition 11:30:53.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x26 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 11:30:53.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x26 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 11:30:53.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:53.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:53.052 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 38,3 replyHeader:: 38,22,-101 request:: '/controller_epoch,F response:: 11:30:53.052 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 39,4 replyHeader:: 39,22,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 11:30:53.052 [pool-6-thread-1] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__consumer_offsets is Map() 11:30:53.052 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:setData cxid:0x28 zxid:0x17 txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch 11:30:53.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:setData cxid:0x28 zxid:0x17 txntype:-1 reqpath:n/a 11:30:53.052 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:53.067 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 40,5 replyHeader:: 40,23,-101 request:: '/controller_epoch,#31,0 response:: 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x29 zxid:0x18 txntype:1 reqpath:n/a 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x29 zxid:0x18 txntype:1 reqpath:n/a 11:30:53.067 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 41,1 replyHeader:: 41,24,0 
request:: '/controller_epoch,#31,v{s{31,s{'world,'anyone}}},0 response:: '/controller_epoch 11:30:53.067 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Controller 0 incremented epoch to 1 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 11:30:53.067 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 42,3 replyHeader:: 42,24,-101 request:: '/admin/reassign_partitions,T response:: 11:30:53.067 [controller-event-thread] DEBUG org.I0Itec.zkclient.ZkClient - Subscribed data changes for /admin/reassign_partitions 11:30:53.067 [pool-6-thread-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Starting up. 11:30:53.067 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller 0]: Registering IsrChangeNotificationListener 11:30:53.067 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 11:30:53.067 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task delete-expired-group-metadata with initial delay 0 ms and period 600000 ms. 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 11:30:53.067 [pool-6-thread-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Startup complete. 
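The exchange above is the controller election: broker 0 races to create the ephemeral /controller znode, the creation comes back OK, and the broker then persists its epoch under /controller_epoch ("incremented epoch to 1"). A minimal sketch of the same ephemeral-znode election pattern, written against the plain ZooKeeper client rather than Kafka's own ZKCheckedEphemeral (the method name tryBecomeController is illustrative, not Kafka's code):

    import org.apache.zookeeper.{CreateMode, KeeperException, ZooDefs, ZooKeeper}

    // Sketch of ephemeral-znode leader election: the first broker to create /controller wins;
    // when its ZooKeeper session expires the node disappears and a new election can run.
    def tryBecomeController(zk: ZooKeeper, brokerId: Int): Boolean = {
      val payload = s"""{"version":1,"brokerid":$brokerId,"timestamp":"${System.currentTimeMillis}"}"""
      try {
        zk.create("/controller", payload.getBytes("UTF-8"),
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL)
        true // corresponds to "Result of znode creation is: OK" in the log
      } catch {
        case _: KeeperException.NodeExistsException => false // another broker is already controller
      }
    }

Brokers that lose the race would typically keep a watch on /controller (the "Subscribed data changes for /controller" entries above) so they can re-run the election when the ephemeral node disappears.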
11:30:53.067 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 43,3 replyHeader:: 43,24,0 request:: '/isr_change_notification,T response:: s{17,17,1505298652635,1505298652635,0,0,0,0,0,0,17} 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 11:30:53.067 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 44,8 replyHeader:: 44,24,0 request:: '/isr_change_notification,T response:: v{} 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 11:30:53.067 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 45,3 replyHeader:: 45,24,-101 request:: '/admin/preferred_replica_election,T response:: 11:30:53.067 [controller-event-thread] DEBUG org.I0Itec.zkclient.ZkClient - Subscribed data changes for /admin/preferred_replica_election 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:53.067 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 0 milliseconds. 
11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:53.067 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 46,4 replyHeader:: 46,24,0 request:: '/latest_producer_id_block,F response:: ,s{18,18,1505298652651,1505298652651,0,0,0,0,0,0,18} 11:30:53.067 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 47,3 replyHeader:: 47,24,0 request:: '/brokers/topics,T response:: s{7,7,1505298652598,1505298652598,0,0,0,0,0,0,7} 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:53.067 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 48,8 replyHeader:: 48,24,0 request:: '/brokers/topics,T response:: v{} 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:30:53.067 [pool-6-thread-1] DEBUG kafka.coordinator.transaction.ProducerIdManager - [ProducerId Manager 0]: There is no producerId block yet (Zk path version 0), creating the first block 11:30:53.067 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 49,3 replyHeader:: 49,24,0 request:: '/admin/delete_topics,T response:: s{15,15,1505298652635,1505298652635,0,0,0,0,0,0,15} 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:30:53.067 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 50,8 replyHeader:: 50,24,0 request:: '/admin/delete_topics,T response:: v{} 11:30:53.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:53.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x15e7aca904b0001 type:exists cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:53.083 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 51,3 replyHeader:: 51,24,0 request:: '/brokers/ids,T response:: s{6,6,1505298652598,1505298652598,0,0,0,0,0,0,6} 11:30:53.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:53.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:53.083 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 52,8 replyHeader:: 52,24,0 request:: '/brokers/ids,T response:: v{} 11:30:53.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:53.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:53.083 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 53,8 replyHeader:: 53,24,0 request:: '/brokers/ids,T response:: v{} 11:30:53.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:setData cxid:0x36 zxid:0x19 txntype:5 reqpath:n/a 11:30:53.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:setData cxid:0x36 zxid:0x19 txntype:5 reqpath:n/a 11:30:53.083 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 54,5 replyHeader:: 54,25,0 request:: '/latest_producer_id_block,#7b2276657273696f6e223a312c2262726f6b6572223a302c22626c6f636b5f7374617274223a2230222c22626c6f636b5f656e64223a22393939227d,0 response:: s{18,25,1505298652651,1505298653083,1,0,0,0,60,0,18} 11:30:53.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:53.083 [pool-6-thread-1] DEBUG kafka.utils.ZkUtils - Conditional update of path /latest_producer_id_block with value {"version":1,"broker":0,"block_start":"0","block_end":"999"} and expected version 0 succeeded, returning the new version: 1 11:30:53.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:53.083 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 55,8 replyHeader:: 
55,25,0 request:: '/brokers/topics,T response:: v{} 11:30:53.083 [pool-6-thread-1] INFO kafka.coordinator.transaction.ProducerIdManager - [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 11:30:53.099 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:53.099 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:53.099 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 56,4 replyHeader:: 56,25,-101 request:: '/brokers/topics/__transaction_state,F response:: 11:30:53.099 [pool-6-thread-1] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__transaction_state is Map() 11:30:53.099 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 11:30:53.099 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 11:30:53.099 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 57,4 replyHeader:: 57,25,-101 request:: '/admin/reassign_partitions,T response:: 11:30:53.099 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:53.099 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:53.099 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:53.099 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:53.099 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:53.099 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Partitions being reassigned: Map() 11:30:53.099 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:53.099 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:53.099 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Partitions already reassigned: Set() 11:30:53.099 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Resuming reassignment of partitions: Map() 11:30:53.116 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Currently active brokers in the cluster: Set() 11:30:53.116 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Currently shutting brokers in the cluster: Set() 11:30:53.117 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Current list of topics in the cluster: Set() 
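A few entries back, ProducerIdManager found no existing block under /latest_producer_id_block and claimed the range 0 to 999 with a conditional update: the write named expected version 0, succeeded, and returned the new version 1, with the JSON payload {"version":1,"broker":0,"block_start":"0","block_end":"999"} shown in the packet trace. The same optimistic compare-and-set on a znode version can be sketched with the plain ZooKeeper client (conditionalUpdate is an illustrative name, not Kafka's API):

    import org.apache.zookeeper.{KeeperException, ZooKeeper}

    // Sketch of a version-checked znode write: setData only succeeds if the node is still at
    // expectedVersion, so two brokers can never claim the same producer-id block.
    def conditionalUpdate(zk: ZooKeeper, path: String, json: String, expectedVersion: Int): Option[Int] =
      try {
        val stat = zk.setData(path, json.getBytes("UTF-8"), expectedVersion)
        Some(stat.getVersion) // e.g. the log's "succeeded, returning the new version: 1"
      } catch {
        case _: KeeperException.BadVersionException => None // lost the race; re-read and retry for a new range
      }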
11:30:53.118 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:30:53.118 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:30:53.118 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 58,8 replyHeader:: 58,25,0 request:: '/admin/delete_topics,T response:: v{} 11:30:53.120 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: List of topics to be deleted: 11:30:53.120 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: List of topics ineligible for deletion: 11:30:53.136 [controller-event-thread] INFO kafka.controller.ReplicaStateMachine - [Replica state machine on controller 0]: Started replica state machine with initial state -> Map() 11:30:53.136 [controller-event-thread] INFO kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Started partition state machine with initial state -> Map() 11:30:53.136 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Broker 0 is ready to serve as the new controller with epoch 1 11:30:53.136 [pool-6-thread-1] INFO kafka.coordinator.transaction.TransactionCoordinator - [Transaction Coordinator 0]: Starting up. 11:30:53.136 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 11:30:53.152 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task transaction-abort with initial delay 60000 ms and period 60000 ms. 11:30:53.152 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 11:30:53.152 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 11:30:53.152 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task transactionalId-expiration with initial delay 3600000 ms and period 3600000 ms. 11:30:53.152 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 59,4 replyHeader:: 59,25,-101 request:: '/admin/preferred_replica_election,T response:: 11:30:53.152 [pool-6-thread-1] INFO kafka.coordinator.transaction.TransactionCoordinator - [Transaction Coordinator 0]: Startup complete. 
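The TransactionCoordinator that just reported "Startup complete" is the broker-side half of the exactly-once story; the client side is a transactional producer. For reference, the standard 0.11 producer API is sketched below; the broker address, topic name and transactional.id are placeholders for whatever the test actually uses:

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
    import org.apache.kafka.common.serialization.StringSerializer

    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")   // placeholder broker address
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
    props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "exactly-once-demo")  // placeholder transactional.id
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

    val producer = new KafkaProducer[String, String](props)
    producer.initTransactions()  // registers the transactional.id and obtains a producerId from the coordinator
    producer.beginTransaction()
    producer.send(new ProducerRecord("demo-topic", "key", "value"))         // placeholder topic
    producer.commitTransaction() // or abortTransaction() on failure
    producer.close()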
11:30:53.152 [TxnMarkerSenderThread-0] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 0]: Starting 11:30:53.152 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Partitions undergoing preferred replica election: 11:30:53.152 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Partitions that completed preferred replica election: 11:30:53.152 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Skipping preferred replica election for partitions due to topic deletion: 11:30:53.152 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Resuming preferred replica election for partitions: 11:30:53.152 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Starting preferred replica leader election for partitions 11:30:53.167 [controller-event-thread] INFO kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Invoking state change to OnlinePartition for partitions 11:30:53.167 [ZkClient-EventThread-78-localhost:63309] DEBUG org.I0Itec.zkclient.ZkEventThread - Delivering event #1 done 11:30:53.167 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:delete cxid:0x3c zxid:0x1a txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election 11:30:53.167 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:delete cxid:0x3c zxid:0x1a txntype:-1 reqpath:n/a 11:30:53.167 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:53.167 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 60,2 replyHeader:: 60,26,-101 request:: '/admin/preferred_replica_election,-1 response:: null 11:30:53.183 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: starting the controller scheduler 11:30:53.183 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 11:30:53.183 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 5000 ms and period -1000 ms. 
11:30:53.199 [pool-6-thread-1] DEBUG kafka.utils.Mx4jLoader$ - Will try to load MX4j now, if it's in the classpath 11:30:53.199 [pool-6-thread-1] INFO kafka.utils.Mx4jLoader$ - Will not load MX4J, mx4j-tools.jar is not in the classpath 11:30:53.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:53.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:53.215 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 61,3 replyHeader:: 61,26,0 request:: '/config/changes,F response:: s{10,10,1505298652617,1505298652617,0,0,0,0,0,0,10} 11:30:53.217 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:53.217 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:53.218 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 62,3 replyHeader:: 62,26,0 request:: '/config/changes,T response:: s{10,10,1505298652617,1505298652617,0,0,0,0,0,0,10} 11:30:53.218 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:53.218 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:53.219 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 63,8 replyHeader:: 63,26,0 request:: '/config/changes,T response:: v{} 11:30:53.221 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:53.221 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:53.221 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 64,8 replyHeader:: 64,26,0 request:: '/config/changes,T response:: v{} 11:30:53.221 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:30:53.221 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown 
reqpath:/config/topics 11:30:53.221 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 65,8 replyHeader:: 65,26,0 request:: '/config/topics,F response:: v{} 11:30:53.221 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:30:53.221 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:30:53.221 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 66,8 replyHeader:: 66,26,0 request:: '/config/clients,F response:: v{} 11:30:53.221 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:30:53.221 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:30:53.221 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 67,8 replyHeader:: 67,26,-101 request:: '/config/users,F response:: v{} 11:30:53.221 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:30:53.221 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:30:53.221 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 68,8 replyHeader:: 68,26,-101 request:: '/config/users,F response:: v{} 11:30:53.221 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x45 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 11:30:53.221 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x45 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 11:30:53.221 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 69,8 replyHeader:: 69,26,-101 request:: '/config/brokers,F response:: v{} 11:30:53.236 [pool-6-thread-1] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /brokers/ids/0, Prefix: /brokers, Suffix: /ids/0 11:30:53.236 [pool-6-thread-1] INFO kafka.utils.ZKCheckedEphemeral - Creating /brokers/ids/0 (is it secure? 
false) 11:30:53.236 [pool-6-thread-1] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /brokers/ids/0, Prefix: /brokers, Suffix: /ids/0 11:30:53.236 [pool-6-thread-1] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /brokers/ids/0, Prefix: /brokers/ids, Suffix: /0 11:30:53.236 [pool-6-thread-1] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /brokers/ids/0, Prefix: /brokers/ids/0, Suffix: 11:30:53.236 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x46 zxid:0x1b txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers 11:30:53.236 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x47 zxid:0x1c txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids 11:30:53.236 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x46 zxid:0x1b txntype:-1 reqpath:n/a 11:30:53.236 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -110 11:30:53.236 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:/brokers serverPath:/brokers finished:false header:: 70,1 replyHeader:: 70,27,-110 request:: '/brokers,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:53.236 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x47 zxid:0x1c txntype:-1 reqpath:n/a 11:30:53.236 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -110 11:30:53.236 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x48 zxid:0x1d txntype:1 reqpath:n/a 11:30:53.236 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x48 zxid:0x1d txntype:1 reqpath:n/a 11:30:53.236 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 71,1 replyHeader:: 71,28,-110 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:53.236 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification sessionid:0x15e7aca904b0001 11:30:53.236 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids for sessionid 0x15e7aca904b0001 11:30:53.236 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids 11:30:53.236 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkEventThread - New event: ZkEvent[Children of /brokers/ids changed sent to kafka.controller.BrokerChangeListener@72282627] 11:30:53.236 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Leaving process event 11:30:53.236 [ZkClient-EventThread-78-localhost:63309] DEBUG org.I0Itec.zkclient.ZkEventThread - Delivering event #2 
ZkEvent[Children of /brokers/ids changed sent to kafka.controller.BrokerChangeListener@72282627] 11:30:53.236 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:/brokers/ids/0 serverPath:/brokers/ids/0 finished:false header:: 72,1 replyHeader:: 72,29,0 request:: '/brokers/ids/0,#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,v{s{31,s{'world,'anyone}}},1 response:: '/brokers/ids/0 11:30:53.236 [pool-6-thread-1] INFO kafka.utils.ZKCheckedEphemeral - Result of znode creation is: OK 11:30:53.236 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:53.236 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:53.252 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 73,3 replyHeader:: 73,29,0 request:: '/brokers/ids,T response:: s{6,6,1505298652598,1505298652598,0,1,0,0,0,1,29} 11:30:53.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:53.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:53.252 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 74,8 replyHeader:: 74,29,0 request:: '/brokers/ids,T response:: v{'0} 11:30:53.252 [pool-6-thread-1] INFO kafka.utils.ZkUtils - Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(127.0.0.1,63325,ListenerName(PLAINTEXT),PLAINTEXT) 11:30:53.252 [pool-6-thread-1] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\meta.properties 11:30:53.252 [ZkClient-EventThread-78-localhost:63309] DEBUG org.I0Itec.zkclient.ZkEventThread - Delivering event #2 done 11:30:53.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x4b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:53.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x4b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:53.252 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 75,8 replyHeader:: 75,29,0 request:: '/brokers/ids,T 
response:: v{'0} 11:30:53.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x4c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:53.252 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x4c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:53.252 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 76,4 replyHeader:: 76,29,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:53.268 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Newly added brokers: 0, deleted brokers: , all live brokers: 0 11:30:53.268 [controller-event-thread] DEBUG kafka.controller.ControllerChannelManager - [Channel manager on controller 0]: Controller 0 trying to connect to broker 0 11:30:53.268 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:broker-id-0 11:30:53.268 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:broker-id-0 11:30:53.268 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:broker-id-0 11:30:53.268 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:broker-id-0 11:30:53.268 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:broker-id-0 11:30:53.268 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:broker-id-0 11:30:53.268 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:broker-id-0 11:30:53.268 [pool-6-thread-1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.11.0.0 11:30:53.268 [pool-6-thread-1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : cb8625948210849f 11:30:53.268 [pool-6-thread-1] INFO kafka.server.KafkaServer - [Kafka Server 0], started 11:30:53.268 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.KafkaEmbedded - Startup of embedded Kafka broker at 127.0.0.1:63325 completed (with ZK ensemble at localhost:63309) ... 11:30:53.268 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster - Kafka instance is running at 127.0.0.1:63325, connected to ZooKeeper at localhost:63309 11:30:53.268 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster - Starting a Kafka instance on port null ... 
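The long hex blobs in the ClientCnxn packet traces are just UTF-8 JSON; ZkUtils already summarizes the broker registration above as EndPoint(127.0.0.1,63325,ListenerName(PLAINTEXT),PLAINTEXT). When reading these traces it can help to decode a blob by hand; a throwaway helper in plain Scala, nothing Kafka-specific:

    // Decodes the hex-encoded znode payloads printed by ClientCnxn. Applied to the /brokers/ids/0
    // payload above it yields:
    // {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://127.0.0.1:63325"],
    //  "jmx_port":-1,"host":"127.0.0.1","timestamp":"1505298653236","port":63325,"version":4}
    def decodeZkHex(hex: String): String =
      hex.grouped(2).map(pair => Integer.parseInt(pair, 16).toChar).mkString

    decodeZkHex("7b2276657273696f6e223a312c2262726f6b65726964223a302c2274696d657374616d70223a2231353035323938363438393437227d")
    // => {"version":1,"brokerid":0,"timestamp":"1505298648947"}  (the /controller payload from earlier)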
11:30:53.283 [Controller-0-to-broker-0-send-thread] INFO kafka.controller.RequestSendThread - [Controller-0-to-broker-0-send-thread]: Starting
11:30:53.283 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: New broker startup callback for 0
11:30:53.283 [pool-6-thread-1] INFO kafka.server.KafkaConfig - KafkaConfig values:
    advertised.host.name = null
    advertised.listeners = null
    advertised.port = null
    alter.config.policy.class.name = null
    authorizer.class.name =
    auto.create.topics.enable = true
    auto.leader.rebalance.enable = true
    background.threads = 10
    broker.id = 1
    broker.id.generation.enable = true
    broker.rack = null
    compression.type = producer
    connections.max.idle.ms = 600000
    controlled.shutdown.enable = true
    controlled.shutdown.max.retries = 3
    controlled.shutdown.retry.backoff.ms = 5000
    controller.socket.timeout.ms = 30000
    create.topic.policy.class.name = null
    default.replication.factor = 1
    delete.records.purgatory.purge.interval.requests = 1
    delete.topic.enable = true
    fetch.purgatory.purge.interval.requests = 1000
    group.initial.rebalance.delay.ms = 0
    group.max.session.timeout.ms = 300000
    group.min.session.timeout.ms = 0
    host.name = 127.0.0.1
    inter.broker.listener.name = null
    inter.broker.protocol.version = 0.11.0-IV2
    leader.imbalance.check.interval.seconds = 300
    leader.imbalance.per.broker.percentage = 10
    listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
    listeners = null
    log.cleaner.backoff.ms = 15000
    log.cleaner.dedupe.buffer.size = 2097152
    log.cleaner.delete.retention.ms = 86400000
    log.cleaner.enable = true
    log.cleaner.io.buffer.load.factor = 0.9
    log.cleaner.io.buffer.size = 524288
    log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
    log.cleaner.min.cleanable.ratio = 0.5
    log.cleaner.min.compaction.lag.ms = 0
    log.cleaner.threads = 1
    log.cleanup.policy = [delete]
    log.dir = C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316
    log.dirs = null
    log.flush.interval.messages = 9223372036854775807
    log.flush.interval.ms = null
    log.flush.offset.checkpoint.interval.ms = 60000
    log.flush.scheduler.interval.ms = 9223372036854775807
    log.flush.start.offset.checkpoint.interval.ms = 60000
    log.index.interval.bytes = 4096
    log.index.size.max.bytes = 10485760
    log.message.format.version = 0.11.0-IV2
    log.message.timestamp.difference.max.ms = 9223372036854775807
    log.message.timestamp.type = CreateTime
    log.preallocate = false
    log.retention.bytes = -1
    log.retention.check.interval.ms = 300000
    log.retention.hours = 168
    log.retention.minutes = null
    log.retention.ms = null
    log.roll.hours = 168
    log.roll.jitter.hours = 0
    log.roll.jitter.ms = null
    log.roll.ms = null
    log.segment.bytes = 1073741824
    log.segment.delete.delay.ms = 60000
    max.connections.per.ip = 2147483647
    max.connections.per.ip.overrides =
    message.max.bytes = 1000000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    min.insync.replicas = 1
    num.io.threads = 8
    num.network.threads = 3
    num.partitions = 1
    num.recovery.threads.per.data.dir = 1
    num.replica.fetchers = 1
    offset.metadata.max.bytes = 4096
    offsets.commit.required.acks = -1
    offsets.commit.timeout.ms = 5000
    offsets.load.buffer.size = 5242880
    offsets.retention.check.interval.ms = 600000
    offsets.retention.minutes = 1440
    offsets.topic.compression.codec = 0
    offsets.topic.num.partitions = 50
    offsets.topic.replication.factor = 1
    offsets.topic.segment.bytes = 104857600
    port = 0
    principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
    producer.purgatory.purge.interval.requests = 1000
    queued.max.requests = 500
    quota.consumer.default = 9223372036854775807
    quota.producer.default = 9223372036854775807
    quota.window.num = 11
    quota.window.size.seconds = 1
    replica.fetch.backoff.ms = 1000
    replica.fetch.max.bytes = 1048576
    replica.fetch.min.bytes = 1
    replica.fetch.response.max.bytes = 10485760
    replica.fetch.wait.max.ms = 500
    replica.high.watermark.checkpoint.interval.ms = 5000
    replica.lag.time.max.ms = 10000
    replica.socket.receive.buffer.bytes = 65536
    replica.socket.timeout.ms = 30000
    replication.quota.window.num = 11
    replication.quota.window.size.seconds = 1
    request.timeout.ms = 30000
    reserved.broker.max.id = 1000
    sasl.enabled.mechanisms = [GSSAPI]
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.principal.to.local.rules = [DEFAULT]
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism.inter.broker.protocol = GSSAPI
    security.inter.broker.protocol = PLAINTEXT
    socket.receive.buffer.bytes = 102400
    socket.request.max.bytes = 104857600
    socket.send.buffer.bytes = 102400
    ssl.cipher.suites = null
    ssl.client.auth = none
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
    transaction.max.timeout.ms = 900000
    transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
    transaction.state.log.load.buffer.size = 5242880
    transaction.state.log.min.isr = 2
    transaction.state.log.num.partitions = 3
    transaction.state.log.replication.factor = 3
    transaction.state.log.segment.bytes = 104857600
    transactional.id.expiration.ms = 604800000
    unclean.leader.election.enable = false
    zookeeper.connect = localhost:63309
    zookeeper.connection.timeout.ms = null
    zookeeper.session.timeout.ms = 6000
    zookeeper.set.acl = false
    zookeeper.sync.time.ms = 2000
11:30:53.283 [Controller-0-to-broker-0-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at 127.0.0.1:63325.
11:30:53.283 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.KafkaEmbedded - Starting embedded Kafka broker (with log.dirs=C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 and ZK ensemble at localhost:63309) ...
11:30:53.283 [pool-6-thread-1] INFO kafka.server.KafkaServer - starting
11:30:53.283 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler.
11:30:53.283 [pool-6-thread-1] INFO kafka.server.KafkaServer - Connecting to zookeeper on localhost:63309
11:30:53.283 [pool-6-thread-1] DEBUG org.I0Itec.zkclient.ZkConnection - Creating new ZookKeeper instance to connect to localhost:63309.
11:30:53.283 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost:63309 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@12da176d
11:30:53.283 [ZkClient-EventThread-122-localhost:63309] INFO org.I0Itec.zkclient.ZkEventThread - Starting ZkClient event thread.
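The KafkaConfig dump above belongs to the second embedded broker: broker.id = 1, port = 0 (so the OS picks a free port), zookeeper.connect = localhost:63309 and a log.dir under a JUnit temp folder. A hedged sketch of how such a broker could be assembled outside the test utilities, assuming Kafka 0.11's KafkaConfig.fromProps and KafkaServer; the log directory below is a placeholder:

    import java.util.Properties
    import kafka.server.{KafkaConfig, KafkaServer}

    // Sketch only: mirrors the interesting values from the KafkaConfig dump above.
    val brokerProps = new Properties()
    brokerProps.put("broker.id", "1")
    brokerProps.put("zookeeper.connect", "localhost:63309")
    brokerProps.put("host.name", "127.0.0.1")
    brokerProps.put("port", "0")                              // 0 = bind an ephemeral port, as in the log
    brokerProps.put("log.dir", "/tmp/exactly-once-broker-1")  // placeholder; the test uses a JUnit temp dir
    brokerProps.put("offsets.topic.replication.factor", "1")  // small test cluster
    brokerProps.put("delete.topic.enable", "true")

    val broker = new KafkaServer(KafkaConfig.fromProps(brokerProps))
    broker.startup()
    // ... exercise the cluster ...
    broker.shutdown()
    broker.awaitShutdown()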
11:30:53.283 [pool-6-thread-1] DEBUG org.I0Itec.zkclient.ZkClient - Awaiting connection to Zookeeper server
11:30:53.283 [pool-6-thread-1] INFO org.I0Itec.zkclient.ZkClient - Waiting for keeper state SyncConnected
11:30:53.283 [pool-6-thread-1-SendThread(0:0:0:0:0:0:0:1:63309)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:63309. Will not attempt to authenticate using SASL (unknown error)
11:30:53.283 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63330 on /127.0.0.1:63325 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400]
11:30:53.283 [Controller-0-to-broker-0-send-thread] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 65536, SO_TIMEOUT = 0 to node 0
11:30:53.283 [Controller-0-to-broker-0-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 0. Ready.
11:30:53.283 [Controller-0-to-broker-0-send-thread] INFO kafka.controller.RequestSendThread - [Controller-0-to-broker-0-send-thread]: Controller 0 connected to 127.0.0.1:63325 (id: 0 rack: null) for sending state change requests
11:30:53.299 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:63330
11:30:54.299 [pool-6-thread-1-SendThread(0:0:0:0:0:0:0:1:63309)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
11:30:54.299 [pool-6-thread-1-SendThread(0:0:0:0:0:0:0:1:63309)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Ignoring exception during shutdown input
java.nio.channels.ClosedChannelException: null
    at sun.nio.ch.SocketChannelImpl.shutdownInput(Unknown Source)
    at sun.nio.ch.SocketAdaptor.shutdownInput(Unknown Source)
    at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:200)
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1246)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1170)
11:30:54.299 [pool-6-thread-1-SendThread(0:0:0:0:0:0:0:1:63309)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Ignoring exception during shutdown output
java.nio.channels.ClosedChannelException: null
    at sun.nio.ch.SocketChannelImpl.shutdownOutput(Unknown Source)
    at sun.nio.ch.SocketAdaptor.shutdownOutput(Unknown Source)
    at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:207)
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1246)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1170)
11:30:54.408 [pool-6-thread-1-SendThread(127.0.0.1:63309)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:63309.
Will not attempt to authenticate using SASL (unknown error) 11:30:54.409 [pool-6-thread-1-SendThread(127.0.0.1:63309)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to 127.0.0.1/127.0.0.1:63309, initiating session 11:30:54.409 [NIOServerCxn.Factory:/127.0.0.1:0] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:63334 11:30:54.409 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on 127.0.0.1/127.0.0.1:63309 11:30:54.410 [NIOServerCxn.Factory:/127.0.0.1:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Session establishment request from client /127.0.0.1:63334 client's lastZxid is 0x0 11:30:54.410 [NIOServerCxn.Factory:/127.0.0.1:0] INFO org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session at /127.0.0.1:63334 11:30:54.413 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:createSession cxid:0x0 zxid:0x1e txntype:-10 reqpath:n/a 11:30:54.413 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:createSession cxid:0x0 zxid:0x1e txntype:-10 reqpath:n/a 11:30:54.414 [SyncThread:0] INFO org.apache.zookeeper.server.ZooKeeperServer - Established session 0x15e7aca904b0002 with negotiated timeout 6000 for client /127.0.0.1:63334 11:30:54.414 [pool-6-thread-1-SendThread(127.0.0.1:63309)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server 127.0.0.1/127.0.0.1:63309, sessionid = 0x15e7aca904b0002, negotiated timeout = 6000 11:30:54.414 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Received event: WatchedEvent state:SyncConnected type:None path:null 11:30:54.414 [pool-6-thread-1-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected) 11:30:54.414 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Leaving process event 11:30:54.414 [pool-6-thread-1] DEBUG org.I0Itec.zkclient.ZkClient - State is SyncConnected 11:30:54.415 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/consumers 11:30:54.415 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/consumers 11:30:54.416 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 1,3 replyHeader:: 1,30,0 request:: '/consumers,F response:: s{3,3,1505298652566,1505298652566,0,0,0,0,0,0,3} 11:30:54.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.417 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 2,3 replyHeader:: 2,30,0 request:: '/brokers/ids,F response:: 
s{6,6,1505298652598,1505298652598,0,1,0,0,0,1,29} 11:30:54.417 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:54.417 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:54.417 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 3,3 replyHeader:: 3,30,0 request:: '/brokers/topics,F response:: s{7,7,1505298652598,1505298652598,0,0,0,0,0,0,7} 11:30:54.418 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.418 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.418 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 4,3 replyHeader:: 4,30,0 request:: '/config/changes,F response:: s{10,10,1505298652617,1505298652617,0,0,0,0,0,0,10} 11:30:54.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:30:54.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:30:54.419 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 5,3 replyHeader:: 5,30,0 request:: '/config/topics,F response:: s{11,11,1505298652620,1505298652620,0,0,0,0,0,0,11} 11:30:54.420 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:30:54.420 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:30:54.420 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 6,3 replyHeader:: 6,30,0 request:: '/config/clients,F response:: s{12,12,1505298652620,1505298652620,0,0,0,0,0,0,12} 11:30:54.420 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:30:54.420 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:30:54.421 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading 
reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 7,3 replyHeader:: 7,30,0 request:: '/admin/delete_topics,F response:: s{15,15,1505298652635,1505298652635,0,0,0,0,0,0,15} 11:30:54.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/seqid 11:30:54.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/seqid 11:30:54.421 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 8,3 replyHeader:: 8,30,0 request:: '/brokers/seqid,F response:: s{16,16,1505298652635,1505298652635,0,0,0,0,0,0,16} 11:30:54.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 11:30:54.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 11:30:54.421 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 9,3 replyHeader:: 9,30,0 request:: '/isr_change_notification,F response:: s{17,17,1505298652635,1505298652635,0,0,0,0,0,0,17} 11:30:54.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0xa zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:30:54.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0xa zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:30:54.421 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 10,3 replyHeader:: 10,30,0 request:: '/latest_producer_id_block,F response:: s{18,25,1505298652651,1505298653083,1,0,0,0,60,0,18} 11:30:54.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0xb zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 11:30:54.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0xb zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 11:30:54.421 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 11,4 replyHeader:: 11,30,0 request:: '/cluster/id,F response:: #7b2276657273696f6e223a2231222c226964223a226d5867735161326952362d4c776a6d48463446614177227d,s{21,21,1505298652698,1505298652698,0,0,0,0,45,0,21} 11:30:54.421 [pool-6-thread-1] INFO kafka.server.KafkaServer - Cluster ID = mXgsQa2iR6-LwjmHF4FaAw 11:30:54.421 [pool-6-thread-1] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir 
C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\meta.properties 11:30:54.421 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Fetch-delayQueue 11:30:54.421 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Produce-delayQueue 11:30:54.421 [ThrottledRequestReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper - [ThrottledRequestReaper-Fetch]: Starting 11:30:54.421 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-delayQueue 11:30:54.421 [ThrottledRequestReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper - [ThrottledRequestReaper-Produce]: Starting 11:30:54.421 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name exempt-Request 11:30:54.421 [ThrottledRequestReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper - [ThrottledRequestReaper-Request]: Starting 11:30:54.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0xc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:54.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0xc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:54.421 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 12,8 replyHeader:: 12,30,0 request:: '/brokers/topics,F response:: v{} 11:30:54.437 [pool-6-thread-1] INFO kafka.log.LogManager - Loading logs. 11:30:54.437 [pool-6-thread-1] INFO kafka.log.LogManager - Logs loading complete in 0 ms. 11:30:54.437 [pool-6-thread-1] INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms. 11:30:54.437 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-retention with initial delay 30000 ms and period 300000 ms. 11:30:54.437 [pool-6-thread-1] INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms. 11:30:54.437 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-flusher with initial delay 30000 ms and period 9223372036854775807 ms. 11:30:54.437 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-recovery-point-checkpoint with initial delay 30000 ms and period 60000 ms. 11:30:54.437 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-start-offset-checkpoint with initial delay 30000 ms and period 60000 ms. 11:30:54.437 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-delete-logs with initial delay 30000 ms and period 60000 ms. 
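
The #7b22... byte strings in the ClientCnxn replies above are hex-encoded JSON znode payloads; the /cluster/id read, for example, decodes to the Cluster ID the broker logs right afterwards. A minimal, self-contained Scala sketch (not part of the test project) for decoding such a payload:

    object ZnodeHexDecoder {
      // Decode the hex string ZkClient prints after the '#' marker into a UTF-8 string.
      def decode(hex: String): String =
        new String(hex.grouped(2).map(Integer.parseInt(_, 16).toByte).toArray, "UTF-8")

      def main(args: Array[String]): Unit = {
        // Payload of the /cluster/id read above; prints {"version":"1","id":"mXgsQa2iR6-LwjmHF4FaAw"},
        // matching the "Cluster ID = mXgsQa2iR6-LwjmHF4FaAw" line logged just after it.
        println(decode("7b2276657273696f6e223a2231222c226964223a226d5867735161326952362d4c776a6d48463446614177227d"))
      }
    }

The same decoding applies to the /controller, /latest_producer_id_block and /brokers/ids/* payloads that appear later in the log.
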
11:30:54.437 [pool-6-thread-1] INFO kafka.log.LogCleaner - Starting the log cleaner 11:30:54.437 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Starting 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:listener-PLAINTEXTnetworkProcessor-0 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:listener-PLAINTEXTnetworkProcessor-0 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:listener-PLAINTEXTnetworkProcessor-0 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:listener-PLAINTEXTnetworkProcessor-0 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:listener-PLAINTEXTnetworkProcessor-0 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:listener-PLAINTEXTnetworkProcessor-0 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:listener-PLAINTEXTnetworkProcessor-0 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:listener-PLAINTEXTnetworkProcessor-1 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:listener-PLAINTEXTnetworkProcessor-1 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:listener-PLAINTEXTnetworkProcessor-1 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:listener-PLAINTEXTnetworkProcessor-1 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:listener-PLAINTEXTnetworkProcessor-1 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:listener-PLAINTEXTnetworkProcessor-1 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:listener-PLAINTEXTnetworkProcessor-1 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:listener-PLAINTEXTnetworkProcessor-2 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:listener-PLAINTEXTnetworkProcessor-2 11:30:54.437 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:listener-PLAINTEXTnetworkProcessor-2 11:30:54.452 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:listener-PLAINTEXTnetworkProcessor-2 11:30:54.452 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:listener-PLAINTEXTnetworkProcessor-2 11:30:54.452 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:listener-PLAINTEXTnetworkProcessor-2 11:30:54.452 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:listener-PLAINTEXTnetworkProcessor-2 11:30:54.452 [pool-6-thread-1] INFO kafka.network.Acceptor - Awaiting socket connections on 127.0.0.1:63344. 
11:30:54.452 [pool-6-thread-1] INFO kafka.network.SocketServer - [Socket Server on Broker 1], Started 1 acceptor threads 11:30:54.452 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Starting 11:30:54.452 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Starting 11:30:54.452 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Starting 11:30:54.452 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-expiration with initial delay 0 ms and period 5000 ms. 11:30:54.452 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-change-propagation with initial delay 0 ms and period 2500 ms. 11:30:54.452 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [controller-event-thread]: Starting 11:30:54.452 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Starting 11:30:54.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0xd zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:54.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0xd zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:54.452 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Starting 11:30:54.452 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 13,3 replyHeader:: 13,30,0 request:: '/controller,T response:: s{22,22,1505298653036,1505298653036,0,0,0,98651252271546369,54,0,22} 11:30:54.452 [controller-event-thread] DEBUG org.I0Itec.zkclient.ZkClient - Subscribed data changes for /controller 11:30:54.452 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Starting 11:30:54.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0xe zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:54.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0xe zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:54.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0xf zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:54.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0xf zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:54.452 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 14,4 replyHeader:: 14,30,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 11:30:54.468 [pool-6-thread-1] DEBUG 
kafka.utils.ZkUtils - Partition map for /brokers/topics/__consumer_offsets is Map() 11:30:54.468 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 15,4 replyHeader:: 15,30,0 request:: '/controller,T response:: #7b2276657273696f6e223a312c2262726f6b65726964223a302c2274696d657374616d70223a2231353035323938363438393437227d,s{22,22,1505298653036,1505298653036,0,0,0,98651252271546369,54,0,22} 11:30:54.468 [pool-6-thread-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Starting up. 11:30:54.468 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 11:30:54.468 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task delete-expired-group-metadata with initial delay 0 ms and period 600000 ms. 11:30:54.468 [pool-6-thread-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Startup complete. 11:30:54.468 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 0 milliseconds. 11:30:54.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x10 zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:30:54.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x10 zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:30:54.468 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 16,4 replyHeader:: 16,30,0 request:: '/latest_producer_id_block,F response:: #7b2276657273696f6e223a312c2262726f6b6572223a302c22626c6f636b5f7374617274223a2230222c22626c6f636b5f656e64223a22393939227d,s{18,25,1505298652651,1505298653083,1,0,0,0,60,0,18} 11:30:54.468 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller 1]: Broker 0 has been elected as the controller, so stopping the election process. 
11:30:54.468 [pool-6-thread-1] DEBUG kafka.coordinator.transaction.ProducerIdManager - [ProducerId Manager 1]: Read current producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999), Zk path version 1 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:setData cxid:0x11 zxid:0x1f txntype:5 reqpath:n/a 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:setData cxid:0x11 zxid:0x1f txntype:5 reqpath:n/a 11:30:54.484 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 17,5 replyHeader:: 17,31,0 request:: '/latest_producer_id_block,#7b2276657273696f6e223a312c2262726f6b6572223a312c22626c6f636b5f7374617274223a2231303030222c22626c6f636b5f656e64223a2231393939227d,1 response:: s{18,31,1505298652651,1505298654468,2,0,0,0,64,0,18} 11:30:54.484 [pool-6-thread-1] DEBUG kafka.utils.ZkUtils - Conditional update of path /latest_producer_id_block with value {"version":1,"broker":1,"block_start":"1000","block_end":"1999"} and expected version 1 succeeded, returning the new version: 2 11:30:54.484 [pool-6-thread-1] INFO kafka.coordinator.transaction.ProducerIdManager - [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:1000,blockEndProducerId:1999) by writing to Zk with path version 2 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x12 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x12 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:54.484 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 18,4 replyHeader:: 18,31,-101 request:: '/brokers/topics/__transaction_state,F response:: 11:30:54.484 [pool-6-thread-1] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__transaction_state is Map() 11:30:54.484 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:54.484 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:54.484 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:54.484 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:54.484 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:54.484 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:54.484 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:54.484 [pool-6-thread-1] INFO kafka.coordinator.transaction.TransactionCoordinator - [Transaction Coordinator 1]: Starting up. 11:30:54.484 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 
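
The ProducerIdManager lines above show broker 1 claiming the producer-id block 1000-1999 by conditionally updating /latest_producer_id_block: the write succeeds only because the znode is still at the version that was read (1), and ZooKeeper bumps it to 2. A rough sketch of that compare-and-set pattern with the plain ZooKeeper client (hypothetical helper, not the broker's own code):

    import org.apache.zookeeper.{KeeperException, WatchedEvent, Watcher, ZooKeeper}

    object ProducerIdBlockCas {
      // Claim the next block by writing only if the znode version still matches what we read.
      def claimBlock(zk: ZooKeeper, json: String, expectedVersion: Int): Option[Int] =
        try {
          val stat = zk.setData("/latest_producer_id_block", json.getBytes("UTF-8"), expectedVersion)
          Some(stat.getVersion) // the log above shows the version moving from 1 to 2
        } catch {
          case _: KeeperException.BadVersionException => None // another broker raced us; re-read and retry
        }

      def main(args: Array[String]): Unit = {
        // localhost:63309 is the embedded ZooKeeper from this run; the JSON mirrors the payload logged above.
        val zk = new ZooKeeper("localhost:63309", 6000, new Watcher { def process(e: WatchedEvent): Unit = () })
        println(claimBlock(zk, """{"version":1,"broker":1,"block_start":"1000","block_end":"1999"}""", 1))
        zk.close()
      }
    }

If the version check fails, the caller re-reads the latest block and retries, which is how each broker ends up with a disjoint range of producer ids.
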
11:30:54.484 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task transaction-abort with initial delay 60000 ms and period 60000 ms. 11:30:54.484 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task transactionalId-expiration with initial delay 3600000 ms and period 3600000 ms. 11:30:54.484 [pool-6-thread-1] INFO kafka.coordinator.transaction.TransactionCoordinator - [Transaction Coordinator 1]: Startup complete. 11:30:54.484 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Starting 11:30:54.484 [pool-6-thread-1] DEBUG kafka.utils.Mx4jLoader$ - Will try to load MX4j now, if it's in the classpath 11:30:54.484 [pool-6-thread-1] INFO kafka.utils.Mx4jLoader$ - Will not load MX4J, mx4j-tools.jar is not in the classpath 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x13 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x13 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.484 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 19,3 replyHeader:: 19,31,0 request:: '/config/changes,F response:: s{10,10,1505298652617,1505298652617,0,0,0,0,0,0,10} 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x14 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x14 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.484 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 20,3 replyHeader:: 20,31,0 request:: '/config/changes,T response:: s{10,10,1505298652617,1505298652617,0,0,0,0,0,0,10} 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x15 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x15 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.484 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 21,8 replyHeader:: 21,31,0 request:: '/config/changes,T response:: v{} 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x16 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x16 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.484 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 22,8 replyHeader:: 22,31,0 request:: '/config/changes,T response:: v{} 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:30:54.484 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 23,8 replyHeader:: 23,31,0 request:: '/config/topics,F response:: v{} 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x18 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x18 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:30:54.484 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 24,8 replyHeader:: 24,31,0 request:: '/config/clients,F response:: v{} 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x19 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x19 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:30:54.484 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 25,8 replyHeader:: 25,31,-101 request:: '/config/users,F response:: v{} 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x1a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:30:54.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x1a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:30:54.499 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 26,8 replyHeader:: 26,31,-101 request:: '/config/users,F response:: v{} 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 11:30:54.499 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 27,8 replyHeader:: 27,31,-101 request:: '/config/brokers,F response:: v{} 11:30:54.499 [pool-6-thread-1] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /brokers/ids/1, Prefix: /brokers, Suffix: /ids/1 11:30:54.499 [pool-6-thread-1] INFO kafka.utils.ZKCheckedEphemeral - Creating /brokers/ids/1 (is it secure? false) 11:30:54.499 [pool-6-thread-1] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /brokers/ids/1, Prefix: /brokers, Suffix: /ids/1 11:30:54.499 [pool-6-thread-1] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /brokers/ids/1, Prefix: /brokers/ids, Suffix: /1 11:30:54.499 [pool-6-thread-1] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /brokers/ids/1, Prefix: /brokers/ids/1, Suffix: 11:30:54.499 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0002 type:create cxid:0x1c zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers 11:30:54.499 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0002 type:create cxid:0x1d zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:create cxid:0x1c zxid:0x20 txntype:-1 reqpath:n/a 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -110 11:30:54.499 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:/brokers serverPath:/brokers finished:false header:: 28,1 replyHeader:: 28,32,-110 request:: '/brokers,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:create cxid:0x1d zxid:0x21 txntype:-1 reqpath:n/a 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -110 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:create cxid:0x1e zxid:0x22 txntype:1 reqpath:n/a 11:30:54.499 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 29,1 replyHeader:: 29,33,-110 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:create cxid:0x1e zxid:0x22 txntype:1 reqpath:n/a 11:30:54.499 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification sessionid:0x15e7aca904b0001 11:30:54.499 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids for sessionid 0x15e7aca904b0001 11:30:54.499 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Received event: WatchedEvent state:SyncConnected 
type:NodeChildrenChanged path:/brokers/ids 11:30:54.499 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 30,1 replyHeader:: 30,34,0 request:: '/brokers/ids/1,#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,v{s{31,s{'world,'anyone}}},1 response:: '/brokers/ids/1 11:30:54.499 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkEventThread - New event: ZkEvent[Children of /brokers/ids changed sent to kafka.controller.BrokerChangeListener@72282627] 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:54.499 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Leaving process event 11:30:54.499 [ZkClient-EventThread-78-localhost:63309] DEBUG org.I0Itec.zkclient.ZkEventThread - Delivering event #3 ZkEvent[Children of /brokers/ids changed sent to kafka.controller.BrokerChangeListener@72282627] 11:30:54.499 [pool-6-thread-1] INFO kafka.utils.ZKCheckedEphemeral - Result of znode creation is: OK 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:54.499 [pool-6-thread-1] INFO kafka.utils.ZkUtils - Registered broker 1 at path /brokers/ids/1 with addresses: EndPoint(127.0.0.1,63344,ListenerName(PLAINTEXT),PLAINTEXT) 11:30:54.499 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x4d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.499 [pool-6-thread-1] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\meta.properties 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x4d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.499 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 77,3 replyHeader:: 77,34,0 request:: '/brokers/ids,T response:: s{6,6,1505298652598,1505298652598,0,2,0,0,0,2,34} 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.499 
[pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 78,8 replyHeader:: 78,34,0 request:: '/brokers/ids,T response:: v{'0,'1} 11:30:54.499 [ZkClient-EventThread-78-localhost:63309] DEBUG org.I0Itec.zkclient.ZkEventThread - Delivering event #3 done 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.499 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 79,8 replyHeader:: 79,34,0 request:: '/brokers/ids,T response:: v{'0,'1} 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:54.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:54.499 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 80,4 replyHeader:: 80,34,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:54.521 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:54.521 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:54.521 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 81,4 replyHeader:: 81,34,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:54.521 [pool-6-thread-1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.11.0.0 11:30:54.521 [pool-6-thread-1] INFO 
org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : cb8625948210849f 11:30:54.521 [pool-6-thread-1] INFO kafka.server.KafkaServer - [Kafka Server 1], started 11:30:54.521 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.KafkaEmbedded - Startup of embedded Kafka broker at 127.0.0.1:63344 completed (with ZK ensemble at localhost:63309) ... 11:30:54.521 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster - Kafka instance is running at 127.0.0.1:63344, connected to ZooKeeper at localhost:63309 11:30:54.521 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster - Starting a Kafka instance on port null ... 11:30:54.521 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Newly added brokers: 1, deleted brokers: , all live brokers: 0,1 11:30:54.521 [controller-event-thread] DEBUG kafka.controller.ControllerChannelManager - [Channel manager on controller 0]: Controller 0 trying to connect to broker 1 11:30:54.521 [pool-6-thread-1] INFO kafka.server.KafkaConfig - KafkaConfig values: advertised.host.name = null advertised.listeners = null advertised.port = null alter.config.policy.class.name = null authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.id = 2 broker.id.generation.enable = true broker.rack = null compression.type = producer connections.max.idle.ms = 600000 controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 0 group.max.session.timeout.ms = 300000 group.min.session.timeout.ms = 0 host.name = 127.0.0.1 inter.broker.listener.name = null inter.broker.protocol.version = 0.11.0-IV2 leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT listeners = null log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 2097152 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 log.dirs = null log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.format.version = 0.11.0-IV2 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 
max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = message.max.bytes = 1000000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 1440 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 port = 0 principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder producer.purgatory.purge.interval.requests = 1000 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 quota.producer.default = 9223372036854775807 quota.window.num = 11 quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.enabled.mechanisms = [GSSAPI] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.mechanism.inter.broker.protocol = GSSAPI security.inter.broker.protocol = PLAINTEXT socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = null ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 2 transaction.state.log.num.partitions = 3 transaction.state.log.replication.factor = 3 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.connect = localhost:63309 zookeeper.connection.timeout.ms = null zookeeper.session.timeout.ms = 6000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 11:30:54.521 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:broker-id-1 11:30:54.521 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:broker-id-1 11:30:54.521 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:broker-id-1 
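
The KafkaConfig dump above includes the transaction-related broker settings (transaction.state.log.replication.factor, transaction.state.log.min.isr, transaction.max.timeout.ms, ...) that back the __transaction_state topic used for exactly-once processing. On the client side a Streams application opts in with processing.guarantee=exactly_once; a minimal sketch against the embedded broker started above (the application id is made up for illustration):

    import java.util.Properties
    import org.apache.kafka.streams.StreamsConfig

    object ExactlyOnceStreamsProps {
      def build(): Properties = {
        val p = new Properties()
        p.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once-test")              // hypothetical application id
        p.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63344")              // broker 1 from this log
        p.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)  // "exactly_once"
        p
      }
    }

With this setting the Streams client uses transactional producers, which is what drives the ProducerIdManager and TransactionCoordinator activity seen earlier in the log.
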
11:30:54.521 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:broker-id-1 11:30:54.521 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:broker-id-1 11:30:54.521 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:broker-id-1 11:30:54.521 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:broker-id-1 11:30:54.521 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.KafkaEmbedded - Starting embedded Kafka broker (with log.dirs=C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 and ZK ensemble at localhost:63309) ... 11:30:54.521 [pool-6-thread-1] INFO kafka.server.KafkaServer - starting 11:30:54.521 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: New broker startup callback for 1 11:30:54.521 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 11:30:54.537 [pool-6-thread-1] INFO kafka.server.KafkaServer - Connecting to zookeeper on localhost:63309 11:30:54.537 [Controller-0-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [Controller-0-to-broker-1-send-thread]: Starting 11:30:54.537 [pool-6-thread-1] DEBUG org.I0Itec.zkclient.ZkConnection - Creating new ZookKeeper instance to connect to localhost:63309. 11:30:54.537 [pool-6-thread-1] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost:63309 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@26b7bd79 11:30:54.537 [Controller-0-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 1 at 127.0.0.1:63344. 11:30:54.537 [ZkClient-EventThread-163-localhost:63309] INFO org.I0Itec.zkclient.ZkEventThread - Starting ZkClient event thread. 11:30:54.537 [Controller-0-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 65536, SO_TIMEOUT = 0 to node 1 11:30:54.537 [Controller-0-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 1. Ready. 11:30:54.537 [Controller-0-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [Controller-0-to-broker-1-send-thread]: Controller 0 connected to 127.0.0.1:63344 (id: 1 rack: null) for sending state change requests 11:30:54.537 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63350 on /127.0.0.1:63344 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:54.537 [pool-6-thread-1] DEBUG org.I0Itec.zkclient.ZkClient - Awaiting connection to Zookeeper server 11:30:54.537 [pool-6-thread-1] INFO org.I0Itec.zkclient.ZkClient - Waiting for keeper state SyncConnected 11:30:54.537 [kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:63350 11:30:54.537 [pool-6-thread-1-SendThread(127.0.0.1:63309)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:63309. 
Will not attempt to authenticate using SASL (unknown error) 11:30:54.537 [pool-6-thread-1-SendThread(127.0.0.1:63309)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to 127.0.0.1/127.0.0.1:63309, initiating session 11:30:54.537 [NIOServerCxn.Factory:/127.0.0.1:0] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:63352 11:30:54.537 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on 127.0.0.1/127.0.0.1:63309 11:30:54.537 [NIOServerCxn.Factory:/127.0.0.1:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Session establishment request from client /127.0.0.1:63352 client's lastZxid is 0x0 11:30:54.537 [NIOServerCxn.Factory:/127.0.0.1:0] INFO org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session at /127.0.0.1:63352 11:30:54.537 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:createSession cxid:0x0 zxid:0x23 txntype:-10 reqpath:n/a 11:30:54.537 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:createSession cxid:0x0 zxid:0x23 txntype:-10 reqpath:n/a 11:30:54.537 [SyncThread:0] INFO org.apache.zookeeper.server.ZooKeeperServer - Established session 0x15e7aca904b0003 with negotiated timeout 6000 for client /127.0.0.1:63352 11:30:54.537 [pool-6-thread-1-SendThread(127.0.0.1:63309)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server 127.0.0.1/127.0.0.1:63309, sessionid = 0x15e7aca904b0003, negotiated timeout = 6000 11:30:54.537 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Received event: WatchedEvent state:SyncConnected type:None path:null 11:30:54.537 [pool-6-thread-1-EventThread] INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected) 11:30:54.537 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Leaving process event 11:30:54.537 [pool-6-thread-1] DEBUG org.I0Itec.zkclient.ZkClient - State is SyncConnected 11:30:54.537 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/consumers 11:30:54.537 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/consumers 11:30:54.537 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 1,3 replyHeader:: 1,35,0 request:: '/consumers,F response:: s{3,3,1505298652566,1505298652566,0,0,0,0,0,0,3} 11:30:54.537 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.537 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.537 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 2,3 replyHeader:: 2,35,0 request:: '/brokers/ids,F response:: 
s{6,6,1505298652598,1505298652598,0,2,0,0,0,2,34} 11:30:54.537 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:54.537 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:54.537 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 3,3 replyHeader:: 3,35,0 request:: '/brokers/topics,F response:: s{7,7,1505298652598,1505298652598,0,0,0,0,0,0,7} 11:30:54.537 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.537 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.537 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 4,3 replyHeader:: 4,35,0 request:: '/config/changes,F response:: s{10,10,1505298652617,1505298652617,0,0,0,0,0,0,10} 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:30:54.553 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 5,3 replyHeader:: 5,35,0 request:: '/config/topics,F response:: s{11,11,1505298652620,1505298652620,0,0,0,0,0,0,11} 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:30:54.553 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 6,3 replyHeader:: 6,35,0 request:: '/config/clients,F response:: s{12,12,1505298652620,1505298652620,0,0,0,0,0,0,12} 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:30:54.553 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading 
reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 7,3 replyHeader:: 7,35,0 request:: '/admin/delete_topics,F response:: s{15,15,1505298652635,1505298652635,0,0,0,0,0,0,15} 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/seqid 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/seqid 11:30:54.553 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 8,3 replyHeader:: 8,35,0 request:: '/brokers/seqid,F response:: s{16,16,1505298652635,1505298652635,0,0,0,0,0,0,16} 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 11:30:54.553 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 9,3 replyHeader:: 9,35,0 request:: '/isr_change_notification,F response:: s{17,17,1505298652635,1505298652635,0,0,0,0,0,0,17} 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0xa zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0xa zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:30:54.553 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 10,3 replyHeader:: 10,35,0 request:: '/latest_producer_id_block,F response:: s{18,31,1505298652651,1505298654468,2,0,0,0,64,0,18} 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0xb zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0xb zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 11:30:54.553 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 11,4 replyHeader:: 11,35,0 request:: '/cluster/id,F response:: #7b2276657273696f6e223a2231222c226964223a226d5867735161326952362d4c776a6d48463446614177227d,s{21,21,1505298652698,1505298652698,0,0,0,0,45,0,21} 11:30:54.553 [pool-6-thread-1] INFO kafka.server.KafkaServer - Cluster ID = mXgsQa2iR6-LwjmHF4FaAw 11:30:54.553 [pool-6-thread-1] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir 
C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\meta.properties 11:30:54.553 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Fetch-delayQueue 11:30:54.553 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Produce-delayQueue 11:30:54.553 [ThrottledRequestReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper - [ThrottledRequestReaper-Fetch]: Starting 11:30:54.553 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-delayQueue 11:30:54.553 [ThrottledRequestReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper - [ThrottledRequestReaper-Produce]: Starting 11:30:54.553 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name exempt-Request 11:30:54.553 [ThrottledRequestReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper - [ThrottledRequestReaper-Request]: Starting 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0xc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:54.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0xc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:54.553 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 12,8 replyHeader:: 12,35,0 request:: '/brokers/topics,F response:: v{} 11:30:54.568 [pool-6-thread-1] INFO kafka.log.LogManager - Loading logs. 11:30:54.568 [pool-6-thread-1] INFO kafka.log.LogManager - Logs loading complete in 0 ms. 11:30:54.568 [pool-6-thread-1] INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms. 11:30:54.568 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-retention with initial delay 30000 ms and period 300000 ms. 11:30:54.568 [pool-6-thread-1] INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms. 11:30:54.568 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-flusher with initial delay 30000 ms and period 9223372036854775807 ms. 11:30:54.568 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-recovery-point-checkpoint with initial delay 30000 ms and period 60000 ms. 11:30:54.568 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-start-offset-checkpoint with initial delay 30000 ms and period 60000 ms. 11:30:54.568 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-delete-logs with initial delay 30000 ms and period 60000 ms. 
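The znode payloads in the ClientCnxn "Reading reply" entries above are hex-encoded byte arrays; the /cluster/id response, for instance, decodes to the JSON behind the "Cluster ID = mXgsQa2iR6-LwjmHF4FaAw" line. A minimal Scala sketch (the object name is illustrative, not part of the test project) that decodes such a payload:

    // Decode one hex payload from a ZooKeeper "Reading reply" log entry.
    // The bytes are the raw znode data, here the JSON stored under /cluster/id.
    object DecodeZkPayload extends App {
      val hex =
        "7b2276657273696f6e223a2231222c226964223a226d5867735161326952362d4c776a6d48463446614177227d"
      val bytes = hex.grouped(2).map(Integer.parseInt(_, 16).toByte).toArray
      println(new String(bytes, "UTF-8")) // prints {"version":"1","id":"mXgsQa2iR6-LwjmHF4FaAw"}
    }

The same decoding applies to the /controller and /brokers/ids/N payloads that appear later in the log.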
11:30:54.568 [pool-6-thread-1] INFO kafka.log.LogCleaner - Starting the log cleaner 11:30:54.568 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Starting 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:listener-PLAINTEXTnetworkProcessor-0 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:listener-PLAINTEXTnetworkProcessor-0 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:listener-PLAINTEXTnetworkProcessor-0 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:listener-PLAINTEXTnetworkProcessor-0 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:listener-PLAINTEXTnetworkProcessor-0 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:listener-PLAINTEXTnetworkProcessor-0 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:listener-PLAINTEXTnetworkProcessor-0 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:listener-PLAINTEXTnetworkProcessor-1 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:listener-PLAINTEXTnetworkProcessor-1 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:listener-PLAINTEXTnetworkProcessor-1 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:listener-PLAINTEXTnetworkProcessor-1 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:listener-PLAINTEXTnetworkProcessor-1 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:listener-PLAINTEXTnetworkProcessor-1 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:listener-PLAINTEXTnetworkProcessor-1 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:listener-PLAINTEXTnetworkProcessor-2 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:listener-PLAINTEXTnetworkProcessor-2 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:listener-PLAINTEXTnetworkProcessor-2 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:listener-PLAINTEXTnetworkProcessor-2 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:listener-PLAINTEXTnetworkProcessor-2 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:listener-PLAINTEXTnetworkProcessor-2 11:30:54.584 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:listener-PLAINTEXTnetworkProcessor-2 11:30:54.584 [pool-6-thread-1] INFO kafka.network.Acceptor - Awaiting socket connections on 127.0.0.1:63361. 
11:30:54.599 [pool-6-thread-1] INFO kafka.network.SocketServer - [Socket Server on Broker 2], Started 1 acceptor threads 11:30:54.599 [ExpirationReaper-2-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-2-Produce]: Starting 11:30:54.599 [ExpirationReaper-2-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-2-DeleteRecords]: Starting 11:30:54.599 [ExpirationReaper-2-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-2-Fetch]: Starting 11:30:54.599 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-expiration with initial delay 0 ms and period 5000 ms. 11:30:54.599 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-change-propagation with initial delay 0 ms and period 2500 ms. 11:30:54.599 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [controller-event-thread]: Starting 11:30:54.599 [ExpirationReaper-2-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-2-topic]: Starting 11:30:54.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0xd zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:54.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0xd zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:54.599 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 13,3 replyHeader:: 13,35,0 request:: '/controller,T response:: s{22,22,1505298653036,1505298653036,0,0,0,98651252271546369,54,0,22} 11:30:54.599 [controller-event-thread] DEBUG org.I0Itec.zkclient.ZkClient - Subscribed data changes for /controller 11:30:54.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0xe zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:54.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0xe zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:54.599 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 14,4 replyHeader:: 14,35,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 11:30:54.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0xf zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:54.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0xf zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:30:54.599 [pool-6-thread-1] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__consumer_offsets is Map() 11:30:54.599 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 15,4 replyHeader:: 15,35,0 
request:: '/controller,T response:: #7b2276657273696f6e223a312c2262726f6b65726964223a302c2274696d657374616d70223a2231353035323938363438393437227d,s{22,22,1505298653036,1505298653036,0,0,0,98651252271546369,54,0,22} 11:30:54.599 [pool-6-thread-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 2]: Starting up. 11:30:54.599 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 11:30:54.599 [ExpirationReaper-2-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-2-Rebalance]: Starting 11:30:54.599 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task delete-expired-group-metadata with initial delay 0 ms and period 600000 ms. 11:30:54.599 [pool-6-thread-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 2]: Startup complete. 11:30:54.599 [ExpirationReaper-2-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-2-Heartbeat]: Starting 11:30:54.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x10 zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:30:54.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x10 zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:30:54.599 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Removed 0 expired offsets in 0 milliseconds. 11:30:54.599 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 16,4 replyHeader:: 16,35,0 request:: '/latest_producer_id_block,F response:: #7b2276657273696f6e223a312c2262726f6b6572223a312c22626c6f636b5f7374617274223a2231303030222c22626c6f636b5f656e64223a2231393939227d,s{18,31,1505298652651,1505298654468,2,0,0,0,64,0,18} 11:30:54.616 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller 2]: Broker 0 has been elected as the controller, so stopping the election process. 
11:30:54.618 [pool-6-thread-1] DEBUG kafka.coordinator.transaction.ProducerIdManager - [ProducerId Manager 2]: Read current producerId block (brokerId:1,blockStartProducerId:1000,blockEndProducerId:1999), Zk path version 2 11:30:54.622 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:setData cxid:0x11 zxid:0x24 txntype:5 reqpath:n/a 11:30:54.622 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:setData cxid:0x11 zxid:0x24 txntype:5 reqpath:n/a 11:30:54.622 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 17,5 replyHeader:: 17,36,0 request:: '/latest_producer_id_block,#7b2276657273696f6e223a312c2262726f6b6572223a322c22626c6f636b5f7374617274223a2232303030222c22626c6f636b5f656e64223a2232393939227d,2 response:: s{18,36,1505298652651,1505298654618,3,0,0,0,64,0,18} 11:30:54.622 [pool-6-thread-1] DEBUG kafka.utils.ZkUtils - Conditional update of path /latest_producer_id_block with value {"version":1,"broker":2,"block_start":"2000","block_end":"2999"} and expected version 2 succeeded, returning the new version: 3 11:30:54.622 [pool-6-thread-1] INFO kafka.coordinator.transaction.ProducerIdManager - [ProducerId Manager 2]: Acquired new producerId block (brokerId:2,blockStartProducerId:2000,blockEndProducerId:2999) by writing to Zk with path version 3 11:30:54.622 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x12 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:54.622 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x12 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:54.622 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 18,4 replyHeader:: 18,36,-101 request:: '/brokers/topics/__transaction_state,F response:: 11:30:54.622 [pool-6-thread-1] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__transaction_state is Map() 11:30:54.622 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:54.622 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:54.622 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:54.622 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:54.622 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:54.622 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:54.622 [pool-6-thread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:54.622 [pool-6-thread-1] INFO kafka.coordinator.transaction.TransactionCoordinator - [Transaction Coordinator 2]: Starting up. 11:30:54.622 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 
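The ProducerIdManager entries above show broker 2 reading the current producer-id block (brokerId:1, 1000-1999, ZK path version 2) and then claiming the next range (2000-2999) with a conditional write that bumps the path version to 3. A small illustrative sketch of that hand-off arithmetic, assuming the fixed block size of 1000 visible in the log (this is not Kafka's internal API, just the pattern the entries describe):

    object ProducerIdBlocks {
      final case class Block(broker: Int, blockStart: Long, blockEnd: Long)

      // The next broker claims the range immediately after the current block.
      def nextBlock(current: Block, myBrokerId: Int, size: Long = 1000L): Block =
        Block(myBrokerId, current.blockEnd + 1, current.blockEnd + size)
    }
    // ProducerIdBlocks.nextBlock(ProducerIdBlocks.Block(1, 1000, 1999), myBrokerId = 2)
    //   yields Block(2, 2000, 2999), matching the conditional update logged above.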
11:30:54.622 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task transaction-abort with initial delay 60000 ms and period 60000 ms. 11:30:54.622 [pool-6-thread-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task transactionalId-expiration with initial delay 3600000 ms and period 3600000 ms. 11:30:54.622 [pool-6-thread-1] INFO kafka.coordinator.transaction.TransactionCoordinator - [Transaction Coordinator 2]: Startup complete. 11:30:54.622 [TxnMarkerSenderThread-2] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 2]: Starting 11:30:54.622 [pool-6-thread-1] DEBUG kafka.utils.Mx4jLoader$ - Will try to load MX4j now, if it's in the classpath 11:30:54.661 [pool-6-thread-1] INFO kafka.utils.Mx4jLoader$ - Will not load MX4J, mx4j-tools.jar is not in the classpath 11:30:54.661 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x13 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.661 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x13 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.662 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 19,3 replyHeader:: 19,36,0 request:: '/config/changes,F response:: s{10,10,1505298652617,1505298652617,0,0,0,0,0,0,10} 11:30:54.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x14 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x14 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.664 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 20,3 replyHeader:: 20,36,0 request:: '/config/changes,T response:: s{10,10,1505298652617,1505298652617,0,0,0,0,0,0,10} 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x15 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x15 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.664 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 21,8 replyHeader:: 21,36,0 request:: '/config/changes,T response:: v{} 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x16 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x16 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:30:54.664 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 22,8 replyHeader:: 22,36,0 request:: '/config/changes,T response:: v{} 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:30:54.664 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 23,8 replyHeader:: 23,36,0 request:: '/config/topics,F response:: v{} 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x18 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x18 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:30:54.664 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 24,8 replyHeader:: 24,36,0 request:: '/config/clients,F response:: v{} 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x19 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x19 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:30:54.664 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 25,8 replyHeader:: 25,36,-101 request:: '/config/users,F response:: v{} 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x1a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x1a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:30:54.664 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 26,8 replyHeader:: 26,36,-101 request:: '/config/users,F response:: v{} 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 11:30:54.664 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 27,8 replyHeader:: 27,36,-101 request:: '/config/brokers,F response:: v{} 11:30:54.664 [pool-6-thread-1] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /brokers/ids/2, Prefix: /brokers, Suffix: /ids/2 11:30:54.664 [pool-6-thread-1] INFO kafka.utils.ZKCheckedEphemeral - Creating /brokers/ids/2 (is it secure? false) 11:30:54.664 [pool-6-thread-1] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /brokers/ids/2, Prefix: /brokers, Suffix: /ids/2 11:30:54.664 [pool-6-thread-1] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /brokers/ids/2, Prefix: /brokers/ids, Suffix: /2 11:30:54.664 [pool-6-thread-1] DEBUG kafka.utils.ZKCheckedEphemeral - Path: /brokers/ids/2, Prefix: /brokers/ids/2, Suffix: 11:30:54.664 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0003 type:create cxid:0x1c zxid:0x25 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers 11:30:54.664 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0003 type:create cxid:0x1d zxid:0x26 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:create cxid:0x1c zxid:0x25 txntype:-1 reqpath:n/a 11:30:54.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -110 11:30:54.664 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:/brokers serverPath:/brokers finished:false header:: 28,1 replyHeader:: 28,37,-110 request:: '/brokers,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:create cxid:0x1d zxid:0x26 txntype:-1 reqpath:n/a 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -110 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:create cxid:0x1e zxid:0x27 txntype:1 reqpath:n/a 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:create cxid:0x1e zxid:0x27 txntype:1 reqpath:n/a 11:30:54.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 29,1 replyHeader:: 29,38,-110 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:54.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification sessionid:0x15e7aca904b0001 11:30:54.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids for sessionid 0x15e7aca904b0001 11:30:54.680 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Received event: WatchedEvent state:SyncConnected 
type:NodeChildrenChanged path:/brokers/ids 11:30:54.680 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkEventThread - New event: ZkEvent[Children of /brokers/ids changed sent to kafka.controller.BrokerChangeListener@72282627] 11:30:54.680 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Leaving process event 11:30:54.680 [ZkClient-EventThread-78-localhost:63309] DEBUG org.I0Itec.zkclient.ZkEventThread - Delivering event #4 ZkEvent[Children of /brokers/ids changed sent to kafka.controller.BrokerChangeListener@72282627] 11:30:54.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:/brokers/ids/2 serverPath:/brokers/ids/2 finished:false header:: 30,1 replyHeader:: 30,39,0 request:: '/brokers/ids/2,#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,v{s{31,s{'world,'anyone}}},1 response:: '/brokers/ids/2 11:30:54.680 [pool-6-thread-1] INFO kafka.utils.ZKCheckedEphemeral - Result of znode creation is: OK 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x52 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x52 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.680 [pool-6-thread-1] INFO kafka.utils.ZkUtils - Registered broker 2 at path /brokers/ids/2 with addresses: EndPoint(127.0.0.1,63361,ListenerName(PLAINTEXT),PLAINTEXT) 11:30:54.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 82,3 replyHeader:: 82,39,0 request:: '/brokers/ids,T response:: s{6,6,1505298652598,1505298652598,0,3,0,0,0,3,39} 11:30:54.680 [pool-6-thread-1] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\meta.properties 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x53 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x53 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 83,8 replyHeader:: 83,39,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:54.680 [ZkClient-EventThread-78-localhost:63309] DEBUG org.I0Itec.zkclient.ZkEventThread - Delivering event #4 done 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x54 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 
11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x54 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:54.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 84,8 replyHeader:: 84,39,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:54.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 85,4 replyHeader:: 85,39,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:54.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:54.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 86,4 replyHeader:: 86,39,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:54.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x57 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:54.696 [pool-6-thread-1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.11.0.0 11:30:54.696 [pool-6-thread-1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : cb8625948210849f 11:30:54.696 [pool-6-thread-1] INFO kafka.server.KafkaServer - [Kafka Server 2], started 11:30:54.696 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.KafkaEmbedded - Startup of embedded Kafka broker at 127.0.0.1:63361 completed (with ZK ensemble at localhost:63309) ... 
11:30:54.696 [pool-6-thread-1] DEBUG org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster - Kafka instance is running at 127.0.0.1:63361, connected to ZooKeeper at localhost:63309 11:30:54.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x57 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:54.696 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 87,4 replyHeader:: 87,39,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:54.696 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: Newly added brokers: 2, deleted brokers: , all live brokers: 0,1,2 11:30:54.696 [controller-event-thread] DEBUG kafka.controller.ControllerChannelManager - [Channel manager on controller 0]: Controller 0 trying to connect to broker 2 11:30:54.696 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:broker-id-2 11:30:54.696 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:broker-id-2 11:30:54.696 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:broker-id-2 11:30:54.696 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:broker-id-2 11:30:54.696 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:broker-id-2 11:30:54.696 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:broker-id-2 11:30:54.696 [controller-event-thread] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:broker-id-2 11:30:54.696 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: New broker startup callback for 2 11:30:54.713 [Controller-0-to-broker-2-send-thread] INFO kafka.controller.RequestSendThread - [Controller-0-to-broker-2-send-thread]: Starting 11:30:54.714 [Controller-0-to-broker-2-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 2 at 127.0.0.1:63361. 11:30:54.715 [Controller-0-to-broker-2-send-thread] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 65536, SO_TIMEOUT = 0 to node 2 11:30:54.715 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63366 on /127.0.0.1:63361 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:54.715 [Controller-0-to-broker-2-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 2. Ready. 
11:30:54.715 [kafka-network-thread-2-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:63366 11:30:54.716 [Controller-0-to-broker-2-send-thread] INFO kafka.controller.RequestSendThread - [Controller-0-to-broker-2-send-thread]: Controller 0 connected to 127.0.0.1:63361 (id: 2 rack: null) for sending state change requests 11:30:54.844 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.streams.StreamsConfig - StreamsConfig values: application.id = exactly-once application.server = bootstrap.servers = [127.0.0.1:63325] buffered.records.per.partition = 1000 cache.max.bytes.buffering = 10485760 client.id = commit.interval.ms = 30000 connections.max.idle.ms = 540000 default.key.serde = class org.apache.kafka.common.serialization.Serdes$ByteArraySerde default.timestamp.extractor = class org.apache.kafka.streams.processor.FailOnInvalidTimestamp default.value.serde = class org.apache.kafka.common.serialization.Serdes$ByteArraySerde key.serde = null metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 num.standby.replicas = 0 num.stream.threads = 1 partition.grouper = class org.apache.kafka.streams.processor.DefaultPartitionGrouper poll.ms = 100 processing.guarantee = at_least_once receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 replication.factor = 1 request.timeout.ms = 40000 retry.backoff.ms = 100 rocksdb.config.setter = null security.protocol = PLAINTEXT send.buffer.bytes = 131072 state.cleanup.delay.ms = 600000 state.dir = C:\Users\Ryan\AppData\Local\Temp\dd18537f-7701-439c-8b57-f758ce707d932076172353219262199 timestamp.extractor = null value.serde = null windowstore.changelog.additional.retention.ms = 86400000 zookeeper.connect = 11:30:54.857 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [127.0.0.1:63325] buffer.memory = 33554432 client.id = compression.type = none connections.max.idle.ms = 540000 enable.idempotence = false interceptor.classes = null key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 0 retry.backoff.ms = 100 sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 
transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:30:54.870 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time 11:30:54.873 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records 11:30:55.044 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:55.048 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name produce-throttle-time 11:30:55.050 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:55.051 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:55.051 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:55.051 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:55.051 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:55.052 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:55.052 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:55.054 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name batch-size 11:30:55.054 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name compression-rate 11:30:55.054 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name queue-time 11:30:55.055 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name request-time 11:30:55.055 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-per-request 11:30:55.055 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-retries 11:30:55.055 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name errors 11:30:55.055 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-size-max 11:30:55.057 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name batch-split-rate 11:30:55.058 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.11.0.0 11:30:55.058 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : cb8625948210849f 11:30:55.058 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread. 
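The StreamsConfig dump above shows this run using application.id = exactly-once with processing.guarantee = at_least_once, and the ProducerConfig dump shows the test's driver producer is a plain one (acks = 1, enable.idempotence = false, transactional.id = null). A minimal sketch, assuming only values taken from those dumps (the bootstrap address 127.0.0.1:63325 is the embedded broker of this run), of how such Streams properties are typically assembled:

    import java.util.Properties
    import org.apache.kafka.streams.StreamsConfig

    object StreamsProps {
      val props: Properties = {
        val p = new Properties()
        p.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")
        p.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
        // The dump above shows the default at_least_once guarantee; exactly-once
        // processing would instead be requested with:
        //   p.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)
        p
      }
    }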
11:30:55.058 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started 11:30:55.125 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1.commit-latency 11:30:55.125 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1.poll-latency 11:30:55.125 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1.process-latency 11:30:55.125 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1.punctuate-latency 11:30:55.125 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1.task-created 11:30:55.125 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1.task-closed 11:30:55.125 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1.skipped-records 11:30:55.140 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Creating consumer client 11:30:55.160 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [127.0.0.1:63325] check.crcs = true client.id = exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer connections.max.idle.ms = 540000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = exactly-once heartbeat.interval.ms = 3000 interceptor.classes = null internal.leave.group.on.close = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 2147483647 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [org.apache.kafka.streams.processor.internals.StreamPartitionAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 305000 retry.backoff.ms = 100 sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = null ssl.key.password = null 
ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer 11:30:55.161 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Starting the Kafka consumer 11:30:55.162 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:55.176 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name fetch-throttle-time 11:30:55.178 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:55.179 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:55.179 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:55.179 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:55.179 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:55.179 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:55.180 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:55.204 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:55.204 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:55.204 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:55.204 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:55.204 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:55.204 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:55.204 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:55.204 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name heartbeat-latency 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name join-latency 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name sync-latency 11:30:55.219 
[pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name commit-latency 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-fetched 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-fetched 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name fetch-latency 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-lag 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.11.0.0 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : cb8625948210849f 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Kafka consumer created 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Creating restore consumer client 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [127.0.0.1:63325] check.crcs = true client.id = exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-restore-consumer connections.max.idle.ms = 540000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = heartbeat.interval.ms = 3000 interceptor.classes = null internal.leave.group.on.close = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 2147483647 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 305000 retry.backoff.ms = 100 sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer 11:30:55.219 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Starting the Kafka consumer 
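Both ConsumerConfig dumps above (the stream-thread consumer and the restore consumer) run with isolation.level = read_uncommitted, so they would also see records from aborted transactions. A minimal sketch, assuming only values visible in those dumps, of the corresponding consumer properties and of the read_committed setting a transactional reader would use instead:

    import java.util.Properties
    import org.apache.kafka.clients.consumer.ConsumerConfig

    object ConsumerProps {
      val props: Properties = {
        val p = new Properties()
        p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
        p.put(ConsumerConfig.GROUP_ID_CONFIG, "exactly-once")
        p.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
        // isolation.level defaults to read_uncommitted, as in the dumps above.
        // Reading only committed transactional records would require:
        //   p.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")
        p
      }
    }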
11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name fetch-throttle-time 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name heartbeat-latency 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name join-latency 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name sync-latency 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name commit-latency 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-fetched 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-fetched 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name fetch-latency 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-lag 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.11.0.0 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : cb8625948210849f 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Kafka consumer created 11:30:55.235 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] State transition from CREATED to RUNNING. 11:30:55.252 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181] Starting Kafka Stream process. 
11:30:55.253 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:55.254 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:55.255 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:55.255 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:55.255 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:55.255 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:55.255 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:55.255 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:55.255 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:55.255 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node -1 at 127.0.0.1:63325. 11:30:55.257 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent 11:30:55.257 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63377 on /127.0.0.1:63325 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:55.257 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received 11:30:55.257 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency 11:30:55.257 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 11:30:55.257 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:63377 11:30:55.257 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node -1. Fetching API versions. 11:30:55.257 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node -1. 
11:30:55.262 [kafka-request-handler-3] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-:
11:30:55.262 [kafka-request-handler-3] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name RequestThrottleTime-:
11:30:55.262 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node -1: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0])
11:30:55.381 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - Connection with /127.0.0.1 disconnected
java.io.EOFException: null
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:87)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:75)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:203)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:167)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:379)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
    at kafka.network.Processor.poll(SocketServer.scala:499)
    at kafka.network.Processor.run(SocketServer.scala:435)
    at java.lang.Thread.run(Unknown Source)
11:30:55.381 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name connections-closed:
11:30:55.381 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name connections-created:
11:30:55.381 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-sent-received:
11:30:55.381 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-sent:
11:30:55.381 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-received:
11:30:55.381 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name select-time:
11:30:55.382 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name io-time:
11:30:55.382 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name
node--1.bytes-sent 11:30:55.382 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node--1.bytes-received 11:30:55.382 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node--1.latency 11:30:55.382 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181] State transition from CREATED to RUNNING. 11:30:55.382 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181] Started Kafka Stream process 11:30:55.382 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Starting 11:30:55.382 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Subscribed to pattern: my-topic 11:30:55.383 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63325 (id: -1 rack: null) 11:30:55.386 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initialize connection to node -1 for sending metadata request 11:30:55.386 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node -1 at 127.0.0.1:63325. 11:30:55.387 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent 11:30:55.387 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received 11:30:55.387 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63378 on /127.0.0.1:63325 and assigned it to processor 2, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:55.387 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency 11:30:55.388 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 11:30:55.388 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-2] DEBUG kafka.network.Processor - Processor 2 listening to new connection from /127.0.0.1:63378 11:30:55.388 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node -1. Fetching API versions. 11:30:55.388 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node -1. 
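The producer-1 client that starts fetching metadata for my-topic at this point is presumably the test driver feeding input records; its configuration is not shown in the log. A minimal sketch of such a driver (key/value types, record contents, and the helper name are assumptions):

```scala
import java.util.Properties

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object TestInputDriver {
  // Feeds String key/value records into my-topic; the producer-1 client in the log
  // is assumed to play this role, and the record contents here are made up.
  def produceInput(bootstrapServers: String, records: Seq[(String, String)]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers)
    props.put(ProducerConfig.ACKS_CONFIG, "all")
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

    val producer = new KafkaProducer[String, String](props)
    try {
      records.foreach { case (k, v) => producer.send(new ProducerRecord("my-topic", k, v)) }
      producer.flush()
    } finally producer.close()
  }
}
```

Its first metadata request for my-topic is what triggers the broker-side auto-creation of the topic seen a few lines further down.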
11:30:55.388 [kafka-request-handler-3] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-:producer-1 11:30:55.389 [kafka-request-handler-3] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name RequestThrottleTime-:producer-1 11:30:55.390 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node -1: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:55.390 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node -1 at 127.0.0.1:63325. 11:30:55.390 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node -1 11:30:55.392 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent 11:30:55.392 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63379 on /127.0.0.1:63325 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:55.392 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received 11:30:55.392 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:63379 11:30:55.392 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency 11:30:55.392 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 11:30:55.392 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node -1. Fetching API versions. 11:30:55.392 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node -1. 
11:30:55.392 [kafka-request-handler-4] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-:exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer 11:30:55.392 [kafka-request-handler-4] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name RequestThrottleTime-:exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer 11:30:55.392 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node -1: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:55.392 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node -1 11:30:55.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x58 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x58 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.408 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 88,8 replyHeader:: 88,39,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:55.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x59 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x59 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.408 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 89,4 replyHeader:: 89,39,0 request:: '/brokers/ids/0,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:55.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x5a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x5a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.408 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 90,4 replyHeader:: 90,39,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:55.408 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 2 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = []) 11:30:55.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x5b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x5b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.424 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 91,8 replyHeader:: 91,39,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:55.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x5c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x5c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x5d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x5d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.424 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 92,4 replyHeader:: 92,39,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:55.424 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 93,4 replyHeader:: 93,39,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:55.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x5e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x5e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.424 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 94,4 replyHeader:: 94,39,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:55.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x5f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x5f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.424 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 95,4 replyHeader:: 95,39,0 request:: '/brokers/ids/2,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:55.442 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x60 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.442 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x60 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.442 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 96,3 replyHeader:: 96,39,-101 request:: '/brokers/topics/my-topic,F response:: 11:30:55.442 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x61 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.442 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x61 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.442 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 97,3 replyHeader:: 97,39,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 11:30:55.442 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x62 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:55.442 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x62 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:55.442 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 98,8 replyHeader:: 98,39,0 request:: '/brokers/topics,T response:: v{} 11:30:55.442 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:setData cxid:0x63 zxid:0x28 txntype:-1 reqpath:n/a Error Path:/config/topics/my-topic Error:KeeperErrorCode = NoNode for /config/topics/my-topic 11:30:55.442 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:setData cxid:0x64 zxid:0x29 txntype:-1 reqpath:n/a Error Path:/config/topics/__consumer_offsets Error:KeeperErrorCode = NoNode for /config/topics/__consumer_offsets 11:30:55.463 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:setData cxid:0x63 zxid:0x28 txntype:-1 reqpath:n/a 11:30:55.463 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.463 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 99,5 replyHeader:: 99,40,-101 request:: '/config/topics/my-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,-1 response:: 11:30:55.463 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x65 zxid:0x2a txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics 11:30:55.463 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:setData cxid:0x64 zxid:0x29 txntype:-1 reqpath:n/a 11:30:55.463 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.463 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 100,5 replyHeader:: 100,41,-101 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,-1 response:: 11:30:55.463 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x66 zxid:0x2b txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x65 zxid:0x2a txntype:-1 reqpath:n/a 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -110 11:30:55.478 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 101,1 replyHeader:: 101,42,-110 request:: '/config/topics,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x66 zxid:0x2b txntype:-1 reqpath:n/a 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -110 11:30:55.478 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 102,1 replyHeader:: 102,43,-110 request:: '/config/topics,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x67 zxid:0x2c txntype:1 reqpath:n/a 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x67 zxid:0x2c txntype:1 reqpath:n/a 11:30:55.478 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 103,1 replyHeader:: 103,44,0 request:: '/config/topics/my-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/my-topic 11:30:55.478 [kafka-request-handler-1] INFO kafka.admin.AdminUtils$ - Topic creation {"version":1,"partitions":{"0":[2]}} 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x68 zxid:0x2d txntype:1 reqpath:n/a 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x68 zxid:0x2d txntype:1 reqpath:n/a 11:30:55.478 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 104,1 replyHeader:: 104,45,0 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/__consumer_offsets 11:30:55.478 [kafka-request-handler-2] INFO kafka.admin.AdminUtils$ - Topic creation {"version":1,"partitions":{"45":[1],"34":[2],"12":[1],"8":[0],"19":[2],"23":[0],"4":[2],"40":[2],"15":[1],"11":[0],"9":[1],"44":[0],"33":[1],"22":[2],"26":[0],"37":[2],"13":[2],"46":[2],"24":[1],"35":[0],"16":[2],"5":[0],"10":[2],"48":[1],"21":[1],"43":[2],"32":[0],"49":[2],"6":[1],"36":[1],"1":[2],"39":[1],"17":[0],"25":[2],"14":[0],"47":[0],"31":[2],"42":[1],"0":[1],"20":[0],"27":[1],"2":[0],"38":[0],"18":[1],"30":[1],"7":[2],"29":[0],"41":[0],"3":[1],"28":[2]}} 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x69 zxid:0x2e txntype:1 reqpath:n/a 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x69 zxid:0x2e txntype:1 reqpath:n/a 11:30:55.478 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification sessionid:0x15e7aca904b0001 11:30:55.478 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for sessionid 0x15e7aca904b0001 11:30:55.478 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 11:30:55.478 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkEventThread - New event: ZkEvent[Children of /brokers/topics changed sent to kafka.controller.TopicChangeListener@2812544] 11:30:55.478 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Leaving process event 11:30:55.478 [ZkClient-EventThread-78-localhost:63309] DEBUG org.I0Itec.zkclient.ZkEventThread - Delivering event #5 ZkEvent[Children of /brokers/topics changed sent to kafka.controller.TopicChangeListener@2812544] 11:30:55.478 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 105,1 
replyHeader:: 105,46,0 request:: '/brokers/topics/my-topic,#7b2276657273696f6e223a312c22706172746974696f6e73223a7b2230223a5b325d7d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/my-topic 11:30:55.478 [kafka-request-handler-1] DEBUG kafka.admin.AdminUtils$ - Updated path /brokers/topics/my-topic with {"version":1,"partitions":{"0":[2]}} for replica assignment 11:30:55.478 [kafka-request-handler-1] INFO kafka.server.KafkaApis - [KafkaApi-0] Auto creation of topic my-topic with 1 partitions and replication factor 1 is successful 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x6a zxid:0x2f txntype:1 reqpath:n/a 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x6a zxid:0x2f txntype:1 reqpath:n/a 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x6b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:55.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x6b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:55.478 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 106,1 replyHeader:: 106,47,0 request:: '/brokers/topics/__consumer_offsets,#7b2276657273696f6e223a312c22706172746974696f6e73223a7b223435223a5b315d2c223334223a5b325d2c223132223a5b315d2c2238223a5b305d2c223139223a5b325d2c223233223a5b305d2c2234223a5b325d2c223430223a5b325d2c223135223a5b315d2c223131223a5b305d2c2239223a5b315d2c223434223a5b305d2c223333223a5b315d2c223232223a5b325d2c223236223a5b305d2c223337223a5b325d2c223133223a5b325d2c223436223a5b325d2c223234223a5b315d2c223335223a5b305d2c223136223a5b325d2c2235223a5b305d2c223130223a5b325d2c223438223a5b315d2c223231223a5b315d2c223433223a5b325d2c223332223a5b305d2c223439223a5b325d2c2236223a5b315d2c223336223a5b315d2c2231223a5b325d2c223339223a5b315d2c223137223a5b305d2c223235223a5b325d2c223134223a5b305d2c223437223a5b305d2c223331223a5b325d2c223432223a5b315d2c2230223a5b315d2c223230223a5b305d2c223237223a5b315d2c2232223a5b305d2c223338223a5b305d2c223138223a5b315d2c223330223a5b315d2c2237223a5b325d2c223239223a5b305d2c223431223a5b305d2c2233223a5b315d2c223238223a5b325d7d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets 11:30:55.478 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 1 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:55.478 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 2 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63325 (id: 0 rack: null)], partitions = []) 11:30:55.478 [kafka-request-handler-2] DEBUG kafka.admin.AdminUtils$ - Updated path /brokers/topics/__consumer_offsets with 
{"version":1,"partitions":{"45":[1],"34":[2],"12":[1],"8":[0],"19":[2],"23":[0],"4":[2],"40":[2],"15":[1],"11":[0],"9":[1],"44":[0],"33":[1],"22":[2],"26":[0],"37":[2],"13":[2],"46":[2],"24":[1],"35":[0],"16":[2],"5":[0],"10":[2],"48":[1],"21":[1],"43":[2],"32":[0],"49":[2],"6":[1],"36":[1],"1":[2],"39":[1],"17":[0],"25":[2],"14":[0],"47":[0],"31":[2],"42":[1],"0":[1],"20":[0],"27":[1],"2":[0],"38":[0],"18":[1],"30":[1],"7":[2],"29":[0],"41":[0],"3":[1],"28":[2]}} for replica assignment 11:30:55.478 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 107,3 replyHeader:: 107,47,0 request:: '/brokers/topics,T response:: s{7,7,1505298652598,1505298652598,0,2,0,0,0,2,47} 11:30:55.494 [kafka-request-handler-2] INFO kafka.server.KafkaApis - [KafkaApi-0] Auto creation of topic __consumer_offsets with 50 partitions and replication factor 1 is successful 11:30:55.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x6c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:55.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x6c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:55.494 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 108,8 replyHeader:: 108,47,0 request:: '/brokers/topics,T response:: v{'my-topic,'__consumer_offsets} 11:30:55.494 [ZkClient-EventThread-78-localhost:63309] DEBUG org.I0Itec.zkclient.ZkEventThread - Delivering event #5 done 11:30:55.494 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298655494, latencyMs=110, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=0,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:55.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x6d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.494 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 
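Both my-topic and __consumer_offsets are auto-created on first use here, which is why the producer briefly sees LEADER_NOT_AVAILABLE and the consumer's first coordinator lookup returns COORDINATOR_NOT_AVAILABLE. A test can sidestep that churn by creating its topics up front against the embedded ZooKeeper (localhost:63309 in this run); a sketch using the AdminUtils API already visible in the log, with arbitrary timeouts:

```scala
import java.util.Properties

import kafka.admin.AdminUtils
import kafka.utils.ZkUtils

object CreateTestTopics {
  def main(args: Array[String]): Unit = {
    // The embedded ZooKeeper runs at localhost:63309 in this log; timeouts are arbitrary.
    val zkUtils = ZkUtils("localhost:63309", 10000, 10000, isZkSecurityEnabled = false)
    try {
      // One partition, replication factor 1, matching what auto-creation produced above.
      AdminUtils.createTopic(zkUtils, "my-topic", 1, 1, new Properties())
    } finally zkUtils.close()
  }
}
```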
11:30:55.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x6d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.494 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:55.494 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 109,4 replyHeader:: 109,47,0 request:: '/brokers/topics/my-topic,F response:: #7b2276657273696f6e223a312c22706172746974696f6e73223a7b2230223a5b325d7d7d,s{46,46,1505298655478,1505298655478,0,0,0,0,36,0,46} 11:30:55.494 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [my-topic], partition [0] are [List(2)] 11:30:55.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x6e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x6e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.494 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 110,4 replyHeader:: 110,47,0 request:: '/brokers/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22706172746974696f6e73223a7b223435223a5b315d2c223334223a5b325d2c223132223a5b315d2c2238223a5b305d2c223139223a5b325d2c223233223a5b305d2c2234223a5b325d2c223430223a5b325d2c223135223a5b315d2c223131223a5b305d2c2239223a5b315d2c223434223a5b305d2c223333223a5b315d2c223232223a5b325d2c223236223a5b305d2c223337223a5b325d2c223133223a5b325d2c223436223a5b325d2c223234223a5b315d2c223335223a5b305d2c223136223a5b325d2c2235223a5b305d2c223130223a5b325d2c223438223a5b315d2c223231223a5b315d2c223433223a5b325d2c223332223a5b305d2c223439223a5b325d2c2236223a5b315d2c223336223a5b315d2c2231223a5b325d2c223339223a5b315d2c223137223a5b305d2c223235223a5b325d2c223134223a5b305d2c223437223a5b305d2c223331223a5b325d2c223432223a5b315d2c2230223a5b315d2c223230223a5b305d2c223237223a5b315d2c2232223a5b305d2c223338223a5b305d2c223138223a5b315d2c223330223a5b315d2c2237223a5b325d2c223239223a5b305d2c223431223a5b305d2c2233223a5b315d2c223238223a5b325d7d7d,s{47,47,1505298655478,1505298655478,0,0,0,0,468,0,47} 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initialize connection to node 2 for sending metadata request 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 2 at 127.0.0.1:63361. 
11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2.bytes-sent 11:30:55.510 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63380 on /127.0.0.1:63361 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2.bytes-received 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2.latency 11:30:55.510 [kafka-network-thread-2-ListenerName(PLAINTEXT)-PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:63380 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 2. Fetching API versions. 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node 2. 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initialize connection to node 0 for sending metadata request 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at 127.0.0.1:63325. 
11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-sent 11:30:55.510 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63381 on /127.0.0.1:63325 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-received 11:30:55.510 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:63381 11:30:55.510 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-:exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.latency 11:30:55.510 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name RequestThrottleTime-:exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 0 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 0. Fetching API versions. 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node 0. 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initialize connection to node 1 for sending metadata request 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 1 at 127.0.0.1:63344. 
11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-1.bytes-sent 11:30:55.510 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63382 on /127.0.0.1:63344 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-1.bytes-received 11:30:55.510 [kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:63382 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-1.latency 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node 2: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 1. Fetching API versions. 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node 1. 
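While the group exactly-once is still locating its coordinator, it is worth contrasting the two isolation levels in this log: the restore consumer runs with read_uncommitted, whereas a consumer that verifies exactly-once output would normally use read_committed so it only sees records from committed transactions. A hedged sketch of such a verification consumer (output topic and group id are assumptions):

```scala
import java.util.{Collections, Properties}

import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.serialization.StringDeserializer

import scala.collection.JavaConverters._

object VerifyCommittedOutput {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "exactly-once-verifier") // assumed group id
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
    // Only records from committed transactions are returned, unlike the
    // read_uncommitted restore consumer configured earlier in the log.
    props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)

    val consumer = new KafkaConsumer[String, String](props)
    try {
      consumer.subscribe(Collections.singletonList("my-output-topic")) // assumed topic name
      consumer.poll(5000L).asScala.foreach(r => println(s"${r.key} -> ${r.value}"))
    } finally consumer.close()
  }
}
```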
11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 2 11:30:55.510 [kafka-request-handler-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-:exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node 0: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:55.510 [kafka-request-handler-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name RequestThrottleTime-:exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 3 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63325 (id: 0 rack: null)], partitions = []) 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63361 (id: 2 rack: null) 11:30:55.510 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node 1: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], 
AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [45] are [List(1)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [34] are [List(2)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [12] are [List(1)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [8] are [List(0)] 11:30:55.510 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.510 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [19] are [List(2)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [23] are [List(0)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [4] are [List(2)] 11:30:55.510 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 31,8 replyHeader:: 31,47,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [40] are [List(2)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [15] are [List(1)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [11] are [List(0)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [9] are [List(1)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [44] are [List(0)] 11:30:55.510 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.510 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [33] are [List(1)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [22] are [List(2)] 11:30:55.510 [controller-event-thread] DEBUG 
kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [26] are [List(0)] 11:30:55.510 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 32,4 replyHeader:: 32,47,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [37] are [List(2)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [13] are [List(2)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [46] are [List(2)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [24] are [List(1)] 11:30:55.510 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [35] are [List(0)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [16] are [List(2)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [5] are [List(0)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [10] are [List(2)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [48] are [List(1)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [21] are [List(1)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [43] are [List(2)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [32] are [List(0)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [49] are [List(2)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [6] are [List(1)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [36] are [List(1)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [1] are [List(2)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [39] are [List(1)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [17] are [List(0)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], 
partition [25] are [List(2)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [14] are [List(0)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [47] are [List(0)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [31] are [List(2)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [42] are [List(1)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [0] are [List(1)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [20] are [List(0)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [27] are [List(1)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [2] are [List(0)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [38] are [List(0)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [18] are [List(1)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [30] are [List(1)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [7] are [List(2)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [29] are [List(0)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [41] are [List(0)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [3] are [List(1)] 11:30:55.525 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__consumer_offsets], partition [28] are [List(2)] 11:30:55.525 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x21 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.525 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x21 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.525 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 33,4 replyHeader:: 33,47,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:55.525 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.525 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.525 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 34,4 replyHeader:: 34,47,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:55.525 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: New topics: [Set(my-topic, __consumer_offsets)], deleted topics: [Set()], new partition replica assignment [Map([__consumer_offsets,19] -> List(2), [__consumer_offsets,30] -> List(1), [__consumer_offsets,47] -> List(0), [__consumer_offsets,29] -> List(0), [__consumer_offsets,41] -> List(0), [__consumer_offsets,39] -> List(1), [__consumer_offsets,10] -> List(2), [__consumer_offsets,17] -> List(0), [__consumer_offsets,14] -> List(0), [__consumer_offsets,40] -> List(2), [__consumer_offsets,18] -> List(1), [__consumer_offsets,26] -> List(0), [__consumer_offsets,0] -> List(1), [__consumer_offsets,24] -> List(1), [__consumer_offsets,33] -> List(1), [__consumer_offsets,20] -> List(0), [__consumer_offsets,3] -> List(1), [__consumer_offsets,21] -> List(1), [__consumer_offsets,5] -> List(0), [__consumer_offsets,22] -> List(2), [__consumer_offsets,12] -> List(1), [__consumer_offsets,8] -> List(0), [__consumer_offsets,23] -> List(0), [__consumer_offsets,15] -> List(1), [__consumer_offsets,48] -> List(1), [__consumer_offsets,11] -> List(0), [__consumer_offsets,13] -> List(2), [my-topic,0] -> List(2), [__consumer_offsets,49] -> List(2), [__consumer_offsets,6] -> List(1), [__consumer_offsets,28] -> List(2), [__consumer_offsets,4] -> List(2), [__consumer_offsets,37] -> List(2), [__consumer_offsets,31] -> List(2), [__consumer_offsets,44] -> List(0), [__consumer_offsets,42] -> List(1), [__consumer_offsets,34] -> List(2), [__consumer_offsets,46] -> List(2), [__consumer_offsets,25] -> List(2), [__consumer_offsets,45] -> List(1), [__consumer_offsets,27] -> List(1), [__consumer_offsets,32] -> List(0), [__consumer_offsets,43] -> List(2), [__consumer_offsets,36] -> List(1), [__consumer_offsets,35] -> List(0), [__consumer_offsets,7] -> List(2), [__consumer_offsets,9] -> List(1), [__consumer_offsets,38] -> List(0), [__consumer_offsets,1] -> List(2), [__consumer_offsets,16] -> List(2), [__consumer_offsets,2] -> List(0))] 11:30:55.525 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.525 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown 
reqpath:/brokers/topics/__consumer_offsets 11:30:55.541 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 35,3 replyHeader:: 35,47,0 request:: '/brokers/topics/__consumer_offsets,F response:: s{47,47,1505298655478,1505298655478,0,0,0,0,468,0,47} 11:30:55.541 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: New topic creation callback for [__consumer_offsets,19],[__consumer_offsets,30],[__consumer_offsets,47],[__consumer_offsets,29],[__consumer_offsets,41],[__consumer_offsets,39],[__consumer_offsets,10],[__consumer_offsets,17],[__consumer_offsets,14],[__consumer_offsets,40],[__consumer_offsets,18],[__consumer_offsets,26],[__consumer_offsets,0],[__consumer_offsets,24],[__consumer_offsets,33],[__consumer_offsets,20],[__consumer_offsets,3],[__consumer_offsets,21],[__consumer_offsets,5],[__consumer_offsets,22],[__consumer_offsets,12],[__consumer_offsets,8],[__consumer_offsets,23],[__consumer_offsets,15],[__consumer_offsets,48],[__consumer_offsets,11],[__consumer_offsets,13],[my-topic,0],[__consumer_offsets,49],[__consumer_offsets,6],[__consumer_offsets,28],[__consumer_offsets,4],[__consumer_offsets,37],[__consumer_offsets,31],[__consumer_offsets,44],[__consumer_offsets,42],[__consumer_offsets,34],[__consumer_offsets,46],[__consumer_offsets,25],[__consumer_offsets,45],[__consumer_offsets,27],[__consumer_offsets,32],[__consumer_offsets,43],[__consumer_offsets,36],[__consumer_offsets,35],[__consumer_offsets,7],[__consumer_offsets,9],[__consumer_offsets,38],[__consumer_offsets,1],[__consumer_offsets,16],[__consumer_offsets,2] 11:30:55.541 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298655541, latencyMs=31, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=7,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:55.541 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 
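The GroupCoordinator lookup for group exactly-once failing with COORDINATOR_NOT_AVAILABLE is expected at this point: the __consumer_offsets topic has only just been created and its partitions have no leaders yet (the controller is still reading "leaderISR None" below), so the client refreshes its metadata and retries, as the next entry shows.

The getData replies on /brokers/ids/0, /brokers/ids/1 and /brokers/ids/2 above carry each broker's registration znode, which ZooKeeper's ClientCnxn prints as raw hex. A minimal Scala sketch to decode one of them (not part of this project; the payload is copied verbatim from the /brokers/ids/0 reply above, with the leading '#' dropped):

object DecodeBrokerRegistration extends App {
  // hex payload from the getData reply on /brokers/ids/0 above
  val hex = "7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d"
  // parse two hex digits at a time into bytes and print the UTF-8 string
  val bytes = hex.grouped(2).map(Integer.parseInt(_, 16).toByte).toArray
  println(new String(bytes, "UTF-8"))
  // prints the broker's registration JSON:
  // {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://127.0.0.1:63325"],
  //  "jmx_port":-1,"host":"127.0.0.1","timestamp":"1505298653236","port":63325,"version":4}
}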
11:30:55.541 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:55.541 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x6f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.541 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x6f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.541 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 111,3 replyHeader:: 111,47,0 request:: '/brokers/topics/my-topic,T response:: s{46,46,1505298655478,1505298655478,0,0,0,0,36,0,46} 11:30:55.541 [controller-event-thread] DEBUG org.I0Itec.zkclient.ZkClient - Subscribed data changes for /brokers/topics/my-topic 11:30:55.541 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x70 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.541 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x70 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.541 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 112,3 replyHeader:: 112,47,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{47,47,1505298655478,1505298655478,0,0,0,0,468,0,47} 11:30:55.541 [controller-event-thread] DEBUG org.I0Itec.zkclient.ZkClient - Subscribed data changes for /brokers/topics/__consumer_offsets 11:30:55.541 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: New partition creation callback for [__consumer_offsets,19],[__consumer_offsets,30],[__consumer_offsets,47],[__consumer_offsets,29],[__consumer_offsets,41],[__consumer_offsets,39],[__consumer_offsets,10],[__consumer_offsets,17],[__consumer_offsets,14],[__consumer_offsets,40],[__consumer_offsets,18],[__consumer_offsets,26],[__consumer_offsets,0],[__consumer_offsets,24],[__consumer_offsets,33],[__consumer_offsets,20],[__consumer_offsets,3],[__consumer_offsets,21],[__consumer_offsets,5],[__consumer_offsets,22],[__consumer_offsets,12],[__consumer_offsets,8],[__consumer_offsets,23],[__consumer_offsets,15],[__consumer_offsets,48],[__consumer_offsets,11],[__consumer_offsets,13],[my-topic,0],[__consumer_offsets,49],[__consumer_offsets,6],[__consumer_offsets,28],[__consumer_offsets,4],[__consumer_offsets,37],[__consumer_offsets,31],[__consumer_offsets,44],[__consumer_offsets,42],[__consumer_offsets,34],[__consumer_offsets,46],[__consumer_offsets,25],[__consumer_offsets,45],[__consumer_offsets,27],[__consumer_offsets,32],[__consumer_offsets,43],[__consumer_offsets,36],[__consumer_offsets,35],[__consumer_offsets,7],[__consumer_offsets,9],[__consumer_offsets,38],[__consumer_offsets,1],[__consumer_offsets,16],[__consumer_offsets,2] 11:30:55.541 [controller-event-thread] INFO kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Invoking state 
change to NewPartition for partitions [__consumer_offsets,19],[__consumer_offsets,30],[__consumer_offsets,47],[__consumer_offsets,29],[__consumer_offsets,41],[__consumer_offsets,39],[__consumer_offsets,10],[__consumer_offsets,17],[__consumer_offsets,14],[__consumer_offsets,40],[__consumer_offsets,18],[__consumer_offsets,26],[__consumer_offsets,0],[__consumer_offsets,24],[__consumer_offsets,33],[__consumer_offsets,20],[__consumer_offsets,3],[__consumer_offsets,21],[__consumer_offsets,5],[__consumer_offsets,22],[__consumer_offsets,12],[__consumer_offsets,8],[__consumer_offsets,23],[__consumer_offsets,15],[__consumer_offsets,48],[__consumer_offsets,11],[__consumer_offsets,13],[my-topic,0],[__consumer_offsets,49],[__consumer_offsets,6],[__consumer_offsets,28],[__consumer_offsets,4],[__consumer_offsets,37],[__consumer_offsets,31],[__consumer_offsets,44],[__consumer_offsets,42],[__consumer_offsets,34],[__consumer_offsets,46],[__consumer_offsets,25],[__consumer_offsets,45],[__consumer_offsets,27],[__consumer_offsets,32],[__consumer_offsets,43],[__consumer_offsets,36],[__consumer_offsets,35],[__consumer_offsets,7],[__consumer_offsets,9],[__consumer_offsets,38],[__consumer_offsets,1],[__consumer_offsets,16],[__consumer_offsets,2] 11:30:55.541 [controller-event-thread] INFO kafka.controller.ReplicaStateMachine - [Replica state machine on controller 0]: Invoking state change to NewReplica for replicas [Topic=__consumer_offsets,Partition=48,Replica=1],[Topic=__consumer_offsets,Partition=21,Replica=1],[Topic=__consumer_offsets,Partition=18,Replica=1],[Topic=__consumer_offsets,Partition=9,Replica=1],[Topic=__consumer_offsets,Partition=39,Replica=1],[Topic=__consumer_offsets,Partition=22,Replica=2],[Topic=__consumer_offsets,Partition=35,Replica=0],[Topic=__consumer_offsets,Partition=13,Replica=2],[Topic=__consumer_offsets,Partition=34,Replica=2],[Topic=__consumer_offsets,Partition=40,Replica=2],[Topic=__consumer_offsets,Partition=37,Replica=2],[Topic=__consumer_offsets,Partition=2,Replica=0],[Topic=__consumer_offsets,Partition=11,Replica=0],[Topic=__consumer_offsets,Partition=29,Replica=0],[Topic=__consumer_offsets,Partition=27,Replica=1],[Topic=__consumer_offsets,Partition=6,Replica=1],[Topic=__consumer_offsets,Partition=30,Replica=1],[Topic=__consumer_offsets,Partition=42,Replica=1],[Topic=__consumer_offsets,Partition=26,Replica=0],[Topic=__consumer_offsets,Partition=17,Replica=0],[Topic=__consumer_offsets,Partition=3,Replica=1],[Topic=__consumer_offsets,Partition=28,Replica=2],[Topic=__consumer_offsets,Partition=7,Replica=2],[Topic=__consumer_offsets,Partition=43,Replica=2],[Topic=__consumer_offsets,Partition=10,Replica=2],[Topic=__consumer_offsets,Partition=41,Replica=0],[Topic=__consumer_offsets,Partition=20,Replica=0],[Topic=__consumer_offsets,Partition=4,Replica=2],[Topic=__consumer_offsets,Partition=45,Replica=1],[Topic=__consumer_offsets,Partition=46,Replica=2],[Topic=__consumer_offsets,Partition=47,Replica=0],[Topic=__consumer_offsets,Partition=8,Replica=0],[Topic=__consumer_offsets,Partition=38,Replica=0],[Topic=__consumer_offsets,Partition=49,Replica=2],[Topic=__consumer_offsets,Partition=1,Replica=2],[Topic=__consumer_offsets,Partition=19,Replica=2],[Topic=__consumer_offsets,Partition=0,Replica=1],[Topic=__consumer_offsets,Partition=33,Replica=1],[Topic=__consumer_offsets,Partition=5,Replica=0],[Topic=__consumer_offsets,Partition=31,Replica=2],[Topic=__consumer_offsets,Partition=25,Replica=2],[Topic=__consumer_offsets,Partition=44,Replica=0],[Topic=my-topic,Partition=0,Replica=2],[Topic=__con
sumer_offsets,Partition=36,Replica=1],[Topic=__consumer_offsets,Partition=12,Replica=1],[Topic=__consumer_offsets,Partition=16,Replica=2],[Topic=__consumer_offsets,Partition=15,Replica=1],[Topic=__consumer_offsets,Partition=23,Replica=0],[Topic=__consumer_offsets,Partition=32,Replica=0],[Topic=__consumer_offsets,Partition=14,Replica=0],[Topic=__consumer_offsets,Partition=24,Replica=1] 11:30:55.557 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x71 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/48/state 11:30:55.557 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x71 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/48/state 11:30:55.557 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 113,4 replyHeader:: 113,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/48/state,F response:: 11:30:55.558 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-48 11:30:55.561 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x72 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/21/state 11:30:55.561 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x72 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/21/state 11:30:55.562 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 114,4 replyHeader:: 114,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/21/state,F response:: 11:30:55.562 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-21 11:30:55.562 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x73 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/18/state 11:30:55.562 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x73 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/18/state 11:30:55.562 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 115,4 replyHeader:: 115,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/18/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-18 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x74 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/9/state 11:30:55.563 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x74 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/9/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 116,4 replyHeader:: 116,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/9/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-9 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x75 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/39/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x75 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/39/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 117,4 replyHeader:: 117,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/39/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-39 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x76 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/22/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x76 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/22/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 118,4 replyHeader:: 118,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/22/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-22 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x77 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/35/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x77 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/35/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 119,4 replyHeader:: 119,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/35/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-35 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor 
- Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x78 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/13/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x78 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/13/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 120,4 replyHeader:: 120,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/13/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-13 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x79 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/34/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x79 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/34/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 121,4 replyHeader:: 121,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/34/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-34 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x7a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/40/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x7a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/40/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 122,4 replyHeader:: 122,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/40/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-40 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x7b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/37/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x7b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/37/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 123,4 replyHeader:: 123,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/37/state,F 
response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-37 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x7c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/2/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x7c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/2/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 124,4 replyHeader:: 124,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/2/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-2 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x7d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/11/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x7d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/11/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 125,4 replyHeader:: 125,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/11/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-11 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x7e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/29/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x7e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/29/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 126,4 replyHeader:: 126,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/29/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-29 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x7f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/27/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x7f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/27/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn 
- Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 127,4 replyHeader:: 127,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/27/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-27 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x80 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/6/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x80 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/6/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 128,4 replyHeader:: 128,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/6/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-6 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x81 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/30/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x81 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/30/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 129,4 replyHeader:: 129,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/30/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-30 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x82 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/42/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x82 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/42/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 130,4 replyHeader:: 130,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/42/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-42 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x83 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/26/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData 
cxid:0x83 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/26/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 131,4 replyHeader:: 131,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/26/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-26 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x84 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/17/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x84 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/17/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 132,4 replyHeader:: 132,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/17/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-17 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x85 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/3/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x85 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/3/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 133,4 replyHeader:: 133,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/3/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-3 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x86 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/28/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x86 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/28/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 134,4 replyHeader:: 134,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/28/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-28 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x87 
zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/7/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x87 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/7/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 135,4 replyHeader:: 135,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/7/state,F response:: 11:30:55.563 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-7 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x88 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/43/state 11:30:55.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x88 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/43/state 11:30:55.563 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 136,4 replyHeader:: 136,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/43/state,F response:: 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initialize connection to node 0 for sending metadata request 11:30:55.579 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-43 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at 127.0.0.1:63325. 
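For context on the client ids in this log: the consumer identifying itself as exactly-once-<uuid>-StreamThread-1-consumer looks up the coordinator for group exactly-once because the group id is simply the Streams application.id, and the API-version tables negotiated with each broker include the transactional request types (InitProducerId, AddPartitionsToTxn, AddOffsetsToTxn, EndTxn, TxnOffsetCommit) that exactly-once processing relies on. A hypothetical configuration sketch along those lines (the test's actual settings are not visible in this log; the broker address is one of the embedded brokers registered above and its port changes per run):

import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

object ExactlyOnceConfigSketch extends App {
  val props = new Properties()
  // application.id doubles as the consumer group id, "exactly-once", seen in the coordinator lookup
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")
  // any broker of the embedded cluster; ports are ephemeral in this test run
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
  // switches Streams onto the transactional code paths (InitProducerId, AddOffsetsToTxn, EndTxn, ...)
  props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)
}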
11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x89 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/10/state 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-sent 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x89 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/10/state 11:30:55.579 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 137,4 replyHeader:: 137,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/10/state,F response:: 11:30:55.579 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63383 on /127.0.0.1:63325 and assigned it to processor 2, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-received 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.latency 11:30:55.579 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-10 11:30:55.579 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-2] DEBUG kafka.network.Processor - Processor 2 listening to new connection from /127.0.0.1:63383 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 0 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 0. Fetching API versions. 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node 0. 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initialize connection to node 2 for sending metadata request 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 2 at 127.0.0.1:63361. 
11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x8a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/41/state 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x8a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/41/state 11:30:55.579 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 138,4 replyHeader:: 138,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/41/state,F response:: 11:30:55.579 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-41 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2.bytes-sent 11:30:55.579 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63384 on /127.0.0.1:63361 and assigned it to processor 2, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2.bytes-received 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x8b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/20/state 11:30:55.579 [kafka-network-thread-2-ListenerName(PLAINTEXT)-PLAINTEXT-2] DEBUG kafka.network.Processor - Processor 2 listening to new connection from /127.0.0.1:63384 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2.latency 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x8b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/20/state 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2 11:30:55.579 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 139,4 replyHeader:: 139,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/20/state,F response:: 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 2. Fetching API versions. 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node 2. 
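producer-1 above is presumably the test's input producer, bootstrapping against the three embedded brokers: it opens a connection to each node, negotiates API versions, and then sends a metadata request restricted to my-topic (a few entries below), since a producer only fetches metadata for topics it is about to write to. A minimal sketch of a producer that would generate this sequence; the broker addresses and topic name are taken from the log, everything else (serializers, key and value) is assumed:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object MyTopicProducerSketch extends App {
  val props = new Properties()
  // the three embedded brokers registered under /brokers/ids/{0,1,2} above
  props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325,127.0.0.1:63344,127.0.0.1:63361")
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

  val producer = new KafkaProducer[String, String](props)
  // the first send to a topic is what triggers "Sending metadata request (type=MetadataRequest, topics=my-topic)"
  producer.send(new ProducerRecord[String, String]("my-topic", "key", "value")).get()
  producer.close()
}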
11:30:55.579 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-20 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initialize connection to node 1 for sending metadata request 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 1 at 127.0.0.1:63344. 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x8c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/4/state 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x8c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/4/state 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-1.bytes-sent 11:30:55.579 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 140,4 replyHeader:: 140,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/4/state,F response:: 11:30:55.579 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63385 on /127.0.0.1:63344 and assigned it to processor 2, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-1.bytes-received 11:30:55.579 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-4 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-1.latency 11:30:55.579 [kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-2] DEBUG kafka.network.Processor - Processor 2 listening to new connection from /127.0.0.1:63385 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x8d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/45/state 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x8d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/45/state 11:30:55.579 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 141,4 replyHeader:: 141,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/45/state,F response:: 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node 0: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 
to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:55.579 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-:producer-1 11:30:55.579 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-45 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 1. Fetching API versions. 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node 1. 11:30:55.579 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name RequestThrottleTime-:producer-1 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node 0 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x8e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/46/state 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x8e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/46/state 11:30:55.579 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 142,4 replyHeader:: 142,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/46/state,F response:: 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node 2: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 
0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:55.579 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-46 11:30:55.579 [kafka-request-handler-3] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-:producer-1 11:30:55.579 [kafka-request-handler-3] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name RequestThrottleTime-:producer-1 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x8f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/47/state 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x8f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/47/state 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x90 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x90 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.579 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 143,4 replyHeader:: 143,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/47/state,F response:: 11:30:55.579 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-47 11:30:55.579 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 144,8 replyHeader:: 144,47,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:55.579 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node 1: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 
[usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x91 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/8/state 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x91 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/8/state 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x92 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x92 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.579 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 145,4 replyHeader:: 145,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/8/state,F response:: 11:30:55.579 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-8 11:30:55.579 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 146,4 replyHeader:: 146,47,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x93 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/38/state 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x93 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/38/state 11:30:55.579 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 147,4 replyHeader:: 147,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/38/state,F response:: 11:30:55.579 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-38 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 
type:getData cxid:0x94 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/49/state 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x94 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/49/state 11:30:55.579 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 148,4 replyHeader:: 148,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/49/state,F response:: 11:30:55.579 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-49 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x95 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/1/state 11:30:55.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x95 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/1/state 11:30:55.594 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 149,4 replyHeader:: 149,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/1/state,F response:: 11:30:55.594 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-1 11:30:55.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x96 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/19/state 11:30:55.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x96 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/19/state 11:30:55.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x97 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x97 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.594 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 150,4 replyHeader:: 150,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/19/state,F response:: 11:30:55.594 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-19 11:30:55.594 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 151,4 replyHeader:: 151,47,0 request:: '/brokers/ids/1,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:55.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x98 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/0/state 11:30:55.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x98 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/0/state 11:30:55.594 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 152,4 replyHeader:: 152,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/0/state,F response:: 11:30:55.594 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-0 11:30:55.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x99 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/33/state 11:30:55.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x99 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/33/state 11:30:55.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x9a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x9a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.594 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 153,4 replyHeader:: 153,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/33/state,F response:: 11:30:55.594 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-33 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 154,4 replyHeader:: 154,47,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:55.610 
[exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 1 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x9b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/5/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x9b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/5/state 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 155,4 replyHeader:: 155,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/5/state,F response:: 11:30:55.610 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-5 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x9c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/31/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x9c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/31/state 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 156,4 replyHeader:: 156,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/31/state,F response:: 11:30:55.610 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 4 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null)], partitions = []) 11:30:55.610 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-31 11:30:55.610 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63325 (id: 0 rack: null) 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x9d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/25/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x9d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/25/state 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 157,4 replyHeader:: 157,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/25/state,F response:: 11:30:55.610 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read 
leaderISR None for __consumer_offsets-25 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x9e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/44/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x9e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/44/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x9f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x9f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 158,4 replyHeader:: 158,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/44/state,F response:: 11:30:55.610 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-44 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 159,8 replyHeader:: 159,47,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0xa0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0xa0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xa1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic/partitions/0/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xa1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic/partitions/0/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xa2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xa2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 160,3 replyHeader:: 160,47,0 request:: '/brokers/topics/my-topic,T response:: s{46,46,1505298655478,1505298655478,0,0,0,0,36,0,46} 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null 
finished:false header:: 161,4 replyHeader:: 161,47,-101 request:: '/brokers/topics/my-topic/partitions/0/state,F response:: 11:30:55.610 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for my-topic-0 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 162,4 replyHeader:: 162,47,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xa3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/36/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xa3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/36/state 11:30:55.610 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 5 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:55.610 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 3 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = []) 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 163,4 replyHeader:: 163,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/36/state,F response:: 11:30:55.610 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-36 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xa4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/12/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xa4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/12/state 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 164,4 replyHeader:: 164,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/12/state,F response:: 11:30:55.610 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-12 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xa5 zxid:0xfffffffffffffffe txntype:unknown 
reqpath:/brokers/topics/__consumer_offsets/partitions/16/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xa5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/16/state 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 165,4 replyHeader:: 165,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/16/state,F response:: 11:30:55.610 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-16 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xa6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/15/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xa6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/15/state 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 166,4 replyHeader:: 166,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/15/state,F response:: 11:30:55.610 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-15 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xa7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/23/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xa7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/23/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xa8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xa8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 167,4 replyHeader:: 167,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/23/state,F response:: 11:30:55.610 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-23 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 168,4 replyHeader:: 168,47,0 request:: '/brokers/ids/1,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xa9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/32/state 11:30:55.610 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xa9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/32/state 11:30:55.610 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 169,4 replyHeader:: 169,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/32/state,F response:: 11:30:55.610 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-32 11:30:55.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xaa zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/14/state 11:30:55.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xaa zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/14/state 11:30:55.626 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 170,4 replyHeader:: 170,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/14/state,F response:: 11:30:55.626 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-14 11:30:55.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xab zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/24/state 11:30:55.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xab zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets/partitions/24/state 11:30:55.626 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 171,4 replyHeader:: 171,47,-101 request:: '/brokers/topics/__consumer_offsets/partitions/24/state,F response:: 11:30:55.626 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __consumer_offsets-24 11:30:55.626 [controller-event-thread] INFO kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Invoking state change to OnlinePartition for partitions 
[__consumer_offsets,19],[__consumer_offsets,30],[__consumer_offsets,47],[__consumer_offsets,29],[__consumer_offsets,41],[__consumer_offsets,39],[__consumer_offsets,10],[__consumer_offsets,17],[__consumer_offsets,14],[__consumer_offsets,40],[__consumer_offsets,18],[__consumer_offsets,26],[__consumer_offsets,0],[__consumer_offsets,24],[__consumer_offsets,33],[__consumer_offsets,20],[__consumer_offsets,3],[__consumer_offsets,21],[__consumer_offsets,5],[__consumer_offsets,22],[__consumer_offsets,12],[__consumer_offsets,8],[__consumer_offsets,23],[__consumer_offsets,15],[__consumer_offsets,48],[__consumer_offsets,11],[__consumer_offsets,13],[my-topic,0],[__consumer_offsets,49],[__consumer_offsets,6],[__consumer_offsets,28],[__consumer_offsets,4],[__consumer_offsets,37],[__consumer_offsets,31],[__consumer_offsets,44],[__consumer_offsets,42],[__consumer_offsets,34],[__consumer_offsets,46],[__consumer_offsets,25],[__consumer_offsets,45],[__consumer_offsets,27],[__consumer_offsets,32],[__consumer_offsets,43],[__consumer_offsets,36],[__consumer_offsets,35],[__consumer_offsets,7],[__consumer_offsets,9],[__consumer_offsets,38],[__consumer_offsets,1],[__consumer_offsets,16],[__consumer_offsets,2] 11:30:55.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xac zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xac zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.626 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 172,4 replyHeader:: 172,47,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:55.626 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,19] are: [List(2)] 11:30:55.626 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,19] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0xad zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0xad zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.626 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 173,3 replyHeader:: 173,47,0 request:: '/brokers/topics/__consumer_offsets,T 
response:: s{47,47,1505298655478,1505298655478,0,0,0,0,468,0,47} 11:30:55.626 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xae zxid:0x30 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/19 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/19 11:30:55.626 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298655626, latencyMs=16, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=9,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:55.626 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 11:30:55.626 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:55.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xae zxid:0x30 txntype:-1 reqpath:n/a 11:30:55.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.641 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 174,1 replyHeader:: 174,48,-101 request:: '/brokers/topics/__consumer_offsets/partitions/19/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.641 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xaf zxid:0x31 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions 11:30:55.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xaf zxid:0x31 txntype:-1 reqpath:n/a 11:30:55.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.641 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 175,1 replyHeader:: 175,49,-101 request:: '/brokers/topics/__consumer_offsets/partitions/19,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xb0 zxid:0x32 txntype:1 reqpath:n/a 11:30:55.641 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xb0 zxid:0x32 txntype:1 reqpath:n/a 11:30:55.641 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 176,1 replyHeader:: 176,50,0 request:: '/brokers/topics/__consumer_offsets/partitions,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions 11:30:55.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xb1 zxid:0x33 txntype:1 reqpath:n/a 11:30:55.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xb1 zxid:0x33 txntype:1 reqpath:n/a 11:30:55.641 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 177,1 replyHeader:: 177,51,0 request:: '/brokers/topics/__consumer_offsets/partitions/19,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/19 11:30:55.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xb2 zxid:0x34 txntype:1 reqpath:n/a 11:30:55.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xb2 zxid:0x34 txntype:1 reqpath:n/a 11:30:55.641 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 178,1 replyHeader:: 178,52,0 request:: '/brokers/topics/__consumer_offsets/partitions/19/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/19/state 11:30:55.661 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,30] are: [List(1)] 11:30:55.661 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,30] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.662 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xb3 zxid:0x35 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/30 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/30 11:30:55.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xb3 zxid:0x35 txntype:-1 reqpath:n/a 11:30:55.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.663 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 179,1 replyHeader:: 179,53,-101 request:: 
'/brokers/topics/__consumer_offsets/partitions/30/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xb4 zxid:0x36 txntype:1 reqpath:n/a 11:30:55.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xb4 zxid:0x36 txntype:1 reqpath:n/a 11:30:55.663 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 180,1 replyHeader:: 180,54,0 request:: '/brokers/topics/__consumer_offsets/partitions/30,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/30 11:30:55.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xb5 zxid:0x37 txntype:1 reqpath:n/a 11:30:55.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xb5 zxid:0x37 txntype:1 reqpath:n/a 11:30:55.663 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 181,1 replyHeader:: 181,55,0 request:: '/brokers/topics/__consumer_offsets/partitions/30/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/30/state 11:30:55.663 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,47] are: [List(0)] 11:30:55.663 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,47] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.663 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xb6 zxid:0x38 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/47 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/47 11:30:55.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xb6 zxid:0x38 txntype:-1 reqpath:n/a 11:30:55.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.663 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 182,1 replyHeader:: 182,56,-101 request:: '/brokers/topics/__consumer_offsets/partitions/47/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.663 [SyncThread:0] 
DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xb7 zxid:0x39 txntype:1 reqpath:n/a 11:30:55.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xb7 zxid:0x39 txntype:1 reqpath:n/a 11:30:55.663 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 183,1 replyHeader:: 183,57,0 request:: '/brokers/topics/__consumer_offsets/partitions/47,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/47 11:30:55.679 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xb8 zxid:0x3a txntype:1 reqpath:n/a 11:30:55.679 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xb8 zxid:0x3a txntype:1 reqpath:n/a 11:30:55.679 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 184,1 replyHeader:: 184,58,0 request:: '/brokers/topics/__consumer_offsets/partitions/47/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/47/state 11:30:55.679 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,29] are: [List(0)] 11:30:55.679 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,29] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.679 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xb9 zxid:0x3b txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/29 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/29 11:30:55.679 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xb9 zxid:0x3b txntype:-1 reqpath:n/a 11:30:55.679 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.679 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 185,1 replyHeader:: 185,59,-101 request:: '/brokers/topics/__consumer_offsets/partitions/29/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.679 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xba zxid:0x3c txntype:1 reqpath:n/a 11:30:55.679 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 
type:create cxid:0xba zxid:0x3c txntype:1 reqpath:n/a 11:30:55.679 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 186,1 replyHeader:: 186,60,0 request:: '/brokers/topics/__consumer_offsets/partitions/29,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/29 11:30:55.679 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xbb zxid:0x3d txntype:1 reqpath:n/a 11:30:55.679 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xbb zxid:0x3d txntype:1 reqpath:n/a 11:30:55.679 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 187,1 replyHeader:: 187,61,0 request:: '/brokers/topics/__consumer_offsets/partitions/29/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/29/state 11:30:55.679 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,41] are: [List(0)] 11:30:55.679 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,41] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.695 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xbc zxid:0x3e txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/41 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/41 11:30:55.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xbc zxid:0x3e txntype:-1 reqpath:n/a 11:30:55.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.695 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 188,1 replyHeader:: 188,62,-101 request:: '/brokers/topics/__consumer_offsets/partitions/41/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xbd zxid:0x3f txntype:1 reqpath:n/a 11:30:55.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xbd zxid:0x3f txntype:1 reqpath:n/a 11:30:55.695 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 189,1 
replyHeader:: 189,63,0 request:: '/brokers/topics/__consumer_offsets/partitions/41,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/41 11:30:55.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xbe zxid:0x40 txntype:1 reqpath:n/a 11:30:55.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xbe zxid:0x40 txntype:1 reqpath:n/a 11:30:55.695 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 190,1 replyHeader:: 190,64,0 request:: '/brokers/topics/__consumer_offsets/partitions/41/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/41/state 11:30:55.710 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 0 11:30:55.710 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node 2 11:30:55.710 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,39] are: [List(1)] 11:30:55.710 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,39] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.710 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xbf zxid:0x41 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/39 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/39 11:30:55.710 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 5 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null)], partitions = []) 11:30:55.710 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63361 (id: 2 rack: null) 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xbf zxid:0x41 txntype:-1 reqpath:n/a 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x24 
zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.710 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 191,1 replyHeader:: 191,65,-101 request:: '/brokers/topics/__consumer_offsets/partitions/39/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x25 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.710 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 36,8 replyHeader:: 36,65,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x25 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.710 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 37,8 replyHeader:: 37,65,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xc0 zxid:0x42 txntype:1 reqpath:n/a 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xc0 zxid:0x42 txntype:1 reqpath:n/a 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x26 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x26 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.710 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 192,1 replyHeader:: 192,66,0 request:: '/brokers/topics/__consumer_offsets/partitions/39,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/39 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.710 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 38,4 replyHeader:: 38,66,0 request:: '/brokers/ids/0,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:55.710 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 39,4 replyHeader:: 39,66,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xc1 zxid:0x43 txntype:1 reqpath:n/a 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xc1 zxid:0x43 txntype:1 reqpath:n/a 11:30:55.710 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 193,1 replyHeader:: 193,67,0 request:: '/brokers/topics/__consumer_offsets/partitions/39/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/39/state 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x28 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x28 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.710 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,10] are: [List(2)] 11:30:55.710 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 40,4 replyHeader:: 40,67,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:55.710 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 
0]: Initializing leader and isr for partition [__consumer_offsets,10] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.710 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xc2 zxid:0x44 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/10 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/10 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xc2 zxid:0x44 txntype:-1 reqpath:n/a 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.726 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 194,1 replyHeader:: 194,68,-101 request:: '/brokers/topics/__consumer_offsets/partitions/10/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.726 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 41,4 replyHeader:: 41,68,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xc3 zxid:0x45 txntype:1 reqpath:n/a 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xc3 zxid:0x45 txntype:1 reqpath:n/a 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.726 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 195,1 replyHeader:: 195,69,0 request:: '/brokers/topics/__consumer_offsets/partitions/10,,v{s{31,s{'world,'anyone}}},0 
response:: '/brokers/topics/__consumer_offsets/partitions/10 11:30:55.726 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 42,4 replyHeader:: 42,69,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xc4 zxid:0x46 txntype:1 reqpath:n/a 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xc4 zxid:0x46 txntype:1 reqpath:n/a 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.726 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 196,1 replyHeader:: 196,70,0 request:: '/brokers/topics/__consumer_offsets/partitions/10/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/10/state 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.726 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 43,4 replyHeader:: 43,70,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:55.726 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 44,3 replyHeader:: 44,70,0 request:: '/brokers/topics/my-topic,F response:: 
s{46,46,1505298655478,1505298655478,0,0,0,0,36,0,46} 11:30:55.726 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,17] are: [List(0)] 11:30:55.726 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,17] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.726 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 6 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:55.726 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 4 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63325 (id: 0 rack: null)], partitions = []) 11:30:55.746 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xc5 zxid:0x47 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/17 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/17 11:30:55.746 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xc5 zxid:0x47 txntype:-1 reqpath:n/a 11:30:55.746 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.746 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.746 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.746 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 197,1 replyHeader:: 197,71,-101 request:: '/brokers/topics/__consumer_offsets/partitions/17/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.746 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 45,3 replyHeader:: 45,71,0 request:: '/brokers/topics/__consumer_offsets,F response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:55.746 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298655746, latencyMs=36, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=11,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group 
exactly-once 11:30:55.746 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 11:30:55.746 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:55.758 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xc6 zxid:0x48 txntype:1 reqpath:n/a 11:30:55.758 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xc6 zxid:0x48 txntype:1 reqpath:n/a 11:30:55.758 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 198,1 replyHeader:: 198,72,0 request:: '/brokers/topics/__consumer_offsets/partitions/17,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/17 11:30:55.763 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xc7 zxid:0x49 txntype:1 reqpath:n/a 11:30:55.763 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xc7 zxid:0x49 txntype:1 reqpath:n/a 11:30:55.763 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 199,1 replyHeader:: 199,73,0 request:: '/brokers/topics/__consumer_offsets/partitions/17/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/17/state 11:30:55.763 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,14] are: [List(0)] 11:30:55.763 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,14] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.763 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xc8 zxid:0x4a txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/14 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/14 11:30:55.763 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xc8 zxid:0x4a txntype:-1 reqpath:n/a 11:30:55.763 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.763 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 200,1 replyHeader:: 200,74,-101 request:: 
'/brokers/topics/__consumer_offsets/partitions/14/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.763 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xc9 zxid:0x4b txntype:1 reqpath:n/a 11:30:55.763 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xc9 zxid:0x4b txntype:1 reqpath:n/a 11:30:55.763 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 201,1 replyHeader:: 201,75,0 request:: '/brokers/topics/__consumer_offsets/partitions/14,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/14 11:30:55.763 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xca zxid:0x4c txntype:1 reqpath:n/a 11:30:55.763 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xca zxid:0x4c txntype:1 reqpath:n/a 11:30:55.763 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 202,1 replyHeader:: 202,76,0 request:: '/brokers/topics/__consumer_offsets/partitions/14/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/14/state 11:30:55.763 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,40] are: [List(2)] 11:30:55.763 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,40] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.763 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xcb zxid:0x4d txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/40 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/40 11:30:55.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xcb zxid:0x4d txntype:-1 reqpath:n/a 11:30:55.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.779 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 203,1 replyHeader:: 203,77,-101 request:: '/brokers/topics/__consumer_offsets/partitions/40/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.779 [SyncThread:0] 
DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xcc zxid:0x4e txntype:1 reqpath:n/a 11:30:55.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xcc zxid:0x4e txntype:1 reqpath:n/a 11:30:55.779 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 204,1 replyHeader:: 204,78,0 request:: '/brokers/topics/__consumer_offsets/partitions/40,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/40 11:30:55.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xcd zxid:0x4f txntype:1 reqpath:n/a 11:30:55.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xcd zxid:0x4f txntype:1 reqpath:n/a 11:30:55.794 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 205,1 replyHeader:: 205,79,0 request:: '/brokers/topics/__consumer_offsets/partitions/40/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/40/state 11:30:55.794 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,18] are: [List(1)] 11:30:55.794 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,18] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.794 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xce zxid:0x50 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/18 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/18 11:30:55.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xce zxid:0x50 txntype:-1 reqpath:n/a 11:30:55.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.794 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 206,1 replyHeader:: 206,80,-101 request:: '/brokers/topics/__consumer_offsets/partitions/18/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xcf zxid:0x51 txntype:1 reqpath:n/a 11:30:55.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 
type:create cxid:0xcf zxid:0x51 txntype:1 reqpath:n/a 11:30:55.794 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 207,1 replyHeader:: 207,81,0 request:: '/brokers/topics/__consumer_offsets/partitions/18,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/18 11:30:55.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xd0 zxid:0x52 txntype:1 reqpath:n/a 11:30:55.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xd0 zxid:0x52 txntype:1 reqpath:n/a 11:30:55.794 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 208,1 replyHeader:: 208,82,0 request:: '/brokers/topics/__consumer_offsets/partitions/18/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/18/state 11:30:55.794 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,26] are: [List(0)] 11:30:55.794 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,26] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.794 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xd1 zxid:0x53 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/26 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/26 11:30:55.810 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 2 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xd1 zxid:0x53 txntype:-1 reqpath:n/a 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.810 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 209,1 replyHeader:: 209,83,-101 request:: '/brokers/topics/__consumer_offsets/partitions/26/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.810 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 6 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = []) 11:30:55.810 
[exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63325 (id: 0 rack: null) 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xd2 zxid:0x54 txntype:1 reqpath:n/a 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xd2 zxid:0x54 txntype:1 reqpath:n/a 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.810 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 210,1 replyHeader:: 210,84,0 request:: '/brokers/topics/__consumer_offsets/partitions/26,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/26 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.810 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 211,8 replyHeader:: 211,84,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xd4 zxid:0x55 txntype:1 reqpath:n/a 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xd4 zxid:0x55 txntype:1 reqpath:n/a 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.810 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 212,1 replyHeader:: 212,85,0 request:: '/brokers/topics/__consumer_offsets/partitions/26/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/26/state 11:30:55.810 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 213,4 replyHeader:: 213,85,0 request:: '/brokers/ids/0,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:55.810 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,0] are: [List(1)] 11:30:55.810 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,0] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.810 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xd6 zxid:0x56 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/0 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xd6 zxid:0x56 txntype:-1 reqpath:n/a 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.810 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 214,1 replyHeader:: 214,86,-101 request:: '/brokers/topics/__consumer_offsets/partitions/0/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.810 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 215,4 replyHeader:: 215,86,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:55.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xd8 zxid:0x57 txntype:1 reqpath:n/a 11:30:55.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xd8 
zxid:0x57 txntype:1 reqpath:n/a 11:30:55.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.826 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 216,1 replyHeader:: 216,87,0 request:: '/brokers/topics/__consumer_offsets/partitions/0,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/0 11:30:55.826 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 217,4 replyHeader:: 217,87,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:55.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xda zxid:0x58 txntype:1 reqpath:n/a 11:30:55.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xda zxid:0x58 txntype:1 reqpath:n/a 11:30:55.826 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 218,1 replyHeader:: 218,88,0 request:: '/brokers/topics/__consumer_offsets/partitions/0/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/0/state 11:30:55.826 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,24] are: [List(1)] 11:30:55.826 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,24] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.826 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xdb zxid:0x59 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/24 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/24 11:30:55.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xdb zxid:0x59 txntype:-1 reqpath:n/a 11:30:55.826 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.826 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 219,1 replyHeader:: 219,89,-101 request:: '/brokers/topics/__consumer_offsets/partitions/24/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.826 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 220,3 replyHeader:: 220,89,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:55.826 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298655826, latencyMs=16, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=13,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:55.826 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 
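The records around this point show the expected start-up churn of the embedded cluster: the controller is still creating the internal __consumer_offsets partitions, so the StreamThread's FindCoordinator requests for group exactly-once come back with COORDINATOR_NOT_AVAILABLE, and the producer's metadata requests for my-topic return LEADER_NOT_AVAILABLE; the clients refresh metadata and retry. (The long #7b22... payloads in the ZooKeeper replies are hex-encoded broker-registration JSON, e.g. {"endpoints":["PLAINTEXT://127.0.0.1:63325"],"host":"127.0.0.1","port":63325,...}.) For context, the sketch below shows roughly how a Streams application ends up producing this group id and coordinator traffic. It is a minimal, hypothetical example assuming the Kafka 0.11 Streams API; the broker address and output topic are placeholders, not the project's actual code.

  import java.util.Properties

  import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
  import org.apache.kafka.streams.kstream.KStreamBuilder

  object ExactlyOnceSketch extends App {
    val props = new Properties()
    // The application id doubles as the consumer group id ("exactly-once" in this log).
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")
    // Placeholder address: the embedded brokers bind to ephemeral ports (63325/63344/63361 in this run).
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
    // Exactly-once processing: a transactional producer plus read_committed consumption under the hood.
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)

    // Minimal pass-through topology over the "my-topic" topic seen in the log above.
    val builder = new KStreamBuilder()
    builder.stream[Array[Byte], Array[Byte]]("my-topic").to("output-topic")

    val streams = new KafkaStreams(builder, props)
    streams.start() // GroupCoordinator discovery like the records above happens during start-up
  }

Once the __consumer_offsets partitions have leaders, the same coordinator lookup normally succeeds and these retries stop.
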
11:30:55.826 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:55.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xdd zxid:0x5a txntype:1 reqpath:n/a 11:30:55.826 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xdd zxid:0x5a txntype:1 reqpath:n/a 11:30:55.826 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 221,1 replyHeader:: 221,90,0 request:: '/brokers/topics/__consumer_offsets/partitions/24,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/24 11:30:55.841 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node 1 11:30:55.842 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xde zxid:0x5b txntype:1 reqpath:n/a 11:30:55.843 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xde zxid:0x5b txntype:1 reqpath:n/a 11:30:55.843 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.843 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.843 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:55.843 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:55.843 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 31,8 replyHeader:: 31,91,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:55.844 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms 11:30:55.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.844 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 32,4 replyHeader:: 32,91,0 request:: '/brokers/ids/0,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:55.845 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 222,1 replyHeader:: 222,91,0 request:: '/brokers/topics/__consumer_offsets/partitions/24/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/24/state 11:30:55.846 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,33] are: [List(1)] 11:30:55.846 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,33] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.847 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xdf zxid:0x5c txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/33 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/33 11:30:55.850 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xdf zxid:0x5c txntype:-1 reqpath:n/a 11:30:55.850 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.850 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x21 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.850 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x21 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.851 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 223,1 replyHeader:: 223,92,-101 request:: '/brokers/topics/__consumer_offsets/partitions/33/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.851 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 33,4 replyHeader:: 33,92,0 request:: '/brokers/ids/1,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:55.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xe0 zxid:0x5d txntype:1 reqpath:n/a 11:30:55.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xe0 zxid:0x5d txntype:1 reqpath:n/a 11:30:55.855 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 224,1 replyHeader:: 224,93,0 request:: '/brokers/topics/__consumer_offsets/partitions/33,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/33 11:30:55.855 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.855 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.855 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 34,4 replyHeader:: 34,93,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:55.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xe1 zxid:0x5e txntype:1 reqpath:n/a 11:30:55.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xe1 zxid:0x5e txntype:1 reqpath:n/a 11:30:55.858 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 225,1 replyHeader:: 225,94,0 request:: '/brokers/topics/__consumer_offsets/partitions/33/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/33/state 11:30:55.860 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,20] are: [List(0)] 11:30:55.860 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state 
machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,20] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.860 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.860 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.860 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xe2 zxid:0x5f txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/20 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/20 11:30:55.860 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 35,3 replyHeader:: 35,94,0 request:: '/brokers/topics/my-topic,F response:: s{46,46,1505298655478,1505298655478,0,0,0,0,36,0,46} 11:30:55.860 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 7 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:55.860 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 5 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63325 (id: 0 rack: null)], partitions = []) 11:30:55.863 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xe2 zxid:0x5f txntype:-1 reqpath:n/a 11:30:55.863 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.863 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 226,1 replyHeader:: 226,95,-101 request:: '/brokers/topics/__consumer_offsets/partitions/20/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.873 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xe3 zxid:0x60 txntype:1 reqpath:n/a 11:30:55.873 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xe3 zxid:0x60 txntype:1 reqpath:n/a 11:30:55.873 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 227,1 replyHeader:: 227,96,0 request:: '/brokers/topics/__consumer_offsets/partitions/20,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/20 11:30:55.876 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xe4 zxid:0x61 txntype:1 reqpath:n/a 11:30:55.876 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xe4 zxid:0x61 txntype:1 reqpath:n/a 11:30:55.877 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 228,1 replyHeader:: 228,97,0 request:: '/brokers/topics/__consumer_offsets/partitions/20/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/20/state 11:30:55.877 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,3] are: [List(1)] 11:30:55.878 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,3] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.878 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xe5 zxid:0x62 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/3 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/3 11:30:55.887 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xe5 zxid:0x62 txntype:-1 reqpath:n/a 11:30:55.887 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.888 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 229,1 replyHeader:: 229,98,-101 request:: '/brokers/topics/__consumer_offsets/partitions/3/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xe6 zxid:0x63 txntype:1 reqpath:n/a 11:30:55.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xe6 zxid:0x63 txntype:1 reqpath:n/a 11:30:55.893 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 230,1 replyHeader:: 230,99,0 request:: '/brokers/topics/__consumer_offsets/partitions/3,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/3 11:30:55.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xe7 zxid:0x64 txntype:1 reqpath:n/a 11:30:55.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xe7 zxid:0x64 txntype:1 reqpath:n/a 11:30:55.893 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply 
sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 231,1 replyHeader:: 231,100,0 request:: '/brokers/topics/__consumer_offsets/partitions/3/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/3/state 11:30:55.893 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,21] are: [List(1)] 11:30:55.893 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,21] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.893 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xe8 zxid:0x65 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/21 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/21 11:30:55.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xe8 zxid:0x65 txntype:-1 reqpath:n/a 11:30:55.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.893 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 232,1 replyHeader:: 232,101,-101 request:: '/brokers/topics/__consumer_offsets/partitions/21/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xe9 zxid:0x66 txntype:1 reqpath:n/a 11:30:55.908 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xe9 zxid:0x66 txntype:1 reqpath:n/a 11:30:55.908 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 233,1 replyHeader:: 233,102,0 request:: '/brokers/topics/__consumer_offsets/partitions/21,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/21 11:30:55.908 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xea zxid:0x67 txntype:1 reqpath:n/a 11:30:55.908 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xea zxid:0x67 txntype:1 reqpath:n/a 11:30:55.908 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 234,1 replyHeader:: 234,103,0 request:: 
'/brokers/topics/__consumer_offsets/partitions/21/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/21/state 11:30:55.908 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,5] are: [List(0)] 11:30:55.908 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,5] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.908 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xeb zxid:0x68 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/5 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/5 11:30:55.908 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xeb zxid:0x68 txntype:-1 reqpath:n/a 11:30:55.908 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.908 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 235,1 replyHeader:: 235,104,-101 request:: '/brokers/topics/__consumer_offsets/partitions/5/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.908 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xec zxid:0x69 txntype:1 reqpath:n/a 11:30:55.908 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xec zxid:0x69 txntype:1 reqpath:n/a 11:30:55.908 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 236,1 replyHeader:: 236,105,0 request:: '/brokers/topics/__consumer_offsets/partitions/5,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/5 11:30:55.908 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xed zxid:0x6a txntype:1 reqpath:n/a 11:30:55.908 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xed zxid:0x6a txntype:1 reqpath:n/a 11:30:55.908 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 237,1 replyHeader:: 237,106,0 request:: '/brokers/topics/__consumer_offsets/partitions/5/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 
'/brokers/topics/__consumer_offsets/partitions/5/state 11:30:55.908 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,22] are: [List(2)] 11:30:55.908 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,22] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.924 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 1 11:30:55.924 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xee zxid:0x6b txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/22 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/22 11:30:55.924 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 7 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = []) 11:30:55.924 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63325 (id: 0 rack: null) 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xee zxid:0x6b txntype:-1 reqpath:n/a 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.924 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 238,1 replyHeader:: 238,107,-101 request:: '/brokers/topics/__consumer_offsets/partitions/22/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.924 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0000 after 4ms 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.924 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 239,8 replyHeader:: 239,107,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xf0 zxid:0x6c txntype:1 reqpath:n/a 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xf0 zxid:0x6c txntype:1 reqpath:n/a 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.924 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 240,1 replyHeader:: 240,108,0 request:: '/brokers/topics/__consumer_offsets/partitions/22,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/22 11:30:55.924 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 241,4 replyHeader:: 241,108,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xf2 zxid:0x6d txntype:1 reqpath:n/a 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xf2 zxid:0x6d txntype:1 reqpath:n/a 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.924 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 242,1 replyHeader:: 242,109,0 request:: '/brokers/topics/__consumer_offsets/partitions/22/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/22/state 11:30:55.924 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply 
sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 243,4 replyHeader:: 243,109,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:55.924 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,12] are: [List(1)] 11:30:55.924 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,12] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.939 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xf4 zxid:0x6e txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/12 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/12 11:30:55.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xf4 zxid:0x6e txntype:-1 reqpath:n/a 11:30:55.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.939 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 244,1 replyHeader:: 244,110,-101 request:: '/brokers/topics/__consumer_offsets/partitions/12/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.939 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 245,4 replyHeader:: 245,110,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:55.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create 
cxid:0xf6 zxid:0x6f txntype:1 reqpath:n/a 11:30:55.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xf6 zxid:0x6f txntype:1 reqpath:n/a 11:30:55.939 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 246,1 replyHeader:: 246,111,0 request:: '/brokers/topics/__consumer_offsets/partitions/12,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/12 11:30:55.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:55.939 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 247,3 replyHeader:: 247,111,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:55.939 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298655939, latencyMs=15, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=15,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:55.939 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 
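
The FindCoordinator exchange above comes back with COORDINATOR_NOT_AVAILABLE because the controller is still initializing the __consumer_offsets partitions throughout this trace, so no broker can yet act as group coordinator. The group id exactly-once is the Streams application.id; a minimal sketch of how a Kafka Streams 0.11.x app with that id is typically configured for exactly-once processing is shown below. Only the application id and my-topic come from the log; the broker address, serdes, output topic, and topology are placeholders, not the project's actual code.

```scala
import java.util.Properties

import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.kstream.KStreamBuilder

object ExactlyOnceSketch extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")       // shows up as the consumer group id in the log
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325") // placeholder; the embedded brokers pick random ports
  props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE) // enables transactional producers

  val strings = Serdes.String()
  val builder = new KStreamBuilder()
  // Pass-through topology for illustration; only "my-topic" appears in the log,
  // "output-topic" is assumed.
  builder.stream(strings, strings, "my-topic").to(strings, strings, "output-topic")

  val streams = new KafkaStreams(builder, props)
  streams.start()
}
```
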
11:30:55.939 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:55.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xf8 zxid:0x70 txntype:1 reqpath:n/a 11:30:55.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xf8 zxid:0x70 txntype:1 reqpath:n/a 11:30:55.939 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 248,1 replyHeader:: 248,112,0 request:: '/brokers/topics/__consumer_offsets/partitions/12/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/12/state 11:30:55.939 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,8] are: [List(0)] 11:30:55.939 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,8] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.939 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xf9 zxid:0x71 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/8 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/8 11:30:55.956 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xf9 zxid:0x71 txntype:-1 reqpath:n/a 11:30:55.956 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.956 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 249,1 replyHeader:: 249,113,-101 request:: '/brokers/topics/__consumer_offsets/partitions/8/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.960 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xfa zxid:0x72 txntype:1 reqpath:n/a 11:30:55.960 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xfa zxid:0x72 txntype:1 reqpath:n/a 11:30:55.960 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 250,1 replyHeader:: 250,114,0 request:: '/brokers/topics/__consumer_offsets/partitions/8,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/8 11:30:55.963 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xfb zxid:0x73 txntype:1 reqpath:n/a 11:30:55.963 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xfb zxid:0x73 txntype:1 reqpath:n/a 11:30:55.963 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 251,1 replyHeader:: 251,115,0 request:: '/brokers/topics/__consumer_offsets/partitions/8/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/8/state 11:30:55.963 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,23] are: [List(0)] 11:30:55.963 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,23] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.963 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xfc zxid:0x74 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/23 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/23 11:30:55.963 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xfc zxid:0x74 txntype:-1 reqpath:n/a 11:30:55.963 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.963 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 252,1 replyHeader:: 252,116,-101 request:: '/brokers/topics/__consumer_offsets/partitions/23/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.963 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xfd zxid:0x75 txntype:1 reqpath:n/a 11:30:55.963 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0xfd zxid:0x75 txntype:1 reqpath:n/a 11:30:55.963 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 253,1 replyHeader:: 253,117,0 request:: '/brokers/topics/__consumer_offsets/partitions/23,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/23 11:30:55.963 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xfe zxid:0x76 txntype:1 reqpath:n/a 11:30:55.963 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 
type:create cxid:0xfe zxid:0x76 txntype:1 reqpath:n/a 11:30:55.963 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 254,1 replyHeader:: 254,118,0 request:: '/brokers/topics/__consumer_offsets/partitions/23/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/23/state 11:30:55.963 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,15] are: [List(1)] 11:30:55.963 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,15] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.963 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0xff zxid:0x77 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/15 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/15 11:30:55.979 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node 1 11:30:55.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0xff zxid:0x77 txntype:-1 reqpath:n/a 11:30:55.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:55.979 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 255,1 replyHeader:: 255,119,-101 request:: '/brokers/topics/__consumer_offsets/partitions/15/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.979 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 36,8 replyHeader:: 36,119,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:55.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x100 zxid:0x78 txntype:1 reqpath:n/a 11:30:55.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x100 zxid:0x78 txntype:1 reqpath:n/a 11:30:55.979 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x25 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x25 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:55.979 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 256,1 replyHeader:: 256,120,0 request:: '/brokers/topics/__consumer_offsets/partitions/15,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/15 11:30:55.979 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 37,4 replyHeader:: 37,120,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:55.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x101 zxid:0x79 txntype:1 reqpath:n/a 11:30:55.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x101 zxid:0x79 txntype:1 reqpath:n/a 11:30:55.979 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 257,1 replyHeader:: 257,121,0 request:: '/brokers/topics/__consumer_offsets/partitions/15/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/15/state 11:30:55.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x26 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x26 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:55.979 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 38,4 replyHeader:: 38,121,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 
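
The long '#7b22...' blobs in the ZooKeeper request/reply dumps are hex-encoded JSON znode payloads: the /brokers/ids/N reads above return broker registrations ({"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://127.0.0.1:63344"], ...}), and the partition state creates carry leader/ISR records. A small decoder, fed one of the state payloads from this log, makes that visible:

```scala
object ZkHexDecode extends App {
  // The ZooKeeper client prints znode payloads as hex (the '#7b22...' blobs above).
  // They are plain JSON. The sample below is the leader/ISR state written for the
  // __consumer_offsets partitions led by broker 1 in this log
  // (e.g. /brokers/topics/__consumer_offsets/partitions/12/state).
  def decodeZkHex(hex: String): String =
    new String(hex.grouped(2).map(Integer.parseInt(_, 16).toByte).toArray, "UTF-8")

  val stateHex =
    "7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c22" +
    "76657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d"

  println(decodeZkHex(stateHex))
  // prints: {"controller_epoch":1,"leader":1,"version":1,"leader_epoch":0,"isr":[1]}
}
```
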
11:30:55.979 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,48] are: [List(1)] 11:30:55.979 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,48] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.979 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x102 zxid:0x7a txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/48 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/48 11:30:55.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x102 zxid:0x7a txntype:-1 reqpath:n/a 11:30:55.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:55.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:55.995 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 258,1 replyHeader:: 258,122,-101 request:: '/brokers/topics/__consumer_offsets/partitions/48/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:55.995 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 39,4 replyHeader:: 39,122,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:55.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x103 zxid:0x7b txntype:1 reqpath:n/a 11:30:55.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x103 zxid:0x7b txntype:1 reqpath:n/a 11:30:55.995 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 259,1 replyHeader:: 259,123,0 request:: '/brokers/topics/__consumer_offsets/partitions/48,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/48 11:30:55.995 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x28 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x28 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:55.995 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 40,3 replyHeader:: 40,123,0 request:: '/brokers/topics/my-topic,F response:: s{46,46,1505298655478,1505298655478,0,0,0,0,36,0,46} 11:30:55.995 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 8 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:55.995 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 6 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63361 (id: 2 rack: null)], partitions = []) 11:30:55.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x104 zxid:0x7c txntype:1 reqpath:n/a 11:30:55.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x104 zxid:0x7c txntype:1 reqpath:n/a 11:30:55.995 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 260,1 replyHeader:: 260,124,0 request:: '/brokers/topics/__consumer_offsets/partitions/48/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/48/state 11:30:55.995 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,11] are: [List(0)] 11:30:55.995 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,11] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:55.995 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x105 zxid:0x7d txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/11 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/11 11:30:56.010 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x105 zxid:0x7d txntype:-1 reqpath:n/a 11:30:56.010 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.010 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 261,1 replyHeader:: 261,125,-101 
request:: '/brokers/topics/__consumer_offsets/partitions/11/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.010 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x106 zxid:0x7e txntype:1 reqpath:n/a 11:30:56.010 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x106 zxid:0x7e txntype:1 reqpath:n/a 11:30:56.010 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 262,1 replyHeader:: 262,126,0 request:: '/brokers/topics/__consumer_offsets/partitions/11,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/11 11:30:56.026 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 1 11:30:56.026 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 8 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = []) 11:30:56.026 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63344 (id: 1 rack: null) 11:30:56.026 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x107 zxid:0x7f txntype:1 reqpath:n/a 11:30:56.026 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x107 zxid:0x7f txntype:1 reqpath:n/a 11:30:56.026 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.026 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.026 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 263,1 replyHeader:: 263,127,0 request:: '/brokers/topics/__consumer_offsets/partitions/11/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/11/state 11:30:56.026 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 41,8 replyHeader:: 41,127,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:56.026 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x2a 
zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.026 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.026 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,13] are: [List(2)] 11:30:56.026 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,13] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.026 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 42,4 replyHeader:: 42,127,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.026 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x108 zxid:0x80 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/13 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/13 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x108 zxid:0x80 txntype:-1 reqpath:n/a 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.042 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 264,1 replyHeader:: 264,128,-101 request:: '/brokers/topics/__consumer_offsets/partitions/13/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.042 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 43,4 replyHeader:: 43,128,0 request:: '/brokers/ids/1,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x109 zxid:0x81 txntype:1 reqpath:n/a 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x109 zxid:0x81 txntype:1 reqpath:n/a 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.042 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 265,1 replyHeader:: 265,129,0 request:: '/brokers/topics/__consumer_offsets/partitions/13,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/13 11:30:56.042 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 44,4 replyHeader:: 44,129,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x10a zxid:0x82 txntype:1 reqpath:n/a 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x10a zxid:0x82 txntype:1 reqpath:n/a 11:30:56.042 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 266,1 replyHeader:: 266,130,0 request:: '/brokers/topics/__consumer_offsets/partitions/13/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/13/state 11:30:56.042 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [my-topic,0] are: [List(2)] 11:30:56.042 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine 
on Controller 0]: Initializing leader and isr for partition [my-topic,0] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.042 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x10b zxid:0x83 txntype:-1 reqpath:n/a Error Path:/brokers/topics/my-topic/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/my-topic/partitions/0 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x10b zxid:0x83 txntype:-1 reqpath:n/a 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.042 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 267,1 replyHeader:: 267,131,-101 request:: '/brokers/topics/my-topic/partitions/0/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.042 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 45,3 replyHeader:: 45,131,0 request:: '/brokers/topics/__consumer_offsets,F response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:56.042 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x10c zxid:0x84 txntype:-1 reqpath:n/a Error Path:/brokers/topics/my-topic/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/my-topic/partitions 11:30:56.042 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298656042, latencyMs=16, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=17,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:56.042 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 
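
The repeated "Group coordinator lookup for group exactly-once failed: The coordinator is not available." lines are expected while the offsets topic is still being built: the coordinator for a group is the broker leading one specific __consumer_offsets partition, chosen by hashing the group id, and the controller above is still creating those partitions (up through 48 and 49, matching Kafka's default of 50). A sketch of the selection rule, mirroring Kafka's GroupMetadataManager.partitionFor:

```scala
object CoordinatorPartition extends App {
  // Mirrors GroupMetadataManager.partitionFor(groupId): the group coordinator is the
  // leader of __consumer_offsets partition abs(groupId.hashCode) % numOffsetsPartitions.
  // (Kafka uses Utils.abs, which also copes with Int.MinValue; math.abs suffices here.)
  val offsetsPartitions = 50 // partitions 48 and 49 are created above, consistent with the default of 50
  val groupId = "exactly-once"
  val coordinatorPartition = math.abs(groupId.hashCode) % offsetsPartitions
  println(s"group '$groupId' -> __consumer_offsets partition $coordinatorPartition")
}
```
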
11:30:56.042 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:56.057 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x10c zxid:0x84 txntype:-1 reqpath:n/a 11:30:56.057 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.058 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 268,1 replyHeader:: 268,132,-101 request:: '/brokers/topics/my-topic/partitions/0,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.063 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x10d zxid:0x85 txntype:1 reqpath:n/a 11:30:56.063 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x10d zxid:0x85 txntype:1 reqpath:n/a 11:30:56.063 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 269,1 replyHeader:: 269,133,0 request:: '/brokers/topics/my-topic/partitions,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/my-topic/partitions 11:30:56.063 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x10e zxid:0x86 txntype:1 reqpath:n/a 11:30:56.063 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x10e zxid:0x86 txntype:1 reqpath:n/a 11:30:56.079 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 270,1 replyHeader:: 270,134,0 request:: '/brokers/topics/my-topic/partitions/0,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/my-topic/partitions/0 11:30:56.079 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x10f zxid:0x87 txntype:1 reqpath:n/a 11:30:56.079 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x10f zxid:0x87 txntype:1 reqpath:n/a 11:30:56.079 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 271,1 replyHeader:: 271,135,0 request:: '/brokers/topics/my-topic/partitions/0/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/my-topic/partitions/0/state 11:30:56.079 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,49] are: [List(2)] 11:30:56.079 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and 
isr for partition [__consumer_offsets,49] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.079 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x110 zxid:0x88 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/49 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/49 11:30:56.079 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x110 zxid:0x88 txntype:-1 reqpath:n/a 11:30:56.079 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.079 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 272,1 replyHeader:: 272,136,-101 request:: '/brokers/topics/__consumer_offsets/partitions/49/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.079 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x111 zxid:0x89 txntype:1 reqpath:n/a 11:30:56.079 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x111 zxid:0x89 txntype:1 reqpath:n/a 11:30:56.079 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 273,1 replyHeader:: 273,137,0 request:: '/brokers/topics/__consumer_offsets/partitions/49,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/49 11:30:56.095 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x112 zxid:0x8a txntype:1 reqpath:n/a 11:30:56.095 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x112 zxid:0x8a txntype:1 reqpath:n/a 11:30:56.095 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 274,1 replyHeader:: 274,138,0 request:: '/brokers/topics/__consumer_offsets/partitions/49/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/49/state 11:30:56.095 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,6] are: [List(1)] 11:30:56.099 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,6] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.099 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to 
node 0 11:30:56.099 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x113 zxid:0x8b txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/6 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/6 11:30:56.109 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x113 zxid:0x8b txntype:-1 reqpath:n/a 11:30:56.109 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.109 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x114 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.109 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x114 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.109 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 275,1 replyHeader:: 275,139,-101 request:: '/brokers/topics/__consumer_offsets/partitions/6/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.109 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 276,8 replyHeader:: 276,139,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x115 zxid:0x8c txntype:1 reqpath:n/a 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x115 zxid:0x8c txntype:1 reqpath:n/a 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x116 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x116 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.116 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 277,1 replyHeader:: 277,140,0 request:: '/brokers/topics/__consumer_offsets/partitions/6,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/6 11:30:56.116 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 278,4 replyHeader:: 278,140,0 request:: '/brokers/ids/0,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x117 zxid:0x8d txntype:1 reqpath:n/a 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x117 zxid:0x8d txntype:1 reqpath:n/a 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x118 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x118 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.116 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 279,1 replyHeader:: 279,141,0 request:: '/brokers/topics/__consumer_offsets/partitions/6/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/6/state 11:30:56.116 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 280,4 replyHeader:: 280,141,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.116 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,28] are: [List(2)] 11:30:56.116 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,28] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.116 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x119 zxid:0x8e txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/28 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/28 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x119 zxid:0x8e txntype:-1 reqpath:n/a 11:30:56.116 
[SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x11a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x11a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.116 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 281,1 replyHeader:: 281,142,-101 request:: '/brokers/topics/__consumer_offsets/partitions/28/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.116 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 282,4 replyHeader:: 282,142,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x11b zxid:0x8f txntype:1 reqpath:n/a 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x11b zxid:0x8f txntype:1 reqpath:n/a 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x11c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x11c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.116 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 283,1 replyHeader:: 283,143,0 request:: '/brokers/topics/__consumer_offsets/partitions/28,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/28 11:30:56.116 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 284,3 replyHeader:: 284,143,0 request:: '/brokers/topics/my-topic,T response:: s{46,46,1505298655478,1505298655478,0,1,0,0,36,1,133} 11:30:56.131 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 1 11:30:56.131 [kafka-producer-network-thread | producer-1] WARN 
org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 9 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:56.131 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 7 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null)], partitions = []) 11:30:56.131 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 9 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = []) 11:30:56.131 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63361 (id: 2 rack: null) 11:30:56.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x11d zxid:0x90 txntype:1 reqpath:n/a 11:30:56.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x11d zxid:0x90 txntype:1 reqpath:n/a 11:30:56.131 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 285,1 replyHeader:: 285,144,0 request:: '/brokers/topics/__consumer_offsets/partitions/28/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/28/state 11:30:56.131 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,4] are: [List(2)] 11:30:56.131 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,4] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.131 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 46,8 replyHeader:: 46,144,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:56.131 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x11e zxid:0x91 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/4 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/4 11:30:56.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x11e zxid:0x91 txntype:-1 reqpath:n/a 11:30:56.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.131 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 286,1 replyHeader:: 286,145,-101 request:: '/brokers/topics/__consumer_offsets/partitions/4/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.131 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 47,4 replyHeader:: 47,145,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x11f zxid:0x92 txntype:1 reqpath:n/a 11:30:56.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x11f zxid:0x92 txntype:1 reqpath:n/a 11:30:56.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.131 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 287,1 replyHeader:: 287,146,0 request:: '/brokers/topics/__consumer_offsets/partitions/4,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/4 11:30:56.131 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 48,4 replyHeader:: 48,146,0 request:: '/brokers/ids/1,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.147 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x120 zxid:0x93 txntype:1 reqpath:n/a 11:30:56.147 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x120 zxid:0x93 txntype:1 reqpath:n/a 11:30:56.147 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.147 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 288,1 replyHeader:: 288,147,0 request:: '/brokers/topics/__consumer_offsets/partitions/4/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/4/state 11:30:56.147 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.147 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 49,4 replyHeader:: 49,147,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.147 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,37] are: [List(2)] 11:30:56.147 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,37] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.147 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x121 zxid:0x94 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/37 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/37 11:30:56.147 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x121 zxid:0x94 txntype:-1 reqpath:n/a 11:30:56.147 [SyncThread:0] 
DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.147 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 289,1 replyHeader:: 289,148,-101 request:: '/brokers/topics/__consumer_offsets/partitions/37/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.147 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.147 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.147 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 50,3 replyHeader:: 50,148,0 request:: '/brokers/topics/__consumer_offsets,F response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:56.147 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298656147, latencyMs=16, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=19,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:56.147 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 
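The FindCoordinatorResponse above comes back with error=COORDINATOR_NOT_AVAILABLE because the group coordinator for group "exactly-once" is the broker that leads the group's __consumer_offsets partition, and the surrounding controller entries show those 50 partitions are still being created and having their leaders and ISRs initialized. As the next entries show, the StreamThread refreshes its metadata and retries the lookup; this is normally a transient condition while the embedded cluster's offsets topic is coming up.

For reference, a minimal sketch of the kind of Streams configuration such an "exactly-once" application would run with. This is an assumption (the test's source is not part of this log); the application id matches the consumer group name seen above, and the bootstrap address is broker 0 of this run:

  import java.util.Properties
  import org.apache.kafka.streams.StreamsConfig

  val props = new Properties()
  // the application id becomes the consumer group id, "exactly-once", seen in the log
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")
  // broker 0 of the embedded cluster in this particular run; a real test would ask the cluster object for its bootstrap servers
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
  // enables the exactly-once processing guarantee (transactional producer, read_committed consumption)
  props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)
  // props would then be passed to the KafkaStreams instance under test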
11:30:56.147 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:56.147 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x122 zxid:0x95 txntype:1 reqpath:n/a 11:30:56.147 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x122 zxid:0x95 txntype:1 reqpath:n/a 11:30:56.147 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 290,1 replyHeader:: 290,149,0 request:: '/brokers/topics/__consumer_offsets/partitions/37,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/37 11:30:56.147 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x123 zxid:0x96 txntype:1 reqpath:n/a 11:30:56.147 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x123 zxid:0x96 txntype:1 reqpath:n/a 11:30:56.147 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 291,1 replyHeader:: 291,150,0 request:: '/brokers/topics/__consumer_offsets/partitions/37/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/37/state 11:30:56.147 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,31] are: [List(2)] 11:30:56.147 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,31] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.147 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x124 zxid:0x97 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/31 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/31 11:30:56.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x124 zxid:0x97 txntype:-1 reqpath:n/a 11:30:56.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.164 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 292,1 replyHeader:: 292,151,-101 request:: '/brokers/topics/__consumer_offsets/partitions/31/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.164 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x125 zxid:0x98 txntype:1 reqpath:n/a 11:30:56.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x125 zxid:0x98 txntype:1 reqpath:n/a 11:30:56.164 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 293,1 replyHeader:: 293,152,0 request:: '/brokers/topics/__consumer_offsets/partitions/31,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/31 11:30:56.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x126 zxid:0x99 txntype:1 reqpath:n/a 11:30:56.164 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x126 zxid:0x99 txntype:1 reqpath:n/a 11:30:56.164 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 294,1 replyHeader:: 294,153,0 request:: '/brokers/topics/__consumer_offsets/partitions/31/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/31/state 11:30:56.164 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,44] are: [List(0)] 11:30:56.164 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,44] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.180 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x127 zxid:0x9a txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/44 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/44 11:30:56.180 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x127 zxid:0x9a txntype:-1 reqpath:n/a 11:30:56.180 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.180 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 295,1 replyHeader:: 295,154,-101 request:: '/brokers/topics/__consumer_offsets/partitions/44/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.180 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x128 zxid:0x9b txntype:1 reqpath:n/a 11:30:56.180 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x15e7aca904b0001 type:create cxid:0x128 zxid:0x9b txntype:1 reqpath:n/a 11:30:56.180 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 296,1 replyHeader:: 296,155,0 request:: '/brokers/topics/__consumer_offsets/partitions/44,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/44 11:30:56.180 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x129 zxid:0x9c txntype:1 reqpath:n/a 11:30:56.180 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x129 zxid:0x9c txntype:1 reqpath:n/a 11:30:56.180 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 297,1 replyHeader:: 297,156,0 request:: '/brokers/topics/__consumer_offsets/partitions/44/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/44/state 11:30:56.180 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,42] are: [List(1)] 11:30:56.180 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,42] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.180 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x12a zxid:0x9d txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/42 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/42 11:30:56.180 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x12a zxid:0x9d txntype:-1 reqpath:n/a 11:30:56.180 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.180 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 298,1 replyHeader:: 298,157,-101 request:: '/brokers/topics/__consumer_offsets/partitions/42/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.195 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x12b zxid:0x9e txntype:1 reqpath:n/a 11:30:56.195 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x12b zxid:0x9e txntype:1 reqpath:n/a 11:30:56.195 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null 
finished:false header:: 299,1 replyHeader:: 299,158,0 request:: '/brokers/topics/__consumer_offsets/partitions/42,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/42 11:30:56.195 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x12c zxid:0x9f txntype:1 reqpath:n/a 11:30:56.195 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x12c zxid:0x9f txntype:1 reqpath:n/a 11:30:56.195 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 300,1 replyHeader:: 300,159,0 request:: '/brokers/topics/__consumer_offsets/partitions/42/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/42/state 11:30:56.195 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,34] are: [List(2)] 11:30:56.195 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,34] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.195 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x12d zxid:0xa0 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/34 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/34 11:30:56.195 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x12d zxid:0xa0 txntype:-1 reqpath:n/a 11:30:56.195 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.195 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 301,1 replyHeader:: 301,160,-101 request:: '/brokers/topics/__consumer_offsets/partitions/34/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.195 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x12e zxid:0xa1 txntype:1 reqpath:n/a 11:30:56.195 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x12e zxid:0xa1 txntype:1 reqpath:n/a 11:30:56.195 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 302,1 replyHeader:: 302,161,0 request:: '/brokers/topics/__consumer_offsets/partitions/34,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/34 11:30:56.211 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x12f zxid:0xa2 txntype:1 reqpath:n/a 11:30:56.211 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x12f zxid:0xa2 txntype:1 reqpath:n/a 11:30:56.211 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 303,1 replyHeader:: 303,162,0 request:: '/brokers/topics/__consumer_offsets/partitions/34/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/34/state 11:30:56.211 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,46] are: [List(2)] 11:30:56.211 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,46] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.211 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x130 zxid:0xa3 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/46 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/46 11:30:56.211 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x130 zxid:0xa3 txntype:-1 reqpath:n/a 11:30:56.211 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.211 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 304,1 replyHeader:: 304,163,-101 request:: '/brokers/topics/__consumer_offsets/partitions/46/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.226 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x131 zxid:0xa4 txntype:1 reqpath:n/a 11:30:56.226 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x131 zxid:0xa4 txntype:1 reqpath:n/a 11:30:56.226 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 305,1 replyHeader:: 305,164,0 request:: '/brokers/topics/__consumer_offsets/partitions/46,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/46 11:30:56.226 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x132 zxid:0xa5 txntype:1 reqpath:n/a 11:30:56.226 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x15e7aca904b0001 type:create cxid:0x132 zxid:0xa5 txntype:1 reqpath:n/a 11:30:56.226 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 306,1 replyHeader:: 306,165,0 request:: '/brokers/topics/__consumer_offsets/partitions/46/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/46/state 11:30:56.226 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,25] are: [List(2)] 11:30:56.226 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,25] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.226 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x133 zxid:0xa6 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/25 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/25 11:30:56.242 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 1 11:30:56.242 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node 0 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x133 zxid:0xa6 txntype:-1 reqpath:n/a 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.242 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 307,1 replyHeader:: 307,166,-101 request:: '/brokers/topics/__consumer_offsets/partitions/25/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x134 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x134 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.242 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 10 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63361 (id: 2 rack: null)], partitions = []) 11:30:56.242 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - 
Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63325 (id: 0 rack: null) 11:30:56.242 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 308,8 replyHeader:: 308,166,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x135 zxid:0xa7 txntype:1 reqpath:n/a 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x135 zxid:0xa7 txntype:1 reqpath:n/a 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x136 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x136 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.242 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 309,1 replyHeader:: 309,167,0 request:: '/brokers/topics/__consumer_offsets/partitions/25,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/25 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x137 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x137 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.242 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 310,4 replyHeader:: 310,167,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.242 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 311,8 replyHeader:: 311,167,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x138 zxid:0xa8 txntype:1 reqpath:n/a 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x138 zxid:0xa8 txntype:1 reqpath:n/a 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x139 zxid:0xfffffffffffffffe 
txntype:unknown reqpath:/brokers/ids/0 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x139 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.242 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 312,1 replyHeader:: 312,168,0 request:: '/brokers/topics/__consumer_offsets/partitions/25/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/25/state 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x13a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x13a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.242 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 313,4 replyHeader:: 313,168,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.242 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,45] are: [List(1)] 11:30:56.242 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,45] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.242 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 314,4 replyHeader:: 314,168,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.242 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x13b zxid:0xa9 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/45 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/45 11:30:56.242 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x13b zxid:0xa9 txntype:-1 reqpath:n/a 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x13c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.242 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x13c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.242 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 315,1 replyHeader:: 315,169,-101 request:: '/brokers/topics/__consumer_offsets/partitions/45/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.242 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 316,4 replyHeader:: 316,169,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.260 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x13d zxid:0xaa txntype:1 reqpath:n/a 11:30:56.260 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x13d zxid:0xaa txntype:1 reqpath:n/a 11:30:56.260 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x13e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.260 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x13e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.260 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 317,1 replyHeader:: 317,170,0 request:: '/brokers/topics/__consumer_offsets/partitions/45,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/45 11:30:56.261 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 318,4 replyHeader:: 318,170,0 request:: '/brokers/ids/2,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x13f zxid:0xab txntype:1 reqpath:n/a 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x13f zxid:0xab txntype:1 reqpath:n/a 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x140 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.264 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 319,1 replyHeader:: 319,171,0 request:: '/brokers/topics/__consumer_offsets/partitions/45/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/45/state 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x140 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x141 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x141 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.264 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 320,4 replyHeader:: 320,171,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.264 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,27] are: [List(1)] 11:30:56.264 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,27] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.264 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: 
clientPath:null serverPath:null finished:false header:: 321,3 replyHeader:: 321,171,0 request:: '/brokers/topics/my-topic,T response:: s{46,46,1505298655478,1505298655478,0,1,0,0,36,1,133} 11:30:56.264 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x142 zxid:0xac txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/27 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/27 11:30:56.264 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 10 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:56.264 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 8 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63361 (id: 2 rack: null)], partitions = []) 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x142 zxid:0xac txntype:-1 reqpath:n/a 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x143 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x143 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.264 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 322,1 replyHeader:: 322,172,-101 request:: '/brokers/topics/__consumer_offsets/partitions/27/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.264 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 323,3 replyHeader:: 323,172,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:56.264 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298656264, latencyMs=22, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=21,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:56.264 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 
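
The #7b22... strings that ZooKeeper prints in the request/response entries above are hex-encoded UTF-8 payloads: the JSON that Kafka stores in znodes such as /brokers/ids/<id> and /brokers/topics/.../partitions/<n>/state. A small Scala helper for decoding them while reading this log is sketched below; the object and method names are illustrative and not part of the project, only the sample payload is copied from the entries above.

    import java.nio.charset.StandardCharsets

    object ZkPayloadDecoder {
      // Turn the hex dump printed by ZooKeeper's request logging back into UTF-8 text.
      def decodeHex(hex: String): String =
        new String(hex.grouped(2).map(Integer.parseInt(_, 16).toByte).toArray, StandardCharsets.UTF_8)

      def main(args: Array[String]): Unit = {
        // Payload written to /brokers/topics/__consumer_offsets/partitions/45/state in the log above
        val state = "7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d"
        println(decodeHex(state))
        // prints: {"controller_epoch":1,"leader":1,"version":1,"leader_epoch":0,"isr":[1]}
      }
    }
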
11:30:56.264 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x144 zxid:0xad txntype:1 reqpath:n/a 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x144 zxid:0xad txntype:1 reqpath:n/a 11:30:56.264 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 324,1 replyHeader:: 324,173,0 request:: '/brokers/topics/__consumer_offsets/partitions/27,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/27 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x145 zxid:0xae txntype:1 reqpath:n/a 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x145 zxid:0xae txntype:1 reqpath:n/a 11:30:56.264 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 325,1 replyHeader:: 325,174,0 request:: '/brokers/topics/__consumer_offsets/partitions/27/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/27/state 11:30:56.264 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,32] are: [List(0)] 11:30:56.264 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,32] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.264 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x146 zxid:0xaf txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/32 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/32 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x146 zxid:0xaf txntype:-1 reqpath:n/a 11:30:56.264 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.264 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 326,1 replyHeader:: 326,175,-101 request:: '/brokers/topics/__consumer_offsets/partitions/32/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.280 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x147 zxid:0xb0 txntype:1 reqpath:n/a 11:30:56.280 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x147 zxid:0xb0 txntype:1 reqpath:n/a 11:30:56.280 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 327,1 replyHeader:: 327,176,0 request:: '/brokers/topics/__consumer_offsets/partitions/32,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/32 11:30:56.280 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x148 zxid:0xb1 txntype:1 reqpath:n/a 11:30:56.280 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x148 zxid:0xb1 txntype:1 reqpath:n/a 11:30:56.280 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 328,1 replyHeader:: 328,177,0 request:: '/brokers/topics/__consumer_offsets/partitions/32/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/32/state 11:30:56.280 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,43] are: [List(2)] 11:30:56.280 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,43] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.280 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x149 zxid:0xb2 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/43 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/43 11:30:56.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x149 zxid:0xb2 txntype:-1 reqpath:n/a 11:30:56.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.295 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 329,1 replyHeader:: 329,178,-101 request:: '/brokers/topics/__consumer_offsets/partitions/43/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.311 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x14a zxid:0xb3 txntype:1 reqpath:n/a 11:30:56.311 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x15e7aca904b0001 type:create cxid:0x14a zxid:0xb3 txntype:1 reqpath:n/a 11:30:56.311 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 330,1 replyHeader:: 330,179,0 request:: '/brokers/topics/__consumer_offsets/partitions/43,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/43 11:30:56.327 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x14b zxid:0xb4 txntype:1 reqpath:n/a 11:30:56.327 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x14b zxid:0xb4 txntype:1 reqpath:n/a 11:30:56.327 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 331,1 replyHeader:: 331,180,0 request:: '/brokers/topics/__consumer_offsets/partitions/43/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/43/state 11:30:56.327 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,36] are: [List(1)] 11:30:56.327 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,36] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.327 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x14c zxid:0xb5 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/36 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/36 11:30:56.342 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x14c zxid:0xb5 txntype:-1 reqpath:n/a 11:30:56.342 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.342 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 2 11:30:56.342 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 332,1 replyHeader:: 332,181,-101 request:: '/brokers/topics/__consumer_offsets/partitions/36/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.343 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 11 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63361 (id: 2 rack: null)], partitions = []) 11:30:56.344 
[exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63344 (id: 1 rack: null) 11:30:56.345 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x14d zxid:0xb6 txntype:1 reqpath:n/a 11:30:56.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x14d zxid:0xb6 txntype:1 reqpath:n/a 11:30:56.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.346 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.346 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 333,1 replyHeader:: 333,182,0 request:: '/brokers/topics/__consumer_offsets/partitions/36,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/36 11:30:56.346 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 46,8 replyHeader:: 46,182,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:56.349 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x14e zxid:0xb7 txntype:1 reqpath:n/a 11:30:56.349 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x14e zxid:0xb7 txntype:1 reqpath:n/a 11:30:56.350 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.350 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.350 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 334,1 replyHeader:: 334,183,0 request:: '/brokers/topics/__consumer_offsets/partitions/36/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/36/state 11:30:56.350 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 47,4 replyHeader:: 47,183,0 request:: '/brokers/ids/0,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.350 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,35] are: [List(0)] 11:30:56.350 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,35] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.351 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x14f zxid:0xb8 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/35 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/35 11:30:56.353 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x14f zxid:0xb8 txntype:-1 reqpath:n/a 11:30:56.353 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.353 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.353 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.354 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 335,1 replyHeader:: 335,184,-101 request:: '/brokers/topics/__consumer_offsets/partitions/35/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.354 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 48,4 replyHeader:: 48,184,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.357 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x150 zxid:0xb9 txntype:1 reqpath:n/a 11:30:56.357 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create 
cxid:0x150 zxid:0xb9 txntype:1 reqpath:n/a 11:30:56.357 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.357 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.357 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 336,1 replyHeader:: 336,185,0 request:: '/brokers/topics/__consumer_offsets/partitions/35,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/35 11:30:56.357 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 49,4 replyHeader:: 49,185,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.360 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x151 zxid:0xba txntype:1 reqpath:n/a 11:30:56.361 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x151 zxid:0xba txntype:1 reqpath:n/a 11:30:56.361 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.361 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.361 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 337,1 replyHeader:: 337,186,0 request:: '/brokers/topics/__consumer_offsets/partitions/35/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/35/state 11:30:56.361 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 50,3 replyHeader:: 50,186,0 request:: '/brokers/topics/__consumer_offsets,F response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:56.361 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,7] are: [List(2)] 11:30:56.361 [controller-event-thread] DEBUG 
kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,7] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.362 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298656362, latencyMs=18, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=23,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:56.362 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x152 zxid:0xbb txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/7 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/7 11:30:56.362 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 11:30:56.362 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:56.364 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x152 zxid:0xbb txntype:-1 reqpath:n/a 11:30:56.364 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.364 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 338,1 replyHeader:: 338,187,-101 request:: '/brokers/topics/__consumer_offsets/partitions/7/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.365 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node 1 11:30:56.368 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x153 zxid:0xbc txntype:1 reqpath:n/a 11:30:56.368 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x153 zxid:0xbc txntype:1 reqpath:n/a 11:30:56.368 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.368 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.368 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null 
serverPath:null finished:false header:: 339,1 replyHeader:: 339,188,0 request:: '/brokers/topics/__consumer_offsets/partitions/7,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/7 11:30:56.368 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 51,8 replyHeader:: 51,188,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:56.372 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x154 zxid:0xbd txntype:1 reqpath:n/a 11:30:56.372 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x154 zxid:0xbd txntype:1 reqpath:n/a 11:30:56.372 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.372 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.373 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 340,1 replyHeader:: 340,189,0 request:: '/brokers/topics/__consumer_offsets/partitions/7/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/7/state 11:30:56.373 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 52,4 replyHeader:: 52,189,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.373 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,9] are: [List(1)] 11:30:56.373 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,9] to (Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.373 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x155 zxid:0xbe txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/9 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/9 11:30:56.376 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x155 zxid:0xbe 
txntype:-1 reqpath:n/a 11:30:56.376 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.376 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.376 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.377 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 341,1 replyHeader:: 341,190,-101 request:: '/brokers/topics/__consumer_offsets/partitions/9/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.377 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 53,4 replyHeader:: 53,190,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.379 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x156 zxid:0xbf txntype:1 reqpath:n/a 11:30:56.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x156 zxid:0xbf txntype:1 reqpath:n/a 11:30:56.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.380 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 342,1 replyHeader:: 342,191,0 request:: '/brokers/topics/__consumer_offsets/partitions/9,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/9 11:30:56.380 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 54,4 replyHeader:: 54,191,0 request:: '/brokers/ids/2,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.384 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x157 zxid:0xc0 txntype:1 reqpath:n/a 11:30:56.384 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x157 zxid:0xc0 txntype:1 reqpath:n/a 11:30:56.384 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.384 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.384 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 55,3 replyHeader:: 55,192,0 request:: '/brokers/topics/my-topic,F response:: s{46,46,1505298655478,1505298655478,0,1,0,0,36,1,133} 11:30:56.385 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 343,1 replyHeader:: 343,192,0 request:: '/brokers/topics/__consumer_offsets/partitions/9/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/9/state 11:30:56.385 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,38] are: [List(0)] 11:30:56.385 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,38] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.385 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 11 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:56.385 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 9 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63361 (id: 2 rack: null)], partitions = []) 11:30:56.385 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x158 zxid:0xc1 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/38 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/38 11:30:56.388 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x158 zxid:0xc1 txntype:-1 reqpath:n/a 11:30:56.388 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.389 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 344,1 replyHeader:: 344,193,-101 request:: '/brokers/topics/__consumer_offsets/partitions/38/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x159 zxid:0xc2 txntype:1 reqpath:n/a 11:30:56.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x159 zxid:0xc2 txntype:1 reqpath:n/a 11:30:56.392 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 345,1 replyHeader:: 345,194,0 request:: '/brokers/topics/__consumer_offsets/partitions/38,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/38 11:30:56.394 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x15a zxid:0xc3 txntype:1 reqpath:n/a 11:30:56.394 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x15a zxid:0xc3 txntype:1 reqpath:n/a 11:30:56.394 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 346,1 replyHeader:: 346,195,0 request:: '/brokers/topics/__consumer_offsets/partitions/38/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/38/state 11:30:56.394 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,1] are: [List(2)] 11:30:56.394 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,1] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.394 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x15b zxid:0xc4 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/1 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/1 11:30:56.394 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x15b zxid:0xc4 txntype:-1 reqpath:n/a 11:30:56.394 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.394 
[pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 347,1 replyHeader:: 347,196,-101 request:: '/brokers/topics/__consumer_offsets/partitions/1/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.394 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x15c zxid:0xc5 txntype:1 reqpath:n/a 11:30:56.394 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x15c zxid:0xc5 txntype:1 reqpath:n/a 11:30:56.394 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 348,1 replyHeader:: 348,197,0 request:: '/brokers/topics/__consumer_offsets/partitions/1,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/1 11:30:56.410 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x15d zxid:0xc6 txntype:1 reqpath:n/a 11:30:56.410 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x15d zxid:0xc6 txntype:1 reqpath:n/a 11:30:56.410 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 349,1 replyHeader:: 349,198,0 request:: '/brokers/topics/__consumer_offsets/partitions/1/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/1/state 11:30:56.410 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,16] are: [List(2)] 11:30:56.410 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,16] to (Leader:2,ISR:2,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.410 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x15e zxid:0xc7 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/16 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/16 11:30:56.410 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x15e zxid:0xc7 txntype:-1 reqpath:n/a 11:30:56.410 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.410 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 350,1 replyHeader:: 350,199,-101 request:: 
'/brokers/topics/__consumer_offsets/partitions/16/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.425 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x15f zxid:0xc8 txntype:1 reqpath:n/a 11:30:56.425 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x15f zxid:0xc8 txntype:1 reqpath:n/a 11:30:56.425 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 351,1 replyHeader:: 351,200,0 request:: '/brokers/topics/__consumer_offsets/partitions/16,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/16 11:30:56.425 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x160 zxid:0xc9 txntype:1 reqpath:n/a 11:30:56.425 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x160 zxid:0xc9 txntype:1 reqpath:n/a 11:30:56.425 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 352,1 replyHeader:: 352,201,0 request:: '/brokers/topics/__consumer_offsets/partitions/16/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/16/state 11:30:56.425 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__consumer_offsets,2] are: [List(0)] 11:30:56.425 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__consumer_offsets,2] to (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1) 11:30:56.425 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x161 zxid:0xca txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/2 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/2 11:30:56.425 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x161 zxid:0xca txntype:-1 reqpath:n/a 11:30:56.425 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:56.425 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 353,1 replyHeader:: 353,202,-101 request:: '/brokers/topics/__consumer_offsets/partitions/2/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:56.425 [SyncThread:0] 
DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x162 zxid:0xcb txntype:1 reqpath:n/a 11:30:56.425 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x162 zxid:0xcb txntype:1 reqpath:n/a 11:30:56.425 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 354,1 replyHeader:: 354,203,0 request:: '/brokers/topics/__consumer_offsets/partitions/2,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/2 11:30:56.425 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x163 zxid:0xcc txntype:1 reqpath:n/a 11:30:56.425 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x163 zxid:0xcc txntype:1 reqpath:n/a 11:30:56.425 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 355,1 replyHeader:: 355,204,0 request:: '/brokers/topics/__consumer_offsets/partitions/2/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets/partitions/2/state 11:30:56.441 [controller-event-thread] INFO kafka.controller.ReplicaStateMachine - [Replica state machine on controller 0]: Invoking state change to OnlineReplica for replicas 
[Topic=__consumer_offsets,Partition=48,Replica=1],[Topic=__consumer_offsets,Partition=21,Replica=1],[Topic=__consumer_offsets,Partition=18,Replica=1],[Topic=__consumer_offsets,Partition=9,Replica=1],[Topic=__consumer_offsets,Partition=39,Replica=1],[Topic=__consumer_offsets,Partition=22,Replica=2],[Topic=__consumer_offsets,Partition=35,Replica=0],[Topic=__consumer_offsets,Partition=13,Replica=2],[Topic=__consumer_offsets,Partition=34,Replica=2],[Topic=__consumer_offsets,Partition=40,Replica=2],[Topic=__consumer_offsets,Partition=37,Replica=2],[Topic=__consumer_offsets,Partition=2,Replica=0],[Topic=__consumer_offsets,Partition=11,Replica=0],[Topic=__consumer_offsets,Partition=29,Replica=0],[Topic=__consumer_offsets,Partition=27,Replica=1],[Topic=__consumer_offsets,Partition=6,Replica=1],[Topic=__consumer_offsets,Partition=30,Replica=1],[Topic=__consumer_offsets,Partition=42,Replica=1],[Topic=__consumer_offsets,Partition=26,Replica=0],[Topic=__consumer_offsets,Partition=17,Replica=0],[Topic=__consumer_offsets,Partition=3,Replica=1],[Topic=__consumer_offsets,Partition=28,Replica=2],[Topic=__consumer_offsets,Partition=7,Replica=2],[Topic=__consumer_offsets,Partition=43,Replica=2],[Topic=__consumer_offsets,Partition=10,Replica=2],[Topic=__consumer_offsets,Partition=41,Replica=0],[Topic=__consumer_offsets,Partition=20,Replica=0],[Topic=__consumer_offsets,Partition=4,Replica=2],[Topic=__consumer_offsets,Partition=45,Replica=1],[Topic=__consumer_offsets,Partition=46,Replica=2],[Topic=__consumer_offsets,Partition=47,Replica=0],[Topic=__consumer_offsets,Partition=8,Replica=0],[Topic=__consumer_offsets,Partition=38,Replica=0],[Topic=__consumer_offsets,Partition=49,Replica=2],[Topic=__consumer_offsets,Partition=1,Replica=2],[Topic=__consumer_offsets,Partition=19,Replica=2],[Topic=__consumer_offsets,Partition=0,Replica=1],[Topic=__consumer_offsets,Partition=33,Replica=1],[Topic=__consumer_offsets,Partition=5,Replica=0],[Topic=__consumer_offsets,Partition=31,Replica=2],[Topic=__consumer_offsets,Partition=25,Replica=2],[Topic=__consumer_offsets,Partition=44,Replica=0],[Topic=my-topic,Partition=0,Replica=2],[Topic=__consumer_offsets,Partition=36,Replica=1],[Topic=__consumer_offsets,Partition=12,Replica=1],[Topic=__consumer_offsets,Partition=16,Replica=2],[Topic=__consumer_offsets,Partition=15,Replica=1],[Topic=__consumer_offsets,Partition=23,Replica=0],[Topic=__consumer_offsets,Partition=32,Replica=0],[Topic=__consumer_offsets,Partition=14,Replica=0],[Topic=__consumer_offsets,Partition=24,Replica=1] 11:30:56.456 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 1 11:30:56.459 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 12 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = []) 11:30:56.459 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63361 (id: 2 rack: null) 11:30:56.460 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.461 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.461 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 51,8 replyHeader:: 51,204,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:56.462 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.462 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.462 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 52,4 replyHeader:: 52,204,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.465 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.465 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.465 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 53,4 replyHeader:: 53,204,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.465 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.465 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.465 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 54,4 replyHeader:: 54,204,0 request:: '/brokers/ids/2,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.465 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.465 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.465 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 55,3 replyHeader:: 55,204,0 request:: '/brokers/topics/__consumer_offsets,F response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:56.465 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298656465, latencyMs=6, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=25,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:56.465 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 
11:30:56.465 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:56.480 [kafka-request-handler-5] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions __consumer_offsets-30,__consumer_offsets-21,__consumer_offsets-27,__consumer_offsets-9,__consumer_offsets-33,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-48,__consumer_offsets-6,__consumer_offsets-0,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45 11:30:56.480 [kafka-request-handler-5] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 0] Removed fetcher for partitions __consumer_offsets-8,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-23,__consumer_offsets-47,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-44,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-32 11:30:56.480 [kafka-request-handler-7] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 2] Removed fetcher for partitions __consumer_offsets-22,__consumer_offsets-4,__consumer_offsets-7,__consumer_offsets-46,__consumer_offsets-25,__consumer_offsets-49,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-37,my-topic-0,__consumer_offsets-19,__consumer_offsets-13,__consumer_offsets-43,__consumer_offsets-1,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-40 11:30:56.480 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.480 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.480 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.480 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.480 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 56,4 replyHeader:: 56,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.480 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x164 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.480 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x164 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.480 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 56,4 replyHeader:: 56,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.480 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 356,4 replyHeader:: 356,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.496 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node 0 11:30:56.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x165 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x165 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.496 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 357,8 replyHeader:: 357,204,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:56.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x166 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x166 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.496 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 358,4 replyHeader:: 358,204,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x167 zxid:0xfffffffffffffffe 
txntype:unknown reqpath:/brokers/ids/1 11:30:56.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x167 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.496 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 359,4 replyHeader:: 359,204,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.511 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x168 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.511 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x168 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.511 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 360,4 replyHeader:: 360,204,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.511 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x169 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.511 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x169 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.511 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 361,3 replyHeader:: 361,204,0 request:: '/brokers/topics/my-topic,T response:: s{46,46,1505298655478,1505298655478,0,1,0,0,36,1,133} 11:30:56.511 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 12 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:56.511 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 10 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = []) 11:30:56.543 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file 
C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-10\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.543 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-0\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.543 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-29\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.560 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 2 11:30:56.562 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 13 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63361 (id: 2 rack: null)], partitions = []) 11:30:56.562 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63325 (id: 0 rack: null) 11:30:56.564 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x16a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.564 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x16a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.564 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 362,8 replyHeader:: 362,204,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:56.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x16b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x16b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.565 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 363,4 replyHeader:: 363,204,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.565 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x16c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x16c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.565 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 364,4 replyHeader:: 364,204,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x16d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x16d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.565 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 365,4 replyHeader:: 365,204,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x16e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x16e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.565 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 366,3 replyHeader:: 366,204,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:56.565 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298656565, latencyMs=3, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=27,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, 
errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:56.565 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 11:30:56.565 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:56.580 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-10 with message format version 2 11:30:56.580 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-29 with message format version 2 11:30:56.580 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-0 with message format version 2 11:30:56.596 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-29 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.596 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-0 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.596 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-10 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.596 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.596 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.596 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.596 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,29] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
11:30:56.596 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,0] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.596 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,10] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.596 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,0] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-0 11:30:56.596 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,10] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-10 11:30:56.596 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,29] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-29 11:30:56.596 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,29] on broker 0: __consumer_offsets-29 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.596 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,0] on broker 1: __consumer_offsets-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.596 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,10] on broker 2: __consumer_offsets-10 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.612 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node 2 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.612 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 57,8 replyHeader:: 57,204,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.612 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 58,4 replyHeader:: 58,204,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.612 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,10] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.612 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,0] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.612 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,29] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x16f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x15e7aca904b0001 type:getData cxid:0x16f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.612 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 57,4 replyHeader:: 57,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.612 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 367,4 replyHeader:: 367,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.612 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 59,4 replyHeader:: 59,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.612 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 60,4 replyHeader:: 60,204,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: 
sessionid:0x15e7aca904b0003 type:getData cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.612 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 61,4 replyHeader:: 61,204,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.627 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 62,3 replyHeader:: 62,204,0 request:: '/brokers/topics/my-topic,F response:: s{46,46,1505298655478,1505298655478,0,1,0,0,36,1,133} 11:30:56.627 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-48\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.627 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 13 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:56.627 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 11 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = []) 11:30:56.627 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-48 with message format version 2 11:30:56.627 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-7\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.627 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-26\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.627 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-48 with 1 log 
segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.627 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.627 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-7 with message format version 2 11:30:56.627 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,48] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.627 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,48] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-48 11:30:56.627 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,48] on broker 1: __consumer_offsets-48 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.627 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,48] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.627 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-26 with message format version 2 11:30:56.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.627 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 58,4 replyHeader:: 58,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.627 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-7 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.627 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:56.627 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-26 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.627 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,7] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.627 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,7] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-7 11:30:56.627 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,7] on broker 2: __consumer_offsets-7 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.627 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.627 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,7] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.627 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,26] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
11:30:56.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.627 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,26] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-26 11:30:56.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.627 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,26] on broker 0: __consumer_offsets-26 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.627 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 63,4 replyHeader:: 63,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.627 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,26] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x170 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x170 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.643 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 368,4 replyHeader:: 368,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.649 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-45\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.654 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-4\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.654 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-45 with message format version 2 11:30:56.654 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - 
Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-23\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.657 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-45 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.657 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-4 with message format version 2 11:30:56.657 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.658 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,45] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.658 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,45] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-45 11:30:56.658 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,45] on broker 1: __consumer_offsets-45 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.658 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,45] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.659 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.659 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.659 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 59,4 replyHeader:: 59,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.659 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-23 with message format version 2 11:30:56.660 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-4 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.661 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.661 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,4] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.661 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,4] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-4 11:30:56.661 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,4] on broker 2: __consumer_offsets-4 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.662 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,4] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.662 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-23 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.663 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.663 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,23] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.663 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 64,4 replyHeader:: 64,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.663 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,23] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-23 11:30:56.663 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,23] on broker 0: __consumer_offsets-23 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.664 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,23] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x171 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x171 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.664 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 369,4 replyHeader:: 369,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.665 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-42\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.665 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-1\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.665 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-42 with message format version 2 11:30:56.665 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-20\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.665 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-1 with message format version 2 11:30:56.665 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-42 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.665 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:56.665 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,42] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.665 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,42] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-42 11:30:56.665 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-20 with message format version 2 11:30:56.665 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-1 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.665 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,42] on broker 1: __consumer_offsets-42 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.665 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,42] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.680 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 2 11:30:56.680 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.680 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,1] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 60,4 replyHeader:: 60,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.680 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,1] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-1 11:30:56.680 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,1] on broker 2: __consumer_offsets-1 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.680 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,1] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.680 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 14 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63325 (id: 0 rack: null)], partitions = []) 11:30:56.680 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63344 (id: 1 rack: null) 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 65,4 replyHeader:: 65,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.680 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-20 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getChildren cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getChildren cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 61,8 replyHeader:: 61,204,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:56.680 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:56.680 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,20] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.680 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,20] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-20 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.680 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,20] on broker 0: __consumer_offsets-20 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 62,4 replyHeader:: 62,204,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.680 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,20] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x172 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x172 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 370,4 replyHeader:: 370,204,0 request:: '/config/topics/__consumer_offsets,F response:: 
#7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 63,4 replyHeader:: 63,204,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.680 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.680 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-39\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.680 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 64,4 replyHeader:: 64,204,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.680 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-49\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.680 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-39 with message format version 2 11:30:56.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:exists cxid:0x41 zxid:0xfffffffffffffffe 
txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:exists cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.696 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-49 with message format version 2 11:30:56.696 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 65,3 replyHeader:: 65,204,0 request:: '/brokers/topics/__consumer_offsets,F response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:56.696 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-17\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.696 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298656696, latencyMs=16, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=29,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:56.696 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 11:30:56.696 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:56.696 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-39 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.696 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.696 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,39] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
11:30:56.696 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,39] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-39 11:30:56.696 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,39] on broker 1: __consumer_offsets-39 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.696 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,39] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.696 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-49 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.696 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-17 with message format version 2 11:30:56.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.696 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 66,4 replyHeader:: 66,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.696 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.696 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,49] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.696 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,49] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-49 11:30:56.696 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,49] on broker 2: __consumer_offsets-49 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.696 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,49] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.696 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 66,4 replyHeader:: 66,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.696 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-17 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.696 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.696 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,17] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.696 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,17] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-17 11:30:56.696 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,17] on broker 0: __consumer_offsets-17 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.696 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,17] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x173 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x173 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.696 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 371,4 replyHeader:: 371,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.712 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-36\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.712 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-46\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.712 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-36 with message format version 2 11:30:56.712 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-14\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.712 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-46 with message format version 2 11:30:56.712 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-36 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.712 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:56.712 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,36] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.712 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,36] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-36 11:30:56.712 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,36] on broker 1: __consumer_offsets-36 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.712 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,36] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.712 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-14 with message format version 2 11:30:56.712 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-46 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.712 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.712 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.712 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.712 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,46] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
11:30:56.712 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 67,4 replyHeader:: 67,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.712 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,46] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-46 11:30:56.712 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,46] on broker 2: __consumer_offsets-46 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.712 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,46] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.712 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-14 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.727 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.727 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.727 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.727 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 67,4 replyHeader:: 67,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.727 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,14] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
11:30:56.727 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,14] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-14 11:30:56.727 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,14] on broker 0: __consumer_offsets-14 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.727 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,14] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.727 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x174 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.727 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x174 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.727 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 372,4 replyHeader:: 372,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.727 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-43\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.743 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node 0 11:30:56.743 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-43 with message format version 2 11:30:56.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x175 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x175 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.743 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 373,8 replyHeader:: 373,204,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:56.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x176 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x176 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.743 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 374,4 replyHeader:: 374,204,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.743 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-43 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.743 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.743 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,43] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.743 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-33\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.743 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-11\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.743 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,43] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-43 11:30:56.743 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,43] on broker 2: __consumer_offsets-43 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.743 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,43] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x177 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x177 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.743 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 375,4 replyHeader:: 375,204,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.743 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 68,4 replyHeader:: 68,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.743 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-11 with message format version 2 11:30:56.743 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-33 with message format version 2 11:30:56.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x178 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x178 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.758 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 376,4 replyHeader:: 376,204,0 request:: '/brokers/ids/2,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.759 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-11 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.760 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.760 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,11] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.759 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-33 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.760 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,11] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-11 11:30:56.761 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,11] on broker 0: __consumer_offsets-11 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.761 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:56.761 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,11] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.761 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,33] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.762 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,33] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-33 11:30:56.762 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,33] on broker 1: __consumer_offsets-33 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.762 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x179 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.762 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x179 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.762 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 377,4 replyHeader:: 377,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.763 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,33] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.764 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.764 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.764 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x17a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.764 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x17a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.764 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 68,4 replyHeader:: 68,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.764 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 378,3 replyHeader:: 378,204,0 request:: '/brokers/topics/my-topic,T response:: s{46,46,1505298655478,1505298655478,0,1,0,0,36,1,133} 11:30:56.765 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 14 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:56.765 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 12 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63361 (id: 2 rack: null)], partitions = []) 11:30:56.765 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-40\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.765 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-40 with message format version 2 11:30:56.765 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-40 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.765 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-8\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.765 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:56.765 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,40] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.765 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,40] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-40 11:30:56.765 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,40] on broker 2: __consumer_offsets-40 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.765 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,40] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.765 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x45 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.765 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x45 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.765 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 69,4 replyHeader:: 69,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.765 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-30\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.765 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-8 with message format version 2 11:30:56.781 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 2 11:30:56.781 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-30 with message format version 2 11:30:56.781 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster 
metadata version 15 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null)], partitions = []) 11:30:56.781 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63325 (id: 0 rack: null) 11:30:56.781 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-8 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.781 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x17b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.781 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,8] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x17b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.781 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,8] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-8 11:30:56.781 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,8] on broker 0: __consumer_offsets-8 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.781 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-30 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.781 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,8] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.781 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:56.781 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,30] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.781 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,30] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-30 11:30:56.781 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,30] on broker 1: __consumer_offsets-30 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.781 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 379,8 replyHeader:: 379,204,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:56.781 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,30] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x17c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x17c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x45 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x45 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.781 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 380,4 replyHeader:: 380,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x17d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 
11:30:56.781 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 69,4 replyHeader:: 69,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x17d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.781 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 381,4 replyHeader:: 381,204,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.781 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-37\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x17e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x17e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.781 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 382,4 replyHeader:: 382,204,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.781 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-37 with message format version 2 11:30:56.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x17f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x17f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 
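The ClientCnxn replies in the preceding entries (for /brokers/ids/0, /brokers/ids/1 and /config/topics/__consumer_offsets) carry their znode data as hex-encoded UTF-8 JSON after "response:: #". A small standalone Scala helper, sketched here only for readability and not part of the test code, decodes such payloads; the payload below is copied from the /config/topics/__consumer_offsets reply above:

import java.nio.charset.StandardCharsets

// Hedged, standalone helper (not part of the test itself): decodes the hex
// payloads that ZooKeeper's ClientCnxn logging prints after "response:: #"
// back into the JSON stored in the znode.
object ZkPayloadDecoder {
  def decodeHex(hex: String): String = {
    val bytes = hex.stripPrefix("#")        // the log prefixes payloads with '#'
      .sliding(2, 2)                        // two hex digits per byte
      .map(Integer.parseInt(_, 16).toByte)
      .toArray
    new String(bytes, StandardCharsets.UTF_8)
  }

  def main(args: Array[String]): Unit = {
    // Payload of the /config/topics/__consumer_offsets reply seen in the log.
    val payload =
      "7b2276657273696f6e223a312c" + "22636f6e666967223a7b" +
      "227365676d656e742e6279746573223a" + "22313034383537363030222c" +
      "22636f6d7072657373696f6e2e74797065223a" + "2270726f6475636572222c" +
      "22636c65616e75702e706f6c696379223a" + "22636f6d70616374227d7d"
    // Prints: {"version":1,"config":{"segment.bytes":"104857600",
    //          "compression.type":"producer","cleanup.policy":"compact"}}
    println(decodeHex(payload))
  }
}
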
11:30:56.796 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 383,4 replyHeader:: 383,204,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.796 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-37 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.796 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.796 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,37] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.796 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-5\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.796 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,37] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-37 11:30:56.796 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,37] on broker 2: __consumer_offsets-37 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.796 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,37] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x180 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x180 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x46 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x46 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.796 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 384,3 replyHeader:: 384,204,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:56.796 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 70,4 replyHeader:: 70,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.796 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298656796, latencyMs=15, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=31,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:56.796 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 
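The FindCoordinator round-trips above come back with COORDINATOR_NOT_AVAILABLE because the coordinator for the "exactly-once" group cannot be resolved until the brokers finish creating and loading the __consumer_offsets partitions; the StreamThread therefore keeps refreshing metadata and retrying, as the next entries show. For orientation, a minimal sketch of how a Kafka Streams 0.11 application with this application id and the exactly-once guarantee is typically configured; this is illustrative only, and the serdes, output topic name and bootstrap address are assumptions rather than anything taken from the project's source:

import java.util.Properties
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.kstream.KStreamBuilder

// Illustrative sketch only; not the project's actual code. The application id
// matches the consumer group "exactly-once" seen in the log, and
// processing.guarantee=exactly_once enables the 0.11 transactional path.
object ExactlyOnceStreamsSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325") // broker 0 of the embedded cluster
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass.getName)
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass.getName)

    // Trivial pass-through topology; "output-topic" is a made-up name.
    val builder = new KStreamBuilder()
    builder.stream[String, String]("my-topic").to("output-topic")

    val streams = new KafkaStreams(builder, props)
    streams.start()
  }
}
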
11:30:56.796 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:56.796 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-5 with message format version 2 11:30:56.796 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-27\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.796 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-5 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.796 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-27 with message format version 2 11:30:56.796 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.796 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,5] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.796 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,5] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-5 11:30:56.796 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,5] on broker 0: __consumer_offsets-5 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.796 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,5] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x181 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x181 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.796 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 385,4 replyHeader:: 385,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.812 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-27 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.812 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.812 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,27] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.812 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,27] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-27 11:30:56.812 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-34\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.812 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,27] on broker 1: __consumer_offsets-27 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.812 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,27] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x46 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x46 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.812 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 70,4 replyHeader:: 70,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.812 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-34 with message format version 2 11:30:56.812 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-34 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.812 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.812 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,34] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.812 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-2\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.812 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,34] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-34 11:30:56.812 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,34] on broker 2: __consumer_offsets-34 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.812 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,34] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x47 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-topic 11:30:56.812 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x47 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-topic 11:30:56.812 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 71,4 replyHeader:: 71,204,0 request:: '/config/topics/my-topic,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,s{44,44,1505298655478,1505298655478,0,0,0,0,25,0,44} 11:30:56.812 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-2 with message format version 2 11:30:56.812 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-24\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.827 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-2 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.827 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-24 with message format version 2 11:30:56.827 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.827 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,2] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.827 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,2] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-2 11:30:56.827 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,2] on broker 0: __consumer_offsets-2 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.827 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,2] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.827 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x182 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.827 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x182 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.827 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 386,4 replyHeader:: 386,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.827 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-24 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.827 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.827 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\my-topic-0\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.827 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,24] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.827 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,24] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-24 11:30:56.827 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,24] on broker 1: __consumer_offsets-24 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.827 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,24] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.827 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x47 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.827 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x47 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.827 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 71,4 replyHeader:: 71,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.847 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition my-topic-0 with message format version 2 11:30:56.852 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log my-topic-0 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.853 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.853 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-47\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.854 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [my-topic,0] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.854 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [my-topic,0] on broker 2: No checkpointed highwatermark is found for partition my-topic-0 11:30:56.854 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [my-topic,0] on broker 2: my-topic-0 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.855 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [my-topic,0] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.856 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.856 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.856 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 72,4 replyHeader:: 72,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.858 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-21\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.858 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-47 with message format version 2 11:30:56.861 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-21 with message format version 2 11:30:56.863 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-47 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.864 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.864 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,47] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
11:30:56.865 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,47] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-47 11:30:56.865 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,47] on broker 0: __consumer_offsets-47 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.865 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,47] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.865 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-21 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.866 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node 2 11:30:56.866 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.867 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x183 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.867 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x183 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.867 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,21] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
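All fifty __consumer_offsets partitions get materialized this way, always with cleanup.policy=compact and segment.bytes=104857600, and each partition in this excerpt is created on exactly one broker, which suggests (the excerpt alone cannot prove it) an offsets topic replication factor of 1. A hedged sketch of the kind of broker-side overrides an embedded 0.11 cluster for an exactly-once test is usually given; the property names are standard broker settings, but the values are inferred from this log rather than copied from the project's test setup:

import java.util.Properties

// Hedged sketch of broker overrides for a small embedded cluster; the values
// are inferred from the log above, not taken from the project's configuration.
object EmbeddedBrokerOverridesSketch {
  val overrides: Properties = {
    val props = new Properties()
    props.put("offsets.topic.num.partitions", "50")             // matches the 50 __consumer_offsets-* partitions
    props.put("offsets.topic.replication.factor", "1")          // single replica per partition in this excerpt
    props.put("transaction.state.log.replication.factor", "1")  // commonly lowered for small test clusters
    props.put("transaction.state.log.min.isr", "1")
    props
  }
}
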
11:30:56.867 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 387,4 replyHeader:: 387,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.867 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,21] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-21 11:30:56.867 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.867 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.868 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,21] on broker 1: __consumer_offsets-21 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.868 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 73,8 replyHeader:: 73,204,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:56.868 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,21] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.870 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-31\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.871 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.872 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 72,4 replyHeader:: 72,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.872 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.874 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.874 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 74,4 replyHeader:: 74,204,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.877 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-31 with message format version 2 11:30:56.879 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x4b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.879 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x4b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.879 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 75,4 replyHeader:: 75,204,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.881 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 2 11:30:56.882 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-38\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.883 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 16 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = []) 11:30:56.884 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63325 (id: 0 rack: null) 11:30:56.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x4c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.884 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x4c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.885 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 76,4 replyHeader:: 76,204,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.886 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x184 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.886 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x184 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.886 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 388,8 replyHeader:: 388,204,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:56.887 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-38 with message format version 2 11:30:56.887 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x185 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.887 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x185 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.888 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 389,4 replyHeader:: 389,204,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:56.888 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x4d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.888 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x4d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:56.889 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null 
finished:false header:: 77,3 replyHeader:: 77,204,0 request:: '/brokers/topics/my-topic,F response:: s{46,46,1505298655478,1505298655478,0,1,0,0,36,1,133} 11:30:56.890 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 15 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:56.890 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 13 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63325 (id: 0 rack: null)], partitions = []) 11:30:56.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x186 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x186 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:56.891 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-31 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.891 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 390,4 replyHeader:: 390,204,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:56.892 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-38 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.892 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.893 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,31] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.893 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
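The WARN from producer-1 (my-topic=LEADER_NOT_AVAILABLE, correlation id 15) is the usual transient error when a producer requests metadata for a topic whose leadership is still being established; my-topic-0 only elected its leader a few milliseconds earlier, so the client simply refreshes its metadata and retries. For reference, a hedged sketch of a 0.11 idempotent/transactional producer writing to my-topic, which rides out such retriable errors inside the client; the transactional id and record contents are made up, and this is not the test's actual producer:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

// Hedged sketch (not the test's actual producer): a 0.11 transactional producer
// writing to my-topic. Transient LEADER_NOT_AVAILABLE metadata errors like the
// one logged above are retried internally by the client.
object TransactionalProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325") // embedded broker 0 from the log
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
    props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "exactly-once-producer") // illustrative id

    val producer = new KafkaProducer[String, String](props)
    producer.initTransactions()
    producer.beginTransaction()
    producer.send(new ProducerRecord[String, String]("my-topic", "key", "value"))
    producer.commitTransaction()
    producer.close()
  }
}
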
11:30:56.893 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,38] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.893 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,31] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-31 11:30:56.893 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,38] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-38 11:30:56.893 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,31] on broker 2: __consumer_offsets-31 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.894 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,38] on broker 0: __consumer_offsets-38 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.894 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,31] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.894 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,38] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x187 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x187 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.895 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 78,4 replyHeader:: 78,204,0 request:: '/config/topics/__consumer_offsets,F response:: 
#7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.895 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 391,4 replyHeader:: 391,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x188 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x188 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:56.896 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 392,4 replyHeader:: 392,204,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:56.898 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-18\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.902 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x189 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.902 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x189 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:56.902 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-18 with message format version 2 11:30:56.902 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 393,3 replyHeader:: 393,204,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:56.903 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298656903, latencyMs=19, disconnected=false, 
requestHeader={api_key=10,api_version=1,correlation_id=33,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:56.904 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 11:30:56.904 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:56.906 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-18 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.907 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.907 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,18] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.908 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,18] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-18 11:30:56.908 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,18] on broker 1: __consumer_offsets-18 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.908 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,18] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.909 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.909 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.910 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 73,4 replyHeader:: 73,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.912 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-19\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.914 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-35\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.917 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-19 with message format version 2 11:30:56.919 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-35 with message format version 2 11:30:56.920 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-19 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.921 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.921 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,19] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
11:30:56.922 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,19] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-19 11:30:56.922 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,19] on broker 2: __consumer_offsets-19 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.922 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-15\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.922 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,19] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.923 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-35 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.923 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.923 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.923 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 79,4 replyHeader:: 79,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.924 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.924 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,35] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
11:30:56.925 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,35] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-35 11:30:56.925 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,35] on broker 0: __consumer_offsets-35 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.925 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,35] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x18a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x18a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.926 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 394,4 replyHeader:: 394,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.926 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-15 with message format version 2 11:30:56.931 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-15 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.931 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.931 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,15] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.932 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,15] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-15 11:30:56.932 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,15] on broker 1: __consumer_offsets-15 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.932 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,15] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.933 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.933 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.934 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 74,4 replyHeader:: 74,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.938 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-28\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.939 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-44\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.942 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-28 with message format version 2 11:30:56.943 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-44 with message format version 2 11:30:56.947 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-44 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.947 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-12\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.947 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-28 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.947 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.947 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:56.948 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,28] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.948 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,44] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.948 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,28] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-28 11:30:56.948 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,44] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-44 11:30:56.948 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,44] on broker 0: __consumer_offsets-44 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.949 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,28] on broker 2: __consumer_offsets-28 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.949 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,44] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.949 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,28] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.950 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x18b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.950 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x18b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.950 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.950 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.950 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 395,4 replyHeader:: 395,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.950 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 80,4 replyHeader:: 80,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.950 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-12 with message format version 2 11:30:56.954 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-12 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.955 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:56.955 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,12] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.955 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,12] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-12 11:30:56.955 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,12] on broker 1: __consumer_offsets-12 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.955 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,12] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.956 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x4b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.956 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x4b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.956 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 75,4 replyHeader:: 75,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.962 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-32\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.963 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-25\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.966 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-32 with message format version 2 11:30:56.967 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-25 with message format version 2 11:30:56.969 
[kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-9\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.970 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-32 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.970 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-25 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.971 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.971 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:56.971 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,25] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.971 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,32] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.972 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,25] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-25 11:30:56.972 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,32] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-32 11:30:56.972 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,25] on broker 2: __consumer_offsets-25 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:56.972 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,32] on broker 0: __consumer_offsets-32 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.972 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-9 with message format version 2 11:30:56.972 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,25] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.972 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,32] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.973 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x18c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.973 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x18c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.973 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.973 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.973 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 396,4 replyHeader:: 396,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.973 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 81,4 replyHeader:: 81,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.975 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-9 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:56.976 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:56.976 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,9] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:56.977 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,9] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-9 11:30:56.977 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,9] on broker 1: __consumer_offsets-9 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:56.977 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,9] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:56.978 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x4c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x4c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:56.979 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 76,4 replyHeader:: 76,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:56.988 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__consumer_offsets-41\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.989 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-16\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.990 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node 2 11:30:56.990 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request 
(type=MetadataRequest, topics=) to node 2 11:30:56.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x52 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x52 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.993 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 82,8 replyHeader:: 82,204,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:56.993 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-41 with message format version 2 11:30:56.994 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 17 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = []) 11:30:56.994 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63361 (id: 2 rack: null) 11:30:56.994 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-6\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:56.995 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-16 with message format version 2 11:30:56.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x53 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.996 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x53 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:56.996 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getChildren cxid:0x54 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.996 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getChildren cxid:0x54 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:56.996 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 83,4 replyHeader:: 83,204,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 
11:30:56.996 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 84,8 replyHeader:: 84,204,0 request:: '/brokers/ids,F response:: v{'0,'1,'2} 11:30:56.997 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-6 with message format version 2 11:30:56.999 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:57.000 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:57.000 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 85,4 replyHeader:: 85,204,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:57.000 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:57.000 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:57.000 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 86,4 replyHeader:: 86,204,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:57.001 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-6 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:57.001 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-16 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:57.002 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:57.003 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:57.003 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,6] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:57.003 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,16] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:57.003 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,6] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-6 11:30:57.003 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,6] on broker 1: __consumer_offsets-6 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:57.004 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,16] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-16 11:30:57.004 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,6] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:57.004 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,16] on broker 2: __consumer_offsets-16 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:57.004 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,16] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:57.004 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x57 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:57.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x57 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:57.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x4d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:57.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x4d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:57.005 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 87,4 replyHeader:: 87,204,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:57.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x58 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:57.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x58 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:57.005 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 77,4 replyHeader:: 77,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:57.005 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 88,4 replyHeader:: 88,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:57.006 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log 
__consumer_offsets-41 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:57.007 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:57.007 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,41] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:57.008 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x59 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:57.008 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x59 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:57.008 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 89,4 replyHeader:: 89,204,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:57.008 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,41] on broker 0: No checkpointed highwatermark is found for partition __consumer_offsets-41 11:30:57.009 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,41] on broker 0: __consumer_offsets-41 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:57.009 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,41] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:57.017 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x5a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:57.018 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x5a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:57.018 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 90,4 replyHeader:: 90,204,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:57.018 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x5b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:57.019 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x5b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-topic 11:30:57.019 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 91,3 replyHeader:: 91,204,0 request:: '/brokers/topics/my-topic,F response:: s{46,46,1505298655478,1505298655478,0,1,0,0,36,1,133} 11:30:57.021 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 16 : {my-topic=LEADER_NOT_AVAILABLE} 11:30:57.021 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 14 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = []) 11:30:57.022 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-22\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:57.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:exists cxid:0x5c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:57.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:exists cxid:0x5c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:30:57.023 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 92,3 replyHeader:: 92,204,0 request:: '/brokers/topics/__consumer_offsets,F response:: s{47,47,1505298655478,1505298655478,0,1,0,0,468,1,50} 11:30:57.024 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298657024, latencyMs=30, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=35,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) for group exactly-once 11:30:57.024 [kafka-request-handler-5] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__consumer_offsets-3\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:57.024 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group coordinator lookup for group exactly-once failed: The coordinator is not available. 11:30:57.024 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Coordinator discovery failed for group exactly-once, refreshing metadata 11:30:57.025 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task highwatermark-checkpoint with initial delay 0 ms and period 5000 ms. 11:30:57.029 [kafka-request-handler-5] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-3 with message format version 2 11:30:57.030 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-2 11:30:57.030 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-22 with message format version 2 11:30:57.031 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-2 with initial delay 0 ms and period -1 ms. 11:30:57.032 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-5 11:30:57.032 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-5 with initial delay 0 ms and period -1 ms. 11:30:57.032 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-8 11:30:57.032 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-8 with initial delay 0 ms and period -1 ms. 11:30:57.032 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-11 11:30:57.032 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-11 with initial delay 0 ms and period -1 ms. 
11:30:57.032 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-14 11:30:57.032 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-14 with initial delay 0 ms and period -1 ms. 11:30:57.033 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-17 11:30:57.033 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-17 with initial delay 0 ms and period -1 ms. 11:30:57.033 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-20 11:30:57.033 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-20 with initial delay 0 ms and period -1 ms. 11:30:57.033 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-23 11:30:57.033 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-23 with initial delay 0 ms and period -1 ms. 11:30:57.033 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-26 11:30:57.033 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-26 with initial delay 0 ms and period -1 ms. 11:30:57.033 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-29 11:30:57.033 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-29 with initial delay 0 ms and period -1 ms. 11:30:57.033 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-32 11:30:57.033 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-32 with initial delay 0 ms and period -1 ms. 11:30:57.033 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-35 11:30:57.033 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-35 with initial delay 0 ms and period -1 ms. 11:30:57.034 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-38 11:30:57.034 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-38 with initial delay 0 ms and period -1 ms. 
11:30:57.034 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-41 11:30:57.034 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-41 with initial delay 0 ms and period -1 ms. 11:30:57.034 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-44 11:30:57.034 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-44 with initial delay 0 ms and period -1 ms. 11:30:57.034 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Scheduling loading of offsets and group metadata from __consumer_offsets-47 11:30:57.034 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-47 with initial delay 0 ms and period -1 ms. 11:30:57.034 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-22 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:57.035 [kafka-request-handler-5] INFO kafka.log.Log - Completed load of log __consumer_offsets-3 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:57.035 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:57.035 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,22] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:57.036 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,22] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-22 11:30:57.036 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,22] on broker 2: __consumer_offsets-22 starts at Leader Epoch 0 from offset 0. 
Previous Leader Epoch was: -1 11:30:57.036 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,22] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:57.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x5d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:57.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x5d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:30:57.038 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 93,4 replyHeader:: 93,204,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374227d7d,s{45,45,1505298655478,1505298655478,0,0,0,0,109,0,45} 11:30:57.040 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:57.040 [kafka-request-handler-5] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,3] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:57.041 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,3] on broker 1: No checkpointed highwatermark is found for partition __consumer_offsets-3 11:30:57.041 [kafka-request-handler-5] INFO kafka.cluster.Partition - Partition [__consumer_offsets,3] on broker 1: __consumer_offsets-3 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:57.041 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-2 in 6 milliseconds. 11:30:57.041 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,3] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:57.041 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task highwatermark-checkpoint with initial delay 0 ms and period 5000 ms. 
11:30:57.042 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-0 11:30:57.042 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-0 with initial delay 0 ms and period -1 ms. 11:30:57.042 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-3 11:30:57.042 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-3 with initial delay 0 ms and period -1 ms. 11:30:57.042 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-6 11:30:57.042 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds. 11:30:57.042 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-6 with initial delay 0 ms and period -1 ms. 11:30:57.043 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-9 11:30:57.043 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-9 with initial delay 0 ms and period -1 ms. 11:30:57.043 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-3 in 0 milliseconds. 11:30:57.042 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds. 11:30:57.043 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds. 11:30:57.043 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-12 11:30:57.043 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-12 with initial delay 0 ms and period -1 ms. 11:30:57.043 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-15 11:30:57.043 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-8 in 0 milliseconds. 11:30:57.043 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-15 with initial delay 0 ms and period -1 ms. 
11:30:57.043 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-18 11:30:57.043 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-18 with initial delay 0 ms and period -1 ms. 11:30:57.043 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds. 11:30:57.044 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-21 11:30:57.044 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-21 with initial delay 0 ms and period -1 ms. 11:30:57.044 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-24 11:30:57.043 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds. 11:30:57.044 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-12 in 0 milliseconds. 11:30:57.044 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds. 11:30:57.044 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-24 with initial delay 0 ms and period -1 ms. 11:30:57.044 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds. 11:30:57.044 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-27 11:30:57.044 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-27 with initial delay 0 ms and period -1 ms. 11:30:57.044 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds. 11:30:57.044 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-30 11:30:57.044 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-18 in 0 milliseconds. 11:30:57.045 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-30 with initial delay 0 ms and period -1 ms. 11:30:57.045 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-20 in 1 milliseconds. 
11:30:57.045 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-33 11:30:57.045 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds. 11:30:57.045 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-33 with initial delay 0 ms and period -1 ms. 11:30:57.045 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-36 11:30:57.045 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds. 11:30:57.045 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds. 11:30:57.045 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds. 11:30:57.045 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-36 with initial delay 0 ms and period -1 ms. 11:30:57.045 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds. 11:30:57.045 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-39 11:30:57.045 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds. 11:30:57.045 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-39 with initial delay 0 ms and period -1 ms. 11:30:57.045 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds. 11:30:57.045 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-42 11:30:57.045 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds. 11:30:57.045 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-42 with initial delay 0 ms and period -1 ms. 11:30:57.045 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds. 
11:30:57.046 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-35 in 1 milliseconds. 11:30:57.046 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds. 11:30:57.046 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-45 11:30:57.046 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-45 with initial delay 0 ms and period -1 ms. 11:30:57.046 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds. 11:30:57.046 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Scheduling loading of offsets and group metadata from __consumer_offsets-48 11:30:57.046 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds. 11:30:57.046 [kafka-request-handler-5] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-48 with initial delay 0 ms and period -1 ms. 11:30:57.046 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds. 11:30:57.046 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds. 11:30:57.046 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds. 11:30:57.047 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-44 in 1 milliseconds. 11:30:57.047 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 1]: Finished loading offsets and group metadata from __consumer_offsets-48 in 1 milliseconds. 11:30:57.047 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 0]: Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds. 
11:30:57.095 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__consumer_offsets-13\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:57.098 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __consumer_offsets-13 with message format version 2 11:30:57.099 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node 1 11:30:57.102 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __consumer_offsets-13 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:57.102 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:57.103 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__consumer_offsets,13] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> producer, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:57.103 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,13] on broker 2: No checkpointed highwatermark is found for partition __consumer_offsets-13 11:30:57.103 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__consumer_offsets,13] on broker 2: __consumer_offsets-13 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:57.104 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,13] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:57.104 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task highwatermark-checkpoint with initial delay 0 ms and period 5000 ms. 11:30:57.104 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-22 11:30:57.104 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-22 with initial delay 0 ms and period -1 ms. 11:30:57.105 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-25 11:30:57.105 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-25 with initial delay 0 ms and period -1 ms. 
11:30:57.105 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-28 11:30:57.105 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-28 with initial delay 0 ms and period -1 ms. 11:30:57.105 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-31 11:30:57.105 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-22 in 0 milliseconds. 11:30:57.105 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-31 with initial delay 0 ms and period -1 ms. 11:30:57.105 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-34 11:30:57.105 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-34 with initial delay 0 ms and period -1 ms. 11:30:57.105 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds. 11:30:57.105 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-37 11:30:57.105 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-37 with initial delay 0 ms and period -1 ms. 11:30:57.105 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-40 11:30:57.105 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-28 in 0 milliseconds. 11:30:57.105 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-40 with initial delay 0 ms and period -1 ms. 11:30:57.106 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-43 11:30:57.106 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-43 with initial delay 0 ms and period -1 ms. 11:30:57.106 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-46 11:30:57.106 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-31 in 0 milliseconds. 11:30:57.106 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-46 with initial delay 0 ms and period -1 ms. 
11:30:57.106 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-49 11:30:57.106 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-49 with initial delay 0 ms and period -1 ms. 11:30:57.106 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-1 11:30:57.106 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-1 with initial delay 0 ms and period -1 ms. 11:30:57.106 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-4 11:30:57.106 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-4 with initial delay 0 ms and period -1 ms. 11:30:57.106 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-7 11:30:57.106 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-7 with initial delay 0 ms and period -1 ms. 11:30:57.106 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-10 11:30:57.106 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-10 with initial delay 0 ms and period -1 ms. 11:30:57.106 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-13 11:30:57.106 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds. 11:30:57.106 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-13 with initial delay 0 ms and period -1 ms. 11:30:57.107 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-16 11:30:57.107 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-37 in 0 milliseconds. 11:30:57.107 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds. 11:30:57.107 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-16 with initial delay 0 ms and period -1 ms. 11:30:57.107 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Scheduling loading of offsets and group metadata from __consumer_offsets-19 11:30:57.107 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-19 with initial delay 0 ms and period -1 ms. 
11:30:57.107 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-43 in 0 milliseconds. 11:30:57.107 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds. 11:30:57.107 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-49 in 0 milliseconds. 11:30:57.108 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds. 11:30:57.108 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds. 11:30:57.108 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-7 in 0 milliseconds. 11:30:57.108 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-10 in 0 milliseconds. 11:30:57.109 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-13 in 0 milliseconds. 11:30:57.109 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds. 11:30:57.109 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [Group Metadata Manager on Broker 2]: Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds. 
11:30:57.111 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 18 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null)], partitions = [Partition(topic = my-topic, partition = 0, leader = 2, replicas = [2], isr = [2])]) 11:30:57.112 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63325 (id: 0 rack: null) 11:30:57.121 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1505298657121, latencyMs=5, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=37,client_id=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=NONE, node=127.0.0.1:63325 (id: 0 rack: null))) for group exactly-once 11:30:57.121 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) for group exactly-once. 11:30:57.122 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 2147483647 at 127.0.0.1:63325. 11:30:57.123 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63386 on /127.0.0.1:63325 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:57.123 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:63386 11:30:57.127 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Revoking previously assigned partitions [] for group exactly-once 11:30:57.127 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] at state RUNNING: partitions [] revoked at the beginning of consumer rebalance. current assigned active tasks: [] current assigned standby tasks: [] 11:30:57.127 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Heartbeat thread for group exactly-once started 11:30:57.127 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] State transition from RUNNING to PARTITIONS_REVOKED. 11:30:57.128 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181] State transition from RUNNING to REBALANCING. 
11:30:57.128 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] suspendTasksAndState: suspending all active tasks [] and standby tasks [] 11:30:57.128 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Unsubscribed all topics or patterns and assigned partitions 11:30:57.128 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Updating suspended tasks to contain active tasks [] 11:30:57.128 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Removing all active tasks [] 11:30:57.128 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Removing all standby tasks [] 11:30:57.128 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] partition revocation took 1 ms. suspended active tasks: [] suspended standby tasks: [] previous active tasks: [] 11:30:57.130 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - (Re-)joining group exactly-once 11:30:57.130 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node 1 11:30:57.132 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 15 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63344 (id: 1 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63361 (id: 2 rack: null)], partitions = [Partition(topic = my-topic, partition = 0, leader = 2, replicas = [2], isr = [2])]) 11:30:57.133 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] found [my-topic] topics possibly matching regex 11:30:57.133 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.TopologyBuilder - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] updating builder with SubscriptionUpdates{updatedTopicSubscriptions=[my-topic]} topic(s) with possible matching regex subscription(s) 11:30:57.139 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending JoinGroup ((type: JoinGroupRequest, groupId=exactly-once, sessionTimeout=10000, rebalanceTimeout=2147483647, memberId=, protocolType=consumer, groupProtocols=org.apache.kafka.common.requests.JoinGroupRequest$ProtocolMetadata@29977195)) to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:30:57.141 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] 
DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.bytes-sent 11:30:57.141 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.bytes-received 11:30:57.141 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.latency 11:30:57.141 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2147483647 11:30:57.142 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 2147483647. Fetching API versions. 11:30:57.142 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node 2147483647. 11:30:57.143 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node 2147483647: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:57.161 [kafka-request-handler-6] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Preparing to rebalance group exactly-once with old generation 0 (__consumer_offsets-20) 11:30:57.162 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.my-topic.records-per-batch 11:30:57.163 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.my-topic.bytes 11:30:57.163 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.my-topic.compression-rate 11:30:57.163 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.my-topic.record-retries 11:30:57.163 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.my-topic.record-errors 11:30:57.190 [executor-Rebalance] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Stabilized group exactly-once generation 1 
(__consumer_offsets-20) 11:30:57.195 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful JoinGroup response for group exactly-once: org.apache.kafka.common.requests.JoinGroupResponse@4b2508fc 11:30:57.195 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Performing assignment for group exactly-once using strategy stream with subscriptions {exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer-b7ad69a5-cc35-4032-95d6-188d3c6b7e81=Subscription(topics=[my-topic])} 11:30:57.196 [kafka-request-handler-0] INFO kafka.server.epoch.LeaderEpochFileCache - Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: my-topic-0. Cache now contains 0 entries. 11:30:57.197 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Constructed client metadata {0b0d8a4e-7380-4eb4-887b-13b509f90181=ClientMetadata{hostInfo=null, consumers=[exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer-b7ad69a5-cc35-4032-95d6-188d3c6b7e81], state=[activeTasks: ([]) standbyTasks: ([]) assignedTasks: ([]) prevActiveTasks: ([]) prevAssignedTasks: ([]) capacity: 1]}} from the member subscriptions. 11:30:57.198 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Starting to validate internal topics in partition assignor. 11:30:57.198 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Completed validating internal topics in partition assignor 11:30:57.198 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Created repartition topics [] from the parsed topology. 11:30:57.198 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Starting to validate internal topics in partition assignor. 11:30:57.198 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Completed validating internal topics in partition assignor 11:30:57.198 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Created state changelog topics {} from the parsed topology. 
11:30:57.198 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Assigning tasks [0_0] to clients {0b0d8a4e-7380-4eb4-887b-13b509f90181=[activeTasks: ([]) standbyTasks: ([]) assignedTasks: ([]) prevActiveTasks: ([]) prevAssignedTasks: ([]) capacity: 1]} with number of replicas 0 11:30:57.214 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Assigned tasks to clients as {0b0d8a4e-7380-4eb4-887b-13b509f90181=[activeTasks: ([0_0]) standbyTasks: ([]) assignedTasks: ([0_0]) prevActiveTasks: ([]) prevAssignedTasks: ([]) capacity: 1]}. 11:30:57.217 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Finished assignment for group exactly-once: {exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer-b7ad69a5-cc35-4032-95d6-188d3c6b7e81=Assignment(partitions=[my-topic-0])} 11:30:57.217 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending leader SyncGroup for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null): (type=SyncGroupRequest, groupId=exactly-once, generationId=1, memberId=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer-b7ad69a5-cc35-4032-95d6-188d3c6b7e81, groupAssignment=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer-b7ad69a5-cc35-4032-95d6-188d3c6b7e81) 11:30:57.221 [kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Assignment received from leader for group exactly-once for generation 1 11:30:57.229 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key my-topic-0 unblocked 0 fetch requests. 11:30:57.231 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [my-topic,0] on broker 2: High watermark updated to 1 [0 : 74] 11:30:57.231 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key my-topic-0 unblocked 0 fetch requests. 11:30:57.232 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key my-topic-0 unblocked 0 producer requests. 11:30:57.232 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key my-topic-0 unblocked 0 DeleteRecordsRequest. 11:30:57.233 [kafka-request-handler-1] INFO kafka.server.epoch.LeaderEpochFileCache - Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: __consumer_offsets-20. Cache now contains 0 entries. 
11:30:57.235 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Produce to local log in 0 ms 11:30:57.242 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Produce-:producer-1 11:30:57.242 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name ProduceThrottleTime-:producer-1 11:30:57.244 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __consumer_offsets-20 unblocked 0 fetch requests. 11:30:57.244 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,20] on broker 0: High watermark updated to 1 [0 : 545] 11:30:57.244 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __consumer_offsets-20 unblocked 0 fetch requests. 11:30:57.244 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __consumer_offsets-20 unblocked 0 producer requests. 11:30:57.244 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __consumer_offsets-20 unblocked 0 DeleteRecordsRequest. 11:30:57.245 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Produce to local log in 0 ms offset 0 11:30:57.253 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Successfully joined group exactly-once with generation 1 11:30:57.260 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Setting newly assigned partitions [my-topic-0] for group exactly-once 11:30:57.260 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] at state PARTITIONS_REVOKED: new partitions [my-topic-0] assigned at the end of consumer rebalance. assigned active tasks: [0_0] assigned standby tasks: [] current suspended active tasks: [] current suspended standby tasks: [] previous active tasks: [] 11:30:57.261 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] State transition from PARTITIONS_REVOKED to ASSIGNING_PARTITIONS. 11:30:57.261 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181] State transition from REBALANCING to REBALANCING. 
11:30:57.261 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Adding assigned tasks as active {0_0=[my-topic-0]} 11:30:57.261 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] New active tasks to be created: {0_0=[my-topic-0]} 11:30:57.261 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Creating active task 0_0 with assigned partitions [[my-topic-0]] 11:30:57.270 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Creating shared producer client 11:30:57.271 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [127.0.0.1:63325] buffer.memory = 33554432 client.id = exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-producer compression.type = none connections.max.idle.ms = 540000 enable.idempotence = false interceptor.classes = null key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 100 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 10 retry.backoff.ms = 100 sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer 11:30:57.277 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time 11:30:57.278 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records 11:30:57.278 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = 
null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:57.278 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name produce-throttle-time 11:30:57.279 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:57.279 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:57.280 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:57.280 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:57.280 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:57.280 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:57.280 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:57.281 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name batch-size 11:30:57.281 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name compression-rate 11:30:57.281 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name queue-time 11:30:57.281 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name request-time 11:30:57.281 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-per-request 11:30:57.281 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-retries 11:30:57.281 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name errors 11:30:57.281 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-size-max 11:30:57.282 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name batch-split-rate 11:30:57.282 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.11.0.0 11:30:57.282 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : cb8625948210849f 11:30:57.282 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started 11:30:57.282 [kafka-producer-network-thread | exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-producer] DEBUG 
org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread. 11:30:57.287 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StateDirectory - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Acquired state dir lock for task 0_0 11:30:57.289 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.ProcessorStateManager - task [0_0] Created state store manager for task 0_0 with the acquired state dir lock 11:30:57.293 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name commit 11:30:57.294 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name 0_0-commit 11:30:57.303 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Initializing 11:30:57.303 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.AbstractTask - task [0_0] Initializing state stores 11:30:57.303 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.AbstractTask - task [0_0] Updating store offset limits {} 11:30:57.303 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group exactly-once fetching committed offsets for partitions: [my-topic-0] 11:30:57.310 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group exactly-once has no committed offset for partition my-topic-0 11:30:57.310 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.ProcessorStateManager - task [0_0] Register global stores [] 11:30:57.310 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Initializing processor nodes of the topology 11:30:57.311 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name process 11:30:57.311 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-SOURCE-0000000000-process 11:30:57.312 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name punctuate 11:30:57.312 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-SOURCE-0000000000-punctuate 11:30:57.312 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name create 11:30:57.312 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-SOURCE-0000000000-create 11:30:57.312 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name destroy 11:30:57.312 
[exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-SOURCE-0000000000-destroy 11:30:57.312 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name forward 11:30:57.312 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-SOURCE-0000000000-forward 11:30:57.314 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-FOREACH-0000000001-process 11:30:57.314 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-FOREACH-0000000001-punctuate 11:30:57.314 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-FOREACH-0000000001-create 11:30:57.315 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-FOREACH-0000000001-destroy 11:30:57.315 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-FOREACH-0000000001-forward 11:30:57.315 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Created active task 0_0 with assigned partitions [my-topic-0] 11:30:57.315 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StoreChangelogReader - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Starting restoring state stores from changelog topics [] 11:30:57.315 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Unsubscribed all topics or patterns and assigned partitions 11:30:57.315 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Unsubscribed all topics or patterns and assigned partitions 11:30:57.316 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StoreChangelogReader - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Took 1 ms to restore all active states 11:30:57.316 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Adding assigned standby tasks {} 11:30:57.316 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] New standby tasks to be created: {} 11:30:57.317 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Unsubscribed all topics or patterns and assigned partitions 11:30:57.317 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] 
INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] State transition from ASSIGNING_PARTITIONS to RUNNING. 11:30:57.317 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181] State transition from REBALANCING to RUNNING. 11:30:57.317 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] partition assignment took 57 ms. current active tasks: [0_0] current standby tasks: [] 11:30:57.318 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group exactly-once fetching committed offsets for partitions: [my-topic-0] 11:30:57.320 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group exactly-once has no committed offset for partition my-topic-0 11:30:57.325 [kafka-request-handler-4] DEBUG kafka.log.Log - Searching offset for timestamp -2 11:30:57.328 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Handling ListOffsetResponse response for my-topic-0. Fetched offset 0, timestamp -1 11:30:57.329 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Resetting offset for partition my-topic-0 to offset 0. 11:30:57.329 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 0 to node 127.0.0.1:63361 (id: 2 rack: null) 11:30:57.330 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:30:57.359 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Fetch-:exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer 11:30:57.359 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name FetchThrottleTime-:exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer 11:30:57.371 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 0 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=1, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=74) 11:30:57.374 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.my-topic.bytes-fetched 11:30:57.374 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.my-topic.records-fetched 11:30:57.374 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name my-topic-0.records-lag 11:30:57.375 
[exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 1 to node 127.0.0.1:63361 (id: 2 rack: null) 11:30:57.375 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) seen key foo with value bar 11:30:57.838 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.streams.StreamsConfig - Using commit.interval.ms default value of 100 as exactly once is enabled. 11:30:57.838 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.streams.StreamsConfig - StreamsConfig values: application.id = exactly-once application.server = bootstrap.servers = [127.0.0.1:63325] buffered.records.per.partition = 1000 cache.max.bytes.buffering = 10485760 client.id = commit.interval.ms = 100 connections.max.idle.ms = 540000 default.key.serde = class org.apache.kafka.common.serialization.Serdes$ByteArraySerde default.timestamp.extractor = class org.apache.kafka.streams.processor.FailOnInvalidTimestamp default.value.serde = class org.apache.kafka.common.serialization.Serdes$ByteArraySerde key.serde = null metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 num.standby.replicas = 0 num.stream.threads = 1 partition.grouper = class org.apache.kafka.streams.processor.DefaultPartitionGrouper poll.ms = 100 processing.guarantee = exactly_once receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 replication.factor = 1 request.timeout.ms = 40000 retry.backoff.ms = 100 rocksdb.config.setter = null security.protocol = PLAINTEXT send.buffer.bytes = 131072 state.cleanup.delay.ms = 600000 state.dir = C:\Users\Ryan\AppData\Local\Temp\dd18537f-7701-439c-8b57-f758ce707d935414539244310879072 timestamp.extractor = null value.serde = null windowstore.changelog.additional.retention.ms = 86400000 zookeeper.connect = 11:30:57.838 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = 1 batch.size = 16384 bootstrap.servers = [127.0.0.1:63325] buffer.memory = 33554432 client.id = compression.type = none connections.max.idle.ms = 540000 enable.idempotence = true interceptor.classes = null key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 0 retry.backoff.ms = 100 sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = null ssl.key.password = null ssl.keymanager.algorithm = 
SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = dd18537f-7701-439c-8b57-f758ce707d93 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:30:57.854 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.clients.producer.KafkaProducer - Instantiated a transactional producer. 11:30:57.854 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.clients.producer.KafkaProducer - Overriding the default retries config to the recommended value of 2147483647 since the idempotent producer is enabled. 11:30:57.854 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.clients.producer.KafkaProducer - Overriding the default max.in.flight.requests.per.connection to 1 since idempontence is enabled. 11:30:57.854 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.clients.producer.KafkaProducer - Overriding the default acks to all since idempotence is enabled. 11:30:57.854 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time 11:30:57.854 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records 11:30:57.855 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:57.855 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name produce-throttle-time 11:30:57.856 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:57.856 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:57.856 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:57.856 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:57.856 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:57.856 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:57.856 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:57.857 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name batch-size 11:30:57.857 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name compression-rate 11:30:57.857 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name queue-time 11:30:57.857 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name request-time 11:30:57.857 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with 
name records-per-request 11:30:57.857 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-retries 11:30:57.857 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name errors 11:30:57.857 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-size-max 11:30:57.857 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name batch-split-rate 11:30:57.857 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.11.0.0 11:30:57.857 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : cb8625948210849f 11:30:57.857 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started 11:30:57.858 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread. 11:30:57.858 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2.commit-latency 11:30:57.858 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2.poll-latency 11:30:57.858 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2.process-latency 11:30:57.858 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2.punctuate-latency 11:30:57.858 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2.task-created 11:30:57.858 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2.task-closed 11:30:57.858 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name thread.exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2.skipped-records 11:30:57.859 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Creating consumer client 11:30:57.859 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [127.0.0.1:63325] check.crcs = true client.id = exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2-consumer connections.max.idle.ms = 540000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = exactly-once heartbeat.interval.ms = 3000 interceptor.classes = null internal.leave.group.on.close = false isolation.level = read_committed key.deserializer = class 
org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 2147483647 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [org.apache.kafka.streams.processor.internals.StreamPartitionAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 305000 retry.backoff.ms = 100 sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer 11:30:57.859 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Starting the Kafka consumer 11:30:57.859 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:57.859 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name fetch-throttle-time 11:30:57.861 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:57.861 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:57.861 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:57.861 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:57.861 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:57.861 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:57.861 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:57.862 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:57.863 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:57.863 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:57.863 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with 
name bytes-sent-received: 11:30:57.863 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:57.863 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:57.863 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:57.863 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:57.863 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name heartbeat-latency 11:30:57.864 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name join-latency 11:30:57.864 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name sync-latency 11:30:57.864 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name commit-latency 11:30:57.864 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-fetched 11:30:57.864 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-fetched 11:30:57.864 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name fetch-latency 11:30:57.864 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-lag 11:30:57.864 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.11.0.0 11:30:57.864 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : cb8625948210849f 11:30:57.864 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Kafka consumer created 11:30:57.864 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Creating restore consumer client 11:30:57.865 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [127.0.0.1:63325] check.crcs = true client.id = exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2-restore-consumer connections.max.idle.ms = 540000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = heartbeat.interval.ms = 3000 interceptor.classes = null internal.leave.group.on.close = false isolation.level = read_committed key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 2147483647 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 305000 retry.backoff.ms = 100 sasl.jaas.config = null 
sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer 11:30:57.865 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Starting the Kafka consumer 11:30:57.865 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:57.865 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name fetch-throttle-time 11:30:57.867 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:57.867 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:57.867 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:57.867 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:57.867 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:57.867 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:57.867 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:57.867 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name heartbeat-latency 11:30:57.867 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name join-latency 11:30:57.867 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name sync-latency 11:30:57.868 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name commit-latency 11:30:57.868 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-fetched 11:30:57.868 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-fetched 11:30:57.868 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name fetch-latency 11:30:57.868 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-lag 11:30:57.868 [pool-6-thread-1-ScalaTest-running-Tests] INFO 
org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.11.0.0 11:30:57.868 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : cb8625948210849f 11:30:57.868 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Kafka consumer created 11:30:57.869 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] State transition from CREATED to RUNNING. 11:30:57.869 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196] Starting Kafka Stream process. 11:30:57.869 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:57.871 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 11:30:57.871 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 11:30:57.871 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 11:30:57.871 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 11:30:57.871 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 11:30:57.871 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 11:30:57.871 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 11:30:57.871 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [127.0.0.1:63325 (id: -1 rack: null)], partitions = []) 11:30:57.871 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node -1 at 127.0.0.1:63325. 11:30:57.872 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent 11:30:57.872 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received 11:30:57.872 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency 11:30:57.872 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63399 on /127.0.0.1:63325 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:57.872 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 11:30:57.872 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node -1. Fetching API versions. 
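The second StreamsConfig dump above differs from the first in one key respect: processing.guarantee = exactly_once. That is also why commit.interval.ms falls back to the 100 ms exactly-once default noted in the log and why this instance's consumers (main and restore, above) are created with isolation.level = read_committed. In application code the switch is a single property; a sketch, reusing the assumed setup from the earlier snippet:

  import java.util.Properties
  import org.apache.kafka.streams.StreamsConfig

  val eosProps = new Properties()
  eosProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")
  eosProps.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
  // exactly_once makes the stream thread's consumers read_committed and lowers the
  // default commit.interval.ms to 100, as the config dumps above show.
  eosProps.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)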
11:30:57.872 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node -1. 11:30:57.872 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:63399 11:30:57.873 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node -1: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:57.887 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 1 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=1, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:30:57.887 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 1 to node 127.0.0.1:63361 (id: 2 rack: null) 11:30:57.887 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:30:57.992 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - Connection with /127.0.0.1 disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:87) at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:75) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:203) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:167) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:379) at org.apache.kafka.common.network.Selector.poll(Selector.java:326) at kafka.network.Processor.poll(SocketServer.scala:499) at kafka.network.Processor.run(SocketServer.scala:435) at java.lang.Thread.run(Unknown Source) 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name connections-closed: 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG 
org.apache.kafka.common.metrics.Metrics - Removed sensor with name connections-created: 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-sent-received: 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-sent: 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-received: 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name select-time: 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name io-time: 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node--1.bytes-sent 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node--1.bytes-received 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node--1.latency 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196] State transition from CREATED to RUNNING. 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196] Started Kafka Stream process 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Starting 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Subscribed to pattern: my-topic 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending GroupCoordinator request for group exactly-once to broker 127.0.0.1:63325 (id: -1 rack: null) 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node -1 at 127.0.0.1:63325. 
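Both consumers created for StreamThread-2 above use isolation.level = read_committed, in contrast to the READ_UNCOMMITTED fetches issued earlier by StreamThread-1. For reference, the equivalent setting on a plain KafkaConsumer looks roughly like the following standalone sketch (the group id here is hypothetical, not from the test):

  import java.util.{Collections, Properties}
  import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
  import org.apache.kafka.common.serialization.ByteArrayDeserializer

  val cProps = new Properties()
  cProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
  cProps.put(ConsumerConfig.GROUP_ID_CONFIG, "read-committed-demo")   // hypothetical group id
  cProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[ByteArrayDeserializer].getName)
  cProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[ByteArrayDeserializer].getName)
  // Only records from committed transactions (plus non-transactional records) are returned.
  cProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")

  val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](cProps)
  consumer.subscribe(Collections.singletonList("my-topic"))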
11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Transition from state UNINITIALIZED to INITIALIZING 11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] ProducerId set to -1 with epoch -1 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent 11:30:57.992 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63400 on /127.0.0.1:63325 and assigned it to processor 2, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received 11:30:57.992 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-2] DEBUG kafka.network.Processor - Processor 2 listening to new connection from /127.0.0.1:63400 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node -1. Fetching API versions. 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node -1. 
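The TransactionManager entries here (state UNINITIALIZED to INITIALIZING, ProducerId set to -1 with epoch -1, followed below by the enqueued FindCoordinatorRequest with coordinatorType=TRANSACTION and InitProducerIdRequest) are the bootstrap that KafkaProducer.initTransactions() performs for the transactional producer the test instantiated earlier (transactional.id dd18537f-7701-439c-8b57-f758ce707d93). A minimal usage sketch of such a producer; the record key and value are assumptions:

  import java.util.Properties
  import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
  import org.apache.kafka.common.serialization.StringSerializer

  val pProps = new Properties()
  pProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
  pProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "dd18537f-7701-439c-8b57-f758ce707d93")
  pProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
  pProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  pProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

  val producer = new KafkaProducer[String, String](pProps)
  producer.initTransactions()   // FindCoordinator(TRANSACTION) + InitProducerId, as logged
  producer.beginTransaction()
  producer.send(new ProducerRecord[String, String]("my-topic", "foo", "bar"))  // assumed payload
  producer.commitTransaction()
  producer.close()

Setting transactional.id also makes the client override retries to 2147483647, max.in.flight.requests.per.connection to 1 and acks to all, exactly as the INFO lines further up report.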
11:30:57.992 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Enqueuing transactional request (type=InitProducerIdRequest, transactionalId=dd18537f-7701-439c-8b57-f758ce707d93, transactionTimeoutMs=60000) 11:30:57.992 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Enqueuing transactional request (type=FindCoordinatorRequest, coordinatorKey=dd18537f-7701-439c-8b57-f758ce707d93, coordinatorType=TRANSACTION) 11:30:57.992 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Enqueuing transactional request (type=InitProducerIdRequest, transactionalId=dd18537f-7701-439c-8b57-f758ce707d93, transactionTimeoutMs=60000) 11:30:57.992 [kafka-request-handler-3] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-:exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2-consumer 11:30:57.992 [kafka-request-handler-3] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name RequestThrottleTime-:exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2-consumer 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node -1: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=) to node -1 11:30:57.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 2 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = [Partition(topic = my-topic, partition = 0, leader = 2, replicas = [2], isr = [2])]) 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received GroupCoordinator response 
ClientResponse(receivedTimeMs=1505298658008, latencyMs=16, disconnected=false, requestHeader={api_key=10,api_version=1,correlation_id=0,client_id=exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2-consumer}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=NONE, node=127.0.0.1:63325 (id: 0 rack: null))) for group exactly-once 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) for group exactly-once. 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 2147483647 at 127.0.0.1:63325. 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Revoking previously assigned partitions [] for group exactly-once 11:30:58.008 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63401 on /127.0.0.1:63325 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] at state RUNNING: partitions [] revoked at the beginning of consumer rebalance. current assigned active tasks: [] current assigned standby tasks: [] 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] State transition from RUNNING to PARTITIONS_REVOKED. 11:30:58.008 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:63401 11:30:58.008 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Heartbeat thread for group exactly-once started 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196] State transition from RUNNING to REBALANCING. 
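The cluster of state-transition lines here (StreamThread RUNNING to PARTITIONS_REVOKED, stream-client RUNNING to REBALANCING, and later back to RUNNING once partitions are assigned) can also be observed from application code via a state listener. A sketch, assuming the hypothetical `streams` instance from the first snippet above:

  import org.apache.kafka.streams.KafkaStreams

  // Register before streams.start(); prints the same transitions the log shows.
  streams.setStateListener(new KafkaStreams.StateListener {
    override def onChange(newState: KafkaStreams.State, oldState: KafkaStreams.State): Unit =
      println(s"State transition from $oldState to $newState")
  })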
11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] suspendTasksAndState: suspending all active tasks [] and standby tasks [] 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Unsubscribed all topics or patterns and assigned partitions 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Updating suspended tasks to contain active tasks [] 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Removing all active tasks [] 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Removing all standby tasks [] 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] partition revocation took 0 ms. suspended active tasks: [] suspended standby tasks: [] previous active tasks: [] 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - (Re-)joining group exactly-once 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] found [my-topic] topics possibly matching regex 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.TopologyBuilder - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] updating builder with SubscriptionUpdates{updatedTopicSubscriptions=[my-topic]} topic(s) with possible matching regex subscription(s) 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending JoinGroup ((type: JoinGroupRequest, groupId=exactly-once, sessionTimeout=10000, rebalanceTimeout=2147483647, memberId=, protocolType=consumer, groupProtocols=org.apache.kafka.common.requests.JoinGroupRequest$ProtocolMetadata@1d833ab8)) to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.bytes-sent 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.bytes-received 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.latency 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG 
org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2147483647 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 2147483647. Fetching API versions. 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node 2147483647. 11:30:58.008 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node 2147483647: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:58.008 [kafka-request-handler-5] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Preparing to rebalance group exactly-once with old generation 1 (__consumer_offsets-20) 11:30:58.092 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node -1 at 127.0.0.1:63325. 11:30:58.092 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent 11:30:58.092 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63402 on /127.0.0.1:63325 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:58.092 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received 11:30:58.092 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:63402 11:30:58.092 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency 11:30:58.092 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 11:30:58.092 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node -1. Fetching API versions. 
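The broker-side line above, "Preparing to rebalance group exactly-once with old generation 1", is what one would expect from starting this second KafkaStreams instance under the same application.id while the first instance (StreamThread-1) is still running: both join the consumer group "exactly-once", so the coordinator bumps the generation and redistributes my-topic-0. Sketched with a hypothetical buildTopology() helper that constructs the same KStreamBuilder as the first snippet, and the eosProps from the exactly-once sketch:

  // A second instance with the same application.id joins the existing consumer group,
  // prompting the coordinator to rebalance its partitions, as logged above.
  val second = new KafkaStreams(buildTopology(), eosProps)
  second.start()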
11:30:58.092 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node -1. 11:30:58.092 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-:producer-2 11:30:58.092 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name RequestThrottleTime-:producer-2 11:30:58.092 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node -1: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:58.092 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.Sender - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Sending transactional request (type=FindCoordinatorRequest, coordinatorKey=dd18537f-7701-439c-8b57-f758ce707d93, coordinatorType=TRANSACTION) to node 127.0.0.1:63325 (id: -1 rack: null) 11:30:58.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x18d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:58.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x18d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:58.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:58.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:58.092 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 397,8 replyHeader:: 397,204,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:58.092 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 1ms 11:30:58.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData 
cxid:0x18e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:58.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x18e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:58.092 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 398,4 replyHeader:: 398,204,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:58.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x18f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:58.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x18f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:58.092 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 399,4 replyHeader:: 399,204,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:58.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x190 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:58.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x190 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:58.092 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 400,4 replyHeader:: 400,204,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:58.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x191 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 
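The '#7b22...' payloads in the getData replies for /brokers/ids/0, /brokers/ids/1 and /brokers/ids/2 above are hex-encoded broker registration JSON. A small decoder sketch makes them readable; the hex string below is the broker-0 payload copied verbatim from the reply above:

    object DecodeBrokerRegistration extends App {
      // turn "7b22..." into the ASCII text it encodes
      def decodeHex(hex: String): String =
        hex.grouped(2).map(b => Integer.parseInt(b, 16).toChar).mkString

      val broker0 = "7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d"
      println(decodeHex(broker0))
      // prints: {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://127.0.0.1:63325"],"jmx_port":-1,"host":"127.0.0.1","timestamp":"1505298653236","port":63325,"version":4}
    }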
11:30:58.092 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x191 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.108 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 401,3 replyHeader:: 401,204,-101 request:: '/brokers/topics/__transaction_state,F response:: 11:30:58.108 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x192 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:58.108 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x192 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:58.108 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 402,8 replyHeader:: 402,204,0 request:: '/brokers/topics,T response:: v{'my-topic,'__consumer_offsets} 11:30:58.108 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:setData cxid:0x193 zxid:0xcd txntype:-1 reqpath:n/a Error Path:/config/topics/__transaction_state Error:KeeperErrorCode = NoNode for /config/topics/__transaction_state 11:30:58.124 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:setData cxid:0x193 zxid:0xcd txntype:-1 reqpath:n/a 11:30:58.124 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:58.124 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 403,5 replyHeader:: 403,205,-101 request:: '/config/topics/__transaction_state,#7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22756e636c65616e2e6c65616465722e656c656374696f6e2e656e61626c65223a2266616c7365222c226d696e2e696e73796e632e7265706c69636173223a2232222c22636c65616e75702e706f6c696379223a22636f6d70616374222c22636f6d7072657373696f6e2e74797065223a22756e636f6d70726573736564227d7d,-1 response:: 11:30:58.124 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x194 zxid:0xce txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics 11:30:58.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x194 zxid:0xce txntype:-1 reqpath:n/a 11:30:58.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -110 11:30:58.139 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 404,1 replyHeader:: 404,206,-110 request:: '/config/topics,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:58.155 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x195 zxid:0xcf txntype:1 reqpath:n/a 11:30:58.155 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x195 zxid:0xcf txntype:1 reqpath:n/a 11:30:58.155 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 405,1 replyHeader:: 405,207,0 request:: '/config/topics/__transaction_state,#7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22756e636c65616e2e6c65616465722e656c656374696f6e2e656e61626c65223a2266616c7365222c226d696e2e696e73796e632e7265706c69636173223a2232222c22636c65616e75702e706f6c696379223a22636f6d70616374222c22636f6d7072657373696f6e2e74797065223a22756e636f6d70726573736564227d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/__transaction_state 11:30:58.155 [kafka-request-handler-7] INFO kafka.admin.AdminUtils$ - Topic creation {"version":1,"partitions":{"2":[1,0,2],"1":[0,2,1],"0":[2,1,0]}} 11:30:58.155 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x196 zxid:0xd0 txntype:1 reqpath:n/a 11:30:58.155 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x196 zxid:0xd0 txntype:1 reqpath:n/a 11:30:58.155 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification sessionid:0x15e7aca904b0001 11:30:58.155 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for sessionid 0x15e7aca904b0001 11:30:58.155 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 11:30:58.155 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkEventThread - New event: ZkEvent[Children of /brokers/topics changed sent to kafka.controller.TopicChangeListener@2812544] 11:30:58.155 [pool-6-thread-1-EventThread] DEBUG org.I0Itec.zkclient.ZkClient - Leaving process event 11:30:58.155 [ZkClient-EventThread-78-localhost:63309] DEBUG org.I0Itec.zkclient.ZkEventThread - Delivering event #6 ZkEvent[Children of /brokers/topics changed sent to kafka.controller.TopicChangeListener@2812544] 11:30:58.155 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 406,1 replyHeader:: 406,208,0 request:: '/brokers/topics/__transaction_state,#7b2276657273696f6e223a312c22706172746974696f6e73223a7b2232223a5b312c302c325d2c2231223a5b302c322c315d2c2230223a5b322c312c305d7d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__transaction_state 11:30:58.155 [kafka-request-handler-7] DEBUG kafka.admin.AdminUtils$ - Updated path /brokers/topics/__transaction_state with {"version":1,"partitions":{"2":[1,0,2],"1":[0,2,1],"0":[2,1,0]}} for replica assignment 11:30:58.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x197 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 
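The AdminUtils lines above show the broker auto-creating __transaction_state with 3 partitions, replication factor 3, the assignment {"2":[1,0,2],"1":[0,2,1],"0":[2,1,0]}, and a compacted, uncompressed config written to /config/topics/__transaction_state (the hex payload decodes to segment.bytes=104857600, unclean.leader.election.enable=false, min.insync.replicas=2, cleanup.policy=compact, compression.type=uncompressed). For illustration only, roughly the same topic could be created explicitly with the 0.11 AdminClient; here the broker does this itself, so this is a sketch and not something the test actually runs:

    import java.util.{Collections, Properties}
    import scala.collection.JavaConverters._
    import org.apache.kafka.clients.admin.{AdminClient, NewTopic}

    object TransactionStateTopicSketch extends App {
      val props = new Properties()
      props.put("bootstrap.servers", "127.0.0.1:63325")   // embedded broker from this log
      val admin = AdminClient.create(props)

      // replica assignment copied from the AdminUtils line above: partition -> replica list
      val assignment: java.util.Map[Integer, java.util.List[Integer]] = Map(
        Integer.valueOf(0) -> List(2, 1, 0).map(i => Integer.valueOf(i)).asJava,
        Integer.valueOf(1) -> List(0, 2, 1).map(i => Integer.valueOf(i)).asJava,
        Integer.valueOf(2) -> List(1, 0, 2).map(i => Integer.valueOf(i)).asJava
      ).asJava

      val topic = new NewTopic("__transaction_state", assignment).configs(Map(
        "cleanup.policy"                 -> "compact",
        "compression.type"               -> "uncompressed",
        "min.insync.replicas"            -> "2",
        "segment.bytes"                  -> "104857600",
        "unclean.leader.election.enable" -> "false"
      ).asJava)

      admin.createTopics(Collections.singleton(topic)).all().get()
      admin.close()
    }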
11:30:58.170 [kafka-request-handler-7] INFO kafka.server.KafkaApis - [KafkaApi-0] Auto creation of topic __transaction_state with 3 partitions and replication factor 3 is successful 11:30:58.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x197 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:58.170 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 407,3 replyHeader:: 407,208,0 request:: '/brokers/topics,T response:: s{7,7,1505298652598,1505298652598,0,3,0,0,0,3,208} 11:30:58.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x198 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:58.171 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x198 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:30:58.171 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Enqueuing transactional request (type=FindCoordinatorRequest, coordinatorKey=dd18537f-7701-439c-8b57-f758ce707d93, coordinatorType=TRANSACTION) 11:30:58.171 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 408,8 replyHeader:: 408,208,0 request:: '/brokers/topics,T response:: v{'my-topic,'__consumer_offsets,'__transaction_state} 11:30:58.171 [ZkClient-EventThread-78-localhost:63309] DEBUG org.I0Itec.zkclient.ZkEventThread - Delivering event #6 done 11:30:58.172 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x199 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.172 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x199 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.172 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 409,4 replyHeader:: 409,208,0 request:: '/brokers/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22706172746974696f6e73223a7b2232223a5b312c302c325d2c2231223a5b302c322c315d2c2230223a5b322c312c305d7d7d,s{208,208,1505298658155,1505298658155,0,0,0,0,64,0,208} 11:30:58.173 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__transaction_state], partition [2] are [List(1, 0, 2)] 11:30:58.173 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__transaction_state], partition [1] are [List(0, 2, 1)] 11:30:58.173 [controller-event-thread] DEBUG kafka.utils.ZkUtils - Replicas assigned to topic [__transaction_state], partition [0] are [List(2, 1, 0)] 11:30:58.174 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: New topics: [Set(__transaction_state)], deleted topics: [Set()], new partition replica assignment [Map([__transaction_state,1] -> 
List(0, 2, 1), [__transaction_state,0] -> List(2, 1, 0), [__transaction_state,2] -> List(1, 0, 2))] 11:30:58.174 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: New topic creation callback for [__transaction_state,1],[__transaction_state,0],[__transaction_state,2] 11:30:58.174 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x19a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.174 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x19a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.174 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 410,3 replyHeader:: 410,208,0 request:: '/brokers/topics/__transaction_state,T response:: s{208,208,1505298658155,1505298658155,0,0,0,0,64,0,208} 11:30:58.174 [controller-event-thread] DEBUG org.I0Itec.zkclient.ZkClient - Subscribed data changes for /brokers/topics/__transaction_state 11:30:58.175 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller 0]: New partition creation callback for [__transaction_state,1],[__transaction_state,0],[__transaction_state,2] 11:30:58.175 [controller-event-thread] INFO kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Invoking state change to NewPartition for partitions [__transaction_state,1],[__transaction_state,0],[__transaction_state,2] 11:30:58.175 [controller-event-thread] INFO kafka.controller.ReplicaStateMachine - [Replica state machine on controller 0]: Invoking state change to NewReplica for replicas [Topic=__transaction_state,Partition=1,Replica=0],[Topic=__transaction_state,Partition=2,Replica=1],[Topic=__transaction_state,Partition=0,Replica=0],[Topic=__transaction_state,Partition=2,Replica=0],[Topic=__transaction_state,Partition=0,Replica=1],[Topic=__transaction_state,Partition=2,Replica=2],[Topic=__transaction_state,Partition=0,Replica=2],[Topic=__transaction_state,Partition=1,Replica=2],[Topic=__transaction_state,Partition=1,Replica=1] 11:30:58.175 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x19b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/1/state 11:30:58.175 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x19b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/1/state 11:30:58.176 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 411,4 replyHeader:: 411,208,-101 request:: '/brokers/topics/__transaction_state/partitions/1/state,F response:: 11:30:58.176 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __transaction_state-1 11:30:58.176 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x19c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/2/state 11:30:58.176 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x19c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/2/state 11:30:58.177 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 412,4 replyHeader:: 412,208,-101 request:: '/brokers/topics/__transaction_state/partitions/2/state,F response:: 11:30:58.177 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __transaction_state-2 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x19d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/0/state 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x19d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/0/state 11:30:58.177 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 413,4 replyHeader:: 413,208,-101 request:: '/brokers/topics/__transaction_state/partitions/0/state,F response:: 11:30:58.177 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __transaction_state-0 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x19e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/2/state 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x19e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/2/state 11:30:58.177 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 414,4 replyHeader:: 414,208,-101 request:: '/brokers/topics/__transaction_state/partitions/2/state,F response:: 11:30:58.177 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __transaction_state-2 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x19f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/0/state 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x19f zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/0/state 11:30:58.177 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 415,4 replyHeader:: 415,208,-101 request:: '/brokers/topics/__transaction_state/partitions/0/state,F response:: 11:30:58.177 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __transaction_state-0 11:30:58.177 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1a0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/2/state 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1a0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/2/state 11:30:58.177 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 416,4 replyHeader:: 416,208,-101 request:: '/brokers/topics/__transaction_state/partitions/2/state,F response:: 11:30:58.177 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __transaction_state-2 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1a1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/0/state 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1a1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/0/state 11:30:58.177 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 417,4 replyHeader:: 417,208,-101 request:: '/brokers/topics/__transaction_state/partitions/0/state,F response:: 11:30:58.177 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __transaction_state-0 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1a2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/1/state 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1a2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/1/state 11:30:58.177 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 418,4 replyHeader:: 418,208,-101 request:: '/brokers/topics/__transaction_state/partitions/1/state,F response:: 11:30:58.177 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __transaction_state-1 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1a3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/1/state 11:30:58.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1a3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state/partitions/1/state 11:30:58.177 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 419,4 replyHeader:: 419,208,-101 request:: 
'/brokers/topics/__transaction_state/partitions/1/state,F response:: 11:30:58.177 [controller-event-thread] DEBUG kafka.utils.ReplicationUtils$ - Read leaderISR None for __transaction_state-1 11:30:58.177 [controller-event-thread] INFO kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Invoking state change to OnlinePartition for partitions [__transaction_state,1],[__transaction_state,0],[__transaction_state,2] 11:30:58.177 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__transaction_state,1] are: [List(0, 2, 1)] 11:30:58.177 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__transaction_state,1] to (Leader:0,ISR:0,2,1,LeaderEpoch:0,ControllerEpoch:1) 11:30:58.177 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x1a4 zxid:0xd1 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__transaction_state/partitions/1 Error:KeeperErrorCode = NoNode for /brokers/topics/__transaction_state/partitions/1 11:30:58.193 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1a4 zxid:0xd1 txntype:-1 reqpath:n/a 11:30:58.193 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:58.193 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 420,1 replyHeader:: 420,209,-101 request:: '/brokers/topics/__transaction_state/partitions/1/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b302c322c315d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:58.193 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x1a5 zxid:0xd2 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__transaction_state/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/__transaction_state/partitions 11:30:58.193 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1a5 zxid:0xd2 txntype:-1 reqpath:n/a 11:30:58.193 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:58.193 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 421,1 replyHeader:: 421,210,-101 request:: '/brokers/topics/__transaction_state/partitions/1,,v{s{31,s{'world,'anyone}}},0 response:: 11:30:58.208 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1a6 zxid:0xd3 txntype:1 reqpath:n/a 11:30:58.208 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x1a6 zxid:0xd3 txntype:1 reqpath:n/a 11:30:58.208 [pool-6-thread-1-SendThread(127.0.0.1:63309)] 
DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 422,1 replyHeader:: 422,211,0 request:: '/brokers/topics/__transaction_state/partitions,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__transaction_state/partitions 11:30:58.224 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1a7 zxid:0xd4 txntype:1 reqpath:n/a 11:30:58.224 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x1a7 zxid:0xd4 txntype:1 reqpath:n/a 11:30:58.224 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 423,1 replyHeader:: 423,212,0 request:: '/brokers/topics/__transaction_state/partitions/1,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__transaction_state/partitions/1 11:30:58.245 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1a8 zxid:0xd5 txntype:1 reqpath:n/a 11:30:58.245 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x1a8 zxid:0xd5 txntype:1 reqpath:n/a 11:30:58.245 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 424,1 replyHeader:: 424,213,0 request:: '/brokers/topics/__transaction_state/partitions/1/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a302c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b302c322c315d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__transaction_state/partitions/1/state 11:30:58.246 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__transaction_state,0] are: [List(2, 1, 0)] 11:30:58.246 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__transaction_state,0] to (Leader:2,ISR:2,1,0,LeaderEpoch:0,ControllerEpoch:1) 11:30:58.246 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x1a9 zxid:0xd6 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__transaction_state/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/__transaction_state/partitions/0 11:30:58.250 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1a9 zxid:0xd6 txntype:-1 reqpath:n/a 11:30:58.250 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:58.250 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 425,1 replyHeader:: 425,214,-101 request:: 
'/brokers/topics/__transaction_state/partitions/0/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b322c312c305d7d,v{s{31,s{'world,'anyone}}},0 response:: 11:30:58.250 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1aa zxid:0xd7 txntype:1 reqpath:n/a 11:30:58.250 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x1aa zxid:0xd7 txntype:1 reqpath:n/a 11:30:58.250 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 426,1 replyHeader:: 426,215,0 request:: '/brokers/topics/__transaction_state/partitions/0,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__transaction_state/partitions/0 11:30:58.250 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1ab zxid:0xd8 txntype:1 reqpath:n/a 11:30:58.250 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x1ab zxid:0xd8 txntype:1 reqpath:n/a 11:30:58.250 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 427,1 replyHeader:: 427,216,0 request:: '/brokers/topics/__transaction_state/partitions/0/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a322c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b322c312c305d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__transaction_state/partitions/0/state 11:30:58.250 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Live assigned replicas for partition [__transaction_state,2] are: [List(1, 0, 2)] 11:30:58.250 [controller-event-thread] DEBUG kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Initializing leader and isr for partition [__transaction_state,2] to (Leader:1,ISR:1,0,2,LeaderEpoch:0,ControllerEpoch:1) 11:30:58.250 [ProcessThread(sid:0 cport:63309):] INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x15e7aca904b0001 type:create cxid:0x1ac zxid:0xd9 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__transaction_state/partitions/2 Error:KeeperErrorCode = NoNode for /brokers/topics/__transaction_state/partitions/2 11:30:58.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1ac zxid:0xd9 txntype:-1 reqpath:n/a 11:30:58.266 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:30:58.266 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 428,1 replyHeader:: 428,217,-101 request:: '/brokers/topics/__transaction_state/partitions/2/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b312c302c325d7d,v{s{31,s{'world,'anyone}}},0 
response:: 11:30:58.269 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1ad zxid:0xda txntype:1 reqpath:n/a 11:30:58.270 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x1ad zxid:0xda txntype:1 reqpath:n/a 11:30:58.270 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 429,1 replyHeader:: 429,218,0 request:: '/brokers/topics/__transaction_state/partitions/2,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__transaction_state/partitions/2 11:30:58.272 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:create cxid:0x1ae zxid:0xdb txntype:1 reqpath:n/a 11:30:58.272 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:create cxid:0x1ae zxid:0xdb txntype:1 reqpath:n/a 11:30:58.273 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 430,1 replyHeader:: 430,219,0 request:: '/brokers/topics/__transaction_state/partitions/2/state,#7b22636f6e74726f6c6c65725f65706f6368223a312c226c6561646572223a312c2276657273696f6e223a312c226c65616465725f65706f6368223a302c22697372223a5b312c302c325d7d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__transaction_state/partitions/2/state 11:30:58.274 [controller-event-thread] INFO kafka.controller.ReplicaStateMachine - [Replica state machine on controller 0]: Invoking state change to OnlineReplica for replicas [Topic=__transaction_state,Partition=1,Replica=0],[Topic=__transaction_state,Partition=2,Replica=1],[Topic=__transaction_state,Partition=0,Replica=0],[Topic=__transaction_state,Partition=2,Replica=0],[Topic=__transaction_state,Partition=0,Replica=1],[Topic=__transaction_state,Partition=2,Replica=2],[Topic=__transaction_state,Partition=0,Replica=2],[Topic=__transaction_state,Partition=1,Replica=2],[Topic=__transaction_state,Partition=1,Replica=1] 11:30:58.276 [kafka-request-handler-7] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 2] Removed fetcher for partitions __transaction_state-0 11:30:58.276 [kafka-request-handler-2] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions __transaction_state-2 11:30:58.276 [kafka-request-handler-0] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 0] Removed fetcher for partitions __transaction_state-1 11:30:58.276 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1af zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.276 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1af zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.277 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.277 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.277 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 431,4 replyHeader:: 431,219,0 request:: '/config/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22756e636c65616e2e6c65616465722e656c656374696f6e2e656e61626c65223a2266616c7365222c226d696e2e696e73796e632e7265706c69636173223a2232222c22636c65616e75702e706f6c696379223a22636f6d70616374222c22636f6d7072657373696f6e2e74797065223a22756e636f6d70726573736564227d7d,s{207,207,1505298658139,1505298658139,0,0,0,0,180,0,207} 11:30:58.277 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x5e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.277 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 78,4 replyHeader:: 78,219,0 request:: '/config/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22756e636c65616e2e6c65616465722e656c656374696f6e2e656e61626c65223a2266616c7365222c226d696e2e696e73796e632e7265706c69636173223a2232222c22636c65616e75702e706f6c696379223a22636f6d70616374222c22636f6d7072657373696f6e2e74797065223a22756e636f6d70726573736564227d7d,s{207,207,1505298658139,1505298658139,0,0,0,0,180,0,207} 11:30:58.277 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x5e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.277 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:58.277 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:58.277 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:58.277 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:58.277 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 94,4 replyHeader:: 94,219,0 request:: '/config/topics/__transaction_state,F response:: 
#7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22756e636c65616e2e6c65616465722e656c656374696f6e2e656e61626c65223a2266616c7365222c226d696e2e696e73796e632e7265706c69636173223a2232222c22636c65616e75702e706f6c696379223a22636f6d70616374222c22636f6d7072657373696f6e2e74797065223a22756e636f6d70726573736564227d7d,s{207,207,1505298658139,1505298658139,0,0,0,0,180,0,207} 11:30:58.277 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 1ms 11:30:58.277 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 1ms 11:30:58.277 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller 0]: Preferred replicas by broker Map(2 -> Map([__consumer_offsets,19] -> List(2), [__consumer_offsets,10] -> List(2), [__consumer_offsets,40] -> List(2), [__consumer_offsets,22] -> List(2), [__transaction_state,0] -> List(2, 1, 0), [__consumer_offsets,13] -> List(2), [my-topic,0] -> List(2), [__consumer_offsets,49] -> List(2), [__consumer_offsets,28] -> List(2), [__consumer_offsets,4] -> List(2), [__consumer_offsets,37] -> List(2), [__consumer_offsets,31] -> List(2), [__consumer_offsets,46] -> List(2), [__consumer_offsets,34] -> List(2), [__consumer_offsets,25] -> List(2), [__consumer_offsets,43] -> List(2), [__consumer_offsets,7] -> List(2), [__consumer_offsets,1] -> List(2), [__consumer_offsets,16] -> List(2)), 1 -> Map([__consumer_offsets,30] -> List(1), [__consumer_offsets,39] -> List(1), [__consumer_offsets,18] -> List(1), [__consumer_offsets,0] -> List(1), [__consumer_offsets,24] -> List(1), [__consumer_offsets,33] -> List(1), [__consumer_offsets,3] -> List(1), [__consumer_offsets,21] -> List(1), [__consumer_offsets,12] -> List(1), [__consumer_offsets,15] -> List(1), [__consumer_offsets,48] -> List(1), [__consumer_offsets,6] -> List(1), [__consumer_offsets,42] -> List(1), [__transaction_state,2] -> List(1, 0, 2), [__consumer_offsets,27] -> List(1), [__consumer_offsets,45] -> List(1), [__consumer_offsets,36] -> List(1), [__consumer_offsets,9] -> List(1)), 0 -> Map([__consumer_offsets,47] -> List(0), [__consumer_offsets,29] -> List(0), [__consumer_offsets,41] -> List(0), [__consumer_offsets,17] -> List(0), [__consumer_offsets,14] -> List(0), [__consumer_offsets,26] -> List(0), [__consumer_offsets,20] -> List(0), [__consumer_offsets,5] -> List(0), [__transaction_state,1] -> List(0, 2, 1), [__consumer_offsets,8] -> List(0), [__consumer_offsets,23] -> List(0), [__consumer_offsets,11] -> List(0), [__consumer_offsets,44] -> List(0), [__consumer_offsets,32] -> List(0), [__consumer_offsets,35] -> List(0), [__consumer_offsets,38] -> List(0), [__consumer_offsets,2] -> List(0))) 11:30:58.277 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller 0]: Topics not in preferred replica Map() 11:30:58.277 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller 0]: Topics not in preferred replica Map() 11:30:58.277 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller 0]: Topics not in preferred replica Map() 11:30:58.277 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 300000 ms and period -1000 ms. 
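The controller's "Preferred replicas by broker" map above groups each partition under the first broker in its replica list; that first entry is the preferred leader, and it is exactly the leader chosen in the "Initializing leader and isr" lines (partition 0 on broker 2, partition 1 on broker 0, partition 2 on broker 1). The scheduled auto-leader-rebalance-task periodically moves leadership back to these preferred replicas. A tiny illustration using the assignments copied from the log:

    // preferred leader = head of each partition's replica list
    val assignment = Map(
      "__transaction_state-0" -> List(2, 1, 0),
      "__transaction_state-1" -> List(0, 2, 1),
      "__transaction_state-2" -> List(1, 0, 2)
    )
    val preferredLeaders = assignment.mapValues(_.head)
    // Map(__transaction_state-0 -> 2, __transaction_state-1 -> 0, __transaction_state-2 -> 1)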
11:30:58.277 [kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__transaction_state-1\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:58.277 [kafka-request-handler-0] INFO kafka.log.Log - Loading producer state from offset 0 for partition __transaction_state-1 with message format version 2 11:30:58.293 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__transaction_state-0\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:58.293 [kafka-request-handler-2] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__transaction_state-2\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:58.293 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.Sender - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Sending transactional request (type=FindCoordinatorRequest, coordinatorKey=dd18537f-7701-439c-8b57-f758ce707d93, coordinatorType=TRANSACTION) to node 127.0.0.1:63325 (id: -1 rack: null) 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x1b0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x1b0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:58.293 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 432,8 replyHeader:: 432,219,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1b1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1b1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:58.293 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 433,4 replyHeader:: 433,219,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:58.293 [kafka-request-handler-0] INFO kafka.log.Log - Completed load of log __transaction_state-1 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:58.293 [kafka-request-handler-2] 
INFO kafka.log.Log - Loading producer state from offset 0 for partition __transaction_state-2 with message format version 2 11:30:58.293 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __transaction_state-0 with message format version 2 11:30:58.293 [kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:58.293 [kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition [__transaction_state,1] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> uncompressed, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1b2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1b2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:58.293 [kafka-request-handler-0] INFO kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: No checkpointed highwatermark is found for partition __transaction_state-1 11:30:58.293 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 434,4 replyHeader:: 434,219,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:58.293 [kafka-request-handler-0] INFO kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: __transaction_state-1 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:58.293 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __transaction_state-0 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:58.293 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
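Meanwhile producer-2 keeps enqueuing FindCoordinatorRequest with coordinatorType=TRANSACTION until the new __transaction_state partitions come online; that request pattern is what a transactional producer issues from initTransactions(). A minimal sketch of such a producer, assuming plain String records; the bootstrap address is the embedded broker and the transactional.id is the UUID from the log, though any stable id would behave the same:

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
    import org.apache.kafka.common.serialization.StringSerializer

    object TransactionalProducerSketch extends App {
      val props = new Properties()
      props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
      props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
      props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
      props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "dd18537f-7701-439c-8b57-f758ce707d93")

      val producer = new KafkaProducer[String, String](props)
      producer.initTransactions()   // FindCoordinator(TRANSACTION) + InitProducerId, as seen in this log
      producer.beginTransaction()
      producer.send(new ProducerRecord[String, String]("my-topic", "key", "value")) // record contents are illustrative
      producer.commitTransaction()
      producer.close()
    }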
11:30:58.293 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__transaction_state,0] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> uncompressed, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1b3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1b3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:58.293 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: No checkpointed highwatermark is found for partition __transaction_state-0 11:30:58.293 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 435,4 replyHeader:: 435,219,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:58.293 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw -1 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0],-1 [0 : 0] 11:30:58.293 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: __transaction_state-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:58.293 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw -1 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are -1 [0 : 0],0 [0 : 0] 11:30:58.293 [kafka-request-handler-2] INFO kafka.log.Log - Completed load of log __transaction_state-2 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:58.293 [kafka-request-handler-2] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
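The "Skipping update high watermark since new hw -1 ... is not larger than old hw 0" lines are expected immediately after leader election: the high watermark is, in essence, the minimum log-end-offset across the in-sync replicas, and a follower's LEO is still unknown (-1) until its first fetch. In numbers, matching the "All LEOs are 0 ..., -1 ..." output above:

    // leader LEO is 0, follower LEO is still unreported (-1) before its first fetch
    val leos       = Seq(0L, -1L)
    val proposedHw = leos.min                                               // -1
    val currentHw  = 0L
    val newHw      = if (proposedHw > currentHw) proposedHw else currentHw // stays 0, hence "Skipping update"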
11:30:58.293 [kafka-request-handler-2] INFO kafka.log.LogManager - Created log for partition [__transaction_state,2] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> uncompressed, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:58.293 [kafka-request-handler-2] INFO kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: No checkpointed highwatermark is found for partition __transaction_state-2 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x1b4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.293 [kafka-request-handler-2] INFO kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: __transaction_state-2 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x1b4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.293 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw -1 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are -1 [0 : 0],0 [0 : 0] 11:30:58.293 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 436,3 replyHeader:: 436,219,0 request:: '/brokers/topics/__transaction_state,T response:: s{208,208,1505298658155,1505298658155,0,1,0,0,64,1,211} 11:30:58.293 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Enqueuing transactional request (type=FindCoordinatorRequest, coordinatorKey=dd18537f-7701-439c-8b57-f758ce707d93, coordinatorType=TRANSACTION) 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x5f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x5f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x4f 
zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.293 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 95,4 replyHeader:: 95,219,0 request:: '/config/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22756e636c65616e2e6c65616465722e656c656374696f6e2e656e61626c65223a2266616c7365222c226d696e2e696e73796e632e7265706c69636173223a2232222c22636c65616e75702e706f6c696379223a22636f6d70616374222c22636f6d7072657373696f6e2e74797065223a22756e636f6d70726573736564227d7d,s{207,207,1505298658139,1505298658139,0,0,0,0,180,0,207} 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1b5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.293 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 79,4 replyHeader:: 79,219,0 request:: '/config/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22756e636c65616e2e6c65616465722e656c656374696f6e2e656e61626c65223a2266616c7365222c226d696e2e696e73796e632e7265706c69636173223a2232222c22636c65616e75702e706f6c696379223a22636f6d70616374222c22636f6d7072657373696f6e2e74797065223a22756e636f6d70726573736564227d7d,s{207,207,1505298658139,1505298658139,0,0,0,0,180,0,207} 11:30:58.293 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1b5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.308 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 437,4 replyHeader:: 437,219,0 request:: '/config/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22756e636c65616e2e6c65616465722e656c656374696f6e2e656e61626c65223a2266616c7365222c226d696e2e696e73796e632e7265706c69636173223a2232222c22636c65616e75702e706f6c696379223a22636f6d70616374222c22636f6d7072657373696f6e2e74797065223a22756e636f6d70726573736564227d7d,s{207,207,1505298658139,1505298658139,0,0,0,0,180,0,207} 11:30:58.324 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__transaction_state-1\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:58.324 [kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__transaction_state-0\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:58.324 [kafka-request-handler-2] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__transaction_state-0\00000000000000000000.index with 
maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:58.324 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __transaction_state-1 with message format version 2 11:30:58.324 [kafka-request-handler-0] INFO kafka.log.Log - Loading producer state from offset 0 for partition __transaction_state-0 with message format version 2 11:30:58.324 [kafka-request-handler-2] INFO kafka.log.Log - Loading producer state from offset 0 for partition __transaction_state-0 with message format version 2 11:30:58.324 [kafka-request-handler-0] INFO kafka.log.Log - Completed load of log __transaction_state-0 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:58.324 [kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:58.324 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __transaction_state-1 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:58.324 [kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition [__transaction_state,0] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> uncompressed, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:58.324 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:58.324 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__transaction_state,1] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> uncompressed, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 
11:30:58.324 [kafka-request-handler-0] INFO kafka.cluster.Partition - Partition [__transaction_state,0] on broker 0: No checkpointed highwatermark is found for partition __transaction_state-0 11:30:58.324 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__transaction_state,1] on broker 2: No checkpointed highwatermark is found for partition __transaction_state-1 11:30:58.324 [kafka-request-handler-2] INFO kafka.log.Log - Completed load of log __transaction_state-0 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:58.324 [kafka-request-handler-2] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:58.324 [kafka-request-handler-2] INFO kafka.log.LogManager - Created log for partition [__transaction_state,0] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> uncompressed, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:58.324 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x60 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.324 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x60 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.324 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1b6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.324 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1b6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.324 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 96,4 replyHeader:: 96,219,0 request:: '/config/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22756e636c65616e2e6c65616465722e656c656374696f6e2e656e61626c65223a2266616c7365222c226d696e2e696e73796e632e7265706c69636173223a2232222c22636c65616e75702e706f6c696379223a22636f6d70616374222c22636f6d7072657373696f6e2e74797065223a22756e636f6d70726573736564227d7d,s{207,207,1505298658139,1505298658139,0,0,0,0,180,0,207} 11:30:58.324 [kafka-request-handler-2] INFO kafka.cluster.Partition - Partition [__transaction_state,0] on broker 1: No checkpointed highwatermark is found for partition __transaction_state-0 11:30:58.340 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 438,4 replyHeader:: 438,219,0 request:: '/config/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22756e636c65616e2e6c65616465722e656c656374696f6e2e656e61626c65223a2266616c7365222c226d696e2e696e73796e632e7265706c69636173223a2232222c22636c65616e75702e706f6c696379223a22636f6d70616374222c22636f6d7072657373696f6e2e74797065223a22756e636f6d70726573736564227d7d,s{207,207,1505298658139,1505298658139,0,0,0,0,180,0,207} 11:30:58.340 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.340 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__transaction_state 11:30:58.341 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 80,4 replyHeader:: 80,219,0 request:: '/config/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b227365676d656e742e6279746573223a22313034383537363030222c22756e636c65616e2e6c65616465722e656c656374696f6e2e656e61626c65223a2266616c7365222c226d696e2e696e73796e632e7265706c69636173223a2232222c22636c65616e75702e706f6c696379223a22636f6d70616374222c22636f6d7072657373696f6e2e74797065223a22756e636f6d70726573736564227d7d,s{207,207,1505298658139,1505298658139,0,0,0,0,180,0,207} 11:30:58.361 [kafka-request-handler-7] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749\__transaction_state-2\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:58.363 [kafka-request-handler-2] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316\__transaction_state-1\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:58.364 [kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081\__transaction_state-2\00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:30:58.366 [kafka-request-handler-7] INFO kafka.log.Log - Loading producer state from offset 0 for partition __transaction_state-2 with message format version 2 11:30:58.368 [kafka-request-handler-2] INFO kafka.log.Log - Loading producer state from offset 0 for partition __transaction_state-1 with message format version 2 11:30:58.368 [kafka-request-handler-0] INFO kafka.log.Log - Loading producer state from offset 0 for partition __transaction_state-2 with message format version 2 11:30:58.369 [kafka-request-handler-7] INFO kafka.log.Log - Completed load of log __transaction_state-2 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:58.370 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task 
PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:58.370 [kafka-request-handler-7] INFO kafka.log.LogManager - Created log for partition [__transaction_state,2] in C:\Users\Ryan\AppData\Local\Temp\junit8008781114691003779\junit4867741781548443749 with properties {compression.type -> uncompressed, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:58.371 [kafka-request-handler-2] INFO kafka.log.Log - Completed load of log __transaction_state-1 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:58.371 [kafka-request-handler-7] INFO kafka.cluster.Partition - Partition [__transaction_state,2] on broker 2: No checkpointed highwatermark is found for partition __transaction_state-2 11:30:58.371 [kafka-request-handler-0] INFO kafka.log.Log - Completed load of log __transaction_state-2 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms 11:30:58.371 [kafka-request-handler-2] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:30:58.371 [kafka-request-handler-2] INFO kafka.log.LogManager - Created log for partition [__transaction_state,1] in C:\Users\Ryan\AppData\Local\Temp\junit3200635937714636515\junit4084973126178679316 with properties {compression.type -> uncompressed, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:58.371 [kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:30:58.372 [kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition [__transaction_state,2] in C:\Users\Ryan\AppData\Local\Temp\junit6436486861264935659\junit2669459697751179081 with properties {compression.type -> uncompressed, message.format.version -> 0.11.0-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. 11:30:58.372 [kafka-request-handler-7] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 2] Removed fetcher for partitions __transaction_state-1,__transaction_state-2 11:30:58.372 [kafka-request-handler-2] INFO kafka.cluster.Partition - Partition [__transaction_state,1] on broker 1: No checkpointed highwatermark is found for partition __transaction_state-1 11:30:58.373 [kafka-request-handler-0] INFO kafka.cluster.Partition - Partition [__transaction_state,2] on broker 0: No checkpointed highwatermark is found for partition __transaction_state-2 11:30:58.373 [kafka-request-handler-2] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions __transaction_state-1,__transaction_state-0 11:30:58.373 [kafka-request-handler-0] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 0] Removed fetcher for partitions __transaction_state-2,__transaction_state-0 11:30:58.373 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-2 unblocked 0 producer requests. 11:30:58.373 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-1 unblocked 0 producer requests. 11:30:58.374 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-2 unblocked 0 fetch requests. 11:30:58.374 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-1 unblocked 0 fetch requests. 11:30:58.374 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-2 unblocked 0 producer requests. 11:30:58.374 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-0 unblocked 0 producer requests. 11:30:58.374 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-1 unblocked 0 producer requests. 11:30:58.374 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-0 unblocked 0 fetch requests. 11:30:58.374 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-1 unblocked 0 fetch requests. 
11:30:58.374 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-2 unblocked 0 fetch requests. 11:30:58.374 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-0 unblocked 0 producer requests. 11:30:58.374 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-0 unblocked 0 fetch requests. 11:30:58.391 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 1 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=1, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:30:58.392 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 1 to node 127.0.0.1:63361 (id: 2 rack: null) 11:30:58.392 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:30:58.393 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.Sender - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Sending transactional request (type=FindCoordinatorRequest, coordinatorKey=dd18537f-7701-439c-8b57-f758ce707d93, coordinatorType=TRANSACTION) to node 127.0.0.1:63325 (id: -1 rack: null) 11:30:58.394 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getChildren cxid:0x1b7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:58.394 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getChildren cxid:0x1b7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:30:58.394 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 439,8 replyHeader:: 439,219,0 request:: '/brokers/ids,T response:: v{'0,'1,'2} 11:30:58.395 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1b8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:58.395 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1b8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/0 11:30:58.395 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 440,4 replyHeader:: 440,219,0 request:: '/brokers/ids/0,F response:: 
#7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333235225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363533323336222c22706f7274223a36333332352c2276657273696f6e223a347d,s{29,29,1505298653236,1505298653236,0,0,0,98651252271546369,190,0,29} 11:30:58.398 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1b9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:58.398 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1b9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:30:58.398 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 441,4 replyHeader:: 441,219,0 request:: '/brokers/ids/1,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333434225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534343939222c22706f7274223a36333334342c2276657273696f6e223a347d,s{34,34,1505298654499,1505298654499,0,0,0,98651252271546370,190,0,34} 11:30:58.402 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:broker-id-0fetcher-id-0 11:30:58.402 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:broker-id-1fetcher-id-0 11:30:58.402 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:broker-id-0fetcher-id-0 11:30:58.402 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:broker-id-1fetcher-id-0 11:30:58.402 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:broker-id-0fetcher-id-0 11:30:58.402 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:broker-id-1fetcher-id-0 11:30:58.402 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:broker-id-0fetcher-id-0 11:30:58.402 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:broker-id-1fetcher-id-0 11:30:58.402 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:broker-id-0fetcher-id-0 11:30:58.402 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1ba zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:58.403 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1ba zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/2 11:30:58.403 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:broker-id-0fetcher-id-0 
11:30:58.403 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:broker-id-0fetcher-id-0 11:30:58.403 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:broker-id-0fetcher-id-0 11:30:58.403 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:broker-id-0fetcher-id-0 11:30:58.403 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:broker-id-0fetcher-id-0 11:30:58.403 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 442,4 replyHeader:: 442,219,0 request:: '/brokers/ids/2,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f3132372e302e302e313a3633333631225d2c226a6d785f706f7274223a2d312c22686f7374223a223132372e302e302e31222c2274696d657374616d70223a2231353035323938363534363634222c22706f7274223a36333336312c2276657273696f6e223a347d,s{39,39,1505298654664,1505298654664,0,0,0,98651252271546371,190,0,39} 11:30:58.403 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:broker-id-1fetcher-id-0 11:30:58.403 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:broker-id-0fetcher-id-0 11:30:58.403 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:broker-id-0fetcher-id-0 11:30:58.404 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:broker-id-1fetcher-id-0 11:30:58.404 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:broker-id-0fetcher-id-0 11:30:58.404 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:broker-id-1fetcher-id-0 11:30:58.404 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:broker-id-0fetcher-id-0 11:30:58.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:exists cxid:0x1bb zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:exists cxid:0x1bb zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.407 [ReplicaFetcherThread-0-0] INFO kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Starting 11:30:58.409 [ReplicaFetcherThread-0-1] INFO kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Starting 11:30:58.409 [ReplicaFetcherThread-0-0] INFO kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Starting 11:30:58.409 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 443,3 replyHeader:: 443,219,0 request:: '/brokers/topics/__transaction_state,T response:: s{208,208,1505298658155,1505298658155,0,1,0,0,64,1,211} 11:30:58.411 [kafka-producer-network-thread | 
producer-2] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Enqueuing transactional request (type=FindCoordinatorRequest, coordinatorKey=dd18537f-7701-439c-8b57-f758ce707d93, coordinatorType=TRANSACTION) 11:30:58.413 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:broker-id-2fetcher-id-0 11:30:58.413 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:broker-id-2fetcher-id-0 11:30:58.413 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:broker-id-1fetcher-id-0 11:30:58.413 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:broker-id-2fetcher-id-0 11:30:58.413 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:broker-id-1fetcher-id-0 11:30:58.414 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:broker-id-2fetcher-id-0 11:30:58.414 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:broker-id-1fetcher-id-0 11:30:58.414 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:broker-id-2fetcher-id-0 11:30:58.414 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:broker-id-1fetcher-id-0 11:30:58.414 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:broker-id-2fetcher-id-0 11:30:58.414 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:broker-id-2fetcher-id-0 11:30:58.414 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:broker-id-1fetcher-id-0 11:30:58.414 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:broker-id-2fetcher-id-0 11:30:58.414 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:broker-id-2fetcher-id-0 11:30:58.414 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map(__transaction_state-2 -> -1) for broker BrokerEndPoint(1,127.0.0.1,63344) 11:30:58.415 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:broker-id-1fetcher-id-0 11:30:58.415 [kafka-request-handler-7] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:broker-id-1fetcher-id-0 11:30:58.415 [ReplicaFetcherThread-0-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 1 at 127.0.0.1:63344. 
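The producer-2 entries here ("Enqueuing/Sending transactional request (type=FindCoordinatorRequest, coordinatorType=TRANSACTION)") are the client side of transaction-coordinator discovery: a producer configured with a transactional.id must locate the broker leading the __transaction_state partition that owns that id before it can start a transaction. A minimal sketch, assuming the TransactionalId dd18537f-… seen above belongs to an ordinary transactional KafkaProducer created by the test; the broker address, topic and values are illustrative and not taken from the test code:

import java.util.{Properties, UUID}
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

val props = new Properties()
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325") // one embedded broker from this run
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, UUID.randomUUID().toString) // a random UUID, like dd18537f-...
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

val producer = new KafkaProducer[String, String](props)
producer.initTransactions()   // issues FindCoordinatorRequest(coordinatorType=TRANSACTION), then InitProducerId
producer.beginTransaction()
producer.send(new ProducerRecord[String, String]("my-topic", "key", "value"))
producer.commitTransaction()  // commit markers go through the discovered transaction coordinator
producer.close()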
11:30:58.415 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:broker-id-2fetcher-id-0 11:30:58.416 [kafka-request-handler-2] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Added fetcher for partitions List([__transaction_state-0, initOffset 0 to broker BrokerEndPoint(2,127.0.0.1,63361)] , [__transaction_state-1, initOffset 0 to broker BrokerEndPoint(0,127.0.0.1,63325)] ) 11:30:58.416 [kafka-request-handler-7] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 2] Added fetcher for partitions List([__transaction_state-2, initOffset 0 to broker BrokerEndPoint(1,127.0.0.1,63344)] , [__transaction_state-1, initOffset 0 to broker BrokerEndPoint(0,127.0.0.1,63325)] ) 11:30:58.416 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map(__transaction_state-1 -> -1) for broker BrokerEndPoint(0,127.0.0.1,63325) 11:30:58.416 [ReplicaFetcherThread-0-0] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at 127.0.0.1:63325. 11:30:58.416 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63415 on /127.0.0.1:63344 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:58.416 [kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:63415 11:30:58.415 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map(__transaction_state-1 -> -1) for broker BrokerEndPoint(0,127.0.0.1,63325) 11:30:58.416 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:broker-id-2fetcher-id-0 11:30:58.417 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63416 on /127.0.0.1:63325 and assigned it to processor 2, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:58.417 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-2] DEBUG kafka.network.Processor - Processor 2 listening to new connection from /127.0.0.1:63416 11:30:58.417 [ReplicaFetcherThread-0-0] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at 127.0.0.1:63325. 11:30:58.417 [ReplicaFetcherThread-0-0] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 65536, SO_TIMEOUT = 0 to node 0 11:30:58.418 [ReplicaFetcherThread-0-0] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 0. Ready. 11:30:58.418 [ReplicaFetcherThread-0-0] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 65536, SO_TIMEOUT = 0 to node 0 11:30:58.418 [ReplicaFetcherThread-0-0] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 0. Ready. 
11:30:58.418 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63417 on /127.0.0.1:63325 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:58.419 [kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:63417 11:30:58.419 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:broker-id-2fetcher-id-0 11:30:58.419 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:broker-id-2fetcher-id-0 11:30:58.419 [kafka-request-handler-0] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:broker-id-2fetcher-id-0 11:30:58.417 [ReplicaFetcherThread-0-1] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 65536, SO_TIMEOUT = 0 to node 1 11:30:58.420 [ReplicaFetcherThread-0-1] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 1. Ready. 11:30:58.420 [kafka-request-handler-0] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 0] Added fetcher for partitions List([__transaction_state-0, initOffset 0 to broker BrokerEndPoint(2,127.0.0.1,63361)] , [__transaction_state-2, initOffset 0 to broker BrokerEndPoint(1,127.0.0.1,63344)] ) 11:30:58.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.421 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.421 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 81,4 replyHeader:: 81,219,0 request:: '/brokers/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22706172746974696f6e73223a7b2232223a5b312c302c325d2c2231223a5b302c322c315d2c2230223a5b322c312c305d7d7d,s{208,208,1505298658155,1505298658155,0,1,0,0,64,1,211} 11:30:58.423 [kafka-request-handler-7] DEBUG kafka.server.epoch.LeaderEpochFileCache - Processed offset for epoch request for partition __transaction_state-1 epoch:-1 and returning offset 0 from epoch list of size 0 11:30:58.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x61 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.424 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x61 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.425 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1bc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.425 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1bc zxid:0xfffffffffffffffe 
txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.426 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 444,4 replyHeader:: 444,219,0 request:: '/brokers/topics/__transaction_state,T response:: #7b2276657273696f6e223a312c22706172746974696f6e73223a7b2232223a5b312c302c325d2c2231223a5b302c322c315d2c2230223a5b322c312c305d7d7d,s{208,208,1505298658155,1505298658155,0,1,0,0,64,1,211} 11:30:58.426 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Receive leaderEpoch response Map(__transaction_state-1 -> EpochEndOffset{error=NONE, endOffset=0}) from broker BrokerEndPoint(0,127.0.0.1,63325) 11:30:58.426 [kafka-request-handler-3] DEBUG kafka.server.epoch.LeaderEpochFileCache - Processed offset for epoch request for partition __transaction_state-1 epoch:-1 and returning offset 0 from epoch list of size 0 11:30:58.423 [kafka-request-handler-7] DEBUG kafka.server.epoch.LeaderEpochFileCache - Processed offset for epoch request for partition __transaction_state-2 epoch:-1 and returning offset 0 from epoch list of size 0 11:30:58.425 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 97,4 replyHeader:: 97,219,0 request:: '/brokers/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22706172746974696f6e73223a7b2232223a5b312c302c325d2c2231223a5b302c322c315d2c2230223a5b322c312c305d7d7d,s{208,208,1505298658155,1505298658155,0,1,0,0,64,1,211} 11:30:58.426 [kafka-request-handler-2] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__transaction_state is Map(2 -> List(1, 0, 2), 1 -> List(0, 2, 1), 0 -> List(2, 1, 0)) 11:30:58.427 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Receive leaderEpoch response Map(__transaction_state-1 -> EpochEndOffset{error=NONE, endOffset=0}) from broker BrokerEndPoint(0,127.0.0.1,63325) 11:30:58.427 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Receive leaderEpoch response Map(__transaction_state-2 -> EpochEndOffset{error=NONE, endOffset=0}) from broker BrokerEndPoint(1,127.0.0.1,63344) 11:30:58.428 [ReplicaFetcherThread-0-0] INFO kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Based on follower's leader epoch, leader replied with an offset 0 >= the follower's log end offset 0 in __transaction_state-1. No truncation needed. 11:30:58.429 [ReplicaFetcherThread-0-0] INFO kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Based on follower's leader epoch, leader replied with an offset 0 >= the follower's log end offset 0 in __transaction_state-1. No truncation needed. 11:30:58.429 [kafka-request-handler-2] DEBUG kafka.utils.KafkaScheduler - Scheduling task load-txns-for-partition-__transaction_state-2 with initial delay 0 ms and period -1 ms. 11:30:58.429 [ReplicaFetcherThread-0-1] INFO kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Based on follower's leader epoch, leader replied with an offset 0 >= the follower's log end offset 0 in __transaction_state-2. No truncation needed. 11:30:58.430 [ReplicaFetcherThread-0-0] INFO kafka.log.Log - Truncating __transaction_state-1 to 0 has no effect as the largest offset in the log is -1. 
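The #7b22… blobs in the ZooKeeper replies are hex-encoded ASCII JSON (broker registrations, topic configs, partition assignments). A small helper, not part of the test, that makes them readable:

// Decode a ZooKeeper payload as logged by ClientCnxn (hex-encoded ASCII JSON, leading '#' stripped).
def decodeZkHex(hex: String): String =
  hex.grouped(2).map(pair => Integer.parseInt(pair, 16).toChar).mkString

// For example, the '/brokers/topics/__transaction_state' payload above decodes to
//   {"version":1,"partitions":{"2":[1,0,2],"1":[0,2,1],"0":[2,1,0]}}
// which is the same assignment ZkUtils prints as Map(2 -> List(1, 0, 2), 1 -> List(0, 2, 1), 0 -> List(2, 1, 0)).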
11:30:58.430 [ReplicaFetcherThread-0-1] INFO kafka.log.Log - Truncating __transaction_state-2 to 0 has no effect as the largest offset in the log is -1. 11:30:58.430 [ReplicaFetcherThread-0-0] INFO kafka.log.Log - Truncating __transaction_state-1 to 0 has no effect as the largest offset in the log is -1. 11:30:58.432 [ReplicaFetcherThread-0-2] INFO kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Starting 11:30:58.433 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map(__transaction_state-0 -> -1) for broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:58.433 [ReplicaFetcherThread-0-2] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 2 at 127.0.0.1:63361. 11:30:58.435 [kafka-request-handler-7] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__transaction_state is Map(2 -> List(1, 0, 2), 1 -> List(0, 2, 1), 0 -> List(2, 1, 0)) 11:30:58.435 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task load-txns-for-partition-__transaction_state-0 with initial delay 0 ms and period -1 ms. 11:30:58.435 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63418 on /127.0.0.1:63361 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:58.433 [ReplicaFetcherThread-0-1] INFO kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Starting 11:30:58.436 [kafka-network-thread-2-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:63418 11:30:58.436 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map(__transaction_state-2 -> -1) for broker BrokerEndPoint(1,127.0.0.1,63344) 11:30:58.434 [kafka-request-handler-0] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__transaction_state is Map(2 -> List(1, 0, 2), 1 -> List(0, 2, 1), 0 -> List(2, 1, 0)) 11:30:58.436 [ReplicaFetcherThread-0-2] INFO kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Starting 11:30:58.436 [kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task load-txns-for-partition-__transaction_state-1 with initial delay 0 ms and period -1 ms. 11:30:58.438 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map(__transaction_state-0 -> -1) for broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:58.438 [transaction-log-manager-0] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 1]: Loading transaction metadata from __transaction_state-2 11:30:58.439 [ReplicaFetcherThread-0-2] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 2 at 127.0.0.1:63361. 11:30:58.439 [transaction-log-manager-0] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 2]: Loading transaction metadata from __transaction_state-0 11:30:58.436 [ReplicaFetcherThread-0-2] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 65536, SO_TIMEOUT = 0 to node 2 11:30:58.436 [ReplicaFetcherThread-0-1] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 1 at 127.0.0.1:63344. 
11:30:58.439 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x62 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.440 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x62 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.440 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x52 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.440 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x52 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.440 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1bd zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.440 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1bd zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.440 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63419 on /127.0.0.1:63361 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:58.440 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 445,4 replyHeader:: 445,219,0 request:: '/brokers/topics/__transaction_state,T response:: #7b2276657273696f6e223a312c22706172746974696f6e73223a7b2232223a5b312c302c325d2c2231223a5b302c322c315d2c2230223a5b322c312c305d7d7d,s{208,208,1505298658155,1505298658155,0,1,0,0,64,1,211} 11:30:58.441 [ReplicaFetcherThread-0-1] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 65536, SO_TIMEOUT = 0 to node 1 11:30:58.441 [ReplicaFetcherThread-0-1] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 1. Ready. 11:30:58.441 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63420 on /127.0.0.1:63344 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:58.441 [kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:63420 11:30:58.438 [transaction-log-manager-0] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 0]: Loading transaction metadata from __transaction_state-1 11:30:58.439 [ReplicaFetcherThread-0-2] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 2. Ready. 
11:30:58.440 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 98,4 replyHeader:: 98,219,0 request:: '/brokers/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22706172746974696f6e73223a7b2232223a5b312c302c325d2c2231223a5b302c322c315d2c2230223a5b322c312c305d7d7d,s{208,208,1505298658155,1505298658155,0,1,0,0,64,1,211} 11:30:58.440 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 82,4 replyHeader:: 82,219,0 request:: '/brokers/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22706172746974696f6e73223a7b2232223a5b312c302c325d2c2231223a5b302c322c315d2c2230223a5b322c312c305d7d7d,s{208,208,1505298658155,1505298658155,0,1,0,0,64,1,211} 11:30:58.440 [kafka-network-thread-2-ListenerName(PLAINTEXT)-PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:63419 11:30:58.440 [ReplicaFetcherThread-0-2] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 65536, SO_TIMEOUT = 0 to node 2 11:30:58.442 [ReplicaFetcherThread-0-2] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 2. Ready. 11:30:58.443 [kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - Processed offset for epoch request for partition __transaction_state-2 epoch:-1 and returning offset 0 from epoch list of size 0 11:30:58.443 [kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - Processed offset for epoch request for partition __transaction_state-0 epoch:-1 and returning offset 0 from epoch list of size 0 11:30:58.444 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Receive leaderEpoch response Map(__transaction_state-2 -> EpochEndOffset{error=NONE, endOffset=0}) from broker BrokerEndPoint(1,127.0.0.1,63344) 11:30:58.444 [ReplicaFetcherThread-0-1] INFO kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Based on follower's leader epoch, leader replied with an offset 0 >= the follower's log end offset 0 in __transaction_state-2. No truncation needed. 11:30:58.444 [ReplicaFetcherThread-0-1] INFO kafka.log.Log - Truncating __transaction_state-2 to 0 has no effect as the largest offset in the log is -1. 11:30:58.444 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Receive leaderEpoch response Map(__transaction_state-0 -> EpochEndOffset{error=NONE, endOffset=0}) from broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:58.443 [kafka-request-handler-3] DEBUG kafka.server.epoch.LeaderEpochFileCache - Processed offset for epoch request for partition __transaction_state-0 epoch:-1 and returning offset 0 from epoch list of size 0 11:30:58.445 [ReplicaFetcherThread-0-2] INFO kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Based on follower's leader epoch, leader replied with an offset 0 >= the follower's log end offset 0 in __transaction_state-0. No truncation needed. 11:30:58.445 [ReplicaFetcherThread-0-2] INFO kafka.log.Log - Truncating __transaction_state-0 to 0 has no effect as the largest offset in the log is -1. 
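The ReplicaFetcherThread messages above ("Based on follower's leader epoch, leader replied with an offset 0 >= the follower's log end offset 0 … No truncation needed", followed by "Truncating … to 0 has no effect") are the leader-epoch check each follower performs before it starts fetching. A minimal sketch of that decision, written from the log messages rather than from broker source:

// Follower-side truncation point after the OffsetsForLeaderEpoch round-trip: if the leader's
// end offset for the follower's epoch covers the follower's log end offset, truncating to the
// follower's own LEO is a no-op ("No truncation needed" / "has no effect"); otherwise the
// follower cuts its log back to the leader's offset.
def truncationOffset(leaderEndOffsetForEpoch: Long, followerLogEndOffset: Long): Long =
  math.min(leaderEndOffsetForEpoch, followerLogEndOffset)

// In this run both offsets are 0, so every follower truncates to 0 and nothing changes.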
11:30:58.445 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Receive leaderEpoch response Map(__transaction_state-0 -> EpochEndOffset{error=NONE, endOffset=0}) from broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:58.445 [ReplicaFetcherThread-0-2] INFO kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Based on follower's leader epoch, leader replied with an offset 0 >= the follower's log end offset 0 in __transaction_state-0. No truncation needed. 11:30:58.446 [ReplicaFetcherThread-0-2] INFO kafka.log.Log - Truncating __transaction_state-0 to 0 has no effect as the largest offset in the log is -1. 11:30:58.444 [kafka-request-handler-0] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__transaction_state is Map(2 -> List(1, 0, 2), 1 -> List(0, 2, 1), 0 -> List(2, 1, 0)) 11:30:58.448 [kafka-request-handler-7] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__transaction_state is Map(2 -> List(1, 0, 2), 1 -> List(0, 2, 1), 0 -> List(2, 1, 0)) 11:30:58.449 [kafka-request-handler-2] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__transaction_state is Map(2 -> List(1, 0, 2), 1 -> List(0, 2, 1), 0 -> List(2, 1, 0)) 11:30:58.451 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task remove-txns-for-partition-__transaction_state-2 with initial delay 0 ms and period -1 ms. 11:30:58.451 [kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task remove-txns-for-partition-__transaction_state-2 with initial delay 0 ms and period -1 ms. 11:30:58.451 [kafka-request-handler-2] DEBUG kafka.utils.KafkaScheduler - Scheduling task remove-txns-for-partition-__transaction_state-1 with initial delay 0 ms and period -1 ms. 11:30:58.452 [transaction-log-manager-0] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 1]: Trying to remove cached transaction metadata for __transaction_state-1 on follower transition but there is no entries remaining; it is likely that another process for removing the cached entries has just executed earlier before 11:30:58.452 [transaction-log-manager-0] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 2]: Trying to remove cached transaction metadata for __transaction_state-2 on follower transition but there is no entries remaining; it is likely that another process for removing the cached entries has just executed earlier before 11:30:58.452 [transaction-log-manager-0] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 0]: Trying to remove cached transaction metadata for __transaction_state-2 on follower transition but there is no entries remaining; it is likely that another process for removing the cached entries has just executed earlier before 11:30:58.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:getData cxid:0x53 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:getData cxid:0x53 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:getData cxid:0x1be zxid:0xfffffffffffffffe txntype:unknown 
reqpath:/brokers/topics/__transaction_state 11:30:58.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:getData cxid:0x1be zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.452 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0002, packet:: clientPath:null serverPath:null finished:false header:: 83,4 replyHeader:: 83,219,0 request:: '/brokers/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22706172746974696f6e73223a7b2232223a5b312c302c325d2c2231223a5b302c322c315d2c2230223a5b322c312c305d7d7d,s{208,208,1505298658155,1505298658155,0,1,0,0,64,1,211} 11:30:58.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:getData cxid:0x63 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:getData cxid:0x63 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:30:58.452 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0001, packet:: clientPath:null serverPath:null finished:false header:: 446,4 replyHeader:: 446,219,0 request:: '/brokers/topics/__transaction_state,T response:: #7b2276657273696f6e223a312c22706172746974696f6e73223a7b2232223a5b312c302c325d2c2231223a5b302c322c315d2c2230223a5b322c312c305d7d7d,s{208,208,1505298658155,1505298658155,0,1,0,0,64,1,211} 11:30:58.452 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15e7aca904b0003, packet:: clientPath:null serverPath:null finished:false header:: 99,4 replyHeader:: 99,219,0 request:: '/brokers/topics/__transaction_state,F response:: #7b2276657273696f6e223a312c22706172746974696f6e73223a7b2232223a5b312c302c325d2c2231223a5b302c322c315d2c2230223a5b322c312c305d7d7d,s{208,208,1505298658155,1505298658155,0,1,0,0,64,1,211} 11:30:58.452 [kafka-request-handler-2] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__transaction_state is Map(2 -> List(1, 0, 2), 1 -> List(0, 2, 1), 0 -> List(2, 1, 0)) 11:30:58.452 [kafka-request-handler-2] DEBUG kafka.utils.KafkaScheduler - Scheduling task remove-txns-for-partition-__transaction_state-0 with initial delay 0 ms and period -1 ms. 11:30:58.452 [transaction-log-manager-0] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 1]: Trying to remove cached transaction metadata for __transaction_state-0 on follower transition but there is no entries remaining; it is likely that another process for removing the cached entries has just executed earlier before 11:30:58.457 [kafka-request-handler-7] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__transaction_state is Map(2 -> List(1, 0, 2), 1 -> List(0, 2, 1), 0 -> List(2, 1, 0)) 11:30:58.457 [kafka-request-handler-7] DEBUG kafka.utils.KafkaScheduler - Scheduling task remove-txns-for-partition-__transaction_state-1 with initial delay 0 ms and period -1 ms. 
11:30:58.458 [transaction-log-manager-0] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 2]: Trying to remove cached transaction metadata for __transaction_state-1 on follower transition but there is no entries remaining; it is likely that another process for removing the cached entries has just executed earlier before 11:30:58.463 [kafka-request-handler-0] DEBUG kafka.utils.ZkUtils - Partition map for /brokers/topics/__transaction_state is Map(2 -> List(1, 0, 2), 1 -> List(0, 2, 1), 0 -> List(2, 1, 0)) 11:30:58.463 [kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task remove-txns-for-partition-__transaction_state-0 with initial delay 0 ms and period -1 ms. 11:30:58.463 [transaction-log-manager-0] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 0]: Trying to remove cached transaction metadata for __transaction_state-0 on follower transition but there is no entries remaining; it is likely that another process for removing the cached entries has just executed earlier before 11:30:58.494 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:58.494 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:58.494 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:58.496 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:58.496 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw -1 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0],-1 [0 : 0] 11:30:58.496 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:58.496 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:30:58.496 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:30:58.496 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:30:58.496 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:30:58.496 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:30:58.497 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:30:58.502 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:58.502 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:58.502 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:30:58.502 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:30:58.510 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:58.510 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw -1 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are -1 [0 : 0],0 [0 : 0] 11:30:58.510 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 0. 11:30:58.510 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:30:58.517 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:58.517 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:58.518 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 0. 11:30:58.518 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:30:58.518 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.Sender - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Sending transactional request (type=FindCoordinatorRequest, coordinatorKey=dd18537f-7701-439c-8b57-f758ce707d93, coordinatorType=TRANSACTION) to node 127.0.0.1:63325 (id: -1 rack: null) 11:30:58.520 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 2 at 127.0.0.1:63361. 11:30:58.521 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2.bytes-sent 11:30:58.521 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63421 on /127.0.0.1:63361 and assigned it to processor 2, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:58.521 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2.bytes-received 11:30:58.521 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2.latency 11:30:58.521 [kafka-network-thread-2-ListenerName(PLAINTEXT)-PLAINTEXT-2] DEBUG kafka.network.Processor - Processor 2 listening to new connection from /127.0.0.1:63421 11:30:58.521 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.network.Selector - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2 11:30:58.521 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 2. Fetching API versions. 11:30:58.521 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.NetworkClient - Initiating API versions fetch from node 2. 
11:30:58.522 [kafka-request-handler-6] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Request-:producer-2 11:30:58.522 [kafka-request-handler-6] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name RequestThrottleTime-:producer-2 11:30:58.522 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.NetworkClient - Recorded API versions for node 2: (Produce(0): 0 to 3 [usable: 3], Fetch(1): 0 to 5 [usable: 5], Offsets(2): 0 to 2 [usable: 2], Metadata(3): 0 to 4 [usable: 4], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 3 [usable: 3], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 3 [usable: 3], OffsetFetch(9): 0 to 3 [usable: 3], FindCoordinator(10): 0 to 1 [usable: 1], JoinGroup(11): 0 to 2 [usable: 2], Heartbeat(12): 0 to 1 [usable: 1], LeaveGroup(13): 0 to 1 [usable: 1], SyncGroup(14): 0 to 1 [usable: 1], DescribeGroups(15): 0 to 1 [usable: 1], ListGroups(16): 0 to 1 [usable: 1], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 to 1 [usable: 1], CreateTopics(19): 0 to 2 [usable: 2], DeleteTopics(20): 0 to 1 [usable: 1], DeleteRecords(21): 0 [usable: 0], InitProducerId(22): 0 [usable: 0], OffsetForLeaderEpoch(23): 0 [usable: 0], AddPartitionsToTxn(24): 0 [usable: 0], AddOffsetsToTxn(25): 0 [usable: 0], EndTxn(26): 0 [usable: 0], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 [usable: 0], DescribeAcls(29): 0 [usable: 0], CreateAcls(30): 0 [usable: 0], DeleteAcls(31): 0 [usable: 0], DescribeConfigs(32): 0 [usable: 0], AlterConfigs(33): 0 [usable: 0]) 11:30:58.638 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.Sender - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Sending transactional request (type=InitProducerIdRequest, transactionalId=dd18537f-7701-439c-8b57-f758ce707d93, transactionTimeoutMs=60000) to node 127.0.0.1:63361 (id: 2 rack: null) 11:30:58.638 [kafka-request-handler-1] DEBUG kafka.coordinator.transaction.TransactionMetadata - TransactionalId dd18537f-7701-439c-8b57-f758ce707d93 prepare transition from Empty to TxnTransitMetadata(producerId=2000, producerEpoch=0, txnTimeoutMs=60000, txnState=Empty, topicPartitions=Set(), txnStartTimestamp=-1, txnLastUpdateTimestamp=1505298658638) 11:30:58.654 [kafka-request-handler-1] INFO kafka.server.epoch.LeaderEpochFileCache - Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: __transaction_state-0. Cache now contains 0 entries. 11:30:58.707 [kafka-request-handler-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name LeaderReplication 11:30:58.723 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 2 fetch requests. 11:30:58.723 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0],1 [0 : 146] 11:30:58.723 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Produce to local log in 0 ms 11:30:58.738 [ReplicaFetcherThread-0-2] INFO kafka.server.epoch.LeaderEpochFileCache - Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: __transaction_state-0. Cache now contains 0 entries. 
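At this point the producer has located its transaction coordinator (broker 2, the leader of __transaction_state-0) and the InitProducerIdRequest above has been answered with producerId 2000, epoch 0. On the client side this whole exchange is driven by a single initTransactions() call; a sketch of the corresponding setup, assuming a stand-alone transactional producer (the bootstrap address is a placeholder, the transactional id is the one from the log):

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig}
    import org.apache.kafka.common.serialization.StringSerializer

    object TransactionalProducerSketch extends App {
      val props = new Properties()
      props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")  // placeholder, not the embedded broker port
      props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "dd18537f-7701-439c-8b57-f758ce707d93")
      props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")           // implied by transactional.id, set here for clarity
      props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
      props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

      val producer = new KafkaProducer[String, String](props)
      // Sends FindCoordinator(TRANSACTION) and InitProducerId, blocking until the
      // coordinator has assigned a producerId/epoch pair (2000 / 0 in the log above).
      producer.initTransactions()
    }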
11:30:58.738 [ReplicaFetcherThread-0-2] INFO kafka.server.epoch.LeaderEpochFileCache - Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: __transaction_state-0. Cache now contains 0 entries. 11:30:58.807 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:58.807 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(1 [0 : 146],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [1], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:58.807 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 1 [0 : 146],0 [0 : 0] 11:30:58.807 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 1. 11:30:58.807 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:30:58.807 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:58.823 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(1 [0 : 146],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [1], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:58.823 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: High watermark updated to 1 [0 : 146] 11:30:58.823 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 fetch requests. 
11:30:58.823 [kafka-request-handler-1] DEBUG kafka.coordinator.transaction.TransactionMetadata - TransactionalId dd18537f-7701-439c-8b57-f758ce707d93 complete transition from Empty to TxnTransitMetadata(producerId=2000, producerEpoch=0, txnTimeoutMs=60000, txnState=Empty, topicPartitions=Set(), txnStartTimestamp=-1, txnLastUpdateTimestamp=1505298658638) 11:30:58.823 [kafka-request-handler-1] DEBUG kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 2]: Updating dd18537f-7701-439c-8b57-f758ce707d93's transaction state to TxnTransitMetadata(producerId=2000, producerEpoch=0, txnTimeoutMs=60000, txnState=Empty, topicPartitions=Set(), txnStartTimestamp=-1, txnLastUpdateTimestamp=1505298658638) with coordinator epoch 0 for dd18537f-7701-439c-8b57-f758ce707d93 succeeded 11:30:58.838 [kafka-request-handler-1] INFO kafka.coordinator.transaction.TransactionCoordinator - [Transaction Coordinator 2]: Initialized transactionalId dd18537f-7701-439c-8b57-f758ce707d93 with producerId 2000 and producer epoch 0 on partition __transaction_state-0 11:30:58.838 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 1 producer requests. 11:30:58.838 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 DeleteRecordsRequest. 11:30:58.838 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 1. 11:30:58.838 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:30:58.838 [kafka-producer-network-thread | producer-2] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] ProducerId set to 2000 with epoch 0 11:30:58.838 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Transition from state INITIALIZING to READY 11:30:58.838 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Transition from state READY to IN_TRANSACTION 11:30:58.838 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request (type=MetadataRequest, topics=my-topic) to node -1 11:30:58.838 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 2 to Cluster(id = mXgsQa2iR6-LwjmHF4FaAw, nodes = [127.0.0.1:63361 (id: 2 rack: null), 127.0.0.1:63325 (id: 0 rack: null), 127.0.0.1:63344 (id: 1 rack: null)], partitions = [Partition(topic = my-topic, partition = 0, leader = 2, replicas = [2], isr = [2])]) 11:30:58.838 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Begin adding new partition my-topic-0 to transaction 11:30:58.838 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Transition from state IN_TRANSACTION to COMMITTING_TRANSACTION 11:30:58.838 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG 
org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Enqueuing transactional request (type=AddPartitionsToTxnRequest, transactionalId=dd18537f-7701-439c-8b57-f758ce707d93, producerId=2000, producerEpoch=0, partitions=[my-topic-0]) 11:30:58.838 [pool-6-thread-1-ScalaTest-running-Tests] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Enqueuing transactional request (type=EndTxnRequest, transactionalId=dd18537f-7701-439c-8b57-f758ce707d93, producerId=2000, producerEpoch=0, result=COMMIT) 11:30:58.838 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.Sender - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Sending transactional request (type=AddPartitionsToTxnRequest, transactionalId=dd18537f-7701-439c-8b57-f758ce707d93, producerId=2000, producerEpoch=0, partitions=[my-topic-0]) to node 127.0.0.1:63361 (id: 2 rack: null) 11:30:58.859 [kafka-request-handler-3] DEBUG kafka.coordinator.transaction.TransactionMetadata - TransactionalId dd18537f-7701-439c-8b57-f758ce707d93 prepare transition from Empty to TxnTransitMetadata(producerId=2000, producerEpoch=0, txnTimeoutMs=60000, txnState=Ongoing, topicPartitions=Set(my-topic-0), txnStartTimestamp=1505298658859, txnLastUpdateTimestamp=1505298658859) 11:30:58.867 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 2 fetch requests. 11:30:58.868 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 1 [0 : 146] is not larger than old hw 1 [0 : 146].All LEOs are 1 [0 : 146],2 [0 : 310] 11:30:58.868 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Produce to local log in 0 ms 11:30:58.869 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:58.870 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:58.871 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(2 [0 : 310],[],false,None)], HW: [1], leaderLogStartOffset: [0], leaderLogEndOffset: [2], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:58.871 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(2 [0 : 310],[],false,None)], HW: [1], leaderLogStartOffset: [0], leaderLogEndOffset: [2], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:58.871 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: High watermark updated to 2 [0 : 310] 11:30:58.872 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 fetch requests. 
11:30:58.872 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 2 [0 : 310] is not larger than old hw 2 [0 : 310].All LEOs are 2 [0 : 310] 11:30:58.874 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 2. 11:30:58.874 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:30:58.874 [kafka-request-handler-0] DEBUG kafka.coordinator.transaction.TransactionMetadata - TransactionalId dd18537f-7701-439c-8b57-f758ce707d93 complete transition from Empty to TxnTransitMetadata(producerId=2000, producerEpoch=0, txnTimeoutMs=60000, txnState=Ongoing, topicPartitions=Set(my-topic-0), txnStartTimestamp=1505298658859, txnLastUpdateTimestamp=1505298658859) 11:30:58.874 [kafka-request-handler-0] DEBUG kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 2]: Updating dd18537f-7701-439c-8b57-f758ce707d93's transaction state to TxnTransitMetadata(producerId=2000, producerEpoch=0, txnTimeoutMs=60000, txnState=Ongoing, topicPartitions=Set(my-topic-0), txnStartTimestamp=1505298658859, txnLastUpdateTimestamp=1505298658859) with coordinator epoch 0 for dd18537f-7701-439c-8b57-f758ce707d93 succeeded 11:30:58.879 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 1 producer requests. 11:30:58.879 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 DeleteRecordsRequest. 11:30:58.879 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Successfully added partitions [my-topic-0] to transaction 11:30:58.879 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 2. 11:30:58.880 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:30:58.880 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.RecordAccumulator - Assigning sequence number 0 from producer (producerId=2000, epoch=0) to dequeued batch from partition my-topic-0 bound for 127.0.0.1:63361 (id: 2 rack: null). 
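Around this point the log traces the body of the transaction from both sides: the ScalaTest thread has registered my-topic-0 and enqueued the commit, the dequeued batch has just been assigned sequence number 0 under (producerId=2000, epoch=0), and the EndTxnRequest with result=COMMIT goes out a few entries further down. In client code this corresponds to the usual begin/send/commit sequence; a hedged sketch of the calls, continuing inside the TransactionalProducerSketch body after initTransactions():

    import org.apache.kafka.clients.producer.ProducerRecord

    // READY -> IN_TRANSACTION; a client-side state change only, no request is sent yet
    producer.beginTransaction()

    // The first send to a new partition registers my-topic-0 with the coordinator
    // (AddPartitionsToTxnRequest) and the dequeued batch is given sequence number 0.
    producer.send(new ProducerRecord[String, String]("my-topic", "foo", "bar"))

    // Flushes outstanding sends, then issues EndTxnRequest(result=COMMIT); the
    // coordinator moves the transaction Ongoing -> PrepareCommit -> CompleteCommit
    // and writes a commit marker into my-topic-0 via its TxnMarkerSenderThread.
    producer.commitTransaction()
    producer.close()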
11:30:58.880 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.my-topic.records-per-batch 11:30:58.880 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.my-topic.bytes 11:30:58.881 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.my-topic.compression-rate 11:30:58.881 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.my-topic.record-retries 11:30:58.881 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.my-topic.record-errors 11:30:58.896 [kafka-request-handler-2] DEBUG kafka.log.Log - First unstable offset for my-topic-0 updated to Some(1 [0 : 74]) 11:30:58.897 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key my-topic-0 unblocked 0 fetch requests. 11:30:58.897 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [my-topic,0] on broker 2: High watermark updated to 2 [0 : 148] 11:30:58.898 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key my-topic-0 unblocked 1 fetch requests. 11:30:58.899 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key my-topic-0 unblocked 0 producer requests. 11:30:58.899 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key my-topic-0 unblocked 0 DeleteRecordsRequest. 11:30:58.899 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Produce to local log in 0 ms 11:30:58.899 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Produce-:producer-2 11:30:58.899 [kafka-request-handler-2] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name ProduceThrottleTime-:producer-2 11:30:58.899 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 1 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=74) 11:30:58.900 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:30:58.900 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:30:58.900 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.Sender - Incremented sequence number for topic-partition my-topic-0 to 1 11:30:58.900 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.Sender - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Sending transactional request (type=EndTxnRequest, transactionalId=dd18537f-7701-439c-8b57-f758ce707d93, producerId=2000, producerEpoch=0, result=COMMIT) to node 127.0.0.1:63361 (id: 2 rack: null) seen key foo with value bar 11:30:58.902 
[kafka-request-handler-5] DEBUG kafka.coordinator.transaction.TransactionMetadata - TransactionalId dd18537f-7701-439c-8b57-f758ce707d93 prepare transition from Ongoing to TxnTransitMetadata(producerId=2000, producerEpoch=0, txnTimeoutMs=60000, txnState=PrepareCommit, topicPartitions=Set(my-topic-0), txnStartTimestamp=1505298658859, txnLastUpdateTimestamp=1505298658902) 11:30:58.907 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 2 fetch requests. 11:30:58.907 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 2 [0 : 310] is not larger than old hw 2 [0 : 310].All LEOs are 2 [0 : 310],3 [0 : 474] 11:30:58.907 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Produce to local log in 0 ms 11:30:58.907 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:58.907 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:58.907 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [2], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:58.907 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 2 [0 : 310] is not larger than old hw 2 [0 : 310].All LEOs are 2 [0 : 310],3 [0 : 474] 11:30:58.907 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:30:58.907 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [2], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:58.907 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:30:58.907 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: High watermark updated to 3 [0 : 474] 11:30:58.907 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 fetch requests. 
11:30:58.907 [kafka-request-handler-1] DEBUG kafka.coordinator.transaction.TransactionMetadata - TransactionalId dd18537f-7701-439c-8b57-f758ce707d93 complete transition from Ongoing to TxnTransitMetadata(producerId=2000, producerEpoch=0, txnTimeoutMs=60000, txnState=PrepareCommit, topicPartitions=Set(my-topic-0), txnStartTimestamp=1505298658859, txnLastUpdateTimestamp=1505298658902) 11:30:58.907 [kafka-request-handler-1] DEBUG kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 2]: Updating dd18537f-7701-439c-8b57-f758ce707d93's transaction state to TxnTransitMetadata(producerId=2000, producerEpoch=0, txnTimeoutMs=60000, txnState=PrepareCommit, topicPartitions=Set(my-topic-0), txnStartTimestamp=1505298658859, txnLastUpdateTimestamp=1505298658902) with coordinator epoch 0 for dd18537f-7701-439c-8b57-f758ce707d93 succeeded 11:30:58.907 [kafka-request-handler-1] DEBUG kafka.coordinator.transaction.TransactionMetadata - TransactionalId dd18537f-7701-439c-8b57-f758ce707d93 prepare transition from PrepareCommit to TxnTransitMetadata(producerId=2000, producerEpoch=0, txnTimeoutMs=60000, txnState=CompleteCommit, topicPartitions=Set(), txnStartTimestamp=1505298658859, txnLastUpdateTimestamp=1505298658907) 11:30:58.907 [kafka-producer-network-thread | producer-2] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [TransactionalId dd18537f-7701-439c-8b57-f758ce707d93] Transition from state COMMITTING_TRANSACTION to READY offset 1 11:30:58.907 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 1 producer requests. 11:30:58.907 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 DeleteRecordsRequest. 11:30:58.907 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:30:58.907 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:30:58.907 [TxnMarkerSenderThread-2] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 2 at 127.0.0.1:63361. 
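With the CompleteCommit transition recorded, the TxnMarkerSenderThread connection opened here is the coordinator preparing to write the COMMIT marker into my-topic-0. The Streams consumer in this run fetches with READ_UNCOMMITTED, so it already saw the record at offset 1 before any marker existed; for contrast, a sketch of a consumer that would only see the record once the marker is written (group id and broker address are illustrative, not taken from the test):

    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
    import org.apache.kafka.common.serialization.StringDeserializer

    object ReadCommittedCheck extends App {
      val props = new Properties()
      props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")  // placeholder address
      props.put(ConsumerConfig.GROUP_ID_CONFIG, "read-committed-check")     // hypothetical group id
      props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")    // hide records of open transactions
      props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
      props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)

      val consumer = new KafkaConsumer[String, String](props)
      consumer.subscribe(Collections.singletonList("my-topic"))
      // Transactional records only become visible to this consumer once the commit
      // marker written by the coordinator has landed in my-topic-0.
      val records = consumer.poll(1000)
      println(s"committed records visible: ${records.count()}")
      consumer.close()
    }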
11:30:58.923 [kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Acceptor - Accepted connection from /127.0.0.1:63423 on /127.0.0.1:63361 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:30:58.923 [kafka-network-thread-2-ListenerName(PLAINTEXT)-PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:63423 11:30:59.002 [executor-Fetch] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name LeaderReplication 11:30:59.002 [executor-Fetch] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name LeaderReplication 11:30:59.004 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:30:59.004 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:30:59.004 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:30:59.005 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:59.005 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:59.005 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:59.005 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:59.005 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:59.005 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:30:59.005 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:30:59.005 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:30:59.005 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:59.005 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:30:59.005 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:30:59.006 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:30:59.023 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:30:59.023 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:59.023 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:59.023 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:30:59.023 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:30:59.270 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:59.270 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:30:59.270 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0000 after 0ms 11:30:59.408 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:30:59.408 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:30:59.408 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:30:59.413 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:59.414 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:59.415 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:30:59.415 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:30:59.415 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:30:59.421 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:59.422 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:59.423 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:30:59.423 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:30:59.423 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:30:59.524 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:30:59.524 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:30:59.524 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:30:59.524 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:59.524 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:59.524 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:30:59.524 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:30:59.524 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:59.524 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:59.524 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:59.524 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:30:59.524 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:30:59.524 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:59.524 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:30:59.524 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:30:59.539 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:30:59.539 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:59.539 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:30:59.539 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:30:59.539 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:30:59.924 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:30:59.925 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:30:59.925 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:30:59.925 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:59.925 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:30:59.925 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:59.926 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:30:59.926 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:30:59.926 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:30:59.926 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:30:59.926 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:30:59.926 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:30:59.926 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:00.033 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:00.033 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:00.034 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:00.034 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:00.034 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:00.034 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:00.034 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:00.034 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:00.035 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:00.035 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:00.036 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:00.037 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:00.038 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:00.038 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:00.038 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:00.049 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:00.050 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:00.050 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:00.050 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:00.050 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:00.295 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:00.303 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Attempt to heartbeat failed for group exactly-once since it is rebalancing. 11:31:00.303 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Revoking previously assigned partitions [my-topic-0] for group exactly-once 11:31:00.303 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] at state RUNNING: partitions [my-topic-0] revoked at the beginning of consumer rebalance. current assigned active tasks: [0_0] current assigned standby tasks: [] 11:31:00.303 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] State transition from RUNNING to PARTITIONS_REVOKED. 11:31:00.303 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181] State transition from RUNNING to REBALANCING. 
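At this point the group has begun rebalancing: the heartbeat is rejected because a rebalance is in progress, my-topic-0 is revoked from StreamThread-1, and both the thread and the client log their transitions (RUNNING -> PARTITIONS_REVOKED, RUNNING -> REBALANCING). For observing these client-level transitions from application code rather than from DEBUG logs, KafkaStreams accepts a state listener; a minimal sketch (the helper name is made up, and it assumes the listener is registered before start()):

    import org.apache.kafka.streams.KafkaStreams

    // Hypothetical helper: attach a listener to whatever KafkaStreams instance
    // corresponds to the stream-client id seen in the log above.
    def logStateTransitions(streams: KafkaStreams): Unit =
      streams.setStateListener(new KafkaStreams.StateListener {
        override def onChange(newState: KafkaStreams.State, oldState: KafkaStreams.State): Unit =
          println(s"stream-client state: $oldState -> $newState")   // e.g. RUNNING -> REBALANCING -> RUNNING
      })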
11:31:00.304 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] suspendTasksAndState: suspending all active tasks [0_0] and standby tasks [] 11:31:00.304 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Suspending 11:31:00.304 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Closing processor topology 11:31:00.304 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name task.0_0.KSTREAM-SOURCE-0000000000-process 11:31:00.304 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name process 11:31:00.304 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name task.0_0.KSTREAM-FOREACH-0000000001-process 11:31:00.305 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name task.0_0.KSTREAM-SOURCE-0000000000-punctuate 11:31:00.305 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name punctuate 11:31:00.305 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name task.0_0.KSTREAM-FOREACH-0000000001-punctuate 11:31:00.305 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name task.0_0.KSTREAM-SOURCE-0000000000-forward 11:31:00.305 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name forward 11:31:00.305 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name task.0_0.KSTREAM-FOREACH-0000000001-forward 11:31:00.305 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name task.0_0.KSTREAM-SOURCE-0000000000-create 11:31:00.306 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name create 11:31:00.306 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name task.0_0.KSTREAM-FOREACH-0000000001-create 11:31:00.306 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name task.0_0.KSTREAM-SOURCE-0000000000-destroy 11:31:00.306 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name destroy 11:31:00.306 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name task.0_0.KSTREAM-FOREACH-0000000001-destroy 11:31:00.307 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG 
org.apache.kafka.streams.processor.internals.RecordCollectorImpl - task [0_0] Flushing producer 11:31:00.322 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Committing offsets 11:31:00.333 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __consumer_offsets-20 unblocked 0 fetch requests. 11:31:00.334 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,20] on broker 0: High watermark updated to 2 [0 : 672] 11:31:00.334 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __consumer_offsets-20 unblocked 0 fetch requests. 11:31:00.334 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __consumer_offsets-20 unblocked 0 producer requests. 11:31:00.334 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __consumer_offsets-20 unblocked 0 DeleteRecordsRequest. 11:31:00.334 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Produce to local log in 0 ms 11:31:00.337 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group exactly-once committed offset 2 for partition my-topic-0 11:31:00.337 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Unsubscribed all topics or patterns and assigned partitions 11:31:00.338 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Updating suspended tasks to contain active tasks [0_0] 11:31:00.338 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Removing all active tasks [0_0] 11:31:00.338 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Removing all standby tasks [] 11:31:00.338 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] partition revocation took 35 ms. 
suspended active tasks: [0_0] suspended standby tasks: [] previous active tasks: [0_0] 11:31:00.338 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - (Re-)joining group exactly-once 11:31:00.338 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending JoinGroup ((type: JoinGroupRequest, groupId=exactly-once, sessionTimeout=10000, rebalanceTimeout=2147483647, memberId=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer-b7ad69a5-cc35-4032-95d6-188d3c6b7e81, protocolType=consumer, groupProtocols=org.apache.kafka.common.requests.JoinGroupRequest$ProtocolMetadata@ba1e975)) to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:00.341 [kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Stabilized group exactly-once generation 2 (__consumer_offsets-20) 11:31:00.342 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful JoinGroup response for group exactly-once: org.apache.kafka.common.requests.JoinGroupResponse@6c06a1b5 11:31:00.342 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Performing assignment for group exactly-once using strategy stream with subscriptions {exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2-consumer-a2a98c3a-d9cb-44b0-889d-3d7e1b901bf1=Subscription(topics=[my-topic]), exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer-b7ad69a5-cc35-4032-95d6-188d3c6b7e81=Subscription(topics=[my-topic])} 11:31:00.342 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Constructed client metadata {0b0d8a4e-7380-4eb4-887b-13b509f90181=ClientMetadata{hostInfo=null, consumers=[exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer-b7ad69a5-cc35-4032-95d6-188d3c6b7e81], state=[activeTasks: ([]) standbyTasks: ([]) assignedTasks: ([]) prevActiveTasks: ([0_0]) prevAssignedTasks: ([0_0]) capacity: 1]}, 77838593-3573-4c2a-99f2-7151b9f3e196=ClientMetadata{hostInfo=null, consumers=[exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2-consumer-a2a98c3a-d9cb-44b0-889d-3d7e1b901bf1], state=[activeTasks: ([]) standbyTasks: ([]) assignedTasks: ([]) prevActiveTasks: ([]) prevAssignedTasks: ([]) capacity: 1]}} from the member subscriptions. 11:31:00.342 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful JoinGroup response for group exactly-once: org.apache.kafka.common.requests.JoinGroupResponse@c7ea4fa 11:31:00.342 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Starting to validate internal topics in partition assignor. 
11:31:00.342 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Completed validating internal topics in partition assignor 11:31:00.342 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending follower SyncGroup for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null): (type=SyncGroupRequest, groupId=exactly-once, generationId=2, memberId=exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2-consumer-a2a98c3a-d9cb-44b0-889d-3d7e1b901bf1, groupAssignment=) 11:31:00.342 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Created repartition topics [] from the parsed topology. 11:31:00.343 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Starting to validate internal topics in partition assignor. 11:31:00.343 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Completed validating internal topics in partition assignor 11:31:00.343 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Created state changelog topics {} from the parsed topology. 11:31:00.343 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Assigning tasks [0_0] to clients {0b0d8a4e-7380-4eb4-887b-13b509f90181=[activeTasks: ([]) standbyTasks: ([]) assignedTasks: ([]) prevActiveTasks: ([0_0]) prevAssignedTasks: ([0_0]) capacity: 1], 77838593-3573-4c2a-99f2-7151b9f3e196=[activeTasks: ([]) standbyTasks: ([]) assignedTasks: ([]) prevActiveTasks: ([]) prevAssignedTasks: ([]) capacity: 1]} with number of replicas 0 11:31:00.343 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamPartitionAssignor - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Assigned tasks to clients as {0b0d8a4e-7380-4eb4-887b-13b509f90181=[activeTasks: ([0_0]) standbyTasks: ([]) assignedTasks: ([0_0]) prevActiveTasks: ([0_0]) prevAssignedTasks: ([0_0]) capacity: 1], 77838593-3573-4c2a-99f2-7151b9f3e196=[activeTasks: ([]) standbyTasks: ([]) assignedTasks: ([]) prevActiveTasks: ([]) prevAssignedTasks: ([]) capacity: 1]}. 
11:31:00.343 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Finished assignment for group exactly-once: {exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2-consumer-a2a98c3a-d9cb-44b0-889d-3d7e1b901bf1=Assignment(partitions=[]), exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer-b7ad69a5-cc35-4032-95d6-188d3c6b7e81=Assignment(partitions=[my-topic-0])} 11:31:00.343 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending leader SyncGroup for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null): (type=SyncGroupRequest, groupId=exactly-once, generationId=2, memberId=exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer-b7ad69a5-cc35-4032-95d6-188d3c6b7e81, groupAssignment=exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2-consumer-a2a98c3a-d9cb-44b0-889d-3d7e1b901bf1,exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1-consumer-b7ad69a5-cc35-4032-95d6-188d3c6b7e81) 11:31:00.344 [kafka-request-handler-7] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Assignment received from leader for group exactly-once for generation 2 11:31:00.345 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __consumer_offsets-20 unblocked 0 fetch requests. 11:31:00.346 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__consumer_offsets,20] on broker 0: High watermark updated to 3 [0 : 1518] 11:31:00.346 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __consumer_offsets-20 unblocked 0 fetch requests. 11:31:00.346 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __consumer_offsets-20 unblocked 0 producer requests. 11:31:00.346 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __consumer_offsets-20 unblocked 0 DeleteRecordsRequest. 11:31:00.346 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Produce to local log in 0 ms 11:31:00.347 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Successfully joined group exactly-once with generation 2 11:31:00.348 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Setting newly assigned partitions [my-topic-0] for group exactly-once 11:31:00.348 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Successfully joined group exactly-once with generation 2 11:31:00.348 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] at state PARTITIONS_REVOKED: new partitions [my-topic-0] assigned at the end of consumer rebalance. 
assigned active tasks: [0_0] assigned standby tasks: [] current suspended active tasks: [0_0] current suspended standby tasks: [] previous active tasks: [0_0] 11:31:00.349 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] State transition from PARTITIONS_REVOKED to ASSIGNING_PARTITIONS. 11:31:00.350 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181] State transition from REBALANCING to REBALANCING. 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Setting newly assigned partitions [] for group exactly-once 11:31:00.350 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Adding assigned tasks as active {0_0=[my-topic-0]} 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] at state PARTITIONS_REVOKED: new partitions [] assigned at the end of consumer rebalance. assigned active tasks: [] assigned standby tasks: [] current suspended active tasks: [] current suspended standby tasks: [] previous active tasks: [] 11:31:00.350 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Resuming 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] State transition from PARTITIONS_REVOKED to ASSIGNING_PARTITIONS. 11:31:00.350 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Initializing processor nodes of the topology 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196] State transition from REBALANCING to REBALANCING. 
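The subscriptions and sensor names above describe the whole application: a group "exactly-once" with two stream clients (0b0d8a4e... and 77838593..., each contributing one StreamThread), a topology consisting of a single source node KSTREAM-SOURCE-0000000000 on my-topic feeding KSTREAM-FOREACH-0000000001, no state stores (the changelog-topic lists are empty), and the lone task 0_0 going back to StreamThread-1 while StreamThread-2 gets nothing. A minimal sketch of one such instance in the 0.11-era DSL; the serdes, the foreach body, and the bootstrap address (one of this run's embedded brokers) are assumptions — only the application id and topic come from the log:

    import java.util.Properties
    import org.apache.kafka.common.serialization.Serdes
    import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
    import org.apache.kafka.streams.kstream.{ForeachAction, KStreamBuilder}

    val props = new Properties()
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")        // group/application id seen above
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")  // assumed; one embedded broker from this run
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass.getName)
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass.getName)

    val builder = new KStreamBuilder()
    builder.stream[String, String]("my-topic")                            // KSTREAM-SOURCE-0000000000
      .foreach(new ForeachAction[String, String] {                        // KSTREAM-FOREACH-0000000001
        override def apply(key: String, value: String): Unit = println(s"$key -> $value")
      })

    val streams = new KafkaStreams(builder, props)
    streams.start()

Starting a second instance with the same application id in the same JVM would produce the second client id and the rebalance recorded above.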
11:31:00.350 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name process 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Adding assigned tasks as active {} 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] New active tasks to be created: {} 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StoreChangelogReader - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Starting restoring state stores from changelog topics [] 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Unsubscribed all topics or patterns and assigned partitions 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Unsubscribed all topics or patterns and assigned partitions 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StoreChangelogReader - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Took 0 ms to restore all active states 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Adding assigned standby tasks {} 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] New standby tasks to be created: {} 11:31:00.350 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Unsubscribed all topics or patterns and assigned partitions 11:31:00.350 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-SOURCE-0000000000-process 11:31:00.351 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] State transition from ASSIGNING_PARTITIONS to RUNNING. 11:31:00.351 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196] State transition from REBALANCING to RUNNING. 11:31:00.351 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] partition assignment took 1 ms. 
current active tasks: [] current standby tasks: [] 11:31:00.351 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name punctuate 11:31:00.351 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-SOURCE-0000000000-punctuate 11:31:00.351 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name create 11:31:00.351 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-SOURCE-0000000000-create 11:31:00.351 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name destroy 11:31:00.351 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-SOURCE-0000000000-destroy 11:31:00.351 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name forward 11:31:00.352 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-SOURCE-0000000000-forward 11:31:00.352 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-FOREACH-0000000001-process 11:31:00.352 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-FOREACH-0000000001-punctuate 11:31:00.352 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-FOREACH-0000000001-create 11:31:00.352 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-FOREACH-0000000001-destroy 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name task.0_0.KSTREAM-FOREACH-0000000001-forward 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] New active tasks to be created: {} 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StoreChangelogReader - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Starting restoring state stores from changelog topics [] 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Unsubscribed all topics or patterns and assigned partitions 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Unsubscribed all topics or patterns and assigned partitions 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StoreChangelogReader - stream-thread 
[exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Took 0 ms to restore all active states 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Adding assigned standby tasks {} 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] New standby tasks to be created: {} 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Unsubscribed all topics or patterns and assigned partitions 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] State transition from ASSIGNING_PARTITIONS to RUNNING. 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.KafkaStreams - stream-client [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181] State transition from REBALANCING to RUNNING. 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] partition assignment took 4 ms. current active tasks: [0_0] current standby tasks: [] 11:31:00.353 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group exactly-once fetching committed offsets for partitions: [my-topic-0] 11:31:00.354 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Resetting offset for partition my-topic-0 to the committed offset 2 11:31:00.439 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:00.439 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:00.439 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:00.439 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:00.439 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:00.439 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: 
[3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:00.439 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:00.439 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:00.439 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:00.439 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:00.439 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:00.439 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:00.439 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:00.455 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 123ms 11:31:00.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:00.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:00.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:00.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:00.456 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:31:00.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:00.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:00.456 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG 
org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 1ms 11:31:00.456 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 1ms 11:31:00.541 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:00.541 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:00.541 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:00.541 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:00.541 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:00.541 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:00.541 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:00.541 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:00.541 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:00.541 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:00.541 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:00.541 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:00.541 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:00.541 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:00.541 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:00.557 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 2464ms 11:31:00.558 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:00.559 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:00.559 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:00.559 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:00.559 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:00.675 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:00.790 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 118ms 11:31:00.891 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:00.959 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:00.959 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:00.959 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:00.959 [ReplicaFetcherThread-0-2] DEBUG 
kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:00.959 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:00.959 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:00.959 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:00.959 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:00.959 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:00.959 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:00.959 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:00.959 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:00.959 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:00.992 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:01.056 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:01.056 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:01.057 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:01.057 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:01.057 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:01.057 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:01.057 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:01.057 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:01.057 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:01.058 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:01.058 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:01.058 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:01.058 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
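The recurring "Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed" entries are StreamThread-2, which holds no tasks after the rebalance, firing its commit timer on an empty task set. The 100 ms comes from commit.interval.ms: the stock default is 30000 ms, but it drops to 100 ms when exactly-once processing is enabled, so this run either sets it explicitly or inherits the exactly-once default. Setting it explicitly would look like this (same hypothetical props object as in the sketch above):

    props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, "100")   // matches the "commit interval 100ms" entries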
11:31:01.058 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:01.058 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:01.075 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:01.075 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:01.075 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:01.075 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:01.075 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:01.092 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:01.308 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:01.409 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 116ms 11:31:01.475 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:01.475 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:01.475 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:01.475 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:01.475 
[ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:01.475 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:01.475 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:01.475 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:01.475 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:01.475 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:01.475 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:01.475 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:01.475 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:01.524 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:01.575 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:01.575 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:01.575 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:01.575 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:01.575 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:01.575 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:01.575 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:01.575 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:01.575 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:01.575 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:01.575 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:01.575 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:01.575 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 
11:31:01.575 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:01.575 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:01.591 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:01.591 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:01.592 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:01.592 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:01.592 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:01.624 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:01.840 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 214ms 11:31:01.955 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 102ms 11:31:01.991 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:01.991 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:01.991 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:01.991 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:01.991 
[ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:01.991 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:01.991 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:01.991 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:01.991 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:01.991 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:01.991 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:01.991 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:01.991 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:02.076 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:02.092 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:02.092 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:02.092 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:02.092 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:02.092 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:02.092 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:02.092 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:02.092 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:02.092 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:02.092 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:02.092 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:02.092 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:02.092 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 
11:31:02.092 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:02.092 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:02.107 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:02.107 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:02.107 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:02.107 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:02.107 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:02.176 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 121ms 11:31:02.377 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:02.455 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:02.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:02.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:02.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:02.456 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:31:02.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:02.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe 
zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:02.456 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms 11:31:02.456 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms 11:31:02.508 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:02.508 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:02.508 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:02.508 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:02.508 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:02.508 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:02.508 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:02.508 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:02.508 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:02.508 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:02.508 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:02.508 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:02.508 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
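The Fetcher lines from StreamThread-1 show "Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 ... lastStableOffset = -1", i.e. this consumer polls with the default isolation level and is not held back by open transactions. A consumer that should observe only committed transactional writes would set isolation.level=read_committed instead. Below is a minimal, stand-alone sketch of such a verification consumer; it is not code from the test, and the broker address and group id are placeholders.

  import java.util.{Collections, Properties}
  import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
  import org.apache.kafka.common.serialization.StringDeserializer
  import scala.collection.JavaConverters._

  object ReadCommittedConsumerSketch extends App {
    val props = new Properties()
    // Placeholder broker address and group id, used only for illustration.
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63361")
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "verification-consumer")
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
    // read_committed makes the fetcher stop at the last stable offset, so records from
    // aborted transactions are filtered out; the log above shows the default read_uncommitted.
    props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")

    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(Collections.singletonList("my-topic"))
    val records = consumer.poll(1000L) // 0.11-era poll(long) signature
    for (r <- records.asScala) println(s"${r.offset}: ${r.value}")
    consumer.close()
  }
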
11:31:02.593 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 215ms 11:31:02.608 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:02.608 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:02.608 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0000 after 1ms 11:31:02.608 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:02.608 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:02.608 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:02.608 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:02.608 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:02.608 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:02.608 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:02.608 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 
11:31:02.608 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:02.608 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:02.608 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:02.608 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:02.608 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:02.608 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:02.608 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:02.624 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:02.624 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:02.624 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:02.624 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:02.624 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:02.693 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:02.908 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:03.009 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:03.024 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:03.024 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:03.024 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:03.024 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:03.024 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:03.024 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:03.024 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:03.024 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:03.024 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 
11:31:03.024 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:03.024 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:03.024 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:03.024 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:03.109 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:03.125 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:03.125 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:03.125 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:03.125 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:03.125 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:03.125 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:03.125 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:03.125 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:03.125 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 
11:31:03.125 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:03.125 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:03.125 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:03.125 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:03.125 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:03.125 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:03.140 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:03.140 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:03.140 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:03.140 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:03.140 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:03.310 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:03.425 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:03.425 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:03.425 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:03.440 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:03.440 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:03.525 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:03.541 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:03.541 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:03.541 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:03.541 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:03.541 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:03.541 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:03.541 
[kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:03.541 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:03.541 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:03.541 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:03.541 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:03.541 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:03.541 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:03.641 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:03.641 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:03.641 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:03.641 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:03.641 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:03.641 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:03.641 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:03.641 
[kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:03.641 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:03.641 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:03.641 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:03.641 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:03.641 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:03.641 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:03.641 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:03.658 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:03.658 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:03.658 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:03.659 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:03.659 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:03.734 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:03.843 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 108ms 11:31:03.954 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 109ms 11:31:04.048 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:04.049 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:04.049 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:04.049 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:04.049 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:04.049 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:04.049 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:04.049 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:04.050 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:04.050 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:04.050 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:04.050 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:04.050 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:04.063 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 111ms 11:31:04.151 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:04.151 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:04.151 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:04.151 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:04.151 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:04.151 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:04.152 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:04.152 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:04.152 
[kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:04.152 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:04.152 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:04.152 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:04.152 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:04.152 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:04.152 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:04.180 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 109ms 11:31:04.180 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:04.180 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:04.180 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:04.180 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:04.180 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:04.280 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 117ms 11:31:04.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:04.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:04.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:04.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:04.456 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms 11:31:04.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:04.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:04.456 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:31:04.456 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms 11:31:04.495 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:04.559 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:04.559 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:04.560 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:04.560 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:04.560 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for 
partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:04.560 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:04.560 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:04.560 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:04.560 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:04.560 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:04.560 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:04.560 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:04.561 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:04.596 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 114ms 11:31:04.680 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:04.680 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:04.680 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:04.680 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:04.680 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:04.680 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:04.680 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:04.680 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:04.680 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:04.680 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:04.680 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:04.680 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:04.680 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 
11:31:04.680 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:04.680 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:04.696 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms
11:31:04.696 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:04.696 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:04.696 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:04.696 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0.
11:31:04.696 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:04.908 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms
11:31:05.012 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 112ms
11:31:05.081 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:05.081 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:31:05.081 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null)
11:31:05.081 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null)
11:31:05.081 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:05.081 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:05.081 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:05.081 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3.
11:31:05.081 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:05.081 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:05.081 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:05.081 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3.
11:31:05.081 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:05.112 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 104ms
11:31:05.197 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325)
11:31:05.197 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325)
11:31:05.197 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:05.197 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:05.197 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:05.197 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:05.197 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:05.197 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:05.197 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0.
11:31:05.197 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0.
11:31:05.197 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:05.197 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:05.197 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:05.197 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0.
11:31:05.197 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:05.212 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:05.212 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:05.212 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:05.212 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0.
11:31:05.212 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:05.313 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms
11:31:05.424 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms
11:31:05.529 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 111ms
11:31:05.598 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:31:05.598 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null)
11:31:05.598 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null)
11:31:05.598 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:05.598 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:05.598 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:05.598 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:05.598 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:05.598 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3.
11:31:05.598 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:05.598 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:05.598 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3.
11:31:05.598 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:05.629 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 105ms
11:31:05.714 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325)
11:31:05.714 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:05.714 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325)
11:31:05.714 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:05.714 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:05.714 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:05.714 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:05.714 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:05.714 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0.
11:31:05.714 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:05.714 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0.
11:31:05.714 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:05.714 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0.
11:31:05.714 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:05.714 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:05.729 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:05.729 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:05.729 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:05.729 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0.
11:31:05.729 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:05.830 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms
11:31:05.940 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms
11:31:05.956 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a
11:31:05.956 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a
11:31:05.956 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0000 after 0ms
11:31:06.045 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 110ms
11:31:06.100 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:31:06.101 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null)
11:31:06.101 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:06.101 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null)
11:31:06.101 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:06.102 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:06.102 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3.
11:31:06.102 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:06.102 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:06.103 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:06.103 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:06.103 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3.
11:31:06.103 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:06.146 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 105ms
11:31:06.232 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325)
11:31:06.232 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:06.232 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:06.232 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325)
11:31:06.232 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:06.232 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:06.232 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:06.232 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:06.232 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:06.232 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:06.232 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0.
11:31:06.232 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0.
11:31:06.232 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:06.232 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:06.232 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:06.232 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:06.232 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0.
11:31:06.232 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0.
11:31:06.232 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:06.232 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:06.247 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms
11:31:06.348 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms
11:31:06.448 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms
11:31:06.454 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null)
11:31:06.455 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null)
11:31:06.456 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once
11:31:06.457 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once
11:31:06.470 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a
11:31:06.470 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a
11:31:06.470 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a
11:31:06.470 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a
11:31:06.470 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a
11:31:06.470 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a
11:31:06.470 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms
11:31:06.470 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms
11:31:06.470 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms
11:31:06.610 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:31:06.610 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null)
11:31:06.610 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null)
11:31:06.611 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:06.611 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:06.611 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:06.611 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:06.611 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3.
11:31:06.611 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:06.612 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:06.612 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:06.612 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3.
11:31:06.612 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:06.684 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 215ms
11:31:06.746 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:06.746 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325)
11:31:06.746 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325)
11:31:06.746 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:06.746 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:06.746 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:06.746 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:06.746 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:06.746 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:06.746 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0.
11:31:06.746 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0.
11:31:06.746 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:06.746 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:06.746 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:06.746 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:06.746 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0.
11:31:06.746 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:06.746 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:06.746 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0.
11:31:06.746 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:06.784 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 121ms
11:31:07.000 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms
11:31:07.115 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 116ms
11:31:07.115 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:31:07.115 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null)
11:31:07.115 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null)
11:31:07.115 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:07.115 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:07.115 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:07.115 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:07.115 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:07.115 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3.
11:31:07.115 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:07.115 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:07.115 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3.
11:31:07.115 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:07.215 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms
11:31:07.263 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:07.263 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325)
11:31:07.263 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325)
11:31:07.263 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:07.264 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:07.264 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:07.264 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:07.264 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:07.264 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:07.264 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:07.264 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0.
11:31:07.264 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:07.264 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0.
11:31:07.264 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0.
11:31:07.264 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:07.264 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:07.264 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:07.264 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:07.264 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0.
11:31:07.264 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:07.424 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms
11:31:07.531 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 108ms
11:31:07.616 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:31:07.616 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null)
11:31:07.616 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null)
11:31:07.632 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 107ms
11:31:07.632 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:07.632 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:07.632 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:07.632 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:07.632 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:07.632 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3.
11:31:07.632 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:07.632 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3.
11:31:07.632 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:07.632 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:07.747 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms
11:31:07.785 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325)
11:31:07.785 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:07.785 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344)
11:31:07.785 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325)
11:31:07.785 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:07.785 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:07.785 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:07.785 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:07.785 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0.
11:31:07.785 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:07.785 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0.
11:31:07.785 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:07.785 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:07.785 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:07.785 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:07.785 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:07.785 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:07.785 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:07.785 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:07.785 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:07.854 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-47 topicPartition=__consumer_offsets-47. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-14 topicPartition=__consumer_offsets-14. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-11 topicPartition=__consumer_offsets-11. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-44 topicPartition=__consumer_offsets-44. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-41 topicPartition=__consumer_offsets-41. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-23 topicPartition=__consumer_offsets-23. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-2 topicPartition=__transaction_state-2. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-20 topicPartition=__consumer_offsets-20. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-17 topicPartition=__consumer_offsets-17. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-32 topicPartition=__consumer_offsets-32. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-29 topicPartition=__consumer_offsets-29. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-0 topicPartition=__transaction_state-0. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-26 topicPartition=__consumer_offsets-26. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-1 topicPartition=__transaction_state-1. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-8 topicPartition=__consumer_offsets-8. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-5 topicPartition=__consumer_offsets-5. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-38 topicPartition=__consumer_offsets-38. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-35 topicPartition=__consumer_offsets-35. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.892 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-2 topicPartition=__consumer_offsets-2. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:07.955 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 107ms 11:31:08.063 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:08.132 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:08.132 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:08.132 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:08.147 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:08.147 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:08.147 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:08.147 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:08.147 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 
3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:08.147 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:08.147 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:08.147 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:08.147 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:08.147 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:08.163 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 108ms 11:31:08.295 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:08.295 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:08.295 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:08.295 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:08.295 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:08.295 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:08.296 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:08.296 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:08.296 [kafka-request-handler-0] DEBUG 
kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:08.296 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:08.296 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:08.296 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:08.296 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:08.296 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:08.296 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:08.296 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:08.296 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:08.296 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:08.296 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:08.296 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
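The kafka-log-cleaner-thread entries a few lines above come from the compaction thread scanning the cleanable range of the compacted internal topics (__consumer_offsets, __transaction_state); with little or nothing written to most partitions, every range it finds is empty. A user topic gets the same treatment when it is created with cleanup.policy=compact; a hypothetical sketch using the AdminClient (topic name, partition/replica counts and address are illustrative only):

```scala
import java.util.{Collections, Properties}
import scala.collection.JavaConverters._
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}
import org.apache.kafka.common.config.TopicConfig

// Hypothetical example: create a compacted topic so the log cleaner
// scans it the same way it scans __consumer_offsets above.
val props = new Properties()
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092")  // placeholder address

val admin = AdminClient.create(props)
val compacted = new NewTopic("my-compacted-topic", 3, 3.toShort)
  .configs(Map(TopicConfig.CLEANUP_POLICY_CONFIG -> TopicConfig.CLEANUP_POLICY_COMPACT).asJava)

admin.createTopics(Collections.singletonList(compacted)).all().get()
admin.close()
```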
11:31:08.364 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:08.470 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:08.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:08.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:08.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:08.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:08.486 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:31:08.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:08.486 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms 11:31:08.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:08.486 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms 11:31:08.579 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 106ms 11:31:08.642 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:08.642 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:08.642 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 
(id: 2 rack: null) 11:31:08.653 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:08.653 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:08.653 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:08.653 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:08.653 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:08.653 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:08.653 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:08.653 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:08.653 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:08.653 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:08.684 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 109ms 11:31:08.788 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 105ms 11:31:08.804 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:08.804 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:08.804 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:08.805 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:08.805 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:08.805 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:08.805 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:08.805 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:08.805 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:08.805 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 
11:31:08.805 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:08.805 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:08.806 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:08.806 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:08.805 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:08.806 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:08.806 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:08.806 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:08.806 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:08.806 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:08.889 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 104ms 11:31:09.004 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:09.104 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:09.158 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:09.158 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:09.158 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:09.158 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:09.158 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:09.158 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:09.158 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:09.158 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:09.158 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 
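The Fetcher lines above show StreamThread-1 polling my-topic-0 at offset 2 with READ_UNCOMMITTED isolation, meaning it would also see records from transactions that have not yet committed. On a plain consumer that behaviour is selected with isolation.level; a hedged sketch (group id and address are placeholders):

```scala
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.serialization.StringDeserializer

// Hypothetical verification consumer: with "read_committed" the fetches above
// would only return records from committed transactions; the default is
// "read_uncommitted", which is what this log shows.
val props = new Properties()
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092")   // placeholder address
props.put(ConsumerConfig.GROUP_ID_CONFIG, "verify")                    // placeholder group id
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")

val consumer = new KafkaConsumer[String, String](props)
consumer.subscribe(Collections.singletonList("my-topic"))
```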
11:31:09.158 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:09.158 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:09.158 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:09.158 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:09.297 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:09.297 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:09.297 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0000 after 0ms 11:31:09.313 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:09.313 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:09.313 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:09.313 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:09.314 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:09.314 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:09.314 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:09.314 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], 
leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:09.314 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:09.314 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:09.314 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:09.314 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:09.314 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:09.314 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:09.314 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:09.314 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:09.314 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:09.314 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:09.314 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:09.314 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:09.328 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 215ms 11:31:09.439 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 109ms 11:31:09.455 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-15 topicPartition=__consumer_offsets-15. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.455 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-48 topicPartition=__consumer_offsets-48. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.455 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-45 topicPartition=__consumer_offsets-45. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.456 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-12 topicPartition=__consumer_offsets-12. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.456 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-9 topicPartition=__consumer_offsets-9. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.457 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-42 topicPartition=__consumer_offsets-42. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.457 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-2 topicPartition=__transaction_state-2. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.457 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-24 topicPartition=__consumer_offsets-24. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.457 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-21 topicPartition=__consumer_offsets-21. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.457 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-18 topicPartition=__consumer_offsets-18. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.457 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-0 topicPartition=__consumer_offsets-0. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.457 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-30 topicPartition=__consumer_offsets-30. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.458 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-27 topicPartition=__consumer_offsets-27. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.458 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-0 topicPartition=__transaction_state-0. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.458 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-1 topicPartition=__transaction_state-1. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.458 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-39 topicPartition=__consumer_offsets-39. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.458 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-6 topicPartition=__consumer_offsets-6. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.458 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-3 topicPartition=__consumer_offsets-3. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.459 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-36 topicPartition=__consumer_offsets-36. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.459 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-33 topicPartition=__consumer_offsets-33. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.532 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:09.533 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:09.539 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 111ms 11:31:09.547 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:09.549 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-16 topicPartition=__consumer_offsets-16. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-13 topicPartition=__consumer_offsets-13. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-46 topicPartition=__consumer_offsets-46. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-43 topicPartition=__consumer_offsets-43. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-10 topicPartition=__consumer_offsets-10. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-2 topicPartition=__transaction_state-2. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-22 topicPartition=__consumer_offsets-22. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-19 topicPartition=__consumer_offsets-19. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-49 topicPartition=__consumer_offsets-49. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-31 topicPartition=__consumer_offsets-31. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-28 topicPartition=__consumer_offsets-28. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-25 topicPartition=__consumer_offsets-25. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-0 topicPartition=__transaction_state-0. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-1 topicPartition=__transaction_state-1. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-7 topicPartition=__consumer_offsets-7. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-40 topicPartition=__consumer_offsets-40. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-37 topicPartition=__consumer_offsets-37. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-4 topicPartition=__consumer_offsets-4. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-1 topicPartition=__consumer_offsets-1. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.598 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-34 topicPartition=__consumer_offsets-34. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:09.667 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:09.667 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:09.667 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:09.667 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:09.667 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:09.667 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:09.667 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:09.667 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:09.667 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
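The AbstractCoordinator entries a few lines above ("Sending Heartbeat request for group exactly-once ... Received successful Heartbeat response") are the background heartbeat thread keeping the group membership alive. The timing of that traffic is governed by the consumer group settings; in a Streams application they can be passed through the same Properties object with the consumer prefix. A hypothetical tuning sketch with illustrative values, not values read from this run:

```scala
import java.util.Properties
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.streams.StreamsConfig

// Hypothetical sketch: consumer-side group timing settings passed through
// Streams properties. Values are illustrative defaults only.
val props = new Properties()
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG), "3000")
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG), "10000")
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG), "300000")
```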
11:31:09.667 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:09.667 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:09.667 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:09.667 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:09.745 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 206ms 11:31:09.830 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:09.830 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:09.830 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:09.830 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:09.830 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:09.830 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:09.830 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:09.830 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:09.830 [kafka-request-handler-7] DEBUG 
kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:09.830 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:09.830 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:09.830 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:09.830 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:09.830 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:09.830 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:09.830 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:09.830 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:09.830 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:09.830 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:09.830 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:09.968 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:10.068 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 123ms 11:31:10.184 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:10.184 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:10.184 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:10.184 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:10.184 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:10.184 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:10.184 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:10.184 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:10.184 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:10.184 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
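The repeated StreamThread messages above ("Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed ...") come from a Kafka Streams application whose application id / group is exactly-once and whose commit interval is 100 ms; the surrounding __transaction_state traffic suggests the exactly-once processing guarantee is enabled. A minimal sketch of the Streams settings that would produce this behaviour (the object name and the bootstrapServers parameter are illustrative, not taken from the test):

    import java.util.Properties
    import org.apache.kafka.streams.StreamsConfig

    object ExactlyOnceStreamsConfigSketch {
      // Illustrative only: the settings this test appears to be running with.
      def streamsProps(bootstrapServers: String): Properties = {
        val p = new Properties()
        p.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")      // the application/group id seen in the log
        p.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers) // e.g. the embedded cluster's broker list
        p.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, "100")           // why "commit interval 100ms has elapsed" repeats
        p.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE) // presumed: enables transactional processing
        p
      }
    }
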
11:31:10.184 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:10.184 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:10.184 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:10.268 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:10.348 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:10.348 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:10.348 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:10.348 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:10.349 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:10.349 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:10.349 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:10.349 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:10.349 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:10.349 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - 
Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:10.349 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:10.349 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:10.349 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:10.349 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:10.349 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:10.349 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:10.349 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:10.349 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:10.349 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:10.349 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:10.470 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:10.487 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:10.487 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:10.487 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:10.487 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:10.487 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms 11:31:10.487 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms 11:31:10.487 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:10.487 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:10.487 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:31:10.570 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:10.701 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:10.701 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:10.701 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:10.701 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:10.701 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: 
Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:10.701 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:10.701 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:10.701 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:10.701 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:10.701 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:10.701 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:10.701 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:10.701 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:10.785 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 215ms 11:31:10.851 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:10.851 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:10.851 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:10.851 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:10.852 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:10.852 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:10.852 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:10.852 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:10.852 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:10.852 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:10.852 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 
11:31:10.852 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:10.852 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:10.852 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:10.852 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:10.852 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:10.852 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:10.852 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:10.852 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:10.852 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:10.986 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:11.086 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:11.218 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:11.218 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:11.218 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:11.218 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:11.218 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:11.218 
[kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:11.218 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:11.218 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:11.218 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:11.218 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:11.218 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:11.218 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:11.218 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:11.287 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:11.355 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:11.355 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:11.356 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:11.357 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:11.357 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:11.357 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:11.357 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:11.357 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:11.357 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:11.357 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:11.357 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:11.357 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 
11:31:11.358 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:11.358 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:11.358 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:11.358 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:11.358 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:11.358 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:11.358 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:11.358 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:11.387 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:11.617 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 215ms 11:31:11.717 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:11.733 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:11.733 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:11.733 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: 
null) 11:31:11.733 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:11.733 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:11.733 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:11.733 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:11.733 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:11.733 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:11.733 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:11.733 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:11.733 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:11.733 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:11.860 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:11.860 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:11.860 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:11.860 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:11.860 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:11.860 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:11.860 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:11.860 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:11.860 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:11.860 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:11.861 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:11.861 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:11.861 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:11.861 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
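The Fetcher lines a few records above show a consumer polling my-topic-0 at offset 2 with READ_UNCOMMITTED isolation (lastStableOffset = -1, abortedTransactions = null), i.e. it reads everything up to the high watermark regardless of transaction state. For contrast, a standalone consumer that should only see records from committed transactions would set isolation.level to read_committed; a minimal sketch, with the group id and object name made up for illustration:

    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
    import org.apache.kafka.common.serialization.StringDeserializer
    import scala.collection.JavaConverters._

    object ReadCommittedProbeSketch {
      // Reads my-topic (the topic fetched in the log) but skips records from aborted/open transactions.
      def dumpCommitted(bootstrapServers: String): Unit = {
        val props = new Properties()
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers)
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "exactly-once-probe")        // hypothetical group id
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")     // only committed transactional data
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)

        val consumer = new KafkaConsumer[String, String](props)
        try {
          consumer.subscribe(Collections.singletonList("my-topic"))
          val records = consumer.poll(1000L)                                   // 0.11-era poll(timeoutMs)
          for (r <- records.asScala)
            println(s"partition ${r.partition} offset ${r.offset}: ${r.value}")
        } finally consumer.close()
      }
    }
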
11:31:11.861 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:11.861 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:11.861 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:11.861 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:11.861 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:11.861 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:11.925 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:12.033 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 107ms 11:31:12.134 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 108ms 11:31:12.234 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:12.250 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:12.251 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:12.251 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:12.251 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:12.251 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - 
[ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:12.252 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:12.252 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:12.252 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:12.252 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:12.252 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:12.252 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:12.252 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:12.252 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:12.362 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:12.362 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:12.363 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:12.362 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:12.363 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:12.363 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:12.363 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:12.363 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:12.363 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:12.363 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:12.363 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:12.363 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:12.363 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:12.363 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 
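The __transaction_state activity above (partitions 0-2 with leaders spread over brokers 0-2, followers fetching and recording log end offsets, high watermarks left unchanged because nothing new is being written) is the transaction coordinator's internal topic being kept in sync across the three embedded brokers. A hedged sketch of broker-side overrides consistent with that layout; the values are assumptions about this embedded cluster, not read from it (the stock defaults are 50 partitions with replication factor 3):

    import java.util.Properties

    object TransactionStateLogSketch {
      // Illustrative broker overrides: a 3-partition, fully replicated __transaction_state topic.
      def brokerOverrides(): Properties = {
        val p = new Properties()
        p.put("transaction.state.log.num.partitions", "3")      // assumed: matches partitions 0-2 seen in the log
        p.put("transaction.state.log.replication.factor", "3")  // every broker carries a replica, hence the fetcher chatter
        p.put("transaction.state.log.min.isr", "2")
        p
      }
    }
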
11:31:12.363 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:12.363 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:12.363 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:12.364 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:12.364 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:12.364 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:12.471 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 215ms 11:31:12.487 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:12.487 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:12.487 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:12.487 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:12.487 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms 11:31:12.487 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:12.487 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:12.487 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:31:12.487 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms 11:31:12.572 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 122ms 11:31:12.572 [kafka-coordinator-heartbeat-thread | 
exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:12.572 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:12.572 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:12.572 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:12.634 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:12.634 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:12.634 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0000 after 0ms 11:31:12.672 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:12.772 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:12.772 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:12.772 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:12.772 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:12.772 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:12.772 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:12.772 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: 
ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:12.772 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:12.772 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:12.772 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:12.772 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:12.772 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:12.772 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:12.871 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:12.871 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:12.871 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:12.871 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:12.871 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:12.871 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:12.871 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:12.871 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: 
[FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:12.871 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:12.872 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:12.872 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:12.872 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:12.872 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:12.872 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:12.872 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:12.872 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:12.872 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:12.872 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:12.872 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:12.872 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:12.888 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 215ms 11:31:13.004 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:13.104 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 116ms 11:31:13.279 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:13.279 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:13.279 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:13.280 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:13.280 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:13.281 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:13.281 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:13.281 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:13.281 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 
11:31:13.281 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:13.281 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:13.281 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:13.281 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:13.322 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 215ms 11:31:13.376 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:13.376 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:13.376 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:13.376 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:13.376 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:13.376 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:13.376 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:13.376 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:13.376 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping 
update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:13.376 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:13.377 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:13.377 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:13.377 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:13.377 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:13.377 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:13.377 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:13.377 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:13.377 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:13.377 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:13.377 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
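[editor's note] The StreamThread messages above ("Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed ...") come from the Streams commit loop: the thread wakes up every commit interval, and since its active and standby task lists are empty there is nothing to actually commit. A minimal Scala sketch of the kind of configuration behind this is shown below; the application id matches the log, but the bootstrap address is a placeholder and the rest is assumed rather than read from this run. The 100 ms figure in the log is consistent either with an explicit commit.interval.ms=100 or with the default that applies when processing.guarantee is exactly_once.

    import java.util.Properties
    import org.apache.kafka.streams.StreamsConfig

    object ExactlyOnceStreamsConfig {
      // Placeholder broker address; the embedded cluster in this run listens on random localhost ports.
      val props = new Properties()
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
      // Enables Kafka Streams' exactly-once processing (transactional producers, transactional commits).
      props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)
      // Optional: 100 ms is already the default commit interval under exactly_once.
      props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, "100")
    }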
11:31:13.424 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 103ms 11:31:13.535 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 102ms 11:31:13.635 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 111ms 11:31:13.790 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:13.790 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:13.790 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:13.790 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:13.790 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:13.791 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:13.791 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:13.791 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:13.791 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 
11:31:13.791 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:13.791 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:13.791 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:13.791 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:13.840 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:13.878 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:13.878 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:13.879 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:13.879 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:13.879 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:13.879 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:13.879 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:13.879 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:13.879 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:13.879 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 
11:31:13.879 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:13.879 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:13.880 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:13.880 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:13.880 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:13.880 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:13.880 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:13.880 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:13.880 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:13.880 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:13.955 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 104ms 11:31:14.074 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:14.189 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 119ms 11:31:14.290 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:14.308 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:14.308 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:14.308 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:14.308 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:14.308 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:14.308 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:14.308 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:14.308 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 
3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:14.308 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:14.308 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:14.308 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:14.308 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:14.308 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:14.381 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:14.382 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:14.382 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:14.382 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:14.382 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:14.382 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:14.382 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:14.382 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:14.383 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:14.383 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:14.383 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:14.383 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:14.383 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:14.383 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:14.383 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:14.383 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:14.383 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:14.383 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:14.383 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:14.383 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:14.391 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:14.490 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:14.490 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:14.490 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:14.490 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:14.490 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms 11:31:14.490 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:14.490 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms 11:31:14.490 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:14.491 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:31:14.492 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:14.608 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:14.708 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 116ms 11:31:14.825 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:14.825 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG 
org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:14.825 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:14.825 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:14.825 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:14.825 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:14.825 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:14.825 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:14.825 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:14.825 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:14.825 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:14.825 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:14.825 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
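[editor's note] The Fetcher lines above keep polling my-topic-0 at offset 2 with READ_UNCOMMITTED, the default isolation level, which is consistent with lastStableOffset = -1 and no abortedTransactions metadata in the responses. For contrast, a plain consumer meant to see only committed transactional output would be configured as in the sketch below; only the topic name comes from the log, while the group id "verifier" and the broker address are made up for illustration.

    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
    import org.apache.kafka.common.serialization.StringDeserializer

    object CommittedReadConsumer {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "verifier")                // made-up group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
        // read_committed stops fetching at the last stable offset and filters out
        // records from aborted transactions; the fetches in this log use the
        // default read_uncommitted instead.
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")

        val consumer = new KafkaConsumer[String, String](props)
        consumer.subscribe(Collections.singletonList("my-topic"))
        // consumer.poll(...) as usual; records beyond the last stable offset are not returned.
      }
    }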
11:31:14.889 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:14.889 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:14.889 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:14.889 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:14.889 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:14.889 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:14.889 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:14.890 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:14.890 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:14.890 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:14.890 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 
11:31:14.890 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:14.890 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:14.890 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:14.890 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:14.890 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:14.890 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:14.890 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:14.890 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:14.890 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:14.924 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:15.038 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:15.138 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 114ms 11:31:15.339 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:15.339 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:15.339 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to 
node 127.0.0.1:63361 (id: 2 rack: null) 11:31:15.339 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:15.339 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:15.339 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:15.339 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:15.339 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:15.339 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:15.339 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:15.339 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:15.339 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:15.339 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:15.339 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:15.393 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:15.393 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:15.393 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:15.393 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:15.393 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:15.393 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:15.393 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:15.393 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:15.393 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:15.393 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:15.393 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:15.393 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:15.393 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:15.393 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:15.393 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:15.393 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:15.393 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:15.393 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:15.393 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:15.393 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:15.440 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:15.555 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:15.608 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:15.608 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:15.655 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:15.655 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:15.656 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:15.842 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, 
recordsSizeInBytes=0) 11:31:15.842 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:15.842 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:15.844 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:15.844 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:15.845 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:15.845 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:15.845 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:15.845 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:15.845 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:15.845 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:15.845 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:15.845 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
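[editor's note] Most of the broker-side noise in this stretch is the three brokers replicating the internal __transaction_state topic among themselves: partition 0 (led by broker 2) sits at offset 3, partitions 1 and 2 (led by brokers 0 and 1) are still empty, and every fetch ends with "unblocked 0 producer requests" because nothing new arrived. How that topic is sized comes from broker configuration; the sketch below is an assumption about what a three-broker embedded test cluster like this one might override, not something read from the log.

    import java.util.Properties

    object TransactionStateLogProps {
      // Hypothetical per-broker overrides; only the partition count (three
      // __transaction_state partitions appear in the log) is actually observable here.
      val brokerProps = new Properties()
      brokerProps.put("transaction.state.log.num.partitions", "3")
      // Each partition is fetched by the other two brokers, i.e. a replication factor of 3.
      brokerProps.put("transaction.state.log.replication.factor", "3")
      brokerProps.put("transaction.state.log.min.isr", "2")
    }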
11:31:15.877 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 222ms 11:31:15.909 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:15.909 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:15.909 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:15.909 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:15.909 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:15.909 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:15.909 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:15.909 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:15.909 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:15.909 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:15.909 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:15.909 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:15.909 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 
11:31:15.909 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:15.909 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:15.909 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:15.909 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:15.909 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:15.909 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:15.909 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:15.978 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:15.978 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:15.978 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0000 after 0ms 11:31:16.078 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:16.279 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:16.345 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:16.345 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:16.345 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 
127.0.0.1:63361 (id: 2 rack: null) 11:31:16.348 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:16.348 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:16.349 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:16.349 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:16.349 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:16.349 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:16.349 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:16.349 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:16.349 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:16.349 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
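The __transaction_state traffic above is the transaction coordinator's internal topic being kept in sync across the three embedded brokers: partition 0 already holds three records (leader HW and LEO are both 3), partitions 1 and 2 are still empty, and every follower fetch merely confirms that nothing new has arrived, hence the constant "Skipping update high watermark" and "unblocked 0 producer requests" lines. How many partitions and replicas this topic gets is a broker-side setting; a hedged sketch of the kind of overrides an embedded test cluster typically applies (the actual values used by this run are not visible in the log, which only mentions partitions 0 through 2):

    import java.util.Properties

    val brokerOverrides = new Properties()
    // Hypothetical overrides: production defaults are 50 partitions and
    // replication factor 3 for the transaction state log.
    brokerOverrides.put("transaction.state.log.num.partitions", "3")
    brokerOverrides.put("transaction.state.log.replication.factor", "3")
    brokerOverrides.put("transaction.state.log.min.isr", "2")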
11:31:16.379 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:16.425 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:16.425 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:16.425 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:16.425 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:16.425 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:16.425 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:16.425 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:16.425 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:16.425 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:16.425 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:16.425 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:16.425 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 
11:31:16.425 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:16.425 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:16.425 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:16.425 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:16.425 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:16.425 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:16.425 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:16.425 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:16.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:16.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:16.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:16.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:16.496 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms 11:31:16.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:16.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:16.496 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:31:16.496 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms 11:31:16.594 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed 
by 200ms 11:31:16.695 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:16.795 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:16.846 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:16.846 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:16.846 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:16.850 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:16.850 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:16.851 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:16.851 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:16.851 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:16.851 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:16.851 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:16.851 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
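StreamThread-1's consumer keeps polling my-topic-0 at offset 2 with READ_UNCOMMITTED isolation and gets empty responses back (highWaterMark=2, recordsSizeInBytes=0), i.e. it is already caught up with the partition. When checking exactly-once output, a separate verification consumer is normally run with isolation.level=read_committed so that only records from committed transactions are visible; a sketch of such a consumer's settings, with the group id and deserializers chosen purely for illustration:

    import java.util.Properties
    import org.apache.kafka.clients.consumer.ConsumerConfig
    import org.apache.kafka.common.serialization.StringDeserializer

    val verifierProps = new Properties()
    // Assumed broker address for the embedded cluster.
    verifierProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
    // Hypothetical group id, distinct from the Streams application id.
    verifierProps.put(ConsumerConfig.GROUP_ID_CONFIG, "exactly-once-verifier")
    // read_committed hides records belonging to open or aborted transactions.
    verifierProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")
    verifierProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
    verifierProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)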
11:31:16.851 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:16.851 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:16.939 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:16.939 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:16.939 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:16.939 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:16.939 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:16.939 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:16.939 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:16.939 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:16.939 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:16.939 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:16.939 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:16.939 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 
11:31:16.939 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:16.939 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:16.939 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:16.939 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:16.939 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:16.939 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:16.939 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:16.939 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:17.017 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 213ms 11:31:17.118 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 109ms 11:31:17.218 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:17.349 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:17.350 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:17.350 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:17.354 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:17.354 
[ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:17.355 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:17.355 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:17.355 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:17.355 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:17.355 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:17.355 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:17.355 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:17.355 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:17.440 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:17.443 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:17.443 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:17.443 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:17.443 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:17.443 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:17.443 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:17.443 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:17.443 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:17.443 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:17.443 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:17.443 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 
11:31:17.443 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:17.443 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:17.444 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:17.444 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:17.444 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:17.444 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:17.444 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:17.444 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:17.444 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:17.540 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 122ms 11:31:17.741 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:17.852 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:17.853 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:17.853 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:17.857 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:17.857 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:17.858 
[kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:17.858 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:17.858 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:17.858 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:17.858 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:17.858 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:17.858 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:17.858 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:17.941 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:17.957 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:17.957 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:17.957 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:17.957 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:17.957 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:17.957 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:17.957 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:17.957 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:17.957 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:17.957 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:17.957 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 
11:31:17.957 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:17.957 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:17.957 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:17.957 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:17.957 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:17.957 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:17.957 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:17.957 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:17.957 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:18.142 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:18.251 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:18.351 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 109ms 11:31:18.355 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:18.355 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:18.355 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:18.360 [ReplicaFetcherThread-0-2] DEBUG 
kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:18.360 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:18.361 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:18.361 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:18.361 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:18.361 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:18.361 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:18.361 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:18.361 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:18.361 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:18.471 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:18.471 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:18.471 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:18.471 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:18.471 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:18.471 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:18.471 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:18.471 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:18.472 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:18.472 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:18.472 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:18.472 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:18.472 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:18.472 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 
11:31:18.472 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:18.472 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:18.472 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:18.472 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:18.472 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:18.472 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:18.501 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:18.501 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:18.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:18.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:18.502 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms 11:31:18.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:18.502 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms 11:31:18.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:18.502 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:31:18.563 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 204ms 11:31:18.656 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:18.658 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:18.663 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 108ms 11:31:18.672 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:18.673 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:18.860 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:18.860 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:18.861 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:18.863 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:18.864 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:18.864 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:18.864 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:18.865 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:18.865 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:18.865 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
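The heartbeat exchange at 11:31:18.656 through 11:31:18.673 shows both Streams instances keeping their membership in the consumer group "exactly-once" alive. The coordinator appears as "id: 2147483647" because the Java consumer uses a synthetic node id of Integer.MAX_VALUE minus the broker id for its coordinator connection, so that it does not share a connection with ordinary fetches to the same broker; 2147483647 therefore points at broker 0 (127.0.0.1:63325). A one-line illustration of that encoding:

    // Synthetic coordinator connection id: Int.MaxValue - brokerId,
    // so 2147483647 in the log corresponds to broker 0.
    val coordinatorConnectionId: Int = Int.MaxValue - 0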
11:31:18.865 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:18.865 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:18.865 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:18.872 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 208ms 11:31:18.976 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:18.977 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:18.977 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:18.977 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:18.977 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:18.977 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:18.977 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:18.977 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:18.977 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], 
readSize: [1048576], error: [NONE])) 11:31:18.977 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:18.977 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:18.977 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:18.977 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:18.977 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:18.977 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:18.977 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:18.977 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:18.977 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:18.977 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:18.977 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:18.977 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:19.093 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 104ms 11:31:19.193 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 117ms 11:31:19.324 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:19.324 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:19.324 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0000 after 0ms 11:31:19.363 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:19.363 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:19.363 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:19.366 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:19.366 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:19.366 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:19.366 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:19.366 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:19.366 [kafka-request-handler-7] DEBUG 
kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:19.366 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:19.366 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:19.366 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:19.366 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:19.424 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 216ms 11:31:19.480 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:19.480 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:19.480 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:19.480 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:19.480 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:19.480 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:19.480 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:19.480 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:19.480 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: 
ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:19.480 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:19.481 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:19.480 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:19.481 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:19.481 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:19.481 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:19.481 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:19.481 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:19.481 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:19.481 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:19.481 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:19.525 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 115ms 11:31:19.625 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:19.825 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:19.866 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:19.866 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:19.866 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:19.869 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:19.869 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:19.869 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:19.870 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:19.870 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:19.870 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 
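The recurring "Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed" messages above come from the StreamThread commit timer firing on an idle thread. A minimal Scala sketch of the kind of StreamsConfig that produces a 100ms commit cadence is shown below; it is an assumption, not the test's actual setup. The application id "exactly-once" and the broker address 127.0.0.1:63325 are taken from this log, everything else is illustrative.

import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

// Sketch only: properties consistent with the 100ms commit interval in the log above.
val streamsProps = new Properties()
streamsProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")          // appears in the log as the group/client prefix
streamsProps.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")    // one of the embedded brokers in this log
// Enabling exactly-once lowers the default commit.interval.ms from 30000ms to 100ms,
// which matches the "commit interval 100ms has elapsed" messages.
streamsProps.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)
streamsProps.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, "100")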
11:31:19.870 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:19.870 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:19.870 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:19.870 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:19.983 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:19.983 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:19.983 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:19.983 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:19.984 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:19.984 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:19.984 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:19.984 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:19.984 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:19.984 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 
11:31:19.984 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:19.984 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:19.984 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:19.984 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:19.984 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:19.984 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:19.984 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:19.984 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:19.984 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:19.984 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:20.026 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:20.126 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:20.335 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 208ms 11:31:20.376 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:20.376 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:20.376 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:20.376 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:20.376 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:20.376 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:20.376 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:20.376 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:20.376 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 
11:31:20.376 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:20.376 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:20.381 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:20.381 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:20.439 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:20.497 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:20.497 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:20.497 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:20.497 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:20.497 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:20.497 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:20.497 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:20.497 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:20.497 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 
11:31:20.497 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:20.497 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:20.497 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:20.497 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:20.497 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:20.497 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:20.497 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:20.497 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:20.497 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:20.497 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:20.497 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:20.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:20.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:20.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:20.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:20.513 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms 11:31:20.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:20.513 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms 11:31:20.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:20.513 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:31:20.544 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 104ms 11:31:20.644 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 105ms 11:31:20.854 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 209ms 11:31:20.884 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:20.883 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:20.884 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:20.884 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG 
org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:20.884 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:20.885 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:20.885 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:20.885 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:20.885 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:20.885 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:20.885 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:20.885 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:20.885 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:20.954 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:21.000 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:21.001 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:21.001 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:21.001 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:21.001 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:21.002 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:21.002 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:21.002 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:21.002 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:21.002 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:21.002 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:21.003 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:21.003 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:21.003 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:21.004 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:21.004 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:21.004 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:21.004 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:21.004 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:21.004 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:21.154 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:21.355 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:21.386 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:21.386 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:21.386 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:21.387 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:21.388 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:21.388 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:21.388 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:21.388 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:21.388 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:21.388 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:21.388 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:21.389 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:21.389 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:21.455 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:21.503 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:21.503 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:21.504 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:21.504 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:21.504 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:21.504 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:21.504 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:21.504 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:21.504 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:21.504 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:21.505 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:21.505 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:21.506 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:21.506 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:21.506 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:21.506 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:21.506 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:21.506 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:21.506 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:21.506 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:21.656 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:21.724 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:21.729 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:21.756 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:21.756 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:21.759 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:21.890 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:21.890 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:21.890 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:21.890 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:21.890 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:21.891 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:21.891 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], 
readSize: [1048576], error: [NONE])) 11:31:21.891 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:21.891 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:21.891 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:21.891 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:21.891 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:21.891 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:21.956 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:22.008 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:22.008 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:22.008 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:22.008 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:22.008 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:22.008 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:22.008 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:22.008 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 
11:31:22.008 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:22.008 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:22.008 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:22.008 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:22.008 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:22.008 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:22.008 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:22.008 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:22.008 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:22.008 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:22.008 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:22.008 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:22.156 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:22.357 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:22.391 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:22.392 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:22.392 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:22.394 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:22.394 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:22.394 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:22.394 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:22.395 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:22.395 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:22.395 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
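The Fetcher lines above ("Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 ... lastStableOffset = -1") show this consumer fetching with the default read_uncommitted isolation level, so it does not wait for transaction markers. For contrast, a minimal hedged sketch of a separate verification consumer that only sees committed transactional records follows; the topic name "my-topic" and broker 127.0.0.1:63361 appear in the log, while the group id and deserializers are placeholders, not part of this test.

import java.util.Properties
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.serialization.StringDeserializer

// Sketch only: a plain consumer configured with read_committed, so records from
// open or aborted transactions stay invisible to it.
val consumerProps = new Properties()
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63361")
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "verification-consumer")      // hypothetical group id
consumerProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)

val consumer = new KafkaConsumer[String, String](consumerProps)
consumer.subscribe(java.util.Collections.singletonList("my-topic"))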
11:31:22.395 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:22.395 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:22.395 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:22.511 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:22.511 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:22.512 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:22.512 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:22.512 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:22.512 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:22.512 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:22.512 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:22.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:22.512 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:22.513 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not 
larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:22.513 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:22.513 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:22.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:22.513 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:22.513 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:22.513 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:22.513 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:22.513 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:22.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:22.513 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms 11:31:22.513 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:22.513 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:22.514 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:22.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:22.514 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:22.514 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:22.514 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 1ms 11:31:22.514 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 1ms 11:31:22.557 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:22.657 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:22.657 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:22.657 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0000 after 0ms 11:31:22.757 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:22.874 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Beginning log cleanup... 11:31:22.874 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking for dirty logs to flush... 11:31:22.875 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Log cleanup completed. 
0 files deleted in 0 seconds 11:31:22.875 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __transaction_state flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __transaction_state flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __transaction_state flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 
9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.876 [kafka-scheduler-8] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:22.893 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:22.893 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:22.893 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:22.895 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:22.896 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:22.896 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:22.897 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:22.897 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:22.897 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:22.897 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:22.898 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:22.898 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:22.898 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
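The Fetcher lines tagged READ_UNCOMMITTED reflect the consumer's isolation.level, which defaults to read_uncommitted; a consumer that must only see data from committed transactions sets read_committed instead, and the Fetcher would then log READ_COMMITTED and honour lastStableOffset. A sketch of the relevant client configuration, assuming string keys and values (only the broker address and the partition leader come from the log):

  import java.util.Properties
  import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
  import org.apache.kafka.common.serialization.StringDeserializer

  val consumerProps = new Properties()
  consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63361") // broker 2, the leader for my-topic-0 above
  consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "log-inspection")           // hypothetical group id
  consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
  consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
  // read_uncommitted is the default and is what the Fetcher reports above;
  // read_committed hides records from open or aborted transactions.
  consumerProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")
  val committedReader = new KafkaConsumer[String, String](consumerProps)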
11:31:22.905 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-47 topicPartition=__consumer_offsets-47. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.905 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-14 topicPartition=__consumer_offsets-14. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.905 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-11 topicPartition=__consumer_offsets-11. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.906 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-44 topicPartition=__consumer_offsets-44. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.906 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-41 topicPartition=__consumer_offsets-41. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.906 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-23 topicPartition=__consumer_offsets-23. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.906 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-2 topicPartition=__transaction_state-2. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.906 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-20 topicPartition=__consumer_offsets-20. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.906 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-17 topicPartition=__consumer_offsets-17. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.906 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-32 topicPartition=__consumer_offsets-32. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.907 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-29 topicPartition=__consumer_offsets-29. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.907 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-0 topicPartition=__transaction_state-0. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.907 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-26 topicPartition=__consumer_offsets-26. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.907 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-1 topicPartition=__transaction_state-1. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.907 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-8 topicPartition=__consumer_offsets-8. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.907 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-5 topicPartition=__consumer_offsets-5. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.907 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-38 topicPartition=__consumer_offsets-38. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.907 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-35 topicPartition=__consumer_offsets-35. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.907 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-2 topicPartition=__consumer_offsets-2. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:22.957 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:23.016 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:23.016 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:23.016 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:23.016 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:23.017 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:23.017 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:23.017 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:23.017 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:23.017 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:23.017 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:23.017 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 
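The kafka-log-cleaner-thread entries are the log cleaner scanning compacted topics for cleanable ranges; __consumer_offsets and __transaction_state both use cleanup.policy=compact, which is why they show up in every pass even though nothing is dirty yet (firstDirtyOffset=0, firstUncleanableOffset=0). For comparison, a sketch of creating an ordinary topic with the same compacted policy via the AdminClient; the topic name and partition/replica counts are hypothetical, only the broker address comes from this run:

  import java.util.{Collections, Properties}
  import org.apache.kafka.clients.admin.{AdminClient, NewTopic}

  val adminProps = new Properties()
  adminProps.put("bootstrap.servers", "127.0.0.1:63325") // one of the embedded brokers above

  val admin = AdminClient.create(adminProps)
  // A compacted topic is treated by the cleaner exactly like the internal topics in the log.
  val compacted = new NewTopic("compacted-topic", 1, 3.toShort)
    .configs(Collections.singletonMap("cleanup.policy", "compact"))
  admin.createTopics(Collections.singleton(compacted)).all().get()
  admin.close()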
11:31:23.017 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:23.017 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:23.017 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:23.017 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:23.017 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:23.017 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:23.017 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:23.018 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:23.018 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:23.157 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:23.358 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:23.397 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:23.397 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:23.397 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:23.398 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:23.399 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: 
[FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:23.399 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:23.399 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:23.399 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:23.400 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:23.401 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:23.401 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:23.401 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:23.401 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:23.521 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:23.521 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:23.521 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:23.522 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:23.522 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:23.522 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:23.522 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:23.522 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:23.522 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:23.523 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:23.523 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:23.523 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:23.523 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:23.523 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
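The repeating __transaction_state fetches and high-watermark checks exist because that topic is where the transaction coordinator persists transactional producer state; it is written whenever a transactional producer registers, commits or aborts, and here it is simply being kept in sync across the three brokers. The producer code itself is not part of this log, but a sketch of the kind of client that drives such writes looks like this (the transactional id and record are placeholders):

  import java.util.Properties
  import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
  import org.apache.kafka.common.serialization.StringSerializer

  val producerProps = new Properties()
  producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325") // embedded broker from this run
  producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
  producerProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-txn-producer") // hypothetical transactional id

  val producer = new KafkaProducer[String, String](producerProps)
  producer.initTransactions()  // registers with the transaction coordinator, i.e. writes to __transaction_state
  producer.beginTransaction()
  producer.send(new ProducerRecord("my-topic", "key", "value"))
  producer.commitTransaction() // the commit is likewise recorded in __transaction_state
  producer.close()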
11:31:23.523 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:23.523 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:23.523 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:23.523 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:23.523 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:23.523 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:23.558 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:23.758 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:23.900 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:23.901 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:23.901 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:23.902 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:23.902 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:23.903 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:23.903 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: 
ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:23.903 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:23.903 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:23.903 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:23.903 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:23.903 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:23.903 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:23.958 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:24.025 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:24.025 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:24.026 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:24.026 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:24.026 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:24.026 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:24.026 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:24.026 [kafka-request-handler-3] 
DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:24.026 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:24.026 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:24.026 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:24.026 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:24.026 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:24.026 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:24.026 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:24.026 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:24.026 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:24.027 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:24.027 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:24.027 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:24.158 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:24.358 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:24.403 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:24.404 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:24.404 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:24.404 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:24.404 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:24.405 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:24.405 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:24.405 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:24.405 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:24.405 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
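The Fetcher entries for my-topic-0 keep returning empty data at offset 2 with highWaterMark=2, which simply means StreamThread-1 has consumed everything currently on the partition and is polling an idle topic. A small helper that makes that "caught up" check explicit in test code, assuming a consumer that is already assigned to the partition (only the topic, partition and offsets come from the log):

  import java.util.Collections
  import org.apache.kafka.clients.consumer.KafkaConsumer
  import org.apache.kafka.common.TopicPartition

  // Assumes `consumer` is assigned to my-topic-0 and has a valid position.
  def fullyConsumed(consumer: KafkaConsumer[String, String]): Boolean = {
    val tp = new TopicPartition("my-topic", 0)
    val end = consumer.endOffsets(Collections.singletonList(tp)).get(tp) // the high watermark, 2 in this run
    consumer.position(tp) >= end.longValue()                             // the fetch position is also 2 above
  }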
11:31:24.405 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:24.405 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:24.405 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:24.444 [kafka-scheduler-4] DEBUG kafka.log.LogManager - Beginning log cleanup... 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking for dirty logs to flush... 11:31:24.444 [kafka-scheduler-4] DEBUG kafka.log.LogManager - Log cleanup completed. 0 files deleted in 0 seconds 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __transaction_state flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed 
on __transaction_state flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __transaction_state flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.444 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.445 [kafka-scheduler-2] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.460 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-15 topicPartition=__consumer_offsets-15. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.461 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-48 topicPartition=__consumer_offsets-48. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.461 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-45 topicPartition=__consumer_offsets-45. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.461 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-12 topicPartition=__consumer_offsets-12. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.461 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-9 topicPartition=__consumer_offsets-9. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.461 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-42 topicPartition=__consumer_offsets-42. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.461 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-2 topicPartition=__transaction_state-2. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.462 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-24 topicPartition=__consumer_offsets-24. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.462 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-21 topicPartition=__consumer_offsets-21. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.462 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-18 topicPartition=__consumer_offsets-18. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.462 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-0 topicPartition=__consumer_offsets-0. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.462 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-30 topicPartition=__consumer_offsets-30. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.462 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-27 topicPartition=__consumer_offsets-27. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.462 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-0 topicPartition=__transaction_state-0. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.462 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-1 topicPartition=__transaction_state-1. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.464 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-39 topicPartition=__consumer_offsets-39. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.464 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-6 topicPartition=__consumer_offsets-6. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.465 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-3 topicPartition=__consumer_offsets-3. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.465 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-36 topicPartition=__consumer_offsets-36. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.465 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-33 topicPartition=__consumer_offsets-33. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.514 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:24.514 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:24.514 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:24.514 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:24.514 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms 11:31:24.514 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:31:24.515 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:24.515 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:24.515 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms 11:31:24.529 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:24.529 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:24.529 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:24.529 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:24.529 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: 
[1505298648947], readSize: [1048576], error: [NONE])) 11:31:24.529 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:24.529 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:24.530 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:24.530 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:24.530 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:24.530 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:24.530 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:24.530 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:24.530 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:24.530 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:24.531 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:24.531 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:24.531 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:24.531 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:24.531 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
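In the LogManager "Checking if flush is needed" entries, the flush interval of 9223372036854775807 is Long.MaxValue, the default, so the time-based check never triggers a flush and durability is left to replication and the OS page cache; that is why every check reports nothing to do. A sketch of the broker properties involved, with hypothetical override values a test could use to force explicit flushing:

  import java.util.Properties

  // The value printed in the log is simply Long.MaxValue:
  assert(Long.MaxValue == 9223372036854775807L)

  // Hypothetical overrides; the log above shows the time-based interval at its default of Long.MaxValue.
  val brokerProps = new Properties()
  brokerProps.put("log.flush.interval.ms", "10000")          // force a flush once 10s have passed since a log's last flush
  brokerProps.put("log.flush.interval.messages", "10000")    // or once 10k unflushed messages have accumulated
  brokerProps.put("log.flush.scheduler.interval.ms", "1000") // how often the "Checking for dirty logs to flush" task runs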
11:31:24.558 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:24.582 [kafka-scheduler-1] DEBUG kafka.log.LogManager - Beginning log cleanup... 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking for dirty logs to flush... 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __transaction_state flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __transaction_state flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __transaction_state flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG 
kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.582 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on my-topic flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.583 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.583 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.583 [kafka-scheduler-5] DEBUG kafka.log.LogManager - Checking if flush is needed on __consumer_offsets flush interval 9223372036854775807 last flushed 1505298648947 time since last flush: 0 11:31:24.583 [kafka-scheduler-1] DEBUG kafka.log.LogManager - Garbage collecting 'my-topic-0' 11:31:24.584 [kafka-scheduler-1] DEBUG kafka.log.LogManager - Log cleanup completed. 0 files deleted in 0 seconds 11:31:24.601 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-16 topicPartition=__consumer_offsets-16. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.601 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-13 topicPartition=__consumer_offsets-13. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.602 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-46 topicPartition=__consumer_offsets-46. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.602 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-43 topicPartition=__consumer_offsets-43. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.602 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-10 topicPartition=__consumer_offsets-10. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.602 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-2 topicPartition=__transaction_state-2. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.602 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-22 topicPartition=__consumer_offsets-22. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.602 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-19 topicPartition=__consumer_offsets-19. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.602 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-49 topicPartition=__consumer_offsets-49. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.602 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-31 topicPartition=__consumer_offsets-31. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.602 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-28 topicPartition=__consumer_offsets-28. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.603 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-25 topicPartition=__consumer_offsets-25. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.603 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-0 topicPartition=__transaction_state-0. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.603 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__transaction_state-1 topicPartition=__transaction_state-1. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.603 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-7 topicPartition=__consumer_offsets-7. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.603 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-40 topicPartition=__consumer_offsets-40. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.603 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-37 topicPartition=__consumer_offsets-37. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.603 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-4 topicPartition=__consumer_offsets-4. 
Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.603 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-1 topicPartition=__consumer_offsets-1. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.603 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-34 topicPartition=__consumer_offsets-34. Last clean offset=None now=1505298648947 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:31:24.729 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:24.730 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:24.758 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:24.758 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:24.760 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:24.907 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:24.908 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:24.908 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:24.908 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:24.908 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:24.908 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: 
[NONE])) 11:31:24.908 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:24.908 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:24.908 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:24.908 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:24.908 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:24.908 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:24.909 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:24.958 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:25.031 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:25.031 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:25.032 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:25.032 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:25.032 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:25.032 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:25.032 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:25.032 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:25.032 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:25.032 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:25.033 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:25.033 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:25.033 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:25.034 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:25.034 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:25.034 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:25.034 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:25.034 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:25.034 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:25.034 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:25.159 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:25.360 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:25.410 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:25.411 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:25.411 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:25.411 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:25.411 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:25.411 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:25.412 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:25.412 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:25.412 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:25.412 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:25.412 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:25.412 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:25.412 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:25.431 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] Committing all active tasks [0_0] and standby tasks [] because the commit interval 30000ms has elapsed by 30095ms 11:31:25.431 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.streams.processor.internals.RecordCollectorImpl - task [0_0] Flushing producer 11:31:25.537 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:25.537 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:25.537 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:25.537 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:25.537 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:25.537 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:25.537 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:25.537 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:25.537 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:25.538 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:25.538 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:25.538 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
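
(Editor's note, not part of the captured output.) Two commit cadences are visible in these entries: StreamThread-2 reports a 100ms commit interval, while StreamThread-1 commits every 30000ms and then flushes its producer. 30000ms is the general Kafka Streams default for commit.interval.ms; when processing.guarantee is set to exactly_once the default drops to 100ms, which matches what the two application instances log here. A minimal configuration sketch follows; the bootstrap address is a placeholder, and only the application id is taken from the log.

    import java.util.Properties
    import org.apache.kafka.streams.StreamsConfig

    object StreamsEosConfig extends App {
      val props = new Properties()
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder

      // exactly_once lowers the default commit.interval.ms from 30000 to 100,
      // which is why a stream thread above commits (and logs) roughly every 100ms.
      props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)

      // The interval can also be set explicitly; config values may be given as strings.
      props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, "100")

      // These properties would then be passed to new KafkaStreams(topology, props).
      println(props)
    }
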
11:31:25.538 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:25.538 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:25.538 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:25.538 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:25.538 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:25.538 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:25.538 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:25.538 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:25.561 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:25.661 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:25.862 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:25.913 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:25.913 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:25.913 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:25.913 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:25.913 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:25.913 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:25.914 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:25.914 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:25.914 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 
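
(Editor's note, not part of the captured output.) The Fetcher lines show the consumer polling my-topic-0 in READ_UNCOMMITTED mode, the default isolation level, with lastStableOffset returned as -1 for such fetches. A consumer that should only see records from committed transactions would opt into read_committed. A minimal sketch follows; the topic name comes from the log, while the bootstrap address and group id are placeholders.

    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
    import org.apache.kafka.common.serialization.StringDeserializer
    import scala.collection.JavaConverters._

    object ReadCommittedConsumer extends App {
      val props = new Properties()
      props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")      // placeholder
      props.put(ConsumerConfig.GROUP_ID_CONFIG, "exactly-once-verifier")        // placeholder
      props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
      props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)

      // read_committed hides records from open or aborted transactions;
      // the default, shown in the log above, is read_uncommitted.
      props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")

      val consumer = new KafkaConsumer[String, String](props)
      try {
        consumer.subscribe(Collections.singletonList("my-topic"))
        val records = consumer.poll(1000L) // 0.11.x-era poll(long) signature
        records.iterator().asScala.foreach(r => println(s"${r.key} -> ${r.value} @ ${r.offset}"))
      } finally {
        consumer.close()
      }
    }
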
11:31:25.914 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:25.914 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:25.914 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:25.914 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:25.962 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:25.991 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:25.991 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:25.991 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0000 after 0ms 11:31:26.038 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:26.038 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:26.038 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:26.039 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:26.039 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:26.039 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:26.039 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:26.039 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording 
follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:26.039 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:26.039 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:26.039 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:26.039 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:26.039 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:26.039 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:26.040 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:26.040 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:26.040 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:26.040 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:26.040 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:26.040 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:26.162 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:26.363 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms 11:31:26.415 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:26.415 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:26.415 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:26.415 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:26.416 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:26.416 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:26.416 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:26.416 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:26.417 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:26.418 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:26.418 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:26.418 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:26.418 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:26.515 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:26.515 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:26.516 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:26.516 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:26.516 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 0ms 11:31:26.516 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 0ms 11:31:26.516 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:26.516 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:26.516 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 0ms 11:31:26.543 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:26.543 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:26.543 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:26.543 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - 
[ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:26.543 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:26.543 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:26.543 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:26.544 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:26.543 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:26.544 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:26.544 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:26.544 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:26.544 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:26.544 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:26.544 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:26.544 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:26.544 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
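
(Editor's note, not part of the captured output.) Most of the replica-manager chatter in this stretch is the three brokers keeping the __transaction_state partitions in sync; that topic is where the transaction coordinator persists producer transaction state, and it sees traffic here because the Streams instances run with exactly-once enabled. For comparison, a plain transactional producer drives the same machinery. The sketch below uses a placeholder bootstrap address and transactional id; only the topic name comes from the log.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
    import org.apache.kafka.common.serialization.StringSerializer

    object TransactionalProducerSketch extends App {
      val props = new Properties()
      props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder
      props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-txn-1")      // placeholder
      props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
      props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

      val producer = new KafkaProducer[String, String](props)
      try {
        // Registers the transactional id with the transaction coordinator, which
        // records its state in one of the __transaction_state partitions.
        producer.initTransactions()

        producer.beginTransaction()
        producer.send(new ProducerRecord("my-topic", "key", "value"))
        // Committing writes transaction markers; aborting would call abortTransaction().
        producer.commitTransaction()
      } finally {
        producer.close()
      }
    }
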
11:31:26.544 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:26.544 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:26.544 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:26.564 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:26.665 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:26.765 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:26.919 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:26.920 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:26.920 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:26.921 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:26.923 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:26.924 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:26.924 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:26.924 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 
11:31:26.924 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:26.926 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:26.926 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:26.927 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:26.927 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:26.965 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:27.046 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:27.046 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:27.046 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:27.046 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:27.046 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:27.046 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:27.047 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:27.047 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:27.047 [kafka-request-handler-4] DEBUG 
kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:27.047 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:27.047 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:27.047 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:27.047 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:27.047 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:27.047 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:27.047 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:27.047 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:27.047 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:27.047 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:27.047 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:27.165 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:27.365 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:27.425 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:27.425 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:27.425 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:27.428 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:27.429 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:27.429 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:27.429 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:27.429 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
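The recurring "Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed" entries above come from a StreamThread that currently owns no tasks; 100 ms is the commit interval Kafka Streams defaults to once processing.guarantee is set to exactly_once. For orientation, a minimal configuration sketch that would produce this logging; the broker address is a placeholder, not a value taken from the log:

    import java.util.Properties
    import org.apache.kafka.streams.StreamsConfig

    // Sketch only: the bootstrap address below is an illustrative placeholder.
    val streamsProps = new Properties()
    streamsProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "exactly-once")        // matches the group id seen in the log
    streamsProps.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")  // placeholder broker address
    // Enabling EOS implicitly drops commit.interval.ms from the usual 30000 ms to 100 ms,
    // which is why the idle thread reports "commit interval 100ms" above.
    streamsProps.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)
    // The same value could also be set explicitly:
    // streamsProps.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, "100")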
11:31:27.429 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:27.430 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:27.430 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:27.430 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:27.431 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:27.548 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:27.549 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:27.550 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:27.550 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:27.551 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:27.551 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:27.551 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:27.551 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:27.552 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:27.551 [kafka-request-handler-1] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 
11:31:27.552 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:27.552 [kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:27.552 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:27.552 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:27.553 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:27.553 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:27.553 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:27.553 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:27.553 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:27.553 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:27.565 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:27.730 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:27.732 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:27.765 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:27.765 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending Heartbeat request for group exactly-once to coordinator 127.0.0.1:63325 (id: 2147483647 rack: null) 11:31:27.767 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful Heartbeat response for group exactly-once 11:31:27.929 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:27.930 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:27.930 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:27.932 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:27.932 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:27.933 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:27.933 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: 
[1505298648947], readSize: [1048576], error: [NONE])) 11:31:27.933 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:27.933 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:27.933 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:27.933 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:27.933 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:27.933 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:27.965 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:28.054 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:28.054 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:28.055 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:28.055 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:28.055 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:28.055 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:28.055 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:28.055 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 
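The __transaction_state partition 0 above appears to hold three records (leader log end offset 3) and never grows for the rest of the run, while its follower replicas keep fetching. Each committed transaction would normally append further state transitions (Ongoing, PrepareCommit, CompleteCommit) to that log, so a stalled offset suggests the transaction never progressed. As a reference point, a minimal sketch of the client-side transactional flow those coordinator records track; the transactional id, topic and broker address are placeholders, not values from the test:

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
    import org.apache.kafka.common.serialization.StringSerializer

    // Sketch only: transactional id, topic and address are illustrative placeholders.
    val producerProps = new Properties()
    producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
    producerProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-tx-id")
    producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
    producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

    val producer = new KafkaProducer[String, String](producerProps)
    producer.initTransactions()   // registers the transactional id with the transaction coordinator
    producer.beginTransaction()
    producer.send(new ProducerRecord[String, String]("output-topic", "key", "value"))  // first send marks the transaction Ongoing at the coordinator
    producer.commitTransaction()  // coordinator writes PrepareCommit, then CompleteCommit, to __transaction_state
    producer.close()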
11:31:28.055 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:28.055 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:28.055 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:28.055 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:28.056 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:28.056 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:28.056 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:28.056 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:28.056 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:28.056 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:28.056 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:28.056 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 
11:31:28.166 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:28.266 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms 11:31:28.433 [kafka-coordinator-heartbeat-thread | exactly-once] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:28.433 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:28.434 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:28.435 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:28.435 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:28.436 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:28.436 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:28.436 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:28.436 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:28.436 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:28.436 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
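The Fetcher entries above show a consumer polling my-topic-0 at offset 2 with READ_UNCOMMITTED isolation and lastStableOffset = -1. By contrast, a consumer that is meant to observe only committed transactional output would run with isolation.level=read_committed, under which records from open or aborted transactions are withheld. A sketch under that assumption; the topic name, group id and broker address are hypothetical, not taken from the test:

    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
    import org.apache.kafka.common.serialization.StringDeserializer

    // Sketch only: topic, group id and address are illustrative placeholders.
    val consumerProps = new Properties()
    consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:63325")
    consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "verification")
    consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
    consumerProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")   // hide records from uncommitted transactions
    consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
    consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)

    val consumer = new KafkaConsumer[String, String](consumerProps)
    consumer.subscribe(Collections.singletonList("output-topic"))
    val records = consumer.poll(1000L)  // only records below the last stable offset are returned under read_committed
    consumer.close()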
11:31:28.436 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:28.436 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:28.466 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:28.516 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:28.516 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0002 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:28.516 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:28.516 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0001 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:28.516 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0002 after 1ms 11:31:28.516 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:28.517 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0001 after 1ms 11:31:28.517 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0003 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:31:28.517 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0003 after 1ms 11:31:28.557 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:28.557 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:28.557 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:28.557 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:28.558 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is 
not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:28.558 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:28.558 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 11:31:28.558 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:28.558 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:28.558 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:28.558 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 11:31:28.558 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:28.558 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:28.558 [kafka-request-handler-6] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0. 11:31:28.558 [kafka-request-handler-6] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests. 11:31:28.559 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:28.560 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:28.560 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:28.560 [kafka-request-handler-4] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0. 11:31:28.560 [kafka-request-handler-4] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests. 
11:31:28.666 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:28.866 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms 11:31:28.938 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0) 11:31:28.939 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null) 11:31:28.939 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null) 11:31:28.939 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:28.939 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361) 11:31:28.941 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:28.941 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:28.941 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:28.941 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3. 11:31:28.941 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474] 11:31:28.941 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 
11:31:28.941 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3. 11:31:28.941 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests. 11:31:29.061 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:29.061 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:29.061 [ReplicaFetcherThread-0-1] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-1]: Build leaderEpoch request Map() for broker BrokerEndPoint(1,127.0.0.1,63344) 11:31:29.062 [ReplicaFetcherThread-0-0] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-0]: Build leaderEpoch request Map() for broker BrokerEndPoint(0,127.0.0.1,63325) 11:31:29.062 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:29.062 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:29.062 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Recording follower broker 2 log read results: ArrayBuffer((__transaction_state-2,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE])) 11:31:29.062 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:29.062 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0] 11:31:29.062 [kafka-request-handler-2] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 2 log end offset (LEO) position 0. 11:31:29.062 [kafka-request-handler-5] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 0 log end offset (LEO) position 0. 
11:31:29.062 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:29.062 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-1,Fetch Data: [FetchDataInfo(0 [0 : 0],[],false,None)], HW: [0], leaderLogStartOffset: [0], leaderLogEndOffset: [0], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:29.062 [kafka-request-handler-2] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:29.063 [kafka-request-handler-5] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:29.063 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,2] on broker 1: Recorded replica 2 log end offset (LEO) position 0.
11:31:29.063 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Skipping update high watermark since new hw 0 [0 : 0] is not larger than old hw 0 [0 : 0].All LEOs are 0 [0 : 0]
11:31:29.063 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 1]: Request key __transaction_state-2 unblocked 0 producer requests.
11:31:29.063 [kafka-request-handler-0] DEBUG kafka.cluster.Partition - Partition [__transaction_state,1] on broker 0: Recorded replica 1 log end offset (LEO) position 0.
11:31:29.063 [kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 0]: Request key __transaction_state-1 unblocked 0 producer requests.
11:31:29.067 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 200ms
11:31:29.167 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 101ms
[info] Tests:
[info] - should process messages WITHOUT transactional semantics
[info] - should process messages WITH transactional semantics *** FAILED ***
[info] A timeout occurred waiting for a future to complete. Queried 61 times, sleeping 500 milliseconds between each query. (Tests.scala:73)
11:31:29.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a
11:31:29.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x15e7aca904b0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a
11:31:29.325 [pool-6-thread-1-SendThread(127.0.0.1:63309)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15e7aca904b0000 after 0ms
[info] Run completed in 40 seconds, 671 milliseconds.
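The failure text above ("Queried 61 times, sleeping 500 milliseconds between each query") is the wording ScalaTest's ScalaFutures emits when a futureValue or whenReady call exhausts its patience, here roughly 30 seconds of polling. If the awaited condition is merely slow rather than never satisfiable, the patience can be widened; a sketch assuming the suite mixes in ScalaFutures (the spec, future and expected size are placeholders, not the project's Tests.scala):

    import scala.concurrent.Future
    import org.scalatest.concurrent.ScalaFutures
    import org.scalatest.time.{Millis, Seconds, Span}
    import org.scalatest.{FlatSpec, Matchers}

    // Placeholder spec illustrating a wider PatienceConfig.
    class TransactionalSemanticsSpec extends FlatSpec with Matchers with ScalaFutures {

      // Poll for up to 60 s, every 500 ms, instead of ScalaFutures' small default patience.
      implicit override val patienceConfig: PatienceConfig =
        PatienceConfig(timeout = Span(60, Seconds), interval = Span(500, Millis))

      "the transactional topology" should "eventually make its output visible" in {
        val committed: Future[Seq[String]] = Future.successful(Seq("a", "b"))  // placeholder future
        committed.futureValue should have size 2
      }
    }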
[info] Total number of tests run: 2
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 1, failed 1, canceled 0, ignored 0, pending 0
[info] *** 1 TEST FAILED ***
[error] Failed tests:
[error] com.transcognify.exactlyonce.Tests
11:31:29.368 [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] DEBUG org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [exactly-once-77838593-3573-4c2a-99f2-7151b9f3e196-StreamThread-2] Committing all active tasks [] and standby tasks [] because the commit interval 100ms has elapsed by 201ms
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 44 s, completed 13-Sep-2017 11:31:29
11:31:29.441 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetch READ_UNCOMMITTED at offset 2 for partition my-topic-0 returned fetch data (error=NONE, highWaterMark=2, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
11:31:29.441 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Added READ_UNCOMMITTED fetch request for partition my-topic-0 at offset 2 to node 127.0.0.1:63361 (id: 2 rack: null)
11:31:29.441 [exactly-once-0b0d8a4e-7380-4eb4-887b-13b509f90181-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Sending READ_UNCOMMITTED fetch for partitions [my-topic-0] to broker 127.0.0.1:63361 (id: 2 rack: null)
11:31:29.442 [Thread-3] ERROR org.apache.kafka.test.TestUtils - Error deleting C:\Users\Ryan\AppData\Local\Temp\kafka-2483555969984076021
java.nio.file.FileSystemException: C:\Users\Ryan\AppData\Local\Temp\kafka-2483555969984076021\version-2\log.1: The process cannot access the file because it is being used by another process.
	at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
	at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
	at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
	at sun.nio.fs.WindowsFileSystemProvider.implDelete(Unknown Source)
	at sun.nio.fs.AbstractFileSystemProvider.delete(Unknown Source)
	at java.nio.file.Files.delete(Unknown Source)
	at org.apache.kafka.common.utils.Utils$2.visitFile(Utils.java:591)
	at org.apache.kafka.common.utils.Utils$2.visitFile(Utils.java:580)
	at java.nio.file.Files.walkFileTree(Unknown Source)
	at java.nio.file.Files.walkFileTree(Unknown Source)
	at org.apache.kafka.common.utils.Utils.delete(Utils.java:580)
	at org.apache.kafka.test.TestUtils$1.run(TestUtils.java:182)
11:31:29.443 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:29.443 [ReplicaFetcherThread-0-2] DEBUG kafka.server.ReplicaFetcherThread - [ReplicaFetcherThread-0-2]: Build leaderEpoch request Map() for broker BrokerEndPoint(2,127.0.0.1,63361)
11:31:29.444 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 1 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:29.444 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Recording follower broker 0 log read results: ArrayBuffer((__transaction_state-0,Fetch Data: [FetchDataInfo(3 [0 : 474],[],false,None)], HW: [3], leaderLogStartOffset: [0], leaderLogEndOffset: [3], followerLogStartOffset: [0], fetchTimeMs: [1505298648947], readSize: [1048576], error: [NONE]))
11:31:29.444 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:29.444 [kafka-request-handler-7] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 1 log end offset (LEO) position 3.
11:31:29.444 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Skipping update high watermark since new hw 3 [0 : 474] is not larger than old hw 3 [0 : 474].All LEOs are 3 [0 : 474]
11:31:29.444 [kafka-request-handler-3] DEBUG kafka.cluster.Partition - Partition [__transaction_state,0] on broker 2: Recorded replica 0 log end offset (LEO) position 3.
11:31:29.444 [kafka-request-handler-3] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
11:31:29.444 [kafka-request-handler-7] DEBUG kafka.server.ReplicaManager - [Replica Manager on Broker 2]: Request key __transaction_state-0 unblocked 0 producer requests.
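Two post-failure symptoms are worth separating from the assertion itself: StreamThread-1 is still issuing fetches for my-topic-0 after the run has completed, and the shutdown hook registered by Kafka's TestUtils cannot delete the temp directory because a file under version-2 (a ZooKeeper transaction log) is still open, a common Windows file-locking symptom when the JVM shuts down while the embedded services and clients are still running. A teardown sketch that at least stops the Kafka Streams instances explicitly, assuming they are fields of the suite; all names here are placeholders and the embedded cluster should likewise be stopped by whatever fixture started it:

    import org.apache.kafka.streams.KafkaStreams
    import org.scalatest.{BeforeAndAfterAll, FlatSpec, Matchers}

    // Placeholder skeleton, not the project's Tests.scala.
    class ExactlyOnceSuite extends FlatSpec with Matchers with BeforeAndAfterAll {

      var plainStreams: KafkaStreams = _          // instance started for the non-transactional test
      var transactionalStreams: KafkaStreams = _  // instance started for the failing transactional test

      override def afterAll(): Unit = {
        // Stop the stream threads so no consumer keeps polling after the suite finishes.
        Option(plainStreams).foreach(_.close())
        Option(transactionalStreams).foreach(_.close())
        // Drop the local state directories so the next run starts clean; this must happen
        // before the JVM shutdown hooks try to delete the temp directories.
        Option(plainStreams).foreach(_.cleanUp())
        Option(transactionalStreams).foreach(_.cleanUp())
      }
    }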