[2023-08-08 16:07:37,325] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [2023-08-08 16:07:38,119] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) [2023-08-08 16:07:38,533] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) [2023-08-08 16:07:38,537] INFO [ControllerServer id=2] Starting controller (kafka.server.ControllerServer) [2023-08-08 16:07:38,567] INFO authorizerStart completed for endpoint CONTROLLER. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures) [2023-08-08 16:07:39,088] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) [2023-08-08 16:07:39,241] INFO [SocketServer listenerType=CONTROLLER, nodeId=2] Created data-plane acceptor and processors for endpoint : ListenerName(CONTROLLER) (kafka.network.SocketServer) [2023-08-08 16:07:39,245] INFO [SharedServer id=2] Starting SharedServer (kafka.server.SharedServer) [2023-08-08 16:07:39,405] INFO [LogLoader partition=__cluster_metadata-0, dir=/data01/kafka-logs-351] Recovering unflushed segment 1084546. 0/1 recovered for __cluster_metadata-0. (kafka.log.LogLoader) [2023-08-08 16:07:39,409] INFO [LogLoader partition=__cluster_metadata-0, dir=/data01/kafka-logs-351] Loading producer state till offset 1084546 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:39,410] INFO [LogLoader partition=__cluster_metadata-0, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 1084546 (kafka.log.UnifiedLog$) [2023-08-08 16:07:39,411] INFO Deleted producer state snapshot /data01/kafka-logs-351/__cluster_metadata-0/00000000000001950057.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:39,411] INFO Deleted producer state snapshot /data01/kafka-logs-351/__cluster_metadata-0/00000000000001958462.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:39,411] INFO Deleted producer state snapshot /data01/kafka-logs-351/__cluster_metadata-0/00000000000001960615.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:39,420] INFO [ProducerStateManager partition=__cluster_metadata-0]Wrote producer snapshot at offset 1084546 with 0 producer ids in 6 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:39,420] INFO [LogLoader partition=__cluster_metadata-0, dir=/data01/kafka-logs-351] Producer state recovery took 1ms for snapshot load and 9ms for segment recovery from offset 1084546 (kafka.log.UnifiedLog$) [2023-08-08 16:07:41,414] INFO [ProducerStateManager partition=__cluster_metadata-0]Wrote producer snapshot at offset 1960615 with 0 producer ids in 5 ms. 
(org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:41,429] INFO [LogLoader partition=__cluster_metadata-0, dir=/data01/kafka-logs-351] Loading producer state till offset 1960615 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:41,429] INFO [LogLoader partition=__cluster_metadata-0, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 1960615 (kafka.log.UnifiedLog$) [2023-08-08 16:07:41,430] INFO Deleted producer state snapshot /data01/kafka-logs-351/__cluster_metadata-0/00000000000001084546.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:41,430] INFO [ProducerStateManager partition=__cluster_metadata-0]Loading producer state from snapshot file 'SnapshotFile(offset=1960615, file=/data01/kafka-logs-351/__cluster_metadata-0/00000000000001960615.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:41,432] INFO [LogLoader partition=__cluster_metadata-0, dir=/data01/kafka-logs-351] Producer state recovery took 3ms for snapshot load and 0ms for segment recovery from offset 1960615 (kafka.log.UnifiedLog$) [2023-08-08 16:07:41,563] INFO Initialized snapshots with IDs SortedSet(OffsetAndEpoch(offset=1088929, epoch=1821), OffsetAndEpoch(offset=1096127, epoch=1821), OffsetAndEpoch(offset=1103325, epoch=1821), OffsetAndEpoch(offset=1110523, epoch=1821), OffsetAndEpoch(offset=1117721, epoch=1821), OffsetAndEpoch(offset=1124919, epoch=1821), OffsetAndEpoch(offset=1132117, epoch=1821), OffsetAndEpoch(offset=1139315, epoch=1821), OffsetAndEpoch(offset=1146513, epoch=1821), OffsetAndEpoch(offset=1153711, epoch=1821), OffsetAndEpoch(offset=1160909, epoch=1821), OffsetAndEpoch(offset=1168107, epoch=1821), OffsetAndEpoch(offset=1175305, epoch=1821), OffsetAndEpoch(offset=1182503, epoch=1821), OffsetAndEpoch(offset=1189701, epoch=1821), OffsetAndEpoch(offset=1196899, epoch=1821), OffsetAndEpoch(offset=1204097, epoch=1821), OffsetAndEpoch(offset=1211295, epoch=1821), OffsetAndEpoch(offset=1218493, epoch=1821), OffsetAndEpoch(offset=1225691, epoch=1821), OffsetAndEpoch(offset=1232889, epoch=1821), OffsetAndEpoch(offset=1240087, epoch=1821), OffsetAndEpoch(offset=1247285, epoch=1821), OffsetAndEpoch(offset=1254483, epoch=1821), OffsetAndEpoch(offset=1261681, epoch=1821), OffsetAndEpoch(offset=1268879, epoch=1821), OffsetAndEpoch(offset=1276077, epoch=1821), OffsetAndEpoch(offset=1283275, epoch=1821), OffsetAndEpoch(offset=1290473, epoch=1821), OffsetAndEpoch(offset=1297671, epoch=1821), OffsetAndEpoch(offset=1304869, epoch=1821), OffsetAndEpoch(offset=1312067, epoch=1821), OffsetAndEpoch(offset=1319265, epoch=1821), OffsetAndEpoch(offset=1326463, epoch=1821), OffsetAndEpoch(offset=1333661, epoch=1821), OffsetAndEpoch(offset=1340859, epoch=1821), OffsetAndEpoch(offset=1348057, epoch=1821), OffsetAndEpoch(offset=1355255, epoch=1821), OffsetAndEpoch(offset=1362453, epoch=1821), OffsetAndEpoch(offset=1369651, epoch=1821), OffsetAndEpoch(offset=1376849, epoch=1821), OffsetAndEpoch(offset=1384047, epoch=1821), OffsetAndEpoch(offset=1391245, epoch=1821), OffsetAndEpoch(offset=1398443, epoch=1821), OffsetAndEpoch(offset=1405641, epoch=1821), OffsetAndEpoch(offset=1412839, epoch=1821), OffsetAndEpoch(offset=1420037, epoch=1821), OffsetAndEpoch(offset=1427235, epoch=1821), OffsetAndEpoch(offset=1434433, epoch=1821), OffsetAndEpoch(offset=1441631, epoch=1821), OffsetAndEpoch(offset=1448829, epoch=1821), OffsetAndEpoch(offset=1456027, epoch=1821), 
OffsetAndEpoch(offset=1463225, epoch=1821), OffsetAndEpoch(offset=1470423, epoch=1821), OffsetAndEpoch(offset=1477621, epoch=1821), OffsetAndEpoch(offset=1484819, epoch=1821), OffsetAndEpoch(offset=1492017, epoch=1821), OffsetAndEpoch(offset=1499215, epoch=1821), OffsetAndEpoch(offset=1506413, epoch=1821), OffsetAndEpoch(offset=1513611, epoch=1821), OffsetAndEpoch(offset=1520809, epoch=1821), OffsetAndEpoch(offset=1528007, epoch=1821), OffsetAndEpoch(offset=1535205, epoch=1821), OffsetAndEpoch(offset=1542403, epoch=1821), OffsetAndEpoch(offset=1549601, epoch=1821), OffsetAndEpoch(offset=1556799, epoch=1821), OffsetAndEpoch(offset=1563997, epoch=1821), OffsetAndEpoch(offset=1571195, epoch=1821), OffsetAndEpoch(offset=1578393, epoch=1821), OffsetAndEpoch(offset=1585591, epoch=1821), OffsetAndEpoch(offset=1592789, epoch=1821), OffsetAndEpoch(offset=1599987, epoch=1821), OffsetAndEpoch(offset=1607185, epoch=1821), OffsetAndEpoch(offset=1614383, epoch=1821), OffsetAndEpoch(offset=1621581, epoch=1821), OffsetAndEpoch(offset=1628779, epoch=1821), OffsetAndEpoch(offset=1635977, epoch=1821), OffsetAndEpoch(offset=1643175, epoch=1821), OffsetAndEpoch(offset=1650373, epoch=1821), OffsetAndEpoch(offset=1657571, epoch=1821), OffsetAndEpoch(offset=1664769, epoch=1821), OffsetAndEpoch(offset=1671967, epoch=1821), OffsetAndEpoch(offset=1679165, epoch=1821), OffsetAndEpoch(offset=1686363, epoch=1821), OffsetAndEpoch(offset=1693561, epoch=1821), OffsetAndEpoch(offset=1700759, epoch=1821), OffsetAndEpoch(offset=1707957, epoch=1821), OffsetAndEpoch(offset=1715155, epoch=1821), OffsetAndEpoch(offset=1722353, epoch=1821), OffsetAndEpoch(offset=1729551, epoch=1821), OffsetAndEpoch(offset=1736749, epoch=1821), OffsetAndEpoch(offset=1743947, epoch=1821), OffsetAndEpoch(offset=1751145, epoch=1821), OffsetAndEpoch(offset=1758343, epoch=1821), OffsetAndEpoch(offset=1765541, epoch=1821), OffsetAndEpoch(offset=1772739, epoch=1821), OffsetAndEpoch(offset=1779937, epoch=1821), OffsetAndEpoch(offset=1787135, epoch=1821), OffsetAndEpoch(offset=1794333, epoch=1821), OffsetAndEpoch(offset=1801531, epoch=1821), OffsetAndEpoch(offset=1808729, epoch=1821), OffsetAndEpoch(offset=1815928, epoch=1821), OffsetAndEpoch(offset=1823126, epoch=1821), OffsetAndEpoch(offset=1830324, epoch=1821), OffsetAndEpoch(offset=1837522, epoch=1821), OffsetAndEpoch(offset=1844720, epoch=1821), OffsetAndEpoch(offset=1851919, epoch=1821), OffsetAndEpoch(offset=1859117, epoch=1821), OffsetAndEpoch(offset=1866315, epoch=1821), OffsetAndEpoch(offset=1873514, epoch=1821), OffsetAndEpoch(offset=1880713, epoch=1821), OffsetAndEpoch(offset=1887911, epoch=1821), OffsetAndEpoch(offset=1895109, epoch=1821), OffsetAndEpoch(offset=1902307, epoch=1821), OffsetAndEpoch(offset=1909506, epoch=1821), OffsetAndEpoch(offset=1916704, epoch=1821)) from /data01/kafka-logs-351/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$) [2023-08-08 16:07:41,596] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper) [2023-08-08 16:07:42,053] INFO [RaftManager id=2] Completed transition to FollowerState(fetchTimeoutMs=2000, epoch=1892, leaderId=3, voters=[1, 2, 3], highWatermark=Optional.empty, fetchingSnapshot=Optional.empty) from null (org.apache.kafka.raft.QuorumState) [2023-08-08 16:07:42,058] INFO [kafka-2-raft-outbound-request-thread]: Starting (kafka.raft.RaftSendThread) [2023-08-08 16:07:42,058] INFO [kafka-2-raft-io-thread]: Starting (kafka.raft.KafkaRaftManager$RaftIoThread) [2023-08-08 16:07:42,211] INFO [MetadataLoader 
id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,213] INFO [ControllerServer id=2] Waiting for controller quorum voters future (kafka.server.ControllerServer) [2023-08-08 16:07:42,213] INFO [ControllerServer id=2] Finished waiting for controller quorum voters future (kafka.server.ControllerServer) [2023-08-08 16:07:42,226] INFO [RaftManager id=2] High watermark set to Optional[LogOffsetMetadata(offset=1962805, metadata=Optional.empty)] for the first time for epoch 1892 (org.apache.kafka.raft.FollowerState) [2023-08-08 16:07:42,228] INFO [RaftManager id=2] Registered the listener org.apache.kafka.image.loader.MetadataLoader@1559685078 (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:07:42,236] INFO [MetadataLoader id=2] handleLoadSnapshot(00000000000001916704-0000001821): incrementing HandleLoadSnapshotCount to 1. (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,263] INFO [RaftManager id=2] Registered the listener org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@865354228 (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:07:42,271] INFO [controller-2-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2023-08-08 16:07:42,275] INFO [controller-2-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2023-08-08 16:07:42,277] INFO [controller-2-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2023-08-08 16:07:42,279] INFO [controller-2-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2023-08-08 16:07:42,302] INFO [ExpirationReaper-2-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2023-08-08 16:07:42,309] INFO [MetadataLoader id=2] handleLoadSnapshot(00000000000001916704-0000001821): generated a metadata delta between offset -1 and this snapshot in 72073 us. (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,311] INFO [MetadataLoader id=2] handleLoadSnapshot: The loader is still catching up because we have loaded up to offset 1916703, but the high water mark is 1962805 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,311] INFO [MetadataLoader id=2] initializeNewPublishers: The loader is still catching up because we have loaded up to offset 1916703, but the high water mark is 1962805 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,323] INFO [SocketServer listenerType=CONTROLLER, nodeId=2] Enabling request processing. (kafka.network.SocketServer) [2023-08-08 16:07:42,327] INFO Awaiting socket connections on 10.58.12.165:9093. 
(kafka.network.DataPlaneAcceptor) [2023-08-08 16:07:42,360] INFO [ControllerServer id=2] Waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer) [2023-08-08 16:07:42,360] INFO [ControllerServer id=2] Finished waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer) [2023-08-08 16:07:42,360] INFO [ControllerServer id=2] Waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer) [2023-08-08 16:07:42,360] INFO [ControllerServer id=2] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer) [2023-08-08 16:07:42,369] INFO [ControllerServer id=2] Waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer) [2023-08-08 16:07:42,610] INFO [MetadataLoader id=2] handleCommit: The loader is still catching up because we have loaded up to offset 1962804, but the high water mark is 1962806 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,611] INFO [MetadataLoader id=2] initializeNewPublishers: The loader is still catching up because we have loaded up to offset 1962804, but the high water mark is 1962806 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,611] INFO [MetadataLoader id=2] initializeNewPublishers: The loader is still catching up because we have loaded up to offset 1962804, but the high water mark is 1962806 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,611] INFO [ControllerServer id=2] Finished waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer) [2023-08-08 16:07:42,612] INFO [MetadataLoader id=2] handleCommit: The loader finished catching up to the current high water mark of 1962806 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,619] INFO [BrokerServer id=2] Transition from SHUTDOWN to STARTING (kafka.server.BrokerServer) [2023-08-08 16:07:42,622] INFO [RaftManager id=2] Become candidate due to fetch timeout (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:07:42,623] INFO [BrokerServer id=2] Starting broker (kafka.server.BrokerServer) [2023-08-08 16:07:42,629] INFO [RaftManager id=2] Completed transition to CandidateState(localId=2, epoch=1893, retries=1, voteStates={1=UNRECORDED, 2=GRANTED, 3=UNRECORDED}, highWatermark=Optional[LogOffsetMetadata(offset=1962806, metadata=Optional.empty)], electionTimeoutMs=1053) from FollowerState(fetchTimeoutMs=2000, epoch=1892, leaderId=3, voters=[1, 2, 3], highWatermark=Optional[LogOffsetMetadata(offset=1962806, metadata=Optional.empty)], fetchingSnapshot=Optional.empty) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:07:42,645] INFO [broker-2-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2023-08-08 16:07:42,650] INFO [broker-2-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2023-08-08 16:07:42,652] INFO [broker-2-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2023-08-08 16:07:42,652] INFO [broker-2-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2023-08-08 16:07:42,698] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:42,698] WARN [RaftManager id=2] Connection to node 1 (/10.58.16.231:9093) could not be established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:42,707] INFO [BrokerServer id=2] Waiting for controller quorum voters future (kafka.server.BrokerServer) [2023-08-08 16:07:42,707] INFO [BrokerServer id=2] Finished waiting for controller quorum voters future (kafka.server.BrokerServer) [2023-08-08 16:07:42,724] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing SnapshotGenerator with a snapshot at offset 1962805 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,725] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing FeaturesPublisher with a snapshot at offset 1962805 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,726] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing DynamicConfigPublisher controller id=2 with a snapshot at offset 1962805 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,727] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing DynamicClientQuotaPublisher controller id=2 with a snapshot at offset 1962805 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,728] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing ScramPublisher controller id=2 with a snapshot at offset 1962805 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,733] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing ControllerMetadataMetricsPublisher with a snapshot at offset 1962805 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:42,735] INFO [RaftManager id=2] Completed transition to Leader(localId=2, epoch=1893, epochStartOffset=1962807, highWatermark=Optional.empty, voterStates={1=ReplicaState(nodeId=1, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false), 2=ReplicaState(nodeId=2, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), 3=ReplicaState(nodeId=3, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false)}) from CandidateState(localId=2, epoch=1893, retries=1, voteStates={1=UNRECORDED, 2=GRANTED, 3=GRANTED}, highWatermark=Optional[LogOffsetMetadata(offset=1962806, metadata=Optional.empty)], electionTimeoutMs=1053) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:07:42,743] INFO [broker-2-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:42,745] INFO [broker-2-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:42,843] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:42,843] WARN [RaftManager id=2] Connection to node 1 (/10.58.16.231:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:42,846] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) [2023-08-08 16:07:42,858] INFO [SocketServer listenerType=BROKER, nodeId=2] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) [2023-08-08 16:07:42,884] INFO [RaftManager id=2] Node 3 disconnected. 
(org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:42,900] INFO [broker-2-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:42,900] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:42,960] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:42,961] WARN [RaftManager id=2] Connection to node 1 (/10.58.16.231:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:42,985] INFO [ExpirationReaper-2-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2023-08-08 16:07:42,986] INFO [ExpirationReaper-2-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2023-08-08 16:07:42,993] INFO [ExpirationReaper-2-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2023-08-08 16:07:42,995] INFO [ExpirationReaper-2-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2023-08-08 16:07:43,028] INFO [ExpirationReaper-2-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2023-08-08 16:07:43,181] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,181] WARN [RaftManager id=2] Connection to node 1 (/10.58.16.231:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,220] INFO [ExpirationReaper-2-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2023-08-08 16:07:43,235] INFO [ExpirationReaper-2-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2023-08-08 16:07:43,377] INFO [BrokerLifecycleManager id=2] Incarnation 621OaXmSSqeE2PJkSE936w of broker 2 in cluster VTx-f_krQviH03igQw0AVw is now STARTING. 
(kafka.server.BrokerLifecycleManager) [2023-08-08 16:07:43,384] INFO [broker-2-to-controller-heartbeat-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,384] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,411] INFO [ExpirationReaper-2-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2023-08-08 16:07:43,415] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,415] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,466] INFO [BrokerServer id=2] Waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer) [2023-08-08 16:07:43,466] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,471] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing BrokerMetadataPublisher with a snapshot at offset 1962805 (org.apache.kafka.image.loader.MetadataLoader) [2023-08-08 16:07:43,478] INFO [BrokerMetadataPublisher id=2] Publishing initial metadata at offset OffsetAndEpoch(offset=1962805, epoch=1892) with metadata.version 3.5-IV2. (kafka.server.metadata.BrokerMetadataPublisher) [2023-08-08 16:07:43,483] INFO Loading logs from log dirs ArraySeq(/data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,492] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,492] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,501] INFO [BrokerServer id=2] Finished waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer) [2023-08-08 16:07:43,501] INFO [BrokerServer id=2] Waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer) [2023-08-08 16:07:43,520] INFO Skipping recovery of 880 logs from /data01/kafka-logs-351 since clean shutdown file was found (kafka.log.LogManager) [2023-08-08 16:07:43,542] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,561] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,562] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,563] INFO [RaftManager id=2] Node 1 disconnected. 
(org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,563] WARN [RaftManager id=2] Connection to node 1 (/10.58.16.231:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,608] INFO [LogLoader partition=test004-686, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,615] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,625] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-686, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=686, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 71ms (1/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,635] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,636] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,636] INFO [LogLoader partition=test004-554, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,639] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-554, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=554, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (2/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,683] INFO [LogLoader partition=test004-289, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,687] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,693] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,693] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,695] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-289, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=289, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 56ms (3/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,722] INFO [LogLoader partition=test004-24, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,725] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-24, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=24, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 30ms (4/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,742] INFO [LogLoader 
partition=test005-290, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,744] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,750] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,751] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,753] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-290, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=290, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 28ms (5/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,791] INFO [LogLoader partition=test004-620, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,795] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-620, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=620, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 42ms (6/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,801] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,808] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,808] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,811] INFO [LogLoader partition=test004-356, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,823] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-356, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=356, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 27ms (7/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,830] INFO [LogLoader partition=test004-157, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,835] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-157, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=157, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (8/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,849] INFO [LogLoader partition=test004-355, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,851] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-355, topicId=EZpo1lPpS5G61Tn51H0vcA, 
topic=test004, partition=355, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 16ms (9/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,857] INFO [LogLoader partition=test004-556, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,859] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-556, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=556, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (10/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,859] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,868] INFO [LogLoader partition=test005-26, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,871] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,871] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,874] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-26, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=26, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (11/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,881] INFO [LogLoader partition=__consumer_offsets-21, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,882] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-21, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=21, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (12/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,899] INFO [LogLoader partition=test004-488, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,900] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-488, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=488, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 18ms (13/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,918] INFO [LogLoader partition=test005-157, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,921] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-157, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=157, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 20ms (14/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,923] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) 
(kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,939] INFO [LogLoader partition=test005-355, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,940] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-355, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=355, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 19ms (15/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,945] INFO [LogLoader partition=test004-622, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,946] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-622, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=622, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (16/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,948] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:43,948] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:43,974] INFO [LogLoader partition=test005-92, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:43,982] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-92, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=92, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 36ms (17/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:43,998] INFO [LogLoader partition=__consumer_offsets-5, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,000] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-5, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=5, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 18ms (18/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,000] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,015] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,015] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,017] INFO [LogLoader partition=test004-25, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,018] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-25, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=25, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 19ms 
(19/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,033] INFO [LogLoader partition=test005-358, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,035] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-358, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=358, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (20/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,044] INFO [LogLoader partition=test004-558, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,045] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-558, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=558, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (21/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,053] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,053] WARN [RaftManager id=2] Connection to node 1 (/10.58.16.231:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,066] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,071] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,071] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,075] INFO [LogLoader partition=__consumer_offsets-1, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,078] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-1, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=1, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 32ms (22/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,088] INFO [LogLoader partition=__consumer_offsets-16, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,091] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-16, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=16, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (23/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,103] INFO [LogLoader partition=__consumer_offsets-45, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,105] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-45, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=45, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (24/880 completed 
in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,111] INFO [LogLoader partition=test004-624, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,112] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-624, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=624, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (25/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,122] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,126] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,127] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,130] INFO [LogLoader partition=__consumer_offsets-41, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,132] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-41, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=41, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 19ms (26/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,145] INFO [LogLoader partition=test005-25, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,148] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-25, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=25, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (27/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,156] INFO [LogLoader partition=test004-292, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,166] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-292, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=292, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (28/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,172] INFO [LogLoader partition=test-3, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,173] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-3, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=3, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (29/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,179] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,188] INFO [LogLoader partition=__consumer_offsets-49, dir=/data01/kafka-logs-351] 
Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,190] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-49, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=49, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 16ms (30/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,193] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,193] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,195] INFO [LogLoader partition=test004-91, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,196] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-91, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=91, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (31/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,202] INFO [LogLoader partition=test004-358, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,203] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-358, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=358, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (32/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,219] INFO [LogLoader partition=test005-294, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,220] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-294, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=294, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 16ms (33/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,232] INFO [LogLoader partition=__consumer_offsets-25, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,234] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-25, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=25, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (34/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,244] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,244] INFO [LogLoader partition=test123-58, dir=/data01/kafka-logs-351] Loading producer state till offset 257789 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,244] INFO [LogLoader partition=test123-58, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 257789 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,245] INFO Deleted producer state snapshot 
/data01/kafka-logs-351/test123-58/00000000000000257789.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:44,245] INFO [LogLoader partition=test123-58, dir=/data01/kafka-logs-351] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 257789 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,246] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-58, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=58, highWatermark=257789, lastStableOffset=257789, logStartOffset=257789, logEndOffset=257789) with 1 segments in 11ms (35/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,256] INFO [LogLoader partition=test004-421, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,260] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-421, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=421, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (36/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,266] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,267] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,280] INFO [LogLoader partition=test004-424, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,282] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-424, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=424, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 21ms (37/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,313] INFO [LogLoader partition=test004-228, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,320] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,323] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-228, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=228, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 41ms (38/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,344] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,344] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,345] INFO [LogLoader partition=__consumer_offsets-44, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,347] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-44, topicId=VTTnHOjHS1i07Zhb99_tfg, 
topic=__consumer_offsets, partition=44, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 24ms (39/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,366] INFO [LogLoader partition=test004-223, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,368] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-223, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=223, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 21ms (40/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,373] INFO [LogLoader partition=test004-487, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,380] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-487, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=487, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (41/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,394] INFO [LogLoader partition=test004-93, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,395] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,397] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-93, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=93, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (42/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,399] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,400] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,408] INFO [LogLoader partition=test004-294, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,409] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-294, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=294, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (43/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,423] INFO [LogLoader partition=__consumer_offsets-32, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,426] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-32, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=32, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (44/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,432] INFO [LogLoader partition=test005-93, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 
(kafka.log.UnifiedLog$) [2023-08-08 16:07:44,437] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-93, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=93, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (45/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,441] INFO [LogLoader partition=test004-426, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,442] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-426, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=426, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (46/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,446] INFO [LogLoader partition=__consumer_offsets-7, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,449] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,453] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-7, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=7, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (47/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,458] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,458] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,462] INFO [LogLoader partition=test005-223, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,464] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-223, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=223, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (48/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,468] INFO [LogLoader partition=test005-288, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,469] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-288, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=288, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (49/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,473] INFO [LogLoader partition=test004-159, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,475] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-159, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=159, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (50/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,488] INFO [LogLoader 
partition=test004-161, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,489] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-161, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=161, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (51/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,504] INFO [LogLoader partition=__consumer_offsets-36, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,509] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-36, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=36, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 19ms (52/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,509] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,514] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,514] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,521] INFO [LogLoader partition=test005-159, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,523] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-159, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=159, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (53/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,547] INFO [LogLoader partition=test005-161, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,549] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-161, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=161, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 25ms (54/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,556] INFO [LogLoader partition=__consumer_offsets-14, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,557] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-14, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=14, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (55/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,564] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,568] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 
(org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,569] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,569] INFO [LogLoader partition=test004-222, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,571] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-222, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=222, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (56/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,583] INFO [LogLoader partition=test004-225, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,585] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-225, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=225, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (57/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,589] INFO [LogLoader partition=test004-227, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,590] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-227, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=227, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (58/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,601] INFO [LogLoader partition=__consumer_offsets-10, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,602] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-10, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=10, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (59/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,613] INFO [LogLoader partition=test005-354, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,616] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-354, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=354, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (60/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,618] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,625] INFO [LogLoader partition=test005-225, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,628] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-225, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=225, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (61/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) 
[2023-08-08 16:07:44,630] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,630] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,632] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,632] WARN [RaftManager id=2] Connection to node 1 (/10.58.16.231:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,648] INFO [LogLoader partition=test004-557, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,650] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-557, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=557, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 23ms (62/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,656] INFO [LogLoader partition=__consumer_offsets-18, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,657] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-18, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=18, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (63/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,661] INFO [LogLoader partition=test004-291, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,662] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-291, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=291, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (64/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,672] INFO [LogLoader partition=test004-689, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,673] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-689, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=689, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (65/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,678] INFO [LogLoader partition=__consumer_offsets-27, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,681] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,682] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-27, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=27, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (66/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,688] INFO 
[BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,688] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,694] INFO [LogLoader partition=test005-90, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,695] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-90, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=90, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (67/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,700] INFO [LogLoader partition=test005-224, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,701] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-224, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=224, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (68/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,707] INFO [LogLoader partition=__consumer_offsets-35, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,709] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-35, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=35, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (69/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,713] INFO [LogLoader partition=test-2, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,714] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-2, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=2, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (70/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,717] INFO [LogLoader partition=__consumer_offsets-13, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,723] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-13, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=13, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (71/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,729] INFO [LogLoader partition=test005-291, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,730] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-291, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=291, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (72/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,739] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 
kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,760] INFO [LogLoader partition=test005-293, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,762] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,763] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,764] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-293, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=293, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 33ms (73/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,769] INFO [LogLoader partition=__consumer_offsets-46, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,791] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-46, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=46, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 27ms (74/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,812] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,827] INFO [LogLoader partition=__consumer_offsets-9, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,829] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,829] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,831] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-9, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=9, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 40ms (75/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,837] INFO [LogLoader partition=test004-27, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,838] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-27, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=27, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (76/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,854] INFO [LogLoader partition=test004-425, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,855] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-425, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=425, 
highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (77/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,859] INFO [LogLoader partition=__consumer_offsets-42, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,863] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-42, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=42, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (78/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,876] INFO [LogLoader partition=test005-27, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,877] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-27, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=27, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (79/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,879] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,892] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,892] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,893] INFO [LogLoader partition=test004-491, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,895] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-491, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=491, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (80/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,901] INFO [LogLoader partition=__consumer_offsets-17, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,903] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-17, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=17, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (81/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,912] INFO [LogLoader partition=test004-621, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,913] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-621, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=621, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (82/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,921] INFO [LogLoader partition=test005-94, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 
[2023-08-08 16:07:44,922] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-94, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=94, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (83/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,927] INFO [LogLoader partition=__consumer_offsets-30, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,928] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-30, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=30, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (84/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,934] INFO [LogLoader partition=test004-687, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,935] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-687, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=687, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (85/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,942] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,943] INFO [LogLoader partition=test004-28, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,944] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-28, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=28, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (86/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,948] INFO [LogLoader partition=__consumer_offsets-26, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,949] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-26, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=26, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (87/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,955] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:44,955] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:44,959] INFO [LogLoader partition=test-0, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,960] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-0, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (88/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,966] INFO [LogLoader 
partition=test004-94, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,967] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-94, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=94, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (89/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:44,983] INFO [LogLoader partition=__consumer_offsets-38, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:44,988] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-38, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=38, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 21ms (90/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,005] INFO [LogLoader partition=test004-357, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,006] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-357, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=357, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 18ms (91/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,009] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,019] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,019] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,026] INFO [LogLoader partition=test005-226, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,027] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-226, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=226, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 21ms (92/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,031] INFO [LogLoader partition=__consumer_offsets-34, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,032] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-34, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=34, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (93/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,052] INFO [LogLoader partition=test005-28, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,056] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-28, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=28, highWatermark=0, lastStableOffset=0, logStartOffset=0, 
logEndOffset=0) with 1 segments in 23ms (94/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,069] INFO [LogLoader partition=__consumer_offsets-12, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,071] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,072] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-12, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=12, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 16ms (95/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,078] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,078] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,080] INFO [LogLoader partition=test005-357, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,081] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-357, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=357, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (96/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,088] INFO [LogLoader partition=test004-494, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,090] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-494, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=494, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (97/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,093] INFO [LogLoader partition=__consumer_offsets-24, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,094] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-24, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=24, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (98/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,114] INFO [LogLoader partition=test004-560, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,115] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-560, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=560, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 21ms (99/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,122] INFO [LogLoader partition=__consumer_offsets-20, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,123] INFO Completed load 
of Log(dir=/data01/kafka-logs-351/__consumer_offsets-20, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=20, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (100/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,128] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,128] WARN [RaftManager id=2] Connection to node 1 (/10.58.16.231:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,129] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,134] INFO [LogLoader partition=test004-230, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,135] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-230, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=230, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (101/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,136] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,136] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,150] INFO [LogLoader partition=__consumer_offsets-0, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,151] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-0, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 16ms (102/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,167] INFO [LogLoader partition=__consumer_offsets-29, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,169] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-29, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=29, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (103/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,177] INFO [LogLoader partition=test004-489, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,178] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-489, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=489, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (104/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,186] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 
16:07:45,186] INFO [LogLoader partition=test004-296, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,188] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-296, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=296, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (105/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,190] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,190] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,191] INFO [LogLoader partition=__consumer_offsets-8, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,192] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-8, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=8, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (106/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,234] INFO [LogLoader partition=__consumer_offsets-37, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,235] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-37, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=37, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 44ms (107/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,239] INFO [LogLoader partition=test005-158, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,240] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-158, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=158, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (108/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,241] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,244] INFO [LogLoader partition=test004-362, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,245] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-362, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=362, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (109/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,247] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,247] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) 
(kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,249] INFO [LogLoader partition=__consumer_offsets-4, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,250] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-4, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=4, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (110/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,254] INFO [LogLoader partition=__consumer_offsets-33, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,260] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-33, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=33, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (111/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,269] INFO [LogLoader partition=test004-31, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,273] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-31, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=31, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (112/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,285] INFO [LogLoader partition=__consumer_offsets-15, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,288] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-15, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=15, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (113/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,291] INFO [LogLoader partition=__consumer_offsets-48, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,292] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-48, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=48, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (114/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,295] INFO [LogLoader partition=test005-31, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,295] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-31, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=31, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (115/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,298] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,300] INFO [LogLoader partition=__consumer_offsets-11, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with 
message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,302] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,303] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,306] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-11, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=11, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (116/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,309] INFO [LogLoader partition=test123-59, dir=/data01/kafka-logs-351] Loading producer state till offset 294075 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,309] INFO [LogLoader partition=test123-59, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 294075 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,310] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-59/00000000000000294075.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:45,310] INFO [LogLoader partition=test123-59, dir=/data01/kafka-logs-351] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 294075 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,311] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-59, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=59, highWatermark=294075, lastStableOffset=294075, logStartOffset=294075, logEndOffset=294075) with 1 segments in 5ms (117/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,314] INFO [LogLoader partition=test005-97, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,315] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-97, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=97, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (118/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,321] INFO [LogLoader partition=__consumer_offsets-23, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,323] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-23, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=23, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (119/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,328] INFO [LogLoader partition=test004-163, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,333] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-163, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=163, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (120/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,338] INFO [LogLoader partition=__consumer_offsets-19, dir=/data01/kafka-logs-351] Loading 
producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,339] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-19, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=19, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (121/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,346] INFO [LogLoader partition=test004-288, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,347] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-288, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=288, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (122/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,353] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,360] INFO [LogLoader partition=test004-92, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,363] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,363] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,364] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-92, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=92, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 16ms (123/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,375] INFO [LogLoader partition=test005-163, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,387] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-163, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=163, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 23ms (124/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,391] INFO [LogLoader partition=__consumer_offsets-28, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,392] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-28, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=28, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (125/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,398] INFO [LogLoader partition=__consumer_offsets-40, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,405] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-40, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=40, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments 
in 13ms (126/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,411] INFO [LogLoader partition=__consumer_offsets-3, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,412] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-3, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=3, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (127/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,415] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,417] INFO [LogLoader partition=test004-158, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,419] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-158, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=158, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (128/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,419] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,420] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,422] INFO [LogLoader partition=__consumer_offsets-47, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,423] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-47, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=47, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (129/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,430] INFO [LogLoader partition=test123-44, dir=/data01/kafka-logs-351] Loading producer state till offset 293991 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,430] INFO [LogLoader partition=test123-44, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293991 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,430] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-44/00000000000000293991.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:45,430] INFO [LogLoader partition=test123-44, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 293991 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,437] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-44, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=44, highWatermark=293991, lastStableOffset=293991, logStartOffset=293991, logEndOffset=293991) with 1 segments in 15ms (130/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,441] INFO [LogLoader partition=test004-341, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format 
version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,445] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-341, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=341, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (131/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,450] INFO [LogLoader partition=__consumer_offsets-43, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,451] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-43, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=43, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (132/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,455] INFO [LogLoader partition=test004-275, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,457] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-275, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=275, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (133/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,465] INFO [LogLoader partition=test005-341, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,472] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,477] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,477] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,480] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-341, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=341, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 22ms (134/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,491] INFO [LogLoader partition=__consumer_offsets-22, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,492] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-22, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=22, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (135/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,496] INFO [LogLoader partition=test004-11, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,503] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-11, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=11, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (136/880 completed in /data01/kafka-logs-351) 
(kafka.log.LogManager) [2023-08-08 16:07:45,514] INFO [LogLoader partition=test004-407, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,516] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-407, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=407, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (137/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,520] INFO [LogLoader partition=__consumer_offsets-31, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,522] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-31, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=31, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (138/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,526] INFO [LogLoader partition=test004-470, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,528] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,530] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,531] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,556] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,556] WARN [RaftManager id=2] Connection to node 1 (/10.58.16.231:9093) could not be established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,566] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-470, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=470, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 44ms (139/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,576] INFO [LogLoader partition=test004-602, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,581] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,583] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-602, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=602, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (140/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,586] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,586] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,587] INFO [LogLoader partition=test004-605, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,588] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-605, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=605, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (141/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,591] INFO [LogLoader partition=test004-539, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,598] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-539, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=539, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (142/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,601] INFO [LogLoader partition=__consumer_offsets-39, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,602] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-39, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=39, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (143/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,613] INFO [LogLoader partition=test005-77, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,617] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-77, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=77, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (144/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 
16:07:45,621] INFO [LogLoader partition=test004-671, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,622] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-671, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=671, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (145/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,627] INFO [LogLoader partition=test005-142, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,628] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-142, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=142, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (146/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,632] INFO [LogLoader partition=__consumer_offsets-6, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,637] INFO Completed load of Log(dir=/data01/kafka-logs-351/__consumer_offsets-6, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=6, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (147/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,637] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,645] INFO [LogLoader partition=test123-43, dir=/data01/kafka-logs-351] Loading producer state till offset 258970 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,645] INFO [LogLoader partition=test123-43, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 258970 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,645] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-43/00000000000000258970.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:45,645] INFO [LogLoader partition=test123-43, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 258970 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,646] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-43, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=43, highWatermark=258970, lastStableOffset=258970, logStartOffset=258970, logEndOffset=258970) with 1 segments in 10ms (148/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,648] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,649] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,654] INFO [LogLoader partition=__consumer_offsets-2, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,655] INFO Completed load of 
Log(dir=/data01/kafka-logs-351/__consumer_offsets-2, topicId=VTTnHOjHS1i07Zhb99_tfg, topic=__consumer_offsets, partition=2, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (149/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,659] INFO [LogLoader partition=test004-692, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,661] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-692, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=692, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (150/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,677] INFO [LogLoader partition=test004-493, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,678] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-493, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=493, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 18ms (151/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,685] INFO [LogLoader partition=test004-625, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,686] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-625, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=625, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (152/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,692] INFO [LogLoader partition=test004-691, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,693] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-691, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=691, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (153/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,699] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,699] INFO [LogLoader partition=test005-229, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,703] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,703] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,707] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-229, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=229, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (154/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,711] INFO [LogLoader partition=test005-295, dir=/data01/kafka-logs-351] Loading producer state 
till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,712] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-295, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=295, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (155/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,715] INFO [LogLoader partition=test004-361, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,716] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-361, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=361, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (156/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,720] INFO [LogLoader partition=test004-427, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,721] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-427, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=427, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (157/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,728] INFO [LogLoader partition=test005-96, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,729] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-96, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=96, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (158/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,733] INFO [LogLoader partition=test005-162, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,734] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-162, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=162, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (159/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,739] INFO [LogLoader partition=test004-96, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,740] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-96, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=96, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (160/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,743] INFO [LogLoader partition=test005-228, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,744] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-228, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=228, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (161/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,748] INFO [LogLoader partition=test004-162, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 
(kafka.log.UnifiedLog$) [2023-08-08 16:07:45,753] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,757] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,757] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,766] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-162, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=162, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 21ms (162/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,771] INFO [LogLoader partition=test-4, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,772] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-4, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=4, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (163/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,778] INFO [LogLoader partition=test004-17, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,779] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-17, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=17, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (164/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,782] INFO [LogLoader partition=test005-17, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,783] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-17, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=17, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (165/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,787] INFO [LogLoader partition=test004-83, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,788] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-83, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=83, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (166/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,792] INFO [LogLoader partition=test005-83, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,793] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-83, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=83, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (167/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,796] INFO [LogLoader partition=test004-612, 
dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,798] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-612, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=612, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (168/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,802] INFO [LogLoader partition=test004-678, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,803] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-678, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=678, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (169/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,807] INFO [LogLoader partition=test004-413, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,807] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,808] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-413, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=413, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (170/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,812] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,812] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,812] INFO [LogLoader partition=test004-479, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,813] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-479, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=479, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (171/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,825] INFO [LogLoader partition=test004-545, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,826] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-545, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=545, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (172/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,830] INFO [LogLoader partition=test004-215, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,831] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-215, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=215, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (173/880 completed in 
/data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,834] INFO [LogLoader partition=test-7, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,835] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-7, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=7, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (174/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,839] INFO [LogLoader partition=test005-215, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,840] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-215, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=215, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (175/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,848] INFO [LogLoader partition=test004-281, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,851] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-281, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=281, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (176/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,856] INFO [LogLoader partition=test005-281, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,857] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-281, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=281, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (177/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,860] INFO [LogLoader partition=test004-347, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,862] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,864] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-347, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=347, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (178/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,867] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,867] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,869] INFO [LogLoader partition=test005-347, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,870] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-347, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=347, highWatermark=0, 
lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (179/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,883] INFO [LogLoader partition=test005-16, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,884] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-16, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=16, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (180/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,889] INFO [LogLoader partition=test004-16, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,890] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-16, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=16, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (181/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,898] INFO [LogLoader partition=test005-148, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,899] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-148, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=148, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (182/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,903] INFO [LogLoader partition=test123-49, dir=/data01/kafka-logs-351] Loading producer state till offset 293879 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,903] INFO [LogLoader partition=test123-49, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293879 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,903] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-49/00000000000000293879.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:45,903] INFO [LogLoader partition=test123-49, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 293879 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,904] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-49, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=49, highWatermark=293879, lastStableOffset=293879, logStartOffset=293879, logEndOffset=293879) with 1 segments in 4ms (183/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,908] INFO [LogLoader partition=test004-677, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,909] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-677, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=677, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (184/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,917] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,920] INFO [LogLoader 
partition=test004-544, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,921] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-544, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=544, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (185/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,922] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,922] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,929] INFO [LogLoader partition=test004-610, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,931] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-610, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=610, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (186/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,934] INFO [LogLoader partition=test004-148, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,935] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-148, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=148, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (187/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,939] INFO [LogLoader partition=test-6, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,940] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-6, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=6, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (188/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,952] INFO [LogLoader partition=test005-346, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,954] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-346, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=346, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (189/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,958] INFO [LogLoader partition=test004-346, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,959] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-346, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=346, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (190/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,966] INFO [LogLoader partition=test005-19, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 
[2023-08-08 16:07:45,968] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-19, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=19, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (191/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,972] INFO [LogLoader partition=test004-548, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:45,972] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,977] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-548, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=548, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (192/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:45,986] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,986] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:45,996] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:45,996] WARN [RaftManager id=2] Connection to node 1 (/10.58.16.231:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,001] INFO [LogLoader partition=test004-349, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,002] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-349, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=349, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 24ms (193/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,005] INFO [LogLoader partition=test005-349, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,006] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-349, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=349, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (194/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,010] INFO [LogLoader partition=test004-85, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,011] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-85, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=85, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (195/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,014] INFO [LogLoader partition=test004-151, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,015] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-151, 
topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=151, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (196/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,018] INFO [LogLoader partition=test005-151, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,022] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-151, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=151, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (197/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,025] INFO [LogLoader partition=test-9, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,026] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-9, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=9, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (198/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,029] INFO [LogLoader partition=test005-217, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,031] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-217, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=217, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (199/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,037] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,046] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,046] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,046] INFO [LogLoader partition=test004-283, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,047] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-283, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=283, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (200/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,054] INFO [LogLoader partition=test005-84, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,055] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-84, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=84, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (201/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,060] INFO [LogLoader partition=test004-18, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 
16:07:46,060] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-18, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=18, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (202/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,067] INFO [LogLoader partition=test004-613, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,067] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-613, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=613, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (203/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,074] INFO [LogLoader partition=test004-679, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,075] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-679, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=679, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (204/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,080] INFO [LogLoader partition=test004-348, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,081] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-348, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=348, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (205/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,093] INFO [LogLoader partition=test004-414, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,094] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-414, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=414, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (206/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,096] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,099] INFO [LogLoader partition=test004-480, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,100] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-480, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=480, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (207/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,102] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,102] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,104] INFO [LogLoader partition=test004-546, dir=/data01/kafka-logs-351] Loading 
producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,105] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-546, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=546, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (208/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,116] INFO [LogLoader partition=test123-51, dir=/data01/kafka-logs-351] Loading producer state till offset 294130 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,116] INFO [LogLoader partition=test123-51, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 294130 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,116] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-51/00000000000000294130.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:46,116] INFO [LogLoader partition=test123-51, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 294130 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,117] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-51, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=51, highWatermark=294130, lastStableOffset=294130, logStartOffset=294130, logEndOffset=294130) with 1 segments in 12ms (209/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,122] INFO [LogLoader partition=test005-282, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,122] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-282, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=282, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (210/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,126] INFO [LogLoader partition=test004-216, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,127] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-216, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=216, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (211/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,130] INFO [LogLoader partition=test004-484, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,131] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-484, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=484, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (212/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,135] INFO [LogLoader partition=test004-682, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,147] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-682, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=682, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 16ms (213/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,151] 
INFO [LogLoader partition=test004-285, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,152] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-285, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=285, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (214/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,152] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,156] INFO [LogLoader partition=test005-285, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,156] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,156] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,157] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-285, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=285, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (215/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,164] INFO [LogLoader partition=test004-351, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,165] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-351, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=351, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (216/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,168] INFO [LogLoader partition=test004-483, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,169] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-483, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=483, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (217/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,172] INFO [LogLoader partition=test004-21, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,173] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-21, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=21, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (218/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,177] INFO [LogLoader partition=test005-21, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,178] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-21, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=21, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms 
(219/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,181] INFO [LogLoader partition=test005-87, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,183] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-87, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=87, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (220/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,186] INFO [LogLoader partition=test123-54, dir=/data01/kafka-logs-351] Loading producer state till offset 184725 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,186] INFO [LogLoader partition=test123-54, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 184725 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,186] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-54/00000000000000184725.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:46,186] INFO [LogLoader partition=test123-54, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 184725 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,187] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-54, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=54, highWatermark=184725, lastStableOffset=184725, logStartOffset=184725, logEndOffset=184725) with 1 segments in 4ms (221/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,196] INFO [LogLoader partition=test004-219, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,196] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-219, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=219, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (222/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,200] INFO [LogLoader partition=test-11, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,200] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-11, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=11, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (223/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,203] INFO [LogLoader partition=test005-219, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,204] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-219, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=219, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (224/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,207] INFO [LogLoader partition=test005-20, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,208] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-20, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=20, highWatermark=0, lastStableOffset=0, 
logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (225/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,208] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,214] INFO [LogLoader partition=test004-549, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,215] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-549, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=549, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (226/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,218] INFO [LogLoader partition=test004-615, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,219] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-615, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=615, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (227/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,223] INFO [LogLoader partition=test005-350, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,224] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-350, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=350, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (228/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,224] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,225] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,230] INFO [LogLoader partition=test004-284, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,231] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-284, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=284, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (229/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,238] INFO [LogLoader partition=test004-416, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,239] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-416, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=416, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (230/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,243] INFO [LogLoader partition=test004-482, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,244] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-482, 
topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=482, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (231/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,248] INFO [LogLoader partition=test005-86, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,249] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-86, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=86, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (232/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,253] INFO [LogLoader partition=test004-20, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,257] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-20, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=20, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (233/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,261] INFO [LogLoader partition=test005-152, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,262] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-152, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=152, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (234/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,265] INFO [LogLoader partition=test123-53, dir=/data01/kafka-logs-351] Loading producer state till offset 259806 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,266] INFO [LogLoader partition=test123-53, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 259806 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,266] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-53/00000000000000259806.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:46,266] INFO [LogLoader partition=test123-53, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 259806 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,266] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-53, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=53, highWatermark=259806, lastStableOffset=259806, logStartOffset=259806, logEndOffset=259806) with 1 segments in 5ms (235/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,275] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,281] INFO [LogLoader partition=test004-86, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,282] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-86, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=86, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 16ms (236/880 completed in 
/data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,290] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,291] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,291] INFO [LogLoader partition=test005-218, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,292] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-218, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=218, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (237/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,296] INFO [LogLoader partition=test004-152, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,297] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-152, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=152, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (238/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,307] INFO [LogLoader partition=test005-284, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,308] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-284, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=284, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (239/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,313] INFO [LogLoader partition=test004-218, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,314] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-218, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=218, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (240/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,319] INFO [LogLoader partition=test004-684, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,320] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-684, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=684, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (241/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,324] INFO [LogLoader partition=test004-420, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,325] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-420, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=420, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (242/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,330] INFO [LogLoader partition=test004-552, 
dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,331] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-552, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=552, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (243/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,341] INFO [LogLoader partition=test004-618, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,341] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,342] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-618, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=618, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (244/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,345] INFO [LogLoader partition=test004-221, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,346] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,346] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,346] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-221, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=221, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (245/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,358] INFO [LogLoader partition=test-13, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,369] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-13, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=13, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 23ms (246/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,373] INFO [LogLoader partition=test004-287, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,374] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-287, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=287, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (247/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,381] INFO [LogLoader partition=test005-287, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,383] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-287, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=287, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (248/880 completed in /data01/kafka-logs-351) 
(kafka.log.LogManager) [2023-08-08 16:07:46,387] INFO [LogLoader partition=test005-353, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,388] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-353, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=353, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (249/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,392] INFO [LogLoader partition=test004-419, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,393] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-419, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=419, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (250/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,396] INFO [LogLoader partition=test004-89, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,398] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,398] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-89, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=89, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (251/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,403] INFO [LogLoader partition=test004-155, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,405] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-155, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=155, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (252/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,407] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,407] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,411] INFO [LogLoader partition=test005-155, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,412] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-155, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=155, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (253/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,417] INFO [LogLoader partition=test123-56, dir=/data01/kafka-logs-351] Loading producer state till offset 293670 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,417] INFO [LogLoader partition=test123-56, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293670 (kafka.log.UnifiedLog$) 
[2023-08-08 16:07:46,417] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-56/00000000000000293670.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:46,417] INFO [LogLoader partition=test123-56, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 293670 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,425] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-56, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=56, highWatermark=293670, lastStableOffset=293670, logStartOffset=293670, logEndOffset=293670) with 1 segments in 13ms (254/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,431] INFO [LogLoader partition=test004-551, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,432] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-551, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=551, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (255/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,443] INFO [LogLoader partition=test004-617, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,444] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-617, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=617, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (256/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,452] INFO [LogLoader partition=test004-683, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,453] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-683, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=683, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (257/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,459] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,464] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,464] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,467] INFO [LogLoader partition=test-12, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,468] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-12, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=12, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (258/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,472] INFO [LogLoader partition=test005-352, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 
[2023-08-08 16:07:46,473] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-352, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=352, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (259/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,476] INFO [LogLoader partition=test004-352, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,477] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-352, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=352, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (260/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,481] INFO [LogLoader partition=test004-418, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,481] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-418, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=418, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (261/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,486] INFO [LogLoader partition=test005-22, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,487] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-22, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=22, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (262/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,490] INFO [LogLoader partition=test005-88, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,491] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-88, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=88, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (263/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,494] INFO [LogLoader partition=test004-22, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,494] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-22, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=22, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (264/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,501] INFO [LogLoader partition=test005-154, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,502] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-154, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=154, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (265/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,505] INFO [LogLoader partition=test004-88, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,505] INFO Completed load of 
Log(dir=/data01/kafka-logs-351/test004-88, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=88, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (266/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,508] INFO [LogLoader partition=test005-220, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,514] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-220, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=220, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (267/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,514] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,517] INFO [LogLoader partition=test004-154, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,518] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,518] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,518] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-154, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=154, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (268/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,521] INFO [LogLoader partition=test004-141, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,522] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-141, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=141, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (269/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,525] INFO [LogLoader partition=test005-141, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,525] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-141, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=141, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (270/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,529] INFO [LogLoader partition=test004-207, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,530] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-207, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=207, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (271/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,533] INFO [LogLoader partition=test005-207, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with 
message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,533] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-207, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=207, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (272/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,537] INFO [LogLoader partition=test005-273, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,538] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-273, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=273, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (273/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,541] INFO [LogLoader partition=test005-339, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,542] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-339, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=339, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (274/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,547] INFO [LogLoader partition=test004-9, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,548] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-9, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=9, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (275/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,551] INFO [LogLoader partition=test005-9, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,552] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-9, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=9, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (276/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,564] INFO [LogLoader partition=test004-75, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,565] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-75, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=75, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (277/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,568] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,576] INFO [LogLoader partition=test004-669, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,576] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,576] WARN [RaftManager id=2] Connection to node 1 (/10.58.16.231:9093) could not be established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,577] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,578] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,578] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-669, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=669, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (278/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,581] INFO [LogLoader partition=test004-471, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,582] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-471, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=471, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (279/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,585] INFO [LogLoader partition=test005-206, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,585] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-206, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=206, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (280/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,590] INFO [LogLoader partition=test005-272, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,591] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-272, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=272, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (281/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,595] INFO [LogLoader partition=test004-206, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,595] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-206, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=206, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (282/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,599] INFO [LogLoader partition=test004-272, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,600] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-272, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=272, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (283/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,606] INFO [LogLoader partition=test004-338, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,607] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-338, 
topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=338, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (284/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,610] INFO [LogLoader partition=test005-8, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,611] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-8, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=8, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (285/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,614] INFO [LogLoader partition=test005-74, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,615] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-74, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=74, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (286/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,618] INFO [LogLoader partition=test005-140, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,619] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-140, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=140, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (287/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,622] INFO [LogLoader partition=test004-668, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,623] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-668, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=668, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (288/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,628] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,631] INFO [LogLoader partition=test004-536, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,632] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-536, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=536, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (289/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,635] INFO [LogLoader partition=test004-143, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,636] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-143, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=143, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (290/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,636] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] 
Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,636] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,639] INFO [LogLoader partition=test-17, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,640] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-17, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=17, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (291/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,643] INFO [LogLoader partition=test004-76, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,644] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-76, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=76, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (292/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,646] INFO [LogLoader partition=test005-208, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,647] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-208, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=208, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (293/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,651] INFO [LogLoader partition=test005-274, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,657] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-274, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=274, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (294/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,661] INFO [LogLoader partition=test004-208, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,662] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-208, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=208, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (295/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,689] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,695] INFO [LogLoader partition=test004-274, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,696] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-274, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=274, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 35ms (296/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) 
[2023-08-08 16:07:46,699] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,699] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,702] INFO [LogLoader partition=test005-10, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,703] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-10, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=10, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (297/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,716] INFO [LogLoader partition=test005-76, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,717] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-76, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=76, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (298/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,720] INFO [LogLoader partition=test004-604, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,721] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-604, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=604, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (299/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,731] INFO [LogLoader partition=test-16, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,732] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-16, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=16, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (300/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,749] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,754] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,754] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,771] INFO [LogLoader partition=test004-340, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,772] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-340, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=340, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 40ms (301/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 
16:07:46,783] INFO [LogLoader partition=test004-406, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,784] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-406, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=406, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (302/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,791] INFO [LogLoader partition=test004-472, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,792] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-472, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=472, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (303/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,795] INFO [LogLoader partition=test004-538, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,796] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-538, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=538, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (304/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,800] INFO [LogLoader partition=test005-13, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,801] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-13, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=13, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (305/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,803] INFO [LogLoader partition=test004-79, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,804] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-79, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=79, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (306/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,805] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,807] INFO [LogLoader partition=test005-79, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,808] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-79, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=79, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (307/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,815] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,815] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 
kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,818] INFO [LogLoader partition=test004-145, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,819] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-145, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=145, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (308/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,825] INFO [LogLoader partition=test005-145, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,825] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-145, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=145, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (309/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,828] INFO [LogLoader partition=test004-211, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,829] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-211, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=211, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (310/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,832] INFO [LogLoader partition=test-20, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,832] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-20, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=20, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (311/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,835] INFO [LogLoader partition=test004-673, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,836] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-673, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=673, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (312/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,850] INFO [LogLoader partition=test-19, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,852] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-19, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=19, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (313/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,855] INFO [LogLoader partition=test004-277, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,856] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-277, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=277, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (314/880 completed in 
/data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,862] INFO [LogLoader partition=test005-277, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,863] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-277, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=277, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (315/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,866] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,875] INFO [LogLoader partition=test004-343, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,875] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,875] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,876] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-343, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=343, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (316/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,880] INFO [LogLoader partition=test004-409, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,880] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-409, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=409, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (317/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,883] INFO [LogLoader partition=test004-475, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,884] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-475, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=475, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (318/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,891] INFO [LogLoader partition=test004-12, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,893] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-12, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=12, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (319/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,896] INFO [LogLoader partition=test123-45, dir=/data01/kafka-logs-351] Loading producer state till offset 293995 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,896] INFO [LogLoader partition=test123-45, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293995 
(kafka.log.UnifiedLog$) [2023-08-08 16:07:46,897] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-45/00000000000000293995.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:46,897] INFO [LogLoader partition=test123-45, dir=/data01/kafka-logs-351] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 293995 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,897] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-45, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=45, highWatermark=293995, lastStableOffset=293995, logStartOffset=293995, logEndOffset=293995) with 1 segments in 5ms (320/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,915] INFO [LogLoader partition=test004-78, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,916] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-78, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=78, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 19ms (321/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,920] INFO [LogLoader partition=test005-210, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,921] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-210, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=210, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (322/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,925] INFO [LogLoader partition=test004-144, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,926] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,926] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-144, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=144, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (323/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,931] INFO [LogLoader partition=test005-276, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,933] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-276, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=276, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (324/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,935] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,935] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,938] INFO [LogLoader partition=test005-12, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format 
version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,939] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-12, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=12, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (325/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,943] INFO [LogLoader partition=test004-540, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,948] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-540, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=540, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (326/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,952] INFO [LogLoader partition=test004-606, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,953] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-606, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=606, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (327/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,959] INFO [LogLoader partition=test005-342, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,960] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-342, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=342, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (328/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,964] INFO [LogLoader partition=test004-276, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,965] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-276, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=276, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (329/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,968] INFO [LogLoader partition=test004-342, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,977] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-342, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=342, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (330/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,981] INFO [LogLoader partition=test004-408, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,982] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-408, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=408, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (331/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,985] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) 
(kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,986] INFO [LogLoader partition=test004-474, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,991] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-474, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=474, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (332/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:46,995] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:46,995] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:46,996] INFO [LogLoader partition=test004-81, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:46,997] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-81, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=81, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (333/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,016] INFO [LogLoader partition=test005-81, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,018] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-81, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=81, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 21ms (334/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,026] INFO [LogLoader partition=test004-147, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,028] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-147, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=147, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (335/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,032] INFO [LogLoader partition=test005-147, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,038] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-147, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=147, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (336/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,042] INFO [LogLoader partition=test123-48, dir=/data01/kafka-logs-351] Loading producer state till offset 258635 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,042] INFO [LogLoader partition=test123-48, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 258635 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,042] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-48/00000000000000258635.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:47,042] INFO 
[LogLoader partition=test123-48, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 258635 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,042] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-48, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=48, highWatermark=258635, lastStableOffset=258635, logStartOffset=258635, logEndOffset=258635) with 1 segments in 5ms (337/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,045] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,048] INFO [LogLoader partition=test004-477, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,049] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-477, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=477, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (338/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,050] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:47,050] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,052] INFO [LogLoader partition=test004-609, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,053] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-609, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=609, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (339/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,059] INFO [LogLoader partition=test004-675, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,060] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-675, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=675, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (340/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,066] INFO [LogLoader partition=test004-213, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,067] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-213, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=213, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (341/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,070] INFO [LogLoader partition=test005-213, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,071] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-213, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=213, highWatermark=0, lastStableOffset=0, 
logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (342/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,078] INFO [LogLoader partition=test004-279, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,079] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-279, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=279, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (343/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,090] INFO [LogLoader partition=test005-279, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,091] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-279, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=279, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (344/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,094] INFO [LogLoader partition=test004-411, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,095] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-411, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=411, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (345/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,099] INFO [LogLoader partition=test005-80, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,099] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-80, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=80, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (346/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,101] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,104] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:47,104] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,107] INFO [LogLoader partition=test004-14, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,108] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-14, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=14, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (347/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,115] INFO [LogLoader partition=test005-146, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,117] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-146, 
topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=146, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (348/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,127] INFO [LogLoader partition=test123-47, dir=/data01/kafka-logs-351] Loading producer state till offset 259484 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,127] INFO [LogLoader partition=test123-47, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 259484 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,127] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-47/00000000000000259484.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:47,127] INFO [LogLoader partition=test123-47, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 259484 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,128] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-47, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=47, highWatermark=259484, lastStableOffset=259484, logStartOffset=259484, logEndOffset=259484) with 1 segments in 11ms (349/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,131] INFO [LogLoader partition=test005-212, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,132] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-212, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=212, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (350/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,135] INFO [LogLoader partition=test-21, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,136] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-21, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=21, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (351/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,139] INFO [LogLoader partition=test004-542, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,140] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-542, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=542, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (352/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,143] INFO [LogLoader partition=test004-608, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,144] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-608, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=608, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (353/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,147] INFO [LogLoader partition=test004-674, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,148] INFO 
Completed load of Log(dir=/data01/kafka-logs-351/test004-674, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=674, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (354/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,151] INFO [LogLoader partition=test004-212, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,152] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-212, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=212, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (355/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,154] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,158] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:47,158] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,162] INFO [LogLoader partition=test005-344, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,162] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-344, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=344, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (356/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,166] INFO [LogLoader partition=test004-463, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,167] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-463, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=463, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (357/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,171] INFO [LogLoader partition=test004-595, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,172] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-595, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=595, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (358/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,175] INFO [LogLoader partition=test004-133, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,176] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-133, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=133, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (359/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,180] INFO [LogLoader partition=test005-133, dir=/data01/kafka-logs-351] Loading producer state 
till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,181] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-133, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=133, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (360/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,192] INFO [LogLoader partition=test123-34, dir=/data01/kafka-logs-351] Loading producer state till offset 293970 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,192] INFO [LogLoader partition=test123-34, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293970 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,192] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-34/00000000000000293970.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:47,192] INFO [LogLoader partition=test123-34, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 293970 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,193] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-34, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=34, highWatermark=293970, lastStableOffset=293970, logStartOffset=293970, logEndOffset=293970) with 1 segments in 12ms (361/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,196] INFO [LogLoader partition=test004-199, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,197] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-199, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=199, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (362/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,200] INFO [LogLoader partition=test004-265, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,201] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-265, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=265, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (363/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,204] INFO [LogLoader partition=test004-331, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,205] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-331, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=331, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (364/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,208] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,216] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:47,216] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 
kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,217] INFO [LogLoader partition=test005-331, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,218] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-331, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=331, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (365/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,226] INFO [LogLoader partition=test005-0, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,227] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-0, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (366/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,230] INFO [LogLoader partition=test005-66, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,231] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-66, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=66, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (367/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,240] INFO [LogLoader partition=test004-66, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,241] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-66, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=66, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (368/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,248] INFO [LogLoader partition=test004-661, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,249] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-661, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=661, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (369/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,253] INFO [LogLoader partition=test004-396, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,254] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-396, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=396, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (370/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,262] INFO [LogLoader partition=test004-462, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,264] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-462, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=462, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (371/880 
completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,266] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,269] INFO [LogLoader partition=test004-528, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,270] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-528, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=528, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (372/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,273] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:47,273] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,274] INFO [LogLoader partition=test004-594, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,275] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-594, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=594, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (373/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,278] INFO [LogLoader partition=test005-198, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,279] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-198, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=198, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (374/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,283] INFO [LogLoader partition=test005-264, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,284] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-264, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=264, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (375/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,289] INFO [LogLoader partition=test-23, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,290] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-23, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=23, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (376/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,295] INFO [LogLoader partition=test004-264, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,296] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-264, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=264, highWatermark=0, 
lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (377/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,306] INFO [LogLoader partition=test004-330, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,309] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-330, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=330, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (378/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,312] INFO [LogLoader partition=test004-65, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,313] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-65, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=65, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (379/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,322] INFO [LogLoader partition=test005-65, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,323] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-65, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=65, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (380/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,323] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,327] INFO [RaftManager id=2] High watermark set to LogOffsetMetadata(offset=1962808, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62218869)]) for the first time for epoch 1893 based on indexOfHw 1 and voters [ReplicaState(nodeId=1, endOffset=Optional[LogOffsetMetadata(offset=1962808, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62218869)])], lastFetchTimestamp=1691482067327, lastCaughtUpTimestamp=1691482067327, hasAcknowledgedLeader=true), ReplicaState(nodeId=2, endOffset=Optional[LogOffsetMetadata(offset=1962808, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62218869)])], lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), ReplicaState(nodeId=3, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)] (org.apache.kafka.raft.LeaderState) [2023-08-08 16:07:47,330] INFO [LogLoader partition=test004-131, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,331] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:07:47,331] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-131, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=131, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (381/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,331] INFO [broker-2-to-controller-heartbeat-channel-manager]: 
Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,336] INFO [LogLoader partition=test004-660, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,339] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-660, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=660, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (382/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,342] INFO [LogLoader partition=test005-333, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,342] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-333, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=333, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (383/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,357] INFO [LogLoader partition=test004-399, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,358] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-399, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=399, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (384/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,368] INFO [LogLoader partition=test004-135, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,369] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-135, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=135, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (385/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,372] INFO [LogLoader partition=test004-201, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,373] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-201, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=201, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (386/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,377] INFO [LogLoader partition=test004-267, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,377] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-267, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=267, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (387/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,381] INFO [LogLoader partition=test-26, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,382] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) 
(kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:07:47,382] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-26, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=26, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (388/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,385] INFO [LogLoader partition=test005-68, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,387] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-68, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=68, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (389/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,397] INFO [LogLoader partition=test004-2, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,399] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-2, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=2, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (390/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,403] INFO [LogLoader partition=test004-663, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,404] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-663, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=663, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (391/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,408] INFO [LogLoader partition=test004-398, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,408] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-398, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=398, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (392/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,412] INFO [LogLoader partition=test004-530, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,413] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-530, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=530, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (393/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,427] INFO [LogLoader partition=test005-134, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,429] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-134, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=134, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (394/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,435] INFO [BrokerLifecycleManager id=2] Successfully registered broker 2 with broker epoch 1962809 (kafka.server.BrokerLifecycleManager) [2023-08-08 16:07:47,438] INFO 
[LogLoader partition=test123-35, dir=/data01/kafka-logs-351] Loading producer state till offset 181620 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,438] INFO [LogLoader partition=test123-35, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 181620 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,438] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-35/00000000000000181620.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:47,438] INFO [LogLoader partition=test123-35, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 181620 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,439] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-35, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=35, highWatermark=181620, lastStableOffset=181620, logStartOffset=181620, logEndOffset=181620) with 1 segments in 10ms (395/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,444] INFO [LogLoader partition=test005-200, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,445] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-200, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=200, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (396/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,448] INFO [LogLoader partition=test004-134, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,449] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-134, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=134, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (397/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,452] INFO [LogLoader partition=test005-266, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,453] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-266, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=266, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (398/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,456] INFO [LogLoader partition=test-25, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,457] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-25, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=25, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (399/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,464] INFO [LogLoader partition=test004-200, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,464] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-200, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=200, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (400/880 completed in 
/data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,479] INFO [LogLoader partition=test005-332, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,480] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-332, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=332, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (401/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,484] INFO [LogLoader partition=test004-1, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,484] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-1, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=1, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (402/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,489] INFO [LogLoader partition=test005-1, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,489] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-1, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=1, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (403/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,501] INFO [LogLoader partition=test004-67, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,502] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-67, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=67, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (404/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,505] INFO [LogLoader partition=test004-269, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,506] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-269, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=269, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (405/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,509] INFO [LogLoader partition=test005-269, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,509] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-269, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=269, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (406/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,512] INFO [LogLoader partition=test-28, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,513] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-28, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=28, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (407/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 
16:07:47,516] INFO [LogLoader partition=test004-335, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,517] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-335, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=335, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (408/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,521] INFO [LogLoader partition=test005-335, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,522] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-335, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=335, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (409/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,525] INFO [LogLoader partition=test004-401, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,525] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-401, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=401, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (410/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,528] INFO [LogLoader partition=test004-467, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,529] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-467, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=467, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (411/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,532] INFO [LogLoader partition=test004-71, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,535] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-71, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=71, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (412/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,542] INFO [LogLoader partition=test005-71, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,542] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-71, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=71, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (413/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,545] INFO [LogLoader partition=test005-137, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,546] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-137, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=137, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (414/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,549] INFO [LogLoader 
partition=test123-38, dir=/data01/kafka-logs-351] Loading producer state till offset 161655 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,549] INFO [LogLoader partition=test123-38, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 161655 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,549] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-38/00000000000000161655.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:47,549] INFO [LogLoader partition=test123-38, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 161655 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,551] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-38, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=38, highWatermark=161655, lastStableOffset=161655, logStartOffset=161655, logEndOffset=161655) with 1 segments in 4ms (415/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,554] INFO [LogLoader partition=test005-203, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,557] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-203, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=203, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (416/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,567] INFO [LogLoader partition=test005-4, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,568] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-4, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=4, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (417/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,572] INFO [LogLoader partition=test004-533, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,573] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-533, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=533, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (418/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,576] INFO [LogLoader partition=test004-599, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,576] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-599, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=599, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (419/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,581] INFO [LogLoader partition=test004-334, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,581] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-334, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=334, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (420/880 completed in 
/data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,592] INFO [LogLoader partition=test004-466, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,593] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-466, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=466, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (421/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,597] INFO [LogLoader partition=test005-70, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,598] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-70, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=70, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (422/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,602] INFO [LogLoader partition=test004-4, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,604] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-4, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=4, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (423/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,622] INFO [LogLoader partition=test005-136, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,623] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-136, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=136, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 19ms (424/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,627] INFO [LogLoader partition=test123-37, dir=/data01/kafka-logs-351] Loading producer state till offset 294060 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,628] INFO [LogLoader partition=test123-37, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 294060 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,628] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-37/00000000000000294060.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:47,628] INFO [LogLoader partition=test123-37, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 294060 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,629] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-37, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=37, highWatermark=294060, lastStableOffset=294060, logStartOffset=294060, logEndOffset=294060) with 1 segments in 5ms (425/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,647] INFO [LogLoader partition=test004-70, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,648] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-70, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=70, highWatermark=0, lastStableOffset=0, 
logStartOffset=0, logEndOffset=0) with 1 segments in 19ms (426/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,660] INFO [LogLoader partition=test005-202, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,661] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-202, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=202, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (427/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,675] INFO [LogLoader partition=test004-136, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,677] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-136, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=136, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 16ms (428/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,686] INFO [LogLoader partition=test005-268, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,687] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-268, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=268, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (429/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,691] INFO [LogLoader partition=test-27, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,692] INFO Completed load of Log(dir=/data01/kafka-logs-351/test-27, topicId=HeEEmpDsSGeLVSIfaRiRqQ, topic=test, partition=27, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (430/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,695] INFO [LogLoader partition=test004-202, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,700] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-202, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=202, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (431/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,706] INFO [LogLoader partition=test004-3, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,707] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-3, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=3, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (432/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,712] INFO [LogLoader partition=test005-3, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,713] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-3, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=3, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms 
(433/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,717] INFO [LogLoader partition=test004-532, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,718] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-532, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=532, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (434/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,724] INFO [LogLoader partition=test004-598, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,726] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-598, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=598, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (435/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,729] INFO [LogLoader partition=test004-664, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,730] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-664, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=664, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (436/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,734] INFO [LogLoader partition=test004-205, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,736] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-205, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=205, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (437/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,748] INFO [LogLoader partition=test004-271, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,748] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-271, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=271, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (438/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,759] INFO [LogLoader partition=test005-271, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,760] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-271, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=271, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (439/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,768] INFO [LogLoader partition=test005-337, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,769] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-337, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=337, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (440/880 completed in 
/data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,772] INFO [LogLoader partition=test004-403, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,773] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-403, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=403, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (441/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,777] INFO [LogLoader partition=test004-7, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,778] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-7, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=7, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (442/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,782] INFO [LogLoader partition=test004-73, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,783] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-73, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=73, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (443/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,788] INFO [LogLoader partition=test004-139, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,789] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-139, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=139, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (444/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,807] INFO [LogLoader partition=test123-40, dir=/data01/kafka-logs-351] Loading producer state till offset 141690 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,807] INFO [LogLoader partition=test123-40, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 141690 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,807] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-40/00000000000000141690.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:47,807] INFO [LogLoader partition=test123-40, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 141690 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,809] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-40, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=40, highWatermark=141690, lastStableOffset=141690, logStartOffset=141690, logEndOffset=141690) with 1 segments in 20ms (445/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,813] INFO [LogLoader partition=test005-336, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,813] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-336, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=336, highWatermark=0, lastStableOffset=0, 
logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (446/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,817] INFO [LogLoader partition=test004-336, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,818] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-336, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=336, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (447/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,823] INFO [LogLoader partition=test004-402, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,824] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-402, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=402, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (448/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,830] INFO [LogLoader partition=test005-6, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,837] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-6, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=6, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (449/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,841] INFO [LogLoader partition=test005-72, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,842] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-72, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=72, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (450/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,848] INFO [LogLoader partition=test004-6, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,848] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-6, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=6, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (451/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,857] INFO [LogLoader partition=test005-138, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,858] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-138, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=138, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (452/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,868] INFO [LogLoader partition=test123-39, dir=/data01/kafka-logs-351] Loading producer state till offset 293971 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,868] INFO [LogLoader partition=test123-39, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293971 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,868] INFO Deleted 
producer state snapshot /data01/kafka-logs-351/test123-39/00000000000000293971.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:47,868] INFO [LogLoader partition=test123-39, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 293971 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,875] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-39, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=39, highWatermark=293971, lastStableOffset=293971, logStartOffset=293971, logEndOffset=293971) with 1 segments in 16ms (453/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,879] INFO [LogLoader partition=test004-72, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,881] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-72, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=72, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (454/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,885] INFO [LogLoader partition=test005-204, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,886] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-204, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=204, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (455/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,889] INFO [LogLoader partition=test004-138, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,889] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-138, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=138, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (456/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,894] INFO [LogLoader partition=test004-468, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,896] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-468, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=468, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (457/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,900] INFO [LogLoader partition=test004-534, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,901] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-534, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=534, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (458/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,907] INFO [LogLoader partition=test004-600, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,908] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-600, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=600, 
highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (459/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,911] INFO [LogLoader partition=test004-666, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,913] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-666, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=666, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (460/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,928] INFO [LogLoader partition=test004-653, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,934] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-653, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=653, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 21ms (461/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,937] INFO [LogLoader partition=test004-719, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,938] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-719, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=719, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (462/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,941] INFO [LogLoader partition=test004-389, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,942] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-389, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=389, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (463/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,946] INFO [LogLoader partition=test004-521, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,946] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-521, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=521, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (464/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,952] INFO [LogLoader partition=test005-190, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,956] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-190, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=190, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (465/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,960] INFO [LogLoader partition=test004-124, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,961] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-124, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=124, highWatermark=0, lastStableOffset=0, 
logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (466/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,966] INFO [LogLoader partition=test005-256, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,972] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-256, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=256, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (467/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,976] INFO [LogLoader partition=test004-190, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,977] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-190, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=190, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (468/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,980] INFO [LogLoader partition=test005-322, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,981] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-322, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=322, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (469/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:47,984] INFO [LogLoader partition=test004-322, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:47,984] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-322, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=322, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (470/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,002] INFO [LogLoader partition=test005-58, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,003] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-58, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=58, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 19ms (471/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,011] INFO [LogLoader partition=test005-124, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,012] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-124, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=124, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (472/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,016] INFO [LogLoader partition=test004-652, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,017] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-652, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=652, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 
segments in 4ms (473/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,023] INFO [LogLoader partition=test004-718, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,024] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-718, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=718, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (474/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,027] INFO [LogLoader partition=test004-388, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,028] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-388, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=388, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (475/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,031] INFO [LogLoader partition=test004-454, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,032] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-454, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=454, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (476/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,036] INFO [LogLoader partition=test004-586, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,036] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-586, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=586, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (477/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,039] INFO [LogLoader partition=test005-189, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,040] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-189, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=189, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (478/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,043] INFO [LogLoader partition=test004-255, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,044] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-255, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=255, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (479/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,048] INFO [LogLoader partition=test004-321, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,049] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-321, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=321, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (480/880 completed in 
/data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,052] INFO [LogLoader partition=test005-321, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,052] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-321, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=321, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (481/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,055] INFO [LogLoader partition=test004-57, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,056] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-57, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=57, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (482/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,060] INFO [LogLoader partition=test005-57, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,061] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-57, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=57, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (483/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,064] INFO [LogLoader partition=test004-123, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,064] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-123, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=123, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (484/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,068] INFO [LogLoader partition=test123-24, dir=/data01/kafka-logs-351] Loading producer state till offset 185280 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,068] INFO [LogLoader partition=test123-24, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 185280 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,068] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-24/00000000000000185280.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:48,068] INFO [LogLoader partition=test123-24, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 185280 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,071] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-24, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=24, highWatermark=185280, lastStableOffset=185280, logStartOffset=185280, logEndOffset=185280) with 1 segments in 6ms (485/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,075] INFO [LogLoader partition=test004-589, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,075] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-589, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=589, highWatermark=0, lastStableOffset=0, 
logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (486/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,082] INFO [LogLoader partition=test004-655, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,082] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-655, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=655, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (487/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,086] INFO [LogLoader partition=test004-325, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,087] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-325, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=325, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (488/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,090] INFO [LogLoader partition=test004-391, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,091] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-391, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=391, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (489/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,093] INFO [LogLoader partition=test004-457, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,094] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-457, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=457, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (490/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,097] INFO [LogLoader partition=test004-523, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,097] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-523, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=523, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (491/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,103] INFO [LogLoader partition=test005-126, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,103] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-126, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=126, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (492/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,107] INFO [LogLoader partition=test004-60, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,107] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-60, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=60, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 
segments in 4ms (493/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,113] INFO [LogLoader partition=test004-126, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,115] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-126, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=126, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (494/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,121] INFO [LogLoader partition=test005-258, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,122] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-258, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=258, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (495/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,124] INFO [LogLoader partition=test004-192, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,125] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-192, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=192, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (496/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,134] INFO [LogLoader partition=test005-324, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,135] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-324, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=324, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (497/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,139] INFO [LogLoader partition=test004-258, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,140] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-258, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=258, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (498/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,144] INFO [LogLoader partition=test005-60, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,145] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-60, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=60, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (499/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,151] INFO [LogLoader partition=test004-654, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,151] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-654, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=654, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (500/880 completed in 
/data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,154] INFO [LogLoader partition=test004-390, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,155] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-390, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=390, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (501/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,158] INFO [LogLoader partition=test004-456, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,164] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-456, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=456, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (502/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,167] INFO [LogLoader partition=test004-522, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,167] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-522, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=522, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (503/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,171] INFO [LogLoader partition=test005-125, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,172] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-125, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=125, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (504/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,176] INFO [LogLoader partition=test123-26, dir=/data01/kafka-logs-351] Loading producer state till offset 294030 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,176] INFO [LogLoader partition=test123-26, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 294030 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,176] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-26/00000000000000294030.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:48,176] INFO [LogLoader partition=test123-26, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 294030 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,177] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-26, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=26, highWatermark=294030, lastStableOffset=294030, logStartOffset=294030, logEndOffset=294030) with 1 segments in 4ms (505/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,179] INFO [LogLoader partition=test004-191, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,180] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-191, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=191, highWatermark=0, lastStableOffset=0, 
logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (506/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,183] INFO [LogLoader partition=test004-257, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,184] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-257, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=257, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (507/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,194] INFO [LogLoader partition=test005-257, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,194] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-257, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=257, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (508/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,198] INFO [LogLoader partition=test004-59, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,199] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-59, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=59, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (509/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,202] INFO [LogLoader partition=test004-591, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,203] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-591, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=591, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (510/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,217] INFO [LogLoader partition=test004-393, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,219] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-393, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=393, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 16ms (511/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,226] INFO [LogLoader partition=test004-459, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,227] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-459, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=459, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (512/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,234] INFO [LogLoader partition=test005-128, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,236] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-128, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=128, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 
segments in 9ms (513/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,243] INFO [LogLoader partition=test123-29, dir=/data01/kafka-logs-351] Loading producer state till offset 294075 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,244] INFO [LogLoader partition=test123-29, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 294075 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,244] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-29/00000000000000294075.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:48,244] INFO [LogLoader partition=test123-29, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 294075 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,245] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-29, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=29, highWatermark=294075, lastStableOffset=294075, logStartOffset=294075, logEndOffset=294075) with 1 segments in 9ms (514/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,251] INFO [LogLoader partition=test005-194, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,252] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-194, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=194, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (515/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,255] INFO [LogLoader partition=test004-194, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,256] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-194, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=194, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (516/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,258] INFO [LogLoader partition=test004-590, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,259] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-590, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=590, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (517/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,262] INFO [LogLoader partition=test005-326, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,263] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-326, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=326, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (518/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,267] INFO [LogLoader partition=test004-260, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,268] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-260, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=260, 
highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (519/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,272] INFO [LogLoader partition=test004-326, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,272] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-326, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=326, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (520/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,285] INFO [LogLoader partition=test004-61, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,286] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-61, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=61, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (521/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,289] INFO [LogLoader partition=test005-61, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,290] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-61, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=61, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (522/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,293] INFO [LogLoader partition=test004-127, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,293] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-127, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=127, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (523/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,299] INFO [LogLoader partition=test123-28, dir=/data01/kafka-logs-351] Loading producer state till offset 260246 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,299] INFO [LogLoader partition=test123-28, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 260246 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,300] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-28/00000000000000260246.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:48,300] INFO [LogLoader partition=test123-28, dir=/data01/kafka-logs-351] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 260246 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,300] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-28, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=28, highWatermark=260246, lastStableOffset=260246, logStartOffset=260246, logEndOffset=260246) with 1 segments in 7ms (524/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,303] INFO [LogLoader partition=test005-193, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,304] INFO Completed load of 
Log(dir=/data01/kafka-logs-351/test005-193, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=193, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (525/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,306] INFO [LogLoader partition=test005-259, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,307] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-259, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=259, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (526/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,309] INFO [LogLoader partition=test004-461, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,310] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-461, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=461, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (527/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,318] INFO [LogLoader partition=test004-527, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,318] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-527, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=527, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (528/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,321] INFO [LogLoader partition=test004-659, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,322] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-659, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=659, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (529/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,324] INFO [LogLoader partition=test004-263, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,325] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-263, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=263, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (530/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,329] INFO [LogLoader partition=test005-263, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,330] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-263, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=263, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (531/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,336] INFO [LogLoader partition=test004-329, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,337] INFO Completed load of 
Log(dir=/data01/kafka-logs-351/test004-329, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=329, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (532/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,341] INFO [LogLoader partition=test005-329, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,341] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-329, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=329, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (533/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,346] INFO [LogLoader partition=test005-64, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,347] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-64, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=64, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (534/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,355] INFO [LogLoader partition=test005-130, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,355] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-130, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=130, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (535/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,358] INFO [LogLoader partition=test123-31, dir=/data01/kafka-logs-351] Loading producer state till offset 259134 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,358] INFO [LogLoader partition=test123-31, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 259134 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,358] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-31/00000000000000259134.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:48,358] INFO [LogLoader partition=test123-31, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 259134 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,359] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-31, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=31, highWatermark=259134, lastStableOffset=259134, logStartOffset=259134, logEndOffset=259134) with 1 segments in 3ms (536/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,362] INFO [LogLoader partition=test004-64, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,363] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-64, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=64, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (537/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,365] INFO [LogLoader partition=test005-196, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 
(kafka.log.UnifiedLog$) [2023-08-08 16:07:48,366] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-196, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=196, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (538/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,369] INFO [LogLoader partition=test004-526, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,369] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-526, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=526, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (539/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,373] INFO [LogLoader partition=test004-592, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,374] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-592, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=592, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (540/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,377] INFO [LogLoader partition=test004-658, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,378] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-658, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=658, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (541/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,387] INFO [LogLoader partition=test005-262, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,395] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-262, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=262, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 17ms (542/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,421] INFO [LogLoader partition=test004-196, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,422] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-196, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=196, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 27ms (543/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,431] INFO [LogLoader partition=test005-328, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,432] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-328, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=328, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (544/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,441] INFO [LogLoader partition=test004-262, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 
16:07:48,442] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-262, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=262, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (545/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,445] INFO [LogLoader partition=test004-328, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,446] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-328, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=328, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (546/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,450] INFO [LogLoader partition=test004-394, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,451] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-394, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=394, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (547/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,456] INFO [LogLoader partition=test004-129, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,457] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-129, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=129, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (548/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,459] INFO [LogLoader partition=test005-129, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,460] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-129, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=129, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (549/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,463] INFO [LogLoader partition=test123-30, dir=/data01/kafka-logs-351] Loading producer state till offset 293880 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,463] INFO [LogLoader partition=test123-30, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293880 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,463] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-30/00000000000000293880.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:48,463] INFO [LogLoader partition=test123-30, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 293880 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,464] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-30, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=30, highWatermark=293880, lastStableOffset=293880, logStartOffset=293880, logEndOffset=293880) with 1 segments in 4ms (550/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,471] INFO [LogLoader partition=test004-195, dir=/data01/kafka-logs-351] Loading producer state till 
offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,472] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-195, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=195, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (551/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,475] INFO [LogLoader partition=test005-195, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,475] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-195, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=195, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (552/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,479] INFO [LogLoader partition=test123-17, dir=/data01/kafka-logs-351] Loading producer state till offset 151470 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,479] INFO [LogLoader partition=test123-17, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 151470 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,479] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-17/00000000000000151470.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:48,479] INFO [LogLoader partition=test123-17, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 151470 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,480] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-17, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=17, highWatermark=151470, lastStableOffset=151470, logStartOffset=151470, logEndOffset=151470) with 1 segments in 5ms (553/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,484] INFO [LogLoader partition=test004-50, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,485] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-50, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=50, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (554/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,487] INFO [LogLoader partition=test004-711, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,488] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-711, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=711, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (555/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,491] INFO [LogLoader partition=test004-380, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,492] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-380, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=380, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (556/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,494] INFO [LogLoader 
partition=test004-446, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,495] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-446, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=446, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (557/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,497] INFO [LogLoader partition=test004-578, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,498] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-578, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=578, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (558/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,501] INFO [LogLoader partition=test005-182, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,502] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-182, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=182, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (559/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,513] INFO [LogLoader partition=test004-116, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,513] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-116, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=116, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (560/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,516] INFO [LogLoader partition=test005-248, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,517] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-248, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=248, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (561/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,519] INFO [LogLoader partition=test004-182, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,520] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-182, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=182, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (562/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,530] INFO [LogLoader partition=test005-314, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,531] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-314, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=314, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (563/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,534] INFO [LogLoader partition=test004-314, 
dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,534] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-314, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=314, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (564/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,541] INFO [LogLoader partition=test004-49, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,547] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-49, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=49, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 12ms (565/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,550] INFO [LogLoader partition=test005-49, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,551] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-49, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=49, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (566/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,555] INFO [LogLoader partition=test004-115, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,556] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-115, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=115, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (567/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,559] INFO [LogLoader partition=test005-115, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,559] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-115, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=115, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (568/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,563] INFO [LogLoader partition=test123-16, dir=/data01/kafka-logs-351] Loading producer state till offset 294021 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,563] INFO [LogLoader partition=test123-16, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 294021 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,563] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-16/00000000000000294021.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:48,563] INFO [LogLoader partition=test123-16, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 294021 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,564] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-16, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=16, highWatermark=294021, lastStableOffset=294021, logStartOffset=294021, logEndOffset=294021) with 1 segments in 5ms (569/880 completed in /data01/kafka-logs-351) 
(kafka.log.LogManager) [2023-08-08 16:07:48,569] INFO [LogLoader partition=test004-710, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,570] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-710, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=710, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (570/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,579] INFO [LogLoader partition=test004-511, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,579] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-511, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=511, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (571/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,582] INFO [LogLoader partition=test004-643, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,583] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-643, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=643, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (572/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,589] INFO [LogLoader partition=test005-181, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,590] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-181, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=181, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (573/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,599] INFO [LogLoader partition=test004-247, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,600] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-247, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=247, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (574/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,604] INFO [LogLoader partition=test004-379, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,604] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-379, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=379, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (575/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,610] INFO [LogLoader partition=test005-52, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,611] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-52, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=52, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (576/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 
16:07:48,614] INFO [LogLoader partition=test004-647, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,615] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-647, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=647, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (577/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,618] INFO [LogLoader partition=test004-713, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,619] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-713, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=713, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (578/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,622] INFO [LogLoader partition=test004-382, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,623] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-382, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=382, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (579/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,628] INFO [LogLoader partition=test004-448, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,628] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-448, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=448, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (580/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,634] INFO [LogLoader partition=test004-514, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,635] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-514, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=514, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (581/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,638] INFO [LogLoader partition=test005-118, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,639] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-118, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=118, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (582/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,642] INFO [LogLoader partition=test123-19, dir=/data01/kafka-logs-351] Loading producer state till offset 186720 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,642] INFO [LogLoader partition=test123-19, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 186720 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,643] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-19/00000000000000186720.snapshot 
(org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:48,643] INFO [LogLoader partition=test123-19, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 186720 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,643] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-19, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=19, highWatermark=186720, lastStableOffset=186720, logStartOffset=186720, logEndOffset=186720) with 1 segments in 4ms (583/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,646] INFO [LogLoader partition=test004-52, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,647] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-52, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=52, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (584/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,650] INFO [LogLoader partition=test005-184, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,650] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-184, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=184, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (585/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,660] INFO [LogLoader partition=test004-118, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,661] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-118, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=118, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (586/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,664] INFO [LogLoader partition=test005-250, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,665] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-250, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=250, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (587/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,668] INFO [LogLoader partition=test005-316, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,668] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-316, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=316, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (588/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,673] INFO [LogLoader partition=test004-51, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,674] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-51, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=51, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (589/880 
completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,677] INFO [LogLoader partition=test004-580, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,677] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-580, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=580, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (590/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,680] INFO [LogLoader partition=test004-646, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,681] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-646, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=646, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (591/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,685] INFO [LogLoader partition=test004-447, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,686] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-447, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=447, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (592/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,689] INFO [LogLoader partition=test004-513, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,693] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-513, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=513, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (593/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,697] INFO [LogLoader partition=test004-579, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,698] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-579, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=579, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (594/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,703] INFO [LogLoader partition=test004-117, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,703] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-117, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=117, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (595/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,706] INFO [LogLoader partition=test005-117, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,707] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-117, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=117, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (596/880 completed in /data01/kafka-logs-351) 
(kafka.log.LogManager) [2023-08-08 16:07:48,711] INFO [LogLoader partition=test123-18, dir=/data01/kafka-logs-351] Loading producer state till offset 293940 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,711] INFO [LogLoader partition=test123-18, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293940 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,711] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-18/00000000000000293940.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:48,711] INFO [LogLoader partition=test123-18, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 293940 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,711] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-18, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=18, highWatermark=293940, lastStableOffset=293940, logStartOffset=293940, logEndOffset=293940) with 1 segments in 5ms (597/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,714] INFO [LogLoader partition=test004-183, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,715] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-183, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=183, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (598/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,718] INFO [LogLoader partition=test005-183, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,719] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-183, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=183, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (599/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,722] INFO [LogLoader partition=test004-249, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,722] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-249, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=249, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (600/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,727] INFO [LogLoader partition=test004-315, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,728] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-315, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=315, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (601/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,736] INFO [LogLoader partition=test005-315, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,741] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-315, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=315, highWatermark=0, lastStableOffset=0, logStartOffset=0, 
logEndOffset=0) with 1 segments in 13ms (602/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,750] INFO [LogLoader partition=test004-517, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,755] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-517, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=517, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (603/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,763] INFO [LogLoader partition=test004-583, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,763] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-583, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=583, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (604/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,767] INFO [LogLoader partition=test004-649, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,768] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-649, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=649, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (605/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,771] INFO [LogLoader partition=test004-252, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,771] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-252, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=252, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (606/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,777] INFO [LogLoader partition=test004-318, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,780] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-318, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=318, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (607/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,783] INFO [LogLoader partition=test004-384, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,784] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-384, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=384, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (608/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,788] INFO [LogLoader partition=test004-450, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,788] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-450, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=450, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms 
(609/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,795] INFO [LogLoader partition=test005-54, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,796] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-54, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=54, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (610/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,801] INFO [LogLoader partition=test005-120, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,802] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-120, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=120, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (611/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,816] INFO [LogLoader partition=test123-21, dir=/data01/kafka-logs-351] Loading producer state till offset 189660 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,816] INFO [LogLoader partition=test123-21, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 189660 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,816] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-21/00000000000000189660.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:48,816] INFO [LogLoader partition=test123-21, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 189660 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,817] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-21, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=21, highWatermark=189660, lastStableOffset=189660, logStartOffset=189660, logEndOffset=189660) with 1 segments in 15ms (612/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,821] INFO [LogLoader partition=test005-186, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,822] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-186, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=186, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (613/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,836] INFO [LogLoader partition=test004-120, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,836] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-120, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=120, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (614/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,843] INFO [LogLoader partition=test005-252, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,843] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-252, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=252, highWatermark=0, 
lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (615/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,847] INFO [LogLoader partition=test004-186, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,847] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-186, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=186, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (616/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,850] INFO [LogLoader partition=test004-516, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,851] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-516, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=516, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (617/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,854] INFO [LogLoader partition=test004-714, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,854] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-714, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=714, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (618/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,857] INFO [LogLoader partition=test004-317, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,858] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-317, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=317, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (619/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,861] INFO [LogLoader partition=test004-383, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,861] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-383, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=383, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (620/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,865] INFO [LogLoader partition=test005-53, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,865] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-53, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=53, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (621/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,874] INFO [LogLoader partition=test004-185, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,875] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-185, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=185, highWatermark=0, lastStableOffset=0, logStartOffset=0, 
logEndOffset=0) with 1 segments in 9ms (622/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,884] INFO [LogLoader partition=test004-251, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,885] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-251, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=251, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (623/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,888] INFO [LogLoader partition=test005-251, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,889] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-251, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=251, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (624/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,891] INFO [LogLoader partition=test004-453, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,892] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-453, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=453, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (625/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,899] INFO [LogLoader partition=test004-519, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,899] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-519, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=519, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (626/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,903] INFO [LogLoader partition=test004-585, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,903] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-585, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=585, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (627/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,913] INFO [LogLoader partition=test004-188, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,914] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-188, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=188, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (628/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,917] INFO [LogLoader partition=test005-320, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,917] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-320, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=320, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms 
(629/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,925] INFO [LogLoader partition=test004-320, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,926] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-320, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=320, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (630/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,934] INFO [LogLoader partition=test004-386, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,935] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-386, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=386, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (631/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,938] INFO [LogLoader partition=test005-122, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,939] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-122, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=122, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (632/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,942] INFO [LogLoader partition=test123-23, dir=/data01/kafka-logs-351] Loading producer state till offset 293867 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,942] INFO [LogLoader partition=test123-23, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293867 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,942] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-23/00000000000000293867.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:48,942] INFO [LogLoader partition=test123-23, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 293867 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,943] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-23, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=23, highWatermark=293867, lastStableOffset=293867, logStartOffset=293867, logEndOffset=293867) with 1 segments in 3ms (633/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,946] INFO [LogLoader partition=test004-56, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,946] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-56, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=56, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (634/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,950] INFO [LogLoader partition=test004-122, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,951] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-122, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=122, highWatermark=0, 
lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (635/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,954] INFO [LogLoader partition=test004-716, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,955] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-716, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=716, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (636/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,958] INFO [LogLoader partition=test004-584, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,959] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-584, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=584, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (637/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,962] INFO [LogLoader partition=test004-650, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,963] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-650, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=650, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (638/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,965] INFO [LogLoader partition=test004-253, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,966] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-253, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=253, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (639/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,969] INFO [LogLoader partition=test005-253, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,970] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-253, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=253, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (640/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,973] INFO [LogLoader partition=test005-319, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,974] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-319, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=319, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (641/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,980] INFO [LogLoader partition=test004-451, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,980] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-451, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=451, highWatermark=0, lastStableOffset=0, logStartOffset=0, 
logEndOffset=0) with 1 segments in 7ms (642/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,983] INFO [LogLoader partition=test004-55, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,984] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-55, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=55, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (643/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,987] INFO [LogLoader partition=test005-55, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,988] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-55, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=55, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (644/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,991] INFO [LogLoader partition=test005-187, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,992] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-187, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=187, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (645/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:48,998] INFO [LogLoader partition=test005-174, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:48,999] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-174, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=174, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (646/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,003] INFO [LogLoader partition=test004-108, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,003] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-108, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=108, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (647/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,014] INFO [LogLoader partition=test005-240, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,015] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-240, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=240, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (648/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,020] INFO [LogLoader partition=test005-108, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,021] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-108, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=108, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms 
(649/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,024] INFO [LogLoader partition=test004-636, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,025] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-636, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=636, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (650/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,028] INFO [LogLoader partition=test004-702, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,028] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-702, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=702, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (651/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,032] INFO [LogLoader partition=test004-372, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,032] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-372, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=372, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (652/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,035] INFO [LogLoader partition=test004-438, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,036] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-438, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=438, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (653/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,039] INFO [LogLoader partition=test005-173, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,039] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-173, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=173, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (654/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,048] INFO [LogLoader partition=test005-239, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,049] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-239, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=239, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (655/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,052] INFO [LogLoader partition=test004-305, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,053] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-305, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=305, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (656/880 completed in 
/data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,057] INFO [LogLoader partition=test004-41, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,058] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-41, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=41, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (657/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,068] INFO [LogLoader partition=test005-41, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,068] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-41, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=41, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (658/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,076] INFO [LogLoader partition=test004-107, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,077] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-107, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=107, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (659/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,086] INFO [LogLoader partition=test005-107, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,087] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-107, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=107, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (660/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,090] INFO [LogLoader partition=test123-8, dir=/data01/kafka-logs-351] Loading producer state till offset 174702 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,090] INFO [LogLoader partition=test123-8, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 174702 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,090] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-8/00000000000000174702.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:49,091] INFO [LogLoader partition=test123-8, dir=/data01/kafka-logs-351] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 174702 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,091] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-8, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=8, highWatermark=174702, lastStableOffset=174702, logStartOffset=174702, logEndOffset=174702) with 1 segments in 4ms (661/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,094] INFO [LogLoader partition=test004-503, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,095] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-503, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=503, highWatermark=0, lastStableOffset=0, 
logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (662/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,102] INFO [LogLoader partition=test004-635, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,104] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-635, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=635, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (663/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,106] INFO [LogLoader partition=test123-11, dir=/data01/kafka-logs-351] Loading producer state till offset 293925 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,106] INFO [LogLoader partition=test123-11, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293925 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,106] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-11/00000000000000293925.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:49,106] INFO [LogLoader partition=test123-11, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 293925 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,107] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-11, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=11, highWatermark=293925, lastStableOffset=293925, logStartOffset=293925, logEndOffset=293925) with 1 segments in 4ms (664/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,110] INFO [LogLoader partition=test004-44, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,111] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-44, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=44, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (665/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,116] INFO [LogLoader partition=test005-242, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,116] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-242, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=242, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (666/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,119] INFO [LogLoader partition=test004-176, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,120] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-176, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=176, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (667/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,129] INFO [LogLoader partition=test005-308, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,130] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-308, 
topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=308, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (668/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,133] INFO [LogLoader partition=test004-242, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,134] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-242, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=242, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (669/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,136] INFO [LogLoader partition=test005-44, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,137] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-44, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=44, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (670/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,140] INFO [LogLoader partition=test004-572, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,141] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-572, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=572, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (671/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,155] INFO [LogLoader partition=test004-704, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,156] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-704, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=704, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 15ms (672/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,165] INFO [LogLoader partition=test004-308, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,165] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-308, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=308, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (673/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,169] INFO [LogLoader partition=test004-374, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,170] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-374, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=374, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (674/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,173] INFO [LogLoader partition=test004-506, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,173] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-506, topicId=EZpo1lPpS5G61Tn51H0vcA, 
topic=test004, partition=506, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (675/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,176] INFO [LogLoader partition=test004-109, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,176] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-109, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=109, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (676/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,179] INFO [LogLoader partition=test005-109, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,179] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-109, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=109, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (677/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,181] INFO [LogLoader partition=test123-10, dir=/data01/kafka-logs-351] Loading producer state till offset 258042 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,182] INFO [LogLoader partition=test123-10, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 258042 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,182] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-10/00000000000000258042.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:49,182] INFO [LogLoader partition=test123-10, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 258042 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,182] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-10, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=10, highWatermark=258042, lastStableOffset=258042, logStartOffset=258042, logEndOffset=258042) with 1 segments in 3ms (678/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,184] INFO [LogLoader partition=test004-175, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,185] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-175, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=175, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 2ms (679/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,193] INFO [LogLoader partition=test005-175, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,194] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-175, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=175, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (680/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,201] INFO [LogLoader partition=test004-241, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,202] INFO Completed load of 
Log(dir=/data01/kafka-logs-351/test004-241, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=241, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (681/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,208] INFO [LogLoader partition=test004-307, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,208] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-307, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=307, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (682/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,212] INFO [LogLoader partition=test005-307, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,213] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-307, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=307, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (683/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,218] INFO [LogLoader partition=test004-43, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,219] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-43, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=43, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (684/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,222] INFO [LogLoader partition=test005-43, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,223] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-43, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=43, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (685/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,225] INFO [LogLoader partition=test004-637, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,226] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-637, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=637, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (686/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,229] INFO [LogLoader partition=test004-439, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,230] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-439, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=439, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (687/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,238] INFO [LogLoader partition=test004-505, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,239] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-505, 
topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=505, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (688/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,242] INFO [LogLoader partition=test004-571, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,243] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-571, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=571, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (689/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,246] INFO [LogLoader partition=test005-112, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,246] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-112, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=112, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (690/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,249] INFO [LogLoader partition=test123-13, dir=/data01/kafka-logs-351] Loading producer state till offset 259736 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,249] INFO [LogLoader partition=test123-13, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 259736 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,250] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-13/00000000000000259736.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:49,250] INFO [LogLoader partition=test123-13, dir=/data01/kafka-logs-351] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 259736 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,250] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-13, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=13, highWatermark=259736, lastStableOffset=259736, logStartOffset=259736, logEndOffset=259736) with 1 segments in 4ms (691/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,254] INFO [LogLoader partition=test004-46, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,254] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-46, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=46, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (692/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,257] INFO [LogLoader partition=test005-178, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,258] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-178, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=178, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (693/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,261] INFO [LogLoader partition=test004-178, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 
16:07:49,261] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-178, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=178, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (694/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,264] INFO [LogLoader partition=test004-640, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,265] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-640, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=640, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (695/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,267] INFO [LogLoader partition=test004-706, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,268] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-706, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=706, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (696/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,271] INFO [LogLoader partition=test004-244, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,272] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-244, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=244, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (697/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,275] INFO [LogLoader partition=test004-310, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,275] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-310, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=310, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (698/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,281] INFO [LogLoader partition=test004-376, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,282] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-376, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=376, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (699/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,287] INFO [LogLoader partition=test004-45, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,288] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-45, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=45, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (700/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,291] INFO [LogLoader partition=test005-45, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,292] INFO Completed load of 
Log(dir=/data01/kafka-logs-351/test005-45, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=45, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (701/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,295] INFO [LogLoader partition=test004-111, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,296] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-111, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=111, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (702/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,300] INFO [LogLoader partition=test005-111, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,301] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-111, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=111, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (703/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,311] INFO [LogLoader partition=test004-177, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,312] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-177, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=177, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (704/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,316] INFO [LogLoader partition=test005-177, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,317] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-177, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=177, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (705/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,319] INFO [LogLoader partition=test005-243, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,321] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-243, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=243, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (706/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,326] INFO [LogLoader partition=test004-573, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,327] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-573, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=573, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (707/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,329] INFO [LogLoader partition=test004-639, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,330] INFO Completed load of 
Log(dir=/data01/kafka-logs-351/test004-639, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=639, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (708/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,336] INFO [LogLoader partition=test004-705, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,336] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-705, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=705, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 7ms (709/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,349] INFO [LogLoader partition=test004-309, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,350] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-309, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=309, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (710/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,355] INFO [LogLoader partition=test005-309, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,355] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-309, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=309, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (711/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,359] INFO [LogLoader partition=test004-375, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,360] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-375, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=375, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (712/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,363] INFO [LogLoader partition=test004-441, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,364] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-441, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=441, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (713/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,367] INFO [LogLoader partition=test004-507, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,369] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-507, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=507, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (714/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,372] INFO [LogLoader partition=test005-48, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,373] INFO Completed load of 
Log(dir=/data01/kafka-logs-351/test005-48, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=48, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (715/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,377] INFO [LogLoader partition=test005-114, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,378] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-114, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=114, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (716/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,381] INFO [LogLoader partition=test004-444, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,382] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-444, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=444, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (717/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,386] INFO [LogLoader partition=test004-510, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,386] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-510, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=510, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (718/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,395] INFO [LogLoader partition=test004-576, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,396] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-576, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=576, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (719/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,406] INFO [LogLoader partition=test004-642, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,408] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-642, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=642, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (720/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,411] INFO [LogLoader partition=test005-246, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,411] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-246, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=246, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (721/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,415] INFO [LogLoader partition=test004-180, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,415] INFO Completed load of 
Log(dir=/data01/kafka-logs-351/test004-180, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=180, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (722/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,420] INFO [LogLoader partition=test005-312, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,421] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-312, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=312, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (723/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,432] INFO [LogLoader partition=test004-246, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,432] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-246, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=246, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (724/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,443] INFO [LogLoader partition=test004-312, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,444] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-312, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=312, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (725/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,448] INFO [LogLoader partition=test005-47, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,449] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-47, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=47, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (726/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,459] INFO [LogLoader partition=test004-113, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,459] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-113, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=113, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (727/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,463] INFO [LogLoader partition=test123-14, dir=/data01/kafka-logs-351] Loading producer state till offset 293850 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,463] INFO [LogLoader partition=test123-14, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293850 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,463] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-14/00000000000000293850.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:49,463] INFO [LogLoader partition=test123-14, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 293850 
(kafka.log.UnifiedLog$) [2023-08-08 16:07:49,463] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-14, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=14, highWatermark=293850, lastStableOffset=293850, logStartOffset=293850, logEndOffset=293850) with 1 segments in 4ms (728/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,466] INFO [LogLoader partition=test004-708, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,467] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-708, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=708, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (729/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,469] INFO [LogLoader partition=test004-509, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,470] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-509, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=509, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (730/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,473] INFO [LogLoader partition=test004-575, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,473] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-575, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=575, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (731/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,476] INFO [LogLoader partition=test004-245, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,476] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-245, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=245, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (732/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,479] INFO [LogLoader partition=test005-245, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,480] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-245, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=245, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (733/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,483] INFO [LogLoader partition=test005-311, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,483] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-311, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=311, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (734/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,486] INFO [LogLoader partition=test004-443, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 
(kafka.log.UnifiedLog$) [2023-08-08 16:07:49,487] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-443, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=443, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (735/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,489] INFO [LogLoader partition=test004-364, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,490] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-364, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=364, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (736/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,494] INFO [LogLoader partition=test004-430, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,495] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-430, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=430, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (737/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,497] INFO [LogLoader partition=test004-496, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,498] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-496, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=496, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (738/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,501] INFO [LogLoader partition=test004-562, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,501] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-562, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=562, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (739/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,504] INFO [LogLoader partition=test005-166, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,505] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-166, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=166, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (740/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,513] INFO [LogLoader partition=test004-100, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,514] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-100, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=100, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (741/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,517] INFO [LogLoader partition=test004-298, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 
16:07:49,518] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-298, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=298, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (742/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,523] INFO [LogLoader partition=test004-33, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,529] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-33, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=33, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (743/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,532] INFO [LogLoader partition=test005-99, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,533] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-99, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=99, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (744/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,536] INFO [LogLoader partition=test123-0, dir=/data01/kafka-logs-351] Loading producer state till offset 163200 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,536] INFO [LogLoader partition=test123-0, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 163200 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,536] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-0/00000000000000163200.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:49,536] INFO [LogLoader partition=test123-0, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 163200 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,537] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-0, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=0, highWatermark=163200, lastStableOffset=163200, logStartOffset=163200, logEndOffset=163200) with 1 segments in 4ms (745/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,542] INFO [LogLoader partition=test004-429, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,543] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-429, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=429, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (746/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,545] INFO [LogLoader partition=test004-495, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,553] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-495, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=495, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (747/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,556] INFO [LogLoader partition=test004-561, dir=/data01/kafka-logs-351] Loading producer state till offset 0 
with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,557] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-561, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=561, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (748/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,561] INFO [LogLoader partition=test004-627, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,562] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-627, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=627, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (749/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,565] INFO [LogLoader partition=test004-165, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,565] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-165, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=165, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (750/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,568] INFO [LogLoader partition=test005-165, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,569] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-165, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=165, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (751/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,572] INFO [LogLoader partition=test004-231, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,573] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-231, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=231, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (752/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,576] INFO [LogLoader partition=test005-231, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,576] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-231, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=231, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (753/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,606] INFO [LogLoader partition=test004-297, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,607] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-297, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=297, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 30ms (754/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,610] INFO [LogLoader partition=test005-297, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 
(kafka.log.UnifiedLog$) [2023-08-08 16:07:49,619] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-297, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=297, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (755/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,624] INFO [LogLoader partition=test005-32, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,624] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-32, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=32, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (756/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,628] INFO [LogLoader partition=test004-32, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,628] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-32, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=32, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (757/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,633] INFO [LogLoader partition=test004-98, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,633] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-98, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=98, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (758/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,637] INFO [LogLoader partition=test004-693, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,637] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-693, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=693, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (759/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,642] INFO [LogLoader partition=test004-432, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,643] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-432, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=432, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (760/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,645] INFO [LogLoader partition=test004-498, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,647] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-498, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=498, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (761/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,649] INFO [LogLoader partition=test005-102, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 
16:07:49,650] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-102, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=102, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (762/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,658] INFO [LogLoader partition=test123-3, dir=/data01/kafka-logs-351] Loading producer state till offset 259904 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,658] INFO [LogLoader partition=test123-3, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 259904 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,658] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-3/00000000000000259904.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:49,658] INFO [LogLoader partition=test123-3, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 259904 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,659] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-3, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=3, highWatermark=259904, lastStableOffset=259904, logStartOffset=259904, logEndOffset=259904) with 1 segments in 9ms (763/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,662] INFO [LogLoader partition=test004-36, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,663] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-36, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=36, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (764/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,671] INFO [LogLoader partition=test004-102, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,672] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-102, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=102, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (765/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,675] INFO [LogLoader partition=test005-234, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,678] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-234, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=234, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (766/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,690] INFO [LogLoader partition=test005-300, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,691] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-300, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=300, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (767/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,717] INFO [LogLoader partition=test004-234, dir=/data01/kafka-logs-351] Loading producer state till offset 0 
with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,720] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-234, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=234, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 29ms (768/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,735] INFO [LogLoader partition=test005-35, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,736] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-35, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=35, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 16ms (769/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,744] INFO [LogLoader partition=test004-696, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,745] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-696, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=696, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (770/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,749] INFO [LogLoader partition=test004-365, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,750] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-365, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=365, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (771/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,753] INFO [LogLoader partition=test004-101, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,753] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-101, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=101, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (772/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,759] INFO [LogLoader partition=test123-2, dir=/data01/kafka-logs-351] Loading producer state till offset 293968 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,759] INFO [LogLoader partition=test123-2, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293968 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,759] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-2/00000000000000293968.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:49,759] INFO [LogLoader partition=test123-2, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 293968 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,760] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-2, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=2, highWatermark=293968, lastStableOffset=293968, logStartOffset=293968, logEndOffset=293968) with 1 segments in 7ms (773/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,764] INFO [LogLoader 
partition=test004-167, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,765] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-167, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=167, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (774/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,769] INFO [LogLoader partition=test004-233, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,771] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-233, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=233, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (775/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,776] INFO [LogLoader partition=test005-233, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,776] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-233, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=233, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (776/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,779] INFO [LogLoader partition=test005-299, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,779] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-299, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=299, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (777/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,797] INFO [LogLoader partition=test005-34, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,802] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-34, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=34, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 22ms (778/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,812] INFO [LogLoader partition=test005-100, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,813] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-100, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=100, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (779/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,820] INFO [LogLoader partition=test004-34, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,820] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-34, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=34, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (780/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,824] INFO [LogLoader partition=test004-629, 
dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,825] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-629, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=629, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (781/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,828] INFO [LogLoader partition=test004-695, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,829] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-695, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=695, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (782/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,832] INFO [LogLoader partition=test005-302, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,832] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-302, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=302, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (783/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,835] INFO [LogLoader partition=test004-302, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,836] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-302, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=302, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (784/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,839] INFO [LogLoader partition=test004-368, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,839] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-368, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=368, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (785/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,843] INFO [LogLoader partition=test005-38, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,843] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-38, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=38, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (786/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,846] INFO [LogLoader partition=test005-104, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,847] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-104, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=104, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (787/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,850] INFO [LogLoader partition=test004-38, dir=/data01/kafka-logs-351] Loading producer 
state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,851] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-38, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=38, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (788/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,854] INFO [LogLoader partition=test005-170, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,855] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-170, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=170, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (789/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,858] INFO [LogLoader partition=test004-170, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,859] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-170, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=170, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (790/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,865] INFO [LogLoader partition=test004-500, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,866] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-500, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=500, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 6ms (791/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,869] INFO [LogLoader partition=test004-566, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,870] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-566, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=566, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (792/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,872] INFO [LogLoader partition=test004-632, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,873] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-632, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=632, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (793/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,876] INFO [LogLoader partition=test004-698, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,877] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-698, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=698, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (794/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,879] INFO [LogLoader partition=test004-301, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format 
version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,880] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-301, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=301, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (795/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,888] INFO [LogLoader partition=test004-367, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,889] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-367, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=367, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (796/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,892] INFO [LogLoader partition=test004-433, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,893] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-433, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=433, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (797/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,900] INFO [LogLoader partition=test005-37, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,901] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-37, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=37, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 9ms (798/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,904] INFO [LogLoader partition=test004-103, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,905] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-103, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=103, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (799/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,908] INFO [LogLoader partition=test123-4, dir=/data01/kafka-logs-351] Loading producer state till offset 293715 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,908] INFO [LogLoader partition=test123-4, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 293715 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,908] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-4/00000000000000293715.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:49,909] INFO [LogLoader partition=test123-4, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 1ms for segment recovery from offset 293715 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,909] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-4, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=4, highWatermark=293715, lastStableOffset=293715, logStartOffset=293715, logEndOffset=293715) with 1 segments in 4ms (800/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,914] INFO [LogLoader partition=test004-169, 
dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,915] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-169, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=169, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (801/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,927] INFO [LogLoader partition=test005-169, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,928] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-169, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=169, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 13ms (802/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,932] INFO [LogLoader partition=test004-235, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,933] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-235, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=235, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (803/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,936] INFO [LogLoader partition=test005-235, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,937] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-235, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=235, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (804/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,940] INFO [LogLoader partition=test004-565, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,941] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-565, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=565, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (805/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,944] INFO [LogLoader partition=test004-631, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,944] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-631, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=631, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (806/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,947] INFO [LogLoader partition=test005-238, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,948] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-238, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=238, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (807/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,951] INFO [LogLoader partition=test004-172, dir=/data01/kafka-logs-351] Loading 
producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,951] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-172, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=172, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (808/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,956] INFO [LogLoader partition=test005-304, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,956] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-304, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=304, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (809/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,959] INFO [LogLoader partition=test004-238, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,960] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-238, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=238, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (810/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,964] INFO [LogLoader partition=test004-304, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,965] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-304, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=304, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (811/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,974] INFO [LogLoader partition=test004-370, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,975] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-370, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=370, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 11ms (812/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,985] INFO [LogLoader partition=test005-106, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,985] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-106, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=106, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (813/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,993] INFO [LogLoader partition=test123-7, dir=/data01/kafka-logs-351] Loading producer state till offset 294016 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,993] INFO [LogLoader partition=test123-7, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 294016 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,993] INFO Deleted producer state snapshot /data01/kafka-logs-351/test123-7/00000000000000294016.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:49,993] INFO [LogLoader partition=test123-7, 
dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 294016 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,994] INFO Completed load of Log(dir=/data01/kafka-logs-351/test123-7, topicId=xYxZQSYMRGWeuBKqTXlIgQ, topic=test123, partition=7, highWatermark=294016, lastStableOffset=294016, logStartOffset=294016, logEndOffset=294016) with 1 segments in 8ms (814/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:49,996] INFO [LogLoader partition=test005-172, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:49,997] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-172, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=172, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (815/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,009] INFO [LogLoader partition=test004-700, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,010] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-700, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=700, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 14ms (816/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,013] INFO [LogLoader partition=test004-436, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,014] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-436, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=436, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (817/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,018] INFO [LogLoader partition=test004-502, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,018] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-502, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=502, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 5ms (818/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,022] INFO [LogLoader partition=test004-568, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,023] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-568, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=568, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (819/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,030] INFO [LogLoader partition=test004-634, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,030] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-634, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=634, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 8ms (820/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,034] INFO [LogLoader 
partition=test004-237, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,035] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-237, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=237, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (821/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,037] INFO [LogLoader partition=test005-303, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,038] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-303, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=303, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (822/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,041] INFO [LogLoader partition=test004-369, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,042] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-369, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=369, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (823/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,045] INFO [LogLoader partition=test004-435, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,046] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-435, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=435, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (824/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,049] INFO [LogLoader partition=test004-39, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,049] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-39, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=39, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (825/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,052] INFO [LogLoader partition=test005-39, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,053] INFO Completed load of Log(dir=/data01/kafka-logs-351/test005-39, topicId=9RG-T8tRSXCazONSh51F7A, topic=test005, partition=39, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (826/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,061] INFO [LogLoader partition=test004-105, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,062] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-105, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=105, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 10ms (827/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,065] INFO [LogLoader partition=test004-171, 
dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,066] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-171, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=171, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 3ms (828/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,069] INFO [LogLoader partition=test004-567, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,070] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-567, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=567, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (829/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,073] INFO [LogLoader partition=test004-699, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,073] INFO Completed load of Log(dir=/data01/kafka-logs-351/test004-699, topicId=EZpo1lPpS5G61Tn51H0vcA, topic=test004, partition=699, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments in 4ms (830/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,076] INFO [LogLoader partition=test008-1, dir=/data01/kafka-logs-351] Loading producer state till offset 1811968 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,076] INFO [LogLoader partition=test008-1, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 1811968 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,076] INFO Deleted producer state snapshot /data01/kafka-logs-351/test008-1/00000000000001811968.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,076] INFO [LogLoader partition=test008-1, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 1811968 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,077] INFO Completed load of Log(dir=/data01/kafka-logs-351/test008-1, topicId=qHMmNAMPRCWpKMG9jH52Og, topic=test008, partition=1, highWatermark=1811968, lastStableOffset=1811968, logStartOffset=1811968, logEndOffset=1811968) with 1 segments in 4ms (831/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,080] INFO [LogLoader partition=test008-5, dir=/data01/kafka-logs-351] Loading producer state till offset 1812132 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,080] INFO [LogLoader partition=test008-5, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 1812132 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,080] INFO Deleted producer state snapshot /data01/kafka-logs-351/test008-5/00000000000001812132.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,080] INFO [LogLoader partition=test008-5, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 1812132 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,081] INFO Completed load of Log(dir=/data01/kafka-logs-351/test008-5, topicId=qHMmNAMPRCWpKMG9jH52Og, topic=test008, partition=5, highWatermark=1812132, lastStableOffset=1812132, 
logStartOffset=1812132, logEndOffset=1812132) with 1 segments in 4ms (832/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,084] INFO [LogLoader partition=test008-20, dir=/data01/kafka-logs-351] Loading producer state till offset 1811633 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,084] INFO [LogLoader partition=test008-20, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 1811633 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,084] INFO Deleted producer state snapshot /data01/kafka-logs-351/test008-20/00000000000001811633.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,084] INFO [LogLoader partition=test008-20, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 1811633 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,085] INFO Completed load of Log(dir=/data01/kafka-logs-351/test008-20, topicId=qHMmNAMPRCWpKMG9jH52Og, topic=test008, partition=20, highWatermark=1811633, lastStableOffset=1811633, logStartOffset=1811633, logEndOffset=1811633) with 1 segments in 4ms (833/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,093] INFO [LogLoader partition=test008-7, dir=/data01/kafka-logs-351] Loading producer state till offset 1811600 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,093] INFO [LogLoader partition=test008-7, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 1811600 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,093] INFO Deleted producer state snapshot /data01/kafka-logs-351/test008-7/00000000000001811600.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,093] INFO [LogLoader partition=test008-7, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 1811600 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,094] INFO Completed load of Log(dir=/data01/kafka-logs-351/test008-7, topicId=qHMmNAMPRCWpKMG9jH52Og, topic=test008, partition=7, highWatermark=1811600, lastStableOffset=1811600, logStartOffset=1811600, logEndOffset=1811600) with 1 segments in 9ms (834/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,097] INFO [LogLoader partition=test008-23, dir=/data01/kafka-logs-351] Loading producer state till offset 1811514 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,097] INFO [LogLoader partition=test008-23, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 1811514 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,097] INFO Deleted producer state snapshot /data01/kafka-logs-351/test008-23/00000000000001811514.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,097] INFO [LogLoader partition=test008-23, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 1811514 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,098] INFO Completed load of Log(dir=/data01/kafka-logs-351/test008-23, topicId=qHMmNAMPRCWpKMG9jH52Og, topic=test008, partition=23, highWatermark=1811514, lastStableOffset=1811514, logStartOffset=1811514, logEndOffset=1811514) with 1 segments in 4ms (835/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,101] INFO [LogLoader 
partition=test008-25, dir=/data01/kafka-logs-351] Loading producer state till offset 1811917 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,101] INFO [LogLoader partition=test008-25, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 1811917 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,101] INFO Deleted producer state snapshot /data01/kafka-logs-351/test008-25/00000000000001811917.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,101] INFO [LogLoader partition=test008-25, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 1811917 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,106] INFO Completed load of Log(dir=/data01/kafka-logs-351/test008-25, topicId=qHMmNAMPRCWpKMG9jH52Og, topic=test008, partition=25, highWatermark=1811917, lastStableOffset=1811917, logStartOffset=1811917, logEndOffset=1811917) with 1 segments in 8ms (836/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,109] INFO [LogLoader partition=test008-27, dir=/data01/kafka-logs-351] Loading producer state till offset 1811518 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,109] INFO [LogLoader partition=test008-27, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 1811518 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,109] INFO Deleted producer state snapshot /data01/kafka-logs-351/test008-27/00000000000001811518.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,109] INFO [LogLoader partition=test008-27, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 1811518 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,110] INFO Completed load of Log(dir=/data01/kafka-logs-351/test008-27, topicId=qHMmNAMPRCWpKMG9jH52Og, topic=test008, partition=27, highWatermark=1811518, lastStableOffset=1811518, logStartOffset=1811518, logEndOffset=1811518) with 1 segments in 4ms (837/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,113] INFO [LogLoader partition=test008-10, dir=/data01/kafka-logs-351] Loading producer state till offset 1811630 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,113] INFO [LogLoader partition=test008-10, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 1811630 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,113] INFO Deleted producer state snapshot /data01/kafka-logs-351/test008-10/00000000000001811630.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,113] INFO [LogLoader partition=test008-10, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 1811630 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,114] INFO Completed load of Log(dir=/data01/kafka-logs-351/test008-10, topicId=qHMmNAMPRCWpKMG9jH52Og, topic=test008, partition=10, highWatermark=1811630, lastStableOffset=1811630, logStartOffset=1811630, logEndOffset=1811630) with 1 segments in 3ms (838/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,120] INFO [LogLoader partition=test008-13, dir=/data01/kafka-logs-351] Loading producer state till offset 1811426 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,120] INFO 
[LogLoader partition=test008-13, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 1811426 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,121] INFO Deleted producer state snapshot /data01/kafka-logs-351/test008-13/00000000000001811426.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,121] INFO [LogLoader partition=test008-13, dir=/data01/kafka-logs-351] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 1811426 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,121] INFO Completed load of Log(dir=/data01/kafka-logs-351/test008-13, topicId=qHMmNAMPRCWpKMG9jH52Og, topic=test008, partition=13, highWatermark=1811426, lastStableOffset=1811426, logStartOffset=1811426, logEndOffset=1811426) with 1 segments in 8ms (839/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,125] INFO [LogLoader partition=test008-15, dir=/data01/kafka-logs-351] Loading producer state till offset 1811917 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,125] INFO [LogLoader partition=test008-15, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 1811917 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,125] INFO Deleted producer state snapshot /data01/kafka-logs-351/test008-15/00000000000001811917.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,125] INFO [LogLoader partition=test008-15, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 1811917 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,126] INFO Completed load of Log(dir=/data01/kafka-logs-351/test008-15, topicId=qHMmNAMPRCWpKMG9jH52Og, topic=test008, partition=15, highWatermark=1811917, lastStableOffset=1811917, logStartOffset=1811917, logEndOffset=1811917) with 1 segments in 4ms (840/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,128] INFO [LogLoader partition=test009-1, dir=/data01/kafka-logs-351] Loading producer state till offset 628425 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,128] INFO [LogLoader partition=test009-1, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 628425 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,129] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-1/00000000000000628425.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,129] INFO [LogLoader partition=test009-1, dir=/data01/kafka-logs-351] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 628425 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,129] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-1, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=1, highWatermark=628425, lastStableOffset=628425, logStartOffset=628425, logEndOffset=628425) with 1 segments in 4ms (841/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,134] INFO [LogLoader partition=test009-3, dir=/data01/kafka-logs-351] Loading producer state till offset 628455 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,134] INFO [LogLoader partition=test009-3, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 628455 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,134] 
INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-3/00000000000000628455.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,134] INFO [LogLoader partition=test009-3, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 628455 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,135] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-3, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=3, highWatermark=628455, lastStableOffset=628455, logStartOffset=628455, logEndOffset=628455) with 1 segments in 5ms (842/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,142] INFO [LogLoader partition=test009-19, dir=/data01/kafka-logs-351] Loading producer state till offset 628455 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,142] INFO [LogLoader partition=test009-19, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 628455 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,142] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-19/00000000000000628455.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,142] INFO [LogLoader partition=test009-19, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 628455 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,143] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-19, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=19, highWatermark=628455, lastStableOffset=628455, logStartOffset=628455, logEndOffset=628455) with 1 segments in 8ms (843/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,149] INFO [LogLoader partition=test009-22, dir=/data01/kafka-logs-351] Loading producer state till offset 628500 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,149] INFO [LogLoader partition=test009-22, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 628500 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,149] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-22/00000000000000628500.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,149] INFO [LogLoader partition=test009-22, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 628500 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,150] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-22, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=22, highWatermark=628500, lastStableOffset=628500, logStartOffset=628500, logEndOffset=628500) with 1 segments in 7ms (844/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,154] INFO [LogLoader partition=test009-8, dir=/data01/kafka-logs-351] Loading producer state till offset 628425 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,154] INFO [LogLoader partition=test009-8, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 628425 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,154] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-8/00000000000000628425.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,154] INFO [LogLoader 
partition=test009-8, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 628425 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,159] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-8, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=8, highWatermark=628425, lastStableOffset=628425, logStartOffset=628425, logEndOffset=628425) with 1 segments in 10ms (845/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,162] INFO [LogLoader partition=test009-10, dir=/data01/kafka-logs-351] Loading producer state till offset 628485 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,162] INFO [LogLoader partition=test009-10, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 628485 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,162] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-10/00000000000000628485.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,162] INFO [LogLoader partition=test009-10, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 628485 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,163] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-10, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=10, highWatermark=628485, lastStableOffset=628485, logStartOffset=628485, logEndOffset=628485) with 1 segments in 4ms (846/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,166] INFO [LogLoader partition=test009-26, dir=/data01/kafka-logs-351] Loading producer state till offset 628301 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,166] INFO [LogLoader partition=test009-26, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 628301 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,166] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-26/00000000000000628301.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,166] INFO [LogLoader partition=test009-26, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 628301 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,173] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-26, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=26, highWatermark=628301, lastStableOffset=628301, logStartOffset=628301, logEndOffset=628301) with 1 segments in 10ms (847/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,176] INFO [LogLoader partition=test009-12, dir=/data01/kafka-logs-351] Loading producer state till offset 628230 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,176] INFO [LogLoader partition=test009-12, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 628230 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,176] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-12/00000000000000628230.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,176] INFO [LogLoader partition=test009-12, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 628230 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,177] 
INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-12, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=12, highWatermark=628230, lastStableOffset=628230, logStartOffset=628230, logEndOffset=628230) with 1 segments in 3ms (848/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,180] INFO [LogLoader partition=test009-28, dir=/data01/kafka-logs-351] Loading producer state till offset 628425 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,180] INFO [LogLoader partition=test009-28, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 628425 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,180] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-28/00000000000000628425.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,180] INFO [LogLoader partition=test009-28, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 628425 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,181] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-28, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=28, highWatermark=628425, lastStableOffset=628425, logStartOffset=628425, logEndOffset=628425) with 1 segments in 3ms (849/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,186] INFO [LogLoader partition=test009-16, dir=/data01/kafka-logs-351] Loading producer state till offset 628455 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,186] INFO [LogLoader partition=test009-16, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 628455 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,186] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-16/00000000000000628455.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,186] INFO [LogLoader partition=test009-16, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 628455 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,188] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-16, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=16, highWatermark=628455, lastStableOffset=628455, logStartOffset=628455, logEndOffset=628455) with 1 segments in 8ms (850/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,191] INFO [LogLoader partition=test009-18, dir=/data01/kafka-logs-351] Loading producer state till offset 648770 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,191] INFO [LogLoader partition=test009-18, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 648770 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,191] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-18/00000000000000648770.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,191] INFO [LogLoader partition=test009-18, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 648770 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,192] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-18, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=18, highWatermark=648770, lastStableOffset=648770, logStartOffset=648770, 
logEndOffset=648770) with 1 segments in 3ms (851/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,197] INFO [LogLoader partition=test009-5, dir=/data01/kafka-logs-351] Loading producer state till offset 649110 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,197] INFO [LogLoader partition=test009-5, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 649110 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,197] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-5/00000000000000649110.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,197] INFO [LogLoader partition=test009-5, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 649110 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,198] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-5, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=5, highWatermark=649110, lastStableOffset=649110, logStartOffset=649110, logEndOffset=649110) with 1 segments in 6ms (852/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,200] INFO [LogLoader partition=test009-21, dir=/data01/kafka-logs-351] Loading producer state till offset 717951 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,200] INFO [LogLoader partition=test009-21, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 717951 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,200] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-21/00000000000000717951.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,200] INFO [LogLoader partition=test009-21, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 717951 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,201] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-21, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=21, highWatermark=717951, lastStableOffset=717951, logStartOffset=717951, logEndOffset=717951) with 1 segments in 3ms (853/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,209] INFO [LogLoader partition=test009-7, dir=/data01/kafka-logs-351] Loading producer state till offset 648968 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,209] INFO [LogLoader partition=test009-7, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 648968 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,209] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-7/00000000000000648968.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,209] INFO [LogLoader partition=test009-7, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 648968 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,215] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-7, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=7, highWatermark=648968, lastStableOffset=648968, logStartOffset=648968, logEndOffset=648968) with 1 segments in 14ms (854/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,223] INFO [LogLoader partition=test009-9, dir=/data01/kafka-logs-351] Loading 
producer state till offset 648906 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,223] INFO [LogLoader partition=test009-9, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 648906 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,223] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-9/00000000000000648906.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,223] INFO [LogLoader partition=test009-9, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 648906 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,224] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-9, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=9, highWatermark=648906, lastStableOffset=648906, logStartOffset=648906, logEndOffset=648906) with 1 segments in 9ms (855/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,229] INFO [LogLoader partition=test009-25, dir=/data01/kafka-logs-351] Loading producer state till offset 718242 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,229] INFO [LogLoader partition=test009-25, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 718242 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,229] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-25/00000000000000718242.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,229] INFO [LogLoader partition=test009-25, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 718242 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,230] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-25, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=25, highWatermark=718242, lastStableOffset=718242, logStartOffset=718242, logEndOffset=718242) with 1 segments in 6ms (856/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,232] INFO [LogLoader partition=test009-27, dir=/data01/kafka-logs-351] Loading producer state till offset 717640 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,232] INFO [LogLoader partition=test009-27, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 717640 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,232] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-27/00000000000000717640.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,233] INFO [LogLoader partition=test009-27, dir=/data01/kafka-logs-351] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 717640 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,233] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-27, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=27, highWatermark=717640, lastStableOffset=717640, logStartOffset=717640, logEndOffset=717640) with 1 segments in 3ms (857/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,236] INFO [LogLoader partition=test009-14, dir=/data01/kafka-logs-351] Loading producer state till offset 717810 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,236] INFO [LogLoader partition=test009-14, dir=/data01/kafka-logs-351] Reloading from producer 
snapshot and rebuilding producer state from offset 717810 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,236] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-14/00000000000000717810.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,236] INFO [LogLoader partition=test009-14, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 717810 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,238] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-14, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=14, highWatermark=717810, lastStableOffset=717810, logStartOffset=717810, logEndOffset=717810) with 1 segments in 4ms (858/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,247] INFO [LogLoader partition=test009-0, dir=/data01/kafka-logs-351] Loading producer state till offset 718376 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,247] INFO [LogLoader partition=test009-0, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 718376 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,247] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-0/00000000000000718376.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,247] INFO [LogLoader partition=test009-0, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 718376 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,248] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-0, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=0, highWatermark=718376, lastStableOffset=718376, logStartOffset=718376, logEndOffset=718376) with 1 segments in 10ms (859/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,257] INFO [LogLoader partition=test009-15, dir=/data01/kafka-logs-351] Loading producer state till offset 648548 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,257] INFO [LogLoader partition=test009-15, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 648548 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,257] INFO Deleted producer state snapshot /data01/kafka-logs-351/test009-15/00000000000000648548.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,257] INFO [LogLoader partition=test009-15, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 648548 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,258] INFO Completed load of Log(dir=/data01/kafka-logs-351/test009-15, topicId=g95oe921S86FCGM2NqB23w, topic=test009, partition=15, highWatermark=648548, lastStableOffset=648548, logStartOffset=648548, logEndOffset=648548) with 1 segments in 10ms (860/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,264] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-8/00000000000002130622.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,265] INFO [LogLoader partition=test010-8, dir=/data01/kafka-logs-351] Loading producer state till offset 2899492 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,265] INFO [LogLoader partition=test010-8, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding 
producer state from offset 2899492 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,265] INFO [ProducerStateManager partition=test010-8]Loading producer state from snapshot file 'SnapshotFile(offset=2899492, file=/data01/kafka-logs-351/test010-8/00000000000002899492.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,265] INFO [LogLoader partition=test010-8, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2899492 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,266] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-8, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=8, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2899492) with 3 segments in 7ms (861/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,271] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-10/00000000000002130576.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,271] INFO [LogLoader partition=test010-10, dir=/data01/kafka-logs-351] Loading producer state till offset 2899538 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,271] INFO [LogLoader partition=test010-10, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2899538 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,271] INFO [ProducerStateManager partition=test010-10]Loading producer state from snapshot file 'SnapshotFile(offset=2899538, file=/data01/kafka-logs-351/test010-10/00000000000002899538.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,271] INFO [LogLoader partition=test010-10, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2899538 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,272] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-10, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=10, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2899538) with 3 segments in 5ms (862/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,277] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-26/00000000000002130986.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,277] INFO [LogLoader partition=test010-26, dir=/data01/kafka-logs-351] Loading producer state till offset 2899886 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,277] INFO [LogLoader partition=test010-26, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2899886 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,277] INFO [ProducerStateManager partition=test010-26]Loading producer state from snapshot file 'SnapshotFile(offset=2899886, file=/data01/kafka-logs-351/test010-26/00000000000002899886.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,277] INFO [LogLoader partition=test010-26, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2899886 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,278] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-26, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=26, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2899886) 
with 3 segments in 6ms (863/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,283] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-14/00000000000002130878.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,283] INFO [LogLoader partition=test010-14, dir=/data01/kafka-logs-351] Loading producer state till offset 2899808 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,283] INFO [LogLoader partition=test010-14, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2899808 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,283] INFO [ProducerStateManager partition=test010-14]Loading producer state from snapshot file 'SnapshotFile(offset=2899808, file=/data01/kafka-logs-351/test010-14/00000000000002899808.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,283] INFO [LogLoader partition=test010-14, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2899808 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,284] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-14, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=14, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2899808) with 3 segments in 6ms (864/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,289] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-29/00000000000002130559.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,289] INFO [LogLoader partition=test010-29, dir=/data01/kafka-logs-351] Loading producer state till offset 2899474 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,289] INFO [LogLoader partition=test010-29, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2899474 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,289] INFO [ProducerStateManager partition=test010-29]Loading producer state from snapshot file 'SnapshotFile(offset=2899474, file=/data01/kafka-logs-351/test010-29/00000000000002899474.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,289] INFO [LogLoader partition=test010-29, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2899474 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,290] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-29, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=29, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2899474) with 3 segments in 6ms (865/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,295] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-2/00000000000002130742.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,295] INFO [LogLoader partition=test010-2, dir=/data01/kafka-logs-351] Loading producer state till offset 2899642 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,295] INFO [LogLoader partition=test010-2, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2899642 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,295] INFO [ProducerStateManager partition=test010-2]Loading producer state from snapshot file 
'SnapshotFile(offset=2899642, file=/data01/kafka-logs-351/test010-2/00000000000002899642.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,295] INFO [LogLoader partition=test010-2, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2899642 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,297] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-2, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=2, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2899642) with 3 segments in 6ms (866/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,301] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-17/00000000000002130559.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,302] INFO [LogLoader partition=test010-17, dir=/data01/kafka-logs-351] Loading producer state till offset 2899399 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,302] INFO [LogLoader partition=test010-17, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2899399 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,302] INFO [ProducerStateManager partition=test010-17]Loading producer state from snapshot file 'SnapshotFile(offset=2899399, file=/data01/kafka-logs-351/test010-17/00000000000002899399.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,302] INFO [LogLoader partition=test010-17, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2899399 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,302] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-17, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=17, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2899399) with 3 segments in 6ms (867/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,309] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-20/00000000000002130896.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,309] INFO [LogLoader partition=test010-20, dir=/data01/kafka-logs-351] Loading producer state till offset 2899796 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,309] INFO [LogLoader partition=test010-20, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2899796 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,309] INFO [ProducerStateManager partition=test010-20]Loading producer state from snapshot file 'SnapshotFile(offset=2899796, file=/data01/kafka-logs-351/test010-20/00000000000002899796.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,309] INFO [LogLoader partition=test010-20, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2899796 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,310] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-20, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=20, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2899796) with 3 segments in 7ms (868/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,314] INFO Deleted producer state snapshot 
/data01/kafka-logs-351/test010-5/00000000000002130249.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,314] INFO [LogLoader partition=test010-5, dir=/data01/kafka-logs-351] Loading producer state till offset 2899269 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,314] INFO [LogLoader partition=test010-5, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2899269 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,314] INFO [ProducerStateManager partition=test010-5]Loading producer state from snapshot file 'SnapshotFile(offset=2899269, file=/data01/kafka-logs-351/test010-5/00000000000002899269.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,314] INFO [LogLoader partition=test010-5, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2899269 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,315] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-5, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=5, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2899269) with 3 segments in 5ms (869/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,347] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-21/00000000000002131065.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,348] INFO [LogLoader partition=test010-21, dir=/data01/kafka-logs-351] Loading producer state till offset 2900010 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,348] INFO [LogLoader partition=test010-21, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2900010 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,348] INFO [ProducerStateManager partition=test010-21]Loading producer state from snapshot file 'SnapshotFile(offset=2900010, file=/data01/kafka-logs-351/test010-21/00000000000002900010.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,348] INFO [LogLoader partition=test010-21, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2900010 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,349] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-21, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=21, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2900010) with 3 segments in 34ms (870/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,354] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-7/00000000000001913111.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,355] INFO [LogLoader partition=test010-7, dir=/data01/kafka-logs-351] Loading producer state till offset 2695181 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,355] INFO [LogLoader partition=test010-7, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2695181 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,355] INFO [ProducerStateManager partition=test010-7]Loading producer state from snapshot file 'SnapshotFile(offset=2695181, file=/data01/kafka-logs-351/test010-7/00000000000002695181.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) 
[2023-08-08 16:07:50,355] INFO [LogLoader partition=test010-7, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2695181 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,355] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-7, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=7, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2695181) with 3 segments in 7ms (871/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,361] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-23/00000000000001954239.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,361] INFO [LogLoader partition=test010-23, dir=/data01/kafka-logs-351] Loading producer state till offset 2723139 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,361] INFO [LogLoader partition=test010-23, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2723139 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,361] INFO [ProducerStateManager partition=test010-23]Loading producer state from snapshot file 'SnapshotFile(offset=2723139, file=/data01/kafka-logs-351/test010-23/00000000000002723139.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,361] INFO [LogLoader partition=test010-23, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2723139 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,362] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-23, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=23, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2723139) with 3 segments in 7ms (872/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,368] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-9/00000000000001954872.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,368] INFO [LogLoader partition=test010-9, dir=/data01/kafka-logs-351] Loading producer state till offset 2723772 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,368] INFO [LogLoader partition=test010-9, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2723772 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,368] INFO [ProducerStateManager partition=test010-9]Loading producer state from snapshot file 'SnapshotFile(offset=2723772, file=/data01/kafka-logs-351/test010-9/00000000000002723772.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,368] INFO [LogLoader partition=test010-9, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2723772 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,369] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-9, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=9, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2723772) with 3 segments in 6ms (873/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,373] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-25/00000000000001914896.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,373] INFO [LogLoader partition=test010-25, 
dir=/data01/kafka-logs-351] Loading producer state till offset 2696981 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,373] INFO [LogLoader partition=test010-25, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2696981 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,373] INFO [ProducerStateManager partition=test010-25]Loading producer state from snapshot file 'SnapshotFile(offset=2696981, file=/data01/kafka-logs-351/test010-25/00000000000002696981.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,373] INFO [LogLoader partition=test010-25, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2696981 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,374] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-25, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=25, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2696981) with 3 segments in 5ms (874/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,378] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-28/00000000000001913038.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,378] INFO [LogLoader partition=test010-28, dir=/data01/kafka-logs-351] Loading producer state till offset 2695048 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,378] INFO [LogLoader partition=test010-28, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2695048 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,378] INFO [ProducerStateManager partition=test010-28]Loading producer state from snapshot file 'SnapshotFile(offset=2695048, file=/data01/kafka-logs-351/test010-28/00000000000002695048.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,378] INFO [LogLoader partition=test010-28, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2695048 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,379] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-28, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=28, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2695048) with 3 segments in 5ms (875/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,383] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-13/00000000000001915947.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,384] INFO [LogLoader partition=test010-13, dir=/data01/kafka-logs-351] Loading producer state till offset 2698137 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,384] INFO [LogLoader partition=test010-13, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2698137 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,384] INFO [ProducerStateManager partition=test010-13]Loading producer state from snapshot file 'SnapshotFile(offset=2698137, file=/data01/kafka-logs-351/test010-13/00000000000002698137.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,384] INFO [LogLoader partition=test010-13, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from 
offset 2698137 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,384] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-13, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=13, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2698137) with 3 segments in 5ms (876/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,389] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-16/00000000000001954880.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,389] INFO [LogLoader partition=test010-16, dir=/data01/kafka-logs-351] Loading producer state till offset 2723705 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,389] INFO [LogLoader partition=test010-16, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2723705 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,389] INFO [ProducerStateManager partition=test010-16]Loading producer state from snapshot file 'SnapshotFile(offset=2723705, file=/data01/kafka-logs-351/test010-16/00000000000002723705.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,389] INFO [LogLoader partition=test010-16, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2723705 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,389] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-16, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=16, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2723705) with 3 segments in 5ms (877/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,395] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-1/00000000000001956060.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,395] INFO [LogLoader partition=test010-1, dir=/data01/kafka-logs-351] Loading producer state till offset 2724990 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,395] INFO [LogLoader partition=test010-1, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2724990 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,395] INFO [ProducerStateManager partition=test010-1]Loading producer state from snapshot file 'SnapshotFile(offset=2724990, file=/data01/kafka-logs-351/test010-1/00000000000002724990.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,395] INFO [LogLoader partition=test010-1, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2724990 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,396] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-1, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=1, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2724990) with 3 segments in 6ms (878/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,405] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-4/00000000000001954923.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,405] INFO [LogLoader partition=test010-4, dir=/data01/kafka-logs-351] Loading producer state till offset 2723838 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,405] INFO [LogLoader 
partition=test010-4, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2723838 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,405] INFO [ProducerStateManager partition=test010-4]Loading producer state from snapshot file 'SnapshotFile(offset=2723838, file=/data01/kafka-logs-351/test010-4/00000000000002723838.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,405] INFO [LogLoader partition=test010-4, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2723838 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,406] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-4, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=4, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2723838) with 3 segments in 10ms (879/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,413] INFO Deleted producer state snapshot /data01/kafka-logs-351/test010-19/00000000000001955326.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [2023-08-08 16:07:50,413] INFO [LogLoader partition=test010-19, dir=/data01/kafka-logs-351] Loading producer state till offset 2724227 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,413] INFO [LogLoader partition=test010-19, dir=/data01/kafka-logs-351] Reloading from producer snapshot and rebuilding producer state from offset 2724227 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,413] INFO [ProducerStateManager partition=test010-19]Loading producer state from snapshot file 'SnapshotFile(offset=2724227, file=/data01/kafka-logs-351/test010-19/00000000000002724227.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:07:50,413] INFO [LogLoader partition=test010-19, dir=/data01/kafka-logs-351] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 2724227 (kafka.log.UnifiedLog$) [2023-08-08 16:07:50,414] INFO Completed load of Log(dir=/data01/kafka-logs-351/test010-19, topicId=KRrkky6_Qwi605E4lIfOgw, topic=test010, partition=19, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=2724227) with 3 segments in 7ms (880/880 completed in /data01/kafka-logs-351) (kafka.log.LogManager) [2023-08-08 16:07:50,419] INFO Loaded 880 logs in 6936ms (kafka.log.LogManager) [2023-08-08 16:07:50,420] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) [2023-08-08 16:07:50,421] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) [2023-08-08 16:07:50,640] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) [2023-08-08 16:07:50,655] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) [2023-08-08 16:07:50,659] INFO [GroupCoordinator 2]: Starting up. (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:50,661] INFO [AddPartitionsToTxnSenderThread-2]: Starting (kafka.server.AddPartitionsToTxnManager) [2023-08-08 16:07:50,668] INFO [GroupCoordinator 2]: Startup complete. (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:50,669] INFO [TransactionCoordinator id=2] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) [2023-08-08 16:07:50,671] INFO [TransactionCoordinator id=2] Startup complete. 
(kafka.coordinator.transaction.TransactionCoordinator) [2023-08-08 16:07:50,674] INFO [BrokerMetadataPublisher id=2] Updating metadata.version to 11 at offset OffsetAndEpoch(offset=1962805, epoch=1892). (kafka.server.metadata.BrokerMetadataPublisher) [2023-08-08 16:07:50,675] INFO [TxnMarkerSenderThread-2]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) [2023-08-08 16:07:50,718] INFO [Partition test004-620 broker=2] Log loaded for partition test004-620 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,720] INFO [Partition test004-686 broker=2] Log loaded for partition test004-686 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,721] INFO [Partition test004-356 broker=2] Log loaded for partition test004-356 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,721] INFO [Partition test004-488 broker=2] Log loaded for partition test004-488 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,722] INFO [Partition test004-554 broker=2] Log loaded for partition test004-554 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,722] INFO [Partition test004-157 broker=2] Log loaded for partition test004-157 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,723] INFO [Partition test005-157 broker=2] Log loaded for partition test005-157 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,724] INFO [Partition test123-58 broker=2] Log loaded for partition test123-58 with initial high watermark 257789 (kafka.cluster.Partition) [2023-08-08 16:07:50,724] INFO [Partition test004-223 broker=2] Log loaded for partition test004-223 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,725] INFO [Partition __consumer_offsets-30 broker=2] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,725] INFO [Partition test005-223 broker=2] Log loaded for partition test005-223 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,726] INFO [Partition test004-289 broker=2] Log loaded for partition test004-289 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,726] INFO [Partition test004-355 broker=2] Log loaded for partition test004-355 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,727] INFO [Partition test005-355 broker=2] Log loaded for partition test005-355 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,728] INFO [Partition test008-25 broker=2] Log loaded for partition test008-25 with initial high watermark 1811917 (kafka.cluster.Partition) [2023-08-08 16:07:50,728] INFO [Partition test009-25 broker=2] Log loaded for partition test009-25 with initial high watermark 718242 (kafka.cluster.Partition) [2023-08-08 16:07:50,729] INFO [Partition test004-25 broker=2] Log loaded for partition test004-25 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,729] INFO [Partition test005-25 broker=2] Log loaded for partition test005-25 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,730] INFO [Partition test004-91 broker=2] Log loaded for partition test004-91 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,730] INFO [Partition test004-421 broker=2] Log loaded for partition test004-421 with initial high watermark 0 (kafka.cluster.Partition) 
[2023-08-08 16:07:50,731] INFO [Partition test004-487 broker=2] Log loaded for partition test004-487 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,732] INFO [Partition test010-23 broker=2] Log loaded for partition test010-23 with initial high watermark 2723139 (kafka.cluster.Partition) [2023-08-08 16:07:50,732] INFO [Partition __consumer_offsets-29 broker=2] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,733] INFO [Partition test005-288 broker=2] Log loaded for partition test005-288 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,733] INFO [Partition test004-222 broker=2] Log loaded for partition test004-222 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,734] INFO [Partition test005-354 broker=2] Log loaded for partition test005-354 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,734] INFO [Partition test004-288 broker=2] Log loaded for partition test004-288 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,734] INFO [Partition test005-90 broker=2] Log loaded for partition test005-90 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,735] INFO [Partition test004-24 broker=2] Log loaded for partition test004-24 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,736] INFO [Partition test004-556 broker=2] Log loaded for partition test004-556 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,736] INFO [Partition test004-622 broker=2] Log loaded for partition test004-622 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,737] INFO [Partition test005-358 broker=2] Log loaded for partition test005-358 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,738] INFO [Partition test004-292 broker=2] Log loaded for partition test004-292 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,738] INFO [Partition test004-358 broker=2] Log loaded for partition test004-358 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,739] INFO [Partition test004-424 broker=2] Log loaded for partition test004-424 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,739] INFO [Partition test004-93 broker=2] Log loaded for partition test004-93 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,740] INFO [Partition __consumer_offsets-32 broker=2] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,741] INFO [Partition test005-93 broker=2] Log loaded for partition test005-93 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,741] INFO [Partition test010-26 broker=2] Log loaded for partition test010-26 with initial high watermark 2899886 (kafka.cluster.Partition) [2023-08-08 16:07:50,742] INFO [Partition test004-159 broker=2] Log loaded for partition test004-159 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,742] INFO [Partition test005-159 broker=2] Log loaded for partition test005-159 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,743] INFO [Partition test004-225 broker=2] Log loaded for partition test004-225 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,743] INFO [Partition test005-225 
broker=2] Log loaded for partition test005-225 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,744] INFO [Partition test004-291 broker=2] Log loaded for partition test004-291 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,745] INFO [Partition test005-291 broker=2] Log loaded for partition test005-291 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,745] INFO [Partition test008-27 broker=2] Log loaded for partition test008-27 with initial high watermark 1811518 (kafka.cluster.Partition) [2023-08-08 16:07:50,745] INFO [Partition test009-27 broker=2] Log loaded for partition test009-27 with initial high watermark 717640 (kafka.cluster.Partition) [2023-08-08 16:07:50,746] INFO [Partition test004-27 broker=2] Log loaded for partition test004-27 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,747] INFO [Partition test005-27 broker=2] Log loaded for partition test005-27 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,747] INFO [Partition test004-621 broker=2] Log loaded for partition test004-621 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,748] INFO [Partition test004-687 broker=2] Log loaded for partition test004-687 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,748] INFO [Partition test-0 broker=2] Log loaded for partition test-0 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,749] INFO [Partition test004-357 broker=2] Log loaded for partition test004-357 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,749] INFO [Partition test005-357 broker=2] Log loaded for partition test005-357 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,750] INFO [Partition test004-489 broker=2] Log loaded for partition test004-489 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,750] INFO [Partition __consumer_offsets-31 broker=2] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,751] INFO [Partition test005-158 broker=2] Log loaded for partition test005-158 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,751] INFO [Partition test123-59 broker=2] Log loaded for partition test123-59 with initial high watermark 294075 (kafka.cluster.Partition) [2023-08-08 16:07:50,752] INFO [Partition test004-92 broker=2] Log loaded for partition test004-92 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,753] INFO [Partition test005-224 broker=2] Log loaded for partition test005-224 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,753] INFO [Partition test010-25 broker=2] Log loaded for partition test010-25 with initial high watermark 2696981 (kafka.cluster.Partition) [2023-08-08 16:07:50,754] INFO [Partition test004-158 broker=2] Log loaded for partition test004-158 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,754] INFO [Partition test005-290 broker=2] Log loaded for partition test005-290 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,755] INFO [Partition test009-26 broker=2] Log loaded for partition test009-26 with initial high watermark 628301 (kafka.cluster.Partition) [2023-08-08 16:07:50,756] INFO [Partition test005-26 broker=2] Log loaded for partition test005-26 with initial high watermark 0 
(kafka.cluster.Partition) [2023-08-08 16:07:50,756] INFO [Partition test005-92 broker=2] Log loaded for partition test005-92 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,757] INFO [Partition test004-558 broker=2] Log loaded for partition test004-558 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,757] INFO [Partition test004-624 broker=2] Log loaded for partition test004-624 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,758] INFO [Partition test-3 broker=2] Log loaded for partition test-3 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,758] INFO [Partition test005-294 broker=2] Log loaded for partition test005-294 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,759] INFO [Partition test004-228 broker=2] Log loaded for partition test004-228 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,759] INFO [Partition test004-294 broker=2] Log loaded for partition test004-294 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,760] INFO [Partition test004-426 broker=2] Log loaded for partition test004-426 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,760] INFO [Partition test010-28 broker=2] Log loaded for partition test010-28 with initial high watermark 2695048 (kafka.cluster.Partition) [2023-08-08 16:07:50,761] INFO [Partition test004-161 broker=2] Log loaded for partition test004-161 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,762] INFO [Partition test005-161 broker=2] Log loaded for partition test005-161 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,762] INFO [Partition test004-227 broker=2] Log loaded for partition test004-227 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,763] INFO [Partition __consumer_offsets-34 broker=2] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,763] INFO [Partition test004-557 broker=2] Log loaded for partition test004-557 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,764] INFO [Partition test004-689 broker=2] Log loaded for partition test004-689 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,765] INFO [Partition test-2 broker=2] Log loaded for partition test-2 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,765] INFO [Partition test005-293 broker=2] Log loaded for partition test005-293 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,766] INFO [Partition test004-425 broker=2] Log loaded for partition test004-425 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,766] INFO [Partition test004-491 broker=2] Log loaded for partition test004-491 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,767] INFO [Partition test005-94 broker=2] Log loaded for partition test005-94 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,767] INFO [Partition test004-28 broker=2] Log loaded for partition test004-28 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,768] INFO [Partition test004-94 broker=2] Log loaded for partition test004-94 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,768] INFO [Partition test005-226 broker=2] Log loaded for 
partition test005-226 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,769] INFO [Partition test009-28 broker=2] Log loaded for partition test009-28 with initial high watermark 628425 (kafka.cluster.Partition) [2023-08-08 16:07:50,769] INFO [Partition __consumer_offsets-33 broker=2] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,770] INFO [Partition test005-28 broker=2] Log loaded for partition test005-28 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,770] INFO [Partition test004-494 broker=2] Log loaded for partition test004-494 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,771] INFO [Partition test004-560 broker=2] Log loaded for partition test004-560 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,772] INFO [Partition test004-230 broker=2] Log loaded for partition test004-230 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,772] INFO [Partition test004-296 broker=2] Log loaded for partition test004-296 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,772] INFO [Partition test004-362 broker=2] Log loaded for partition test004-362 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,773] INFO [Partition test004-31 broker=2] Log loaded for partition test004-31 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,773] INFO [Partition test005-31 broker=2] Log loaded for partition test005-31 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,774] INFO [Partition test005-97 broker=2] Log loaded for partition test005-97 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,775] INFO [Partition test004-163 broker=2] Log loaded for partition test004-163 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,775] INFO [Partition test005-163 broker=2] Log loaded for partition test005-163 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,775] INFO [Partition test004-692 broker=2] Log loaded for partition test004-692 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,776] INFO [Partition __consumer_offsets-36 broker=2] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,776] INFO [Partition test004-493 broker=2] Log loaded for partition test004-493 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,777] INFO [Partition test004-625 broker=2] Log loaded for partition test004-625 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,777] INFO [Partition test004-691 broker=2] Log loaded for partition test004-691 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,777] INFO [Partition test005-229 broker=2] Log loaded for partition test005-229 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,778] INFO [Partition test005-295 broker=2] Log loaded for partition test005-295 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,778] INFO [Partition test004-361 broker=2] Log loaded for partition test004-361 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,779] INFO [Partition test004-427 broker=2] Log loaded for partition test004-427 with initial high watermark 0 
(kafka.cluster.Partition) [2023-08-08 16:07:50,779] INFO [Partition test005-96 broker=2] Log loaded for partition test005-96 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,780] INFO [Partition test005-162 broker=2] Log loaded for partition test005-162 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,780] INFO [Partition test004-96 broker=2] Log loaded for partition test004-96 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,780] INFO [Partition test005-228 broker=2] Log loaded for partition test005-228 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,781] INFO [Partition test010-29 broker=2] Log loaded for partition test010-29 with initial high watermark 2899474 (kafka.cluster.Partition) [2023-08-08 16:07:50,781] INFO [Partition test004-162 broker=2] Log loaded for partition test004-162 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,782] INFO [Partition test-4 broker=2] Log loaded for partition test-4 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,782] INFO [Partition __consumer_offsets-35 broker=2] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,783] INFO [Partition test004-17 broker=2] Log loaded for partition test004-17 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,783] INFO [Partition test005-17 broker=2] Log loaded for partition test005-17 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,784] INFO [Partition test004-83 broker=2] Log loaded for partition test004-83 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,784] INFO [Partition test005-83 broker=2] Log loaded for partition test005-83 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,784] INFO [Partition test004-612 broker=2] Log loaded for partition test004-612 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,785] INFO [Partition test004-678 broker=2] Log loaded for partition test004-678 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,785] INFO [Partition test004-413 broker=2] Log loaded for partition test004-413 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,786] INFO [Partition test004-479 broker=2] Log loaded for partition test004-479 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,786] INFO [Partition test004-545 broker=2] Log loaded for partition test004-545 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,787] INFO [Partition test010-16 broker=2] Log loaded for partition test010-16 with initial high watermark 2723705 (kafka.cluster.Partition) [2023-08-08 16:07:50,788] INFO [Partition test004-215 broker=2] Log loaded for partition test004-215 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,788] INFO [Partition test-7 broker=2] Log loaded for partition test-7 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,789] INFO [Partition __consumer_offsets-38 broker=2] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,789] INFO [Partition test005-215 broker=2] Log loaded for partition test005-215 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,789] INFO [Partition test004-281 
broker=2] Log loaded for partition test004-281 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,790] INFO [Partition test005-281 broker=2] Log loaded for partition test005-281 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,790] INFO [Partition test004-347 broker=2] Log loaded for partition test004-347 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,791] INFO [Partition test005-347 broker=2] Log loaded for partition test005-347 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,791] INFO [Partition test005-16 broker=2] Log loaded for partition test005-16 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,792] INFO [Partition test004-16 broker=2] Log loaded for partition test004-16 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,792] INFO [Partition test005-148 broker=2] Log loaded for partition test005-148 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,792] INFO [Partition test123-49 broker=2] Log loaded for partition test123-49 with initial high watermark 293879 (kafka.cluster.Partition) [2023-08-08 16:07:50,793] INFO [Partition test004-677 broker=2] Log loaded for partition test004-677 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,793] INFO [Partition test009-16 broker=2] Log loaded for partition test009-16 with initial high watermark 628455 (kafka.cluster.Partition) [2023-08-08 16:07:50,794] INFO [Partition test004-544 broker=2] Log loaded for partition test004-544 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,794] INFO [Partition test004-610 broker=2] Log loaded for partition test004-610 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,794] INFO [Partition test004-148 broker=2] Log loaded for partition test004-148 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,795] INFO [Partition __consumer_offsets-37 broker=2] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,795] INFO [Partition test-6 broker=2] Log loaded for partition test-6 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,796] INFO [Partition test005-346 broker=2] Log loaded for partition test005-346 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,796] INFO [Partition test004-346 broker=2] Log loaded for partition test004-346 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,796] INFO [Partition test009-19 broker=2] Log loaded for partition test009-19 with initial high watermark 628455 (kafka.cluster.Partition) [2023-08-08 16:07:50,797] INFO [Partition test005-19 broker=2] Log loaded for partition test005-19 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,797] INFO [Partition test004-548 broker=2] Log loaded for partition test004-548 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,798] INFO [Partition test004-349 broker=2] Log loaded for partition test004-349 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,798] INFO [Partition test005-349 broker=2] Log loaded for partition test005-349 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,799] INFO [Partition test004-85 broker=2] Log loaded for partition test004-85 with initial high watermark 0 
(kafka.cluster.Partition) [2023-08-08 16:07:50,799] INFO [Partition __consumer_offsets-40 broker=2] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,800] INFO [Partition test004-151 broker=2] Log loaded for partition test004-151 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,800] INFO [Partition test005-151 broker=2] Log loaded for partition test005-151 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,800] INFO [Partition test-9 broker=2] Log loaded for partition test-9 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,801] INFO [Partition test005-217 broker=2] Log loaded for partition test005-217 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,801] INFO [Partition test004-283 broker=2] Log loaded for partition test004-283 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,801] INFO [Partition test009-18 broker=2] Log loaded for partition test009-18 with initial high watermark 648770 (kafka.cluster.Partition) [2023-08-08 16:07:50,802] INFO [Partition test005-84 broker=2] Log loaded for partition test005-84 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,802] INFO [Partition test004-18 broker=2] Log loaded for partition test004-18 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,803] INFO [Partition test004-613 broker=2] Log loaded for partition test004-613 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,803] INFO [Partition test004-679 broker=2] Log loaded for partition test004-679 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,804] INFO [Partition test004-348 broker=2] Log loaded for partition test004-348 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,804] INFO [Partition test004-414 broker=2] Log loaded for partition test004-414 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,805] INFO [Partition test004-480 broker=2] Log loaded for partition test004-480 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,805] INFO [Partition test004-546 broker=2] Log loaded for partition test004-546 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,805] INFO [Partition __consumer_offsets-39 broker=2] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,806] INFO [Partition test123-51 broker=2] Log loaded for partition test123-51 with initial high watermark 294130 (kafka.cluster.Partition) [2023-08-08 16:07:50,807] INFO [Partition test010-17 broker=2] Log loaded for partition test010-17 with initial high watermark 2899399 (kafka.cluster.Partition) [2023-08-08 16:07:50,807] INFO [Partition test005-282 broker=2] Log loaded for partition test005-282 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,808] INFO [Partition test004-216 broker=2] Log loaded for partition test004-216 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,808] INFO [Partition test009-21 broker=2] Log loaded for partition test009-21 with initial high watermark 717951 (kafka.cluster.Partition) [2023-08-08 16:07:50,808] INFO [Partition __consumer_offsets-42 broker=2] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 
16:07:50,809] INFO [Partition test004-484 broker=2] Log loaded for partition test004-484 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,809] INFO [Partition test004-682 broker=2] Log loaded for partition test004-682 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,810] INFO [Partition test004-285 broker=2] Log loaded for partition test004-285 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,810] INFO [Partition test005-285 broker=2] Log loaded for partition test005-285 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,811] INFO [Partition test004-351 broker=2] Log loaded for partition test004-351 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,811] INFO [Partition test004-483 broker=2] Log loaded for partition test004-483 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,812] INFO [Partition test004-21 broker=2] Log loaded for partition test004-21 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,812] INFO [Partition test005-21 broker=2] Log loaded for partition test005-21 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,812] INFO [Partition test005-87 broker=2] Log loaded for partition test005-87 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,813] INFO [Partition test010-20 broker=2] Log loaded for partition test010-20 with initial high watermark 2899796 (kafka.cluster.Partition) [2023-08-08 16:07:50,813] INFO [Partition test123-54 broker=2] Log loaded for partition test123-54 with initial high watermark 184725 (kafka.cluster.Partition) [2023-08-08 16:07:50,814] INFO [Partition test004-219 broker=2] Log loaded for partition test004-219 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,814] INFO [Partition test-11 broker=2] Log loaded for partition test-11 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,815] INFO [Partition test005-219 broker=2] Log loaded for partition test005-219 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,815] INFO [Partition test008-20 broker=2] Log loaded for partition test008-20 with initial high watermark 1811633 (kafka.cluster.Partition) [2023-08-08 16:07:50,816] INFO [Partition __consumer_offsets-41 broker=2] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,816] INFO [Partition test005-20 broker=2] Log loaded for partition test005-20 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,816] INFO [Partition test004-549 broker=2] Log loaded for partition test004-549 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,817] INFO [Partition test004-615 broker=2] Log loaded for partition test004-615 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,818] INFO [Partition test005-350 broker=2] Log loaded for partition test005-350 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,818] INFO [Partition test004-284 broker=2] Log loaded for partition test004-284 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,819] INFO [Partition test004-416 broker=2] Log loaded for partition test004-416 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,819] INFO [Partition test004-482 broker=2] Log loaded for partition 
test004-482 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,820] INFO [Partition test005-86 broker=2] Log loaded for partition test005-86 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,820] INFO [Partition test004-20 broker=2] Log loaded for partition test004-20 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,820] INFO [Partition test005-152 broker=2] Log loaded for partition test005-152 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,821] INFO [Partition test123-53 broker=2] Log loaded for partition test123-53 with initial high watermark 259806 (kafka.cluster.Partition) [2023-08-08 16:07:50,821] INFO [Partition test004-86 broker=2] Log loaded for partition test004-86 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,822] INFO [Partition test005-218 broker=2] Log loaded for partition test005-218 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,822] INFO [Partition test010-19 broker=2] Log loaded for partition test010-19 with initial high watermark 2724227 (kafka.cluster.Partition) [2023-08-08 16:07:50,822] INFO [Partition test004-152 broker=2] Log loaded for partition test004-152 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,823] INFO [Partition test005-284 broker=2] Log loaded for partition test005-284 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,823] INFO [Partition test004-218 broker=2] Log loaded for partition test004-218 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,824] INFO [Partition test004-684 broker=2] Log loaded for partition test004-684 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,824] INFO [Partition __consumer_offsets-44 broker=2] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,824] INFO [Partition test008-23 broker=2] Log loaded for partition test008-23 with initial high watermark 1811514 (kafka.cluster.Partition) [2023-08-08 16:07:50,825] INFO [Partition test004-420 broker=2] Log loaded for partition test004-420 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,825] INFO [Partition test004-552 broker=2] Log loaded for partition test004-552 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,826] INFO [Partition test004-618 broker=2] Log loaded for partition test004-618 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,826] INFO [Partition test004-221 broker=2] Log loaded for partition test004-221 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,827] INFO [Partition test-13 broker=2] Log loaded for partition test-13 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,828] INFO [Partition test004-287 broker=2] Log loaded for partition test004-287 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,828] INFO [Partition test005-287 broker=2] Log loaded for partition test005-287 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,829] INFO [Partition test005-353 broker=2] Log loaded for partition test005-353 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,829] INFO [Partition test004-419 broker=2] Log loaded for partition test004-419 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 
16:07:50,829] INFO [Partition test004-89 broker=2] Log loaded for partition test004-89 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,830] INFO [Partition test004-155 broker=2] Log loaded for partition test004-155 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,830] INFO [Partition test005-155 broker=2] Log loaded for partition test005-155 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,830] INFO [Partition test123-56 broker=2] Log loaded for partition test123-56 with initial high watermark 293670 (kafka.cluster.Partition) [2023-08-08 16:07:50,831] INFO [Partition __consumer_offsets-43 broker=2] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,831] INFO [Partition test009-22 broker=2] Log loaded for partition test009-22 with initial high watermark 628500 (kafka.cluster.Partition) [2023-08-08 16:07:50,832] INFO [Partition test004-551 broker=2] Log loaded for partition test004-551 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,832] INFO [Partition test004-617 broker=2] Log loaded for partition test004-617 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,832] INFO [Partition test004-683 broker=2] Log loaded for partition test004-683 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,833] INFO [Partition test-12 broker=2] Log loaded for partition test-12 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,833] INFO [Partition test005-352 broker=2] Log loaded for partition test005-352 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,834] INFO [Partition test004-352 broker=2] Log loaded for partition test004-352 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,834] INFO [Partition test004-418 broker=2] Log loaded for partition test004-418 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,834] INFO [Partition test005-22 broker=2] Log loaded for partition test005-22 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,835] INFO [Partition test005-88 broker=2] Log loaded for partition test005-88 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,835] INFO [Partition test004-22 broker=2] Log loaded for partition test004-22 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,836] INFO [Partition test005-154 broker=2] Log loaded for partition test005-154 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,836] INFO [Partition test004-88 broker=2] Log loaded for partition test004-88 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,837] INFO [Partition test005-220 broker=2] Log loaded for partition test005-220 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,837] INFO [Partition test010-21 broker=2] Log loaded for partition test010-21 with initial high watermark 2900010 (kafka.cluster.Partition) [2023-08-08 16:07:50,838] INFO [Partition test004-154 broker=2] Log loaded for partition test004-154 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,838] INFO [Partition test010-8 broker=2] Log loaded for partition test010-8 with initial high watermark 2899492 (kafka.cluster.Partition) [2023-08-08 16:07:50,838] INFO [Partition test004-141 broker=2] Log loaded for partition 
test004-141 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,839] INFO [Partition test005-141 broker=2] Log loaded for partition test005-141 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,839] INFO [Partition __consumer_offsets-13 broker=2] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,840] INFO [Partition test004-207 broker=2] Log loaded for partition test004-207 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,840] INFO [Partition test005-207 broker=2] Log loaded for partition test005-207 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,840] INFO [Partition test005-273 broker=2] Log loaded for partition test005-273 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,841] INFO [Partition test005-339 broker=2] Log loaded for partition test005-339 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,841] INFO [Partition test009-9 broker=2] Log loaded for partition test009-9 with initial high watermark 648906 (kafka.cluster.Partition) [2023-08-08 16:07:50,842] INFO [Partition test004-9 broker=2] Log loaded for partition test004-9 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,842] INFO [Partition test005-9 broker=2] Log loaded for partition test005-9 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,843] INFO [Partition test004-75 broker=2] Log loaded for partition test004-75 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,843] INFO [Partition test004-669 broker=2] Log loaded for partition test004-669 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,844] INFO [Partition test009-8 broker=2] Log loaded for partition test009-8 with initial high watermark 628425 (kafka.cluster.Partition) [2023-08-08 16:07:50,844] INFO [Partition test004-471 broker=2] Log loaded for partition test004-471 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,845] INFO [Partition test005-206 broker=2] Log loaded for partition test005-206 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,845] INFO [Partition test010-7 broker=2] Log loaded for partition test010-7 with initial high watermark 2695181 (kafka.cluster.Partition) [2023-08-08 16:07:50,845] INFO [Partition test005-272 broker=2] Log loaded for partition test005-272 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,846] INFO [Partition test004-206 broker=2] Log loaded for partition test004-206 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,846] INFO [Partition __consumer_offsets-12 broker=2] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,846] INFO [Partition test004-272 broker=2] Log loaded for partition test004-272 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,847] INFO [Partition test004-338 broker=2] Log loaded for partition test004-338 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,848] INFO [Partition test005-8 broker=2] Log loaded for partition test005-8 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,849] INFO [Partition test005-74 broker=2] Log loaded for partition test005-74 with initial high watermark 0 (kafka.cluster.Partition) 
[2023-08-08 16:07:50,849] INFO [Partition test005-140 broker=2] Log loaded for partition test005-140 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,850] INFO [Partition test004-668 broker=2] Log loaded for partition test004-668 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,850] INFO [Partition test008-7 broker=2] Log loaded for partition test008-7 with initial high watermark 1811600 (kafka.cluster.Partition) [2023-08-08 16:07:50,850] INFO [Partition test009-7 broker=2] Log loaded for partition test009-7 with initial high watermark 648968 (kafka.cluster.Partition) [2023-08-08 16:07:50,851] INFO [Partition test004-470 broker=2] Log loaded for partition test004-470 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,851] INFO [Partition test004-536 broker=2] Log loaded for partition test004-536 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,852] INFO [Partition test004-602 broker=2] Log loaded for partition test004-602 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,852] INFO [Partition __consumer_offsets-15 broker=2] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,853] INFO [Partition test005-77 broker=2] Log loaded for partition test005-77 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,853] INFO [Partition test010-10 broker=2] Log loaded for partition test010-10 with initial high watermark 2899538 (kafka.cluster.Partition) [2023-08-08 16:07:50,854] INFO [Partition test004-143 broker=2] Log loaded for partition test004-143 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,854] INFO [Partition test123-44 broker=2] Log loaded for partition test123-44 with initial high watermark 293991 (kafka.cluster.Partition) [2023-08-08 16:07:50,854] INFO [Partition test004-275 broker=2] Log loaded for partition test004-275 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,855] INFO [Partition test004-11 broker=2] Log loaded for partition test004-11 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,855] INFO [Partition test004-605 broker=2] Log loaded for partition test004-605 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,856] INFO [Partition test004-671 broker=2] Log loaded for partition test004-671 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,856] INFO [Partition test-17 broker=2] Log loaded for partition test-17 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,856] INFO [Partition test004-341 broker=2] Log loaded for partition test004-341 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,857] INFO [Partition test005-341 broker=2] Log loaded for partition test005-341 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,857] INFO [Partition test004-407 broker=2] Log loaded for partition test004-407 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,858] INFO [Partition test004-539 broker=2] Log loaded for partition test004-539 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,858] INFO [Partition test005-142 broker=2] Log loaded for partition test005-142 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,858] INFO [Partition test123-43 broker=2] Log loaded for 
partition test123-43 with initial high watermark 258970 (kafka.cluster.Partition) [2023-08-08 16:07:50,859] INFO [Partition test004-76 broker=2] Log loaded for partition test004-76 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,859] INFO [Partition test005-208 broker=2] Log loaded for partition test005-208 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,860] INFO [Partition __consumer_offsets-14 broker=2] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,860] INFO [Partition test010-9 broker=2] Log loaded for partition test010-9 with initial high watermark 2723772 (kafka.cluster.Partition) [2023-08-08 16:07:50,860] INFO [Partition test005-274 broker=2] Log loaded for partition test005-274 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,861] INFO [Partition test004-208 broker=2] Log loaded for partition test004-208 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,861] INFO [Partition test004-274 broker=2] Log loaded for partition test004-274 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,862] INFO [Partition test009-10 broker=2] Log loaded for partition test009-10 with initial high watermark 628485 (kafka.cluster.Partition) [2023-08-08 16:07:50,862] INFO [Partition test008-10 broker=2] Log loaded for partition test008-10 with initial high watermark 1811630 (kafka.cluster.Partition) [2023-08-08 16:07:50,862] INFO [Partition test005-10 broker=2] Log loaded for partition test005-10 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,863] INFO [Partition test005-76 broker=2] Log loaded for partition test005-76 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,863] INFO [Partition test004-604 broker=2] Log loaded for partition test004-604 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,864] INFO [Partition test-16 broker=2] Log loaded for partition test-16 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,864] INFO [Partition test004-340 broker=2] Log loaded for partition test004-340 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,864] INFO [Partition test004-406 broker=2] Log loaded for partition test004-406 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,865] INFO [Partition test004-472 broker=2] Log loaded for partition test004-472 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,865] INFO [Partition test004-538 broker=2] Log loaded for partition test004-538 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,865] INFO [Partition test005-13 broker=2] Log loaded for partition test005-13 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,866] INFO [Partition test004-79 broker=2] Log loaded for partition test004-79 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,867] INFO [Partition test005-79 broker=2] Log loaded for partition test005-79 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,867] INFO [Partition test004-145 broker=2] Log loaded for partition test004-145 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,867] INFO [Partition test005-145 broker=2] Log loaded for partition test005-145 with initial high watermark 0 (kafka.cluster.Partition) 
[2023-08-08 16:07:50,868] INFO [Partition test004-211 broker=2] Log loaded for partition test004-211 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,868] INFO [Partition test-20 broker=2] Log loaded for partition test-20 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,869] INFO [Partition test008-13 broker=2] Log loaded for partition test008-13 with initial high watermark 1811426 (kafka.cluster.Partition) [2023-08-08 16:07:50,869] INFO [Partition __consumer_offsets-17 broker=2] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,870] INFO [Partition test004-673 broker=2] Log loaded for partition test004-673 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,870] INFO [Partition test-19 broker=2] Log loaded for partition test-19 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,871] INFO [Partition test004-277 broker=2] Log loaded for partition test004-277 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,871] INFO [Partition test005-277 broker=2] Log loaded for partition test005-277 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,871] INFO [Partition test004-343 broker=2] Log loaded for partition test004-343 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,872] INFO [Partition test004-409 broker=2] Log loaded for partition test004-409 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,872] INFO [Partition test004-475 broker=2] Log loaded for partition test004-475 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,873] INFO [Partition __consumer_offsets-16 broker=2] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,873] INFO [Partition test004-12 broker=2] Log loaded for partition test004-12 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,879] INFO [Partition test123-45 broker=2] Log loaded for partition test123-45 with initial high watermark 293995 (kafka.cluster.Partition) [2023-08-08 16:07:50,880] INFO [Partition test004-78 broker=2] Log loaded for partition test004-78 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,880] INFO [Partition test005-210 broker=2] Log loaded for partition test005-210 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,881] INFO [Partition test004-144 broker=2] Log loaded for partition test004-144 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,881] INFO [Partition test005-276 broker=2] Log loaded for partition test005-276 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,882] INFO [Partition test009-12 broker=2] Log loaded for partition test009-12 with initial high watermark 628230 (kafka.cluster.Partition) [2023-08-08 16:07:50,882] INFO [Partition test005-12 broker=2] Log loaded for partition test005-12 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,882] INFO [Partition test004-540 broker=2] Log loaded for partition test004-540 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,883] INFO [Partition test004-606 broker=2] Log loaded for partition test004-606 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,884] INFO [Partition test005-342 broker=2] Log 
loaded for partition test005-342 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,884] INFO [Partition test004-276 broker=2] Log loaded for partition test004-276 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,884] INFO [Partition test004-342 broker=2] Log loaded for partition test004-342 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,885] INFO [Partition test004-408 broker=2] Log loaded for partition test004-408 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,885] INFO [Partition test004-474 broker=2] Log loaded for partition test004-474 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,886] INFO [Partition test004-81 broker=2] Log loaded for partition test004-81 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,951] INFO [Partition test005-81 broker=2] Log loaded for partition test005-81 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,952] INFO [Partition test010-14 broker=2] Log loaded for partition test010-14 with initial high watermark 2899808 (kafka.cluster.Partition) [2023-08-08 16:07:50,952] INFO [Partition test004-147 broker=2] Log loaded for partition test004-147 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,953] INFO [Partition test005-147 broker=2] Log loaded for partition test005-147 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,953] INFO [Partition test123-48 broker=2] Log loaded for partition test123-48 with initial high watermark 258635 (kafka.cluster.Partition) [2023-08-08 16:07:50,953] INFO [Partition __consumer_offsets-19 broker=2] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,954] INFO [Partition test008-15 broker=2] Log loaded for partition test008-15 with initial high watermark 1811917 (kafka.cluster.Partition) [2023-08-08 16:07:50,954] INFO [Partition test009-15 broker=2] Log loaded for partition test009-15 with initial high watermark 648548 (kafka.cluster.Partition) [2023-08-08 16:07:50,955] INFO [Partition test004-477 broker=2] Log loaded for partition test004-477 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,956] INFO [Partition test004-609 broker=2] Log loaded for partition test004-609 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,956] INFO [Partition test004-675 broker=2] Log loaded for partition test004-675 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,957] INFO [Partition test004-213 broker=2] Log loaded for partition test004-213 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,957] INFO [Partition test005-213 broker=2] Log loaded for partition test005-213 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,957] INFO [Partition test004-279 broker=2] Log loaded for partition test004-279 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,958] INFO [Partition test005-279 broker=2] Log loaded for partition test005-279 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,958] INFO [Partition test004-411 broker=2] Log loaded for partition test004-411 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,958] INFO [Partition test005-80 broker=2] Log loaded for partition test005-80 with initial high watermark 0 
(kafka.cluster.Partition) [2023-08-08 16:07:50,959] INFO [Partition test004-14 broker=2] Log loaded for partition test004-14 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,960] INFO [Partition test005-146 broker=2] Log loaded for partition test005-146 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,960] INFO [Partition test123-47 broker=2] Log loaded for partition test123-47 with initial high watermark 259484 (kafka.cluster.Partition) [2023-08-08 16:07:50,961] INFO [Partition test005-212 broker=2] Log loaded for partition test005-212 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,961] INFO [Partition test010-13 broker=2] Log loaded for partition test010-13 with initial high watermark 2698137 (kafka.cluster.Partition) [2023-08-08 16:07:50,961] INFO [Partition test-21 broker=2] Log loaded for partition test-21 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,962] INFO [Partition test009-14 broker=2] Log loaded for partition test009-14 with initial high watermark 717810 (kafka.cluster.Partition) [2023-08-08 16:07:50,968] INFO [Partition __consumer_offsets-18 broker=2] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,969] INFO [Partition test004-542 broker=2] Log loaded for partition test004-542 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,969] INFO [Partition test004-608 broker=2] Log loaded for partition test004-608 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,970] INFO [Partition test004-674 broker=2] Log loaded for partition test004-674 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,970] INFO [Partition test004-212 broker=2] Log loaded for partition test004-212 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,970] INFO [Partition test005-344 broker=2] Log loaded for partition test005-344 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,971] INFO [Partition test004-463 broker=2] Log loaded for partition test004-463 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,971] INFO [Partition test004-595 broker=2] Log loaded for partition test004-595 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,972] INFO [Partition test004-133 broker=2] Log loaded for partition test004-133 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,972] INFO [Partition test005-133 broker=2] Log loaded for partition test005-133 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,972] INFO [Partition test123-34 broker=2] Log loaded for partition test123-34 with initial high watermark 293970 (kafka.cluster.Partition) [2023-08-08 16:07:50,973] INFO [Partition __consumer_offsets-21 broker=2] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,973] INFO [Partition test004-199 broker=2] Log loaded for partition test004-199 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,973] INFO [Partition test004-265 broker=2] Log loaded for partition test004-265 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,974] INFO [Partition test004-331 broker=2] Log loaded for partition test004-331 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,974] INFO 
[Partition test005-331 broker=2] Log loaded for partition test005-331 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,974] INFO [Partition test005-0 broker=2] Log loaded for partition test005-0 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,975] INFO [Partition test005-66 broker=2] Log loaded for partition test005-66 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,976] INFO [Partition test004-66 broker=2] Log loaded for partition test004-66 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,976] INFO [Partition test004-661 broker=2] Log loaded for partition test004-661 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,977] INFO [Partition test009-0 broker=2] Log loaded for partition test009-0 with initial high watermark 718376 (kafka.cluster.Partition) [2023-08-08 16:07:50,977] INFO [Partition test004-396 broker=2] Log loaded for partition test004-396 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,977] INFO [Partition test004-462 broker=2] Log loaded for partition test004-462 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,978] INFO [Partition test004-528 broker=2] Log loaded for partition test004-528 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,978] INFO [Partition test004-594 broker=2] Log loaded for partition test004-594 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,978] INFO [Partition test005-198 broker=2] Log loaded for partition test005-198 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,979] INFO [Partition test005-264 broker=2] Log loaded for partition test005-264 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,979] INFO [Partition test-23 broker=2] Log loaded for partition test-23 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,979] INFO [Partition __consumer_offsets-20 broker=2] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,980] INFO [Partition test004-264 broker=2] Log loaded for partition test004-264 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,980] INFO [Partition test004-330 broker=2] Log loaded for partition test004-330 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,980] INFO [Partition test004-65 broker=2] Log loaded for partition test004-65 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,981] INFO [Partition test005-65 broker=2] Log loaded for partition test005-65 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,981] INFO [Partition test004-131 broker=2] Log loaded for partition test004-131 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,981] INFO [Partition test004-660 broker=2] Log loaded for partition test004-660 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,982] INFO [Partition test005-333 broker=2] Log loaded for partition test005-333 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,982] INFO [Partition test004-399 broker=2] Log loaded for partition test004-399 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,982] INFO [Partition __consumer_offsets-23 broker=2] Log loaded for partition __consumer_offsets-23 with 
initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,983] INFO [Partition test010-2 broker=2] Log loaded for partition test010-2 with initial high watermark 2899642 (kafka.cluster.Partition) [2023-08-08 16:07:50,983] INFO [Partition test004-135 broker=2] Log loaded for partition test004-135 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,983] INFO [Partition test004-201 broker=2] Log loaded for partition test004-201 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,984] INFO [Partition test004-267 broker=2] Log loaded for partition test004-267 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,984] INFO [Partition test-26 broker=2] Log loaded for partition test-26 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,985] INFO [Partition test005-68 broker=2] Log loaded for partition test005-68 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,985] INFO [Partition test004-2 broker=2] Log loaded for partition test004-2 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,997] INFO [Partition test004-663 broker=2] Log loaded for partition test004-663 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,997] INFO [Partition test004-398 broker=2] Log loaded for partition test004-398 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,998] INFO [Partition test004-530 broker=2] Log loaded for partition test004-530 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,998] INFO [Partition test005-134 broker=2] Log loaded for partition test005-134 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:50,998] INFO [Partition test123-35 broker=2] Log loaded for partition test123-35 with initial high watermark 181620 (kafka.cluster.Partition) [2023-08-08 16:07:51,001] INFO [Partition test005-200 broker=2] Log loaded for partition test005-200 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,001] INFO [Partition __consumer_offsets-22 broker=2] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,002] INFO [Partition test010-1 broker=2] Log loaded for partition test010-1 with initial high watermark 2724990 (kafka.cluster.Partition) [2023-08-08 16:07:51,002] INFO [Partition test004-134 broker=2] Log loaded for partition test004-134 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,002] INFO [Partition test005-266 broker=2] Log loaded for partition test005-266 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,003] INFO [Partition test-25 broker=2] Log loaded for partition test-25 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,003] INFO [Partition test004-200 broker=2] Log loaded for partition test004-200 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,004] INFO [Partition test005-332 broker=2] Log loaded for partition test005-332 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,004] INFO [Partition test008-1 broker=2] Log loaded for partition test008-1 with initial high watermark 1811968 (kafka.cluster.Partition) [2023-08-08 16:07:51,004] INFO [Partition test009-1 broker=2] Log loaded for partition test009-1 with initial high watermark 628425 (kafka.cluster.Partition) [2023-08-08 16:07:51,005] INFO [Partition 
test004-1 broker=2] Log loaded for partition test004-1 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,005] INFO [Partition test005-1 broker=2] Log loaded for partition test005-1 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,005] INFO [Partition test004-67 broker=2] Log loaded for partition test004-67 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,006] INFO [Partition test004-269 broker=2] Log loaded for partition test004-269 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,006] INFO [Partition test005-269 broker=2] Log loaded for partition test005-269 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,007] INFO [Partition test-28 broker=2] Log loaded for partition test-28 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,007] INFO [Partition test004-335 broker=2] Log loaded for partition test004-335 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,007] INFO [Partition test005-335 broker=2] Log loaded for partition test005-335 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,008] INFO [Partition test004-401 broker=2] Log loaded for partition test004-401 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,008] INFO [Partition test004-467 broker=2] Log loaded for partition test004-467 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,008] INFO [Partition test004-71 broker=2] Log loaded for partition test004-71 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,009] INFO [Partition test005-71 broker=2] Log loaded for partition test005-71 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,009] INFO [Partition test010-4 broker=2] Log loaded for partition test010-4 with initial high watermark 2723838 (kafka.cluster.Partition) [2023-08-08 16:07:51,010] INFO [Partition test005-137 broker=2] Log loaded for partition test005-137 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,010] INFO [Partition test123-38 broker=2] Log loaded for partition test123-38 with initial high watermark 161655 (kafka.cluster.Partition) [2023-08-08 16:07:51,011] INFO [Partition test005-203 broker=2] Log loaded for partition test005-203 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,011] INFO [Partition test005-4 broker=2] Log loaded for partition test005-4 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,011] INFO [Partition __consumer_offsets-26 broker=2] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,012] INFO [Partition test004-533 broker=2] Log loaded for partition test004-533 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,012] INFO [Partition test004-599 broker=2] Log loaded for partition test004-599 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,012] INFO [Partition test004-334 broker=2] Log loaded for partition test004-334 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,013] INFO [Partition test004-466 broker=2] Log loaded for partition test004-466 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,013] INFO [Partition test005-70 broker=2] Log loaded for partition test005-70 with initial high watermark 0 
(kafka.cluster.Partition) [2023-08-08 16:07:51,013] INFO [Partition __consumer_offsets-24 broker=2] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,014] INFO [Partition test004-4 broker=2] Log loaded for partition test004-4 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,014] INFO [Partition test005-136 broker=2] Log loaded for partition test005-136 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,014] INFO [Partition test123-37 broker=2] Log loaded for partition test123-37 with initial high watermark 294060 (kafka.cluster.Partition) [2023-08-08 16:07:51,015] INFO [Partition test004-70 broker=2] Log loaded for partition test004-70 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,015] INFO [Partition test005-202 broker=2] Log loaded for partition test005-202 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,015] INFO [Partition test004-136 broker=2] Log loaded for partition test004-136 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,016] INFO [Partition test005-268 broker=2] Log loaded for partition test005-268 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,016] INFO [Partition test-27 broker=2] Log loaded for partition test-27 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,017] INFO [Partition test004-202 broker=2] Log loaded for partition test004-202 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,017] INFO [Partition test009-3 broker=2] Log loaded for partition test009-3 with initial high watermark 628455 (kafka.cluster.Partition) [2023-08-08 16:07:51,017] INFO [Partition __consumer_offsets-25 broker=2] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,018] INFO [Partition test004-3 broker=2] Log loaded for partition test004-3 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,018] INFO [Partition test005-3 broker=2] Log loaded for partition test005-3 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,018] INFO [Partition test004-532 broker=2] Log loaded for partition test004-532 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,019] INFO [Partition test004-598 broker=2] Log loaded for partition test004-598 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,019] INFO [Partition test004-664 broker=2] Log loaded for partition test004-664 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,020] INFO [Partition test004-205 broker=2] Log loaded for partition test004-205 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,020] INFO [Partition test004-271 broker=2] Log loaded for partition test004-271 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,020] INFO [Partition test005-271 broker=2] Log loaded for partition test005-271 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,021] INFO [Partition test005-337 broker=2] Log loaded for partition test005-337 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,021] INFO [Partition test004-403 broker=2] Log loaded for partition test004-403 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,021] INFO [Partition test004-7 
broker=2] Log loaded for partition test004-7 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,022] INFO [Partition test004-73 broker=2] Log loaded for partition test004-73 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,022] INFO [Partition test004-139 broker=2] Log loaded for partition test004-139 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,022] INFO [Partition test123-40 broker=2] Log loaded for partition test123-40 with initial high watermark 141690 (kafka.cluster.Partition) [2023-08-08 16:07:51,023] INFO [Partition __consumer_offsets-28 broker=2] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,023] INFO [Partition test005-336 broker=2] Log loaded for partition test005-336 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,024] INFO [Partition test004-336 broker=2] Log loaded for partition test004-336 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,024] INFO [Partition test004-402 broker=2] Log loaded for partition test004-402 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,024] INFO [Partition test005-6 broker=2] Log loaded for partition test005-6 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,025] INFO [Partition test005-72 broker=2] Log loaded for partition test005-72 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,025] INFO [Partition test004-6 broker=2] Log loaded for partition test004-6 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,026] INFO [Partition test005-138 broker=2] Log loaded for partition test005-138 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,026] INFO [Partition test123-39 broker=2] Log loaded for partition test123-39 with initial high watermark 293971 (kafka.cluster.Partition) [2023-08-08 16:07:51,026] INFO [Partition test004-72 broker=2] Log loaded for partition test004-72 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,027] INFO [Partition test005-204 broker=2] Log loaded for partition test005-204 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,027] INFO [Partition test010-5 broker=2] Log loaded for partition test010-5 with initial high watermark 2899269 (kafka.cluster.Partition) [2023-08-08 16:07:51,027] INFO [Partition test004-138 broker=2] Log loaded for partition test004-138 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,028] INFO [Partition __consumer_offsets-27 broker=2] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,028] INFO [Partition test008-5 broker=2] Log loaded for partition test008-5 with initial high watermark 1812132 (kafka.cluster.Partition) [2023-08-08 16:07:51,028] INFO [Partition test009-5 broker=2] Log loaded for partition test009-5 with initial high watermark 649110 (kafka.cluster.Partition) [2023-08-08 16:07:51,029] INFO [Partition test004-468 broker=2] Log loaded for partition test004-468 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,029] INFO [Partition test004-534 broker=2] Log loaded for partition test004-534 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,030] INFO [Partition test004-600 broker=2] Log loaded for partition test004-600 with initial high 
watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,030] INFO [Partition test004-666 broker=2] Log loaded for partition test004-666 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,030] INFO [Partition test004-653 broker=2] Log loaded for partition test004-653 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,031] INFO [Partition test004-719 broker=2] Log loaded for partition test004-719 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,031] INFO [Partition test004-389 broker=2] Log loaded for partition test004-389 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,032] INFO [Partition test004-521 broker=2] Log loaded for partition test004-521 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,032] INFO [Partition test005-190 broker=2] Log loaded for partition test005-190 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,032] INFO [Partition test004-124 broker=2] Log loaded for partition test004-124 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,033] INFO [Partition test005-256 broker=2] Log loaded for partition test005-256 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,033] INFO [Partition test004-190 broker=2] Log loaded for partition test004-190 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,033] INFO [Partition test005-322 broker=2] Log loaded for partition test005-322 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,034] INFO [Partition test004-322 broker=2] Log loaded for partition test004-322 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,034] INFO [Partition test005-58 broker=2] Log loaded for partition test005-58 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,034] INFO [Partition test005-124 broker=2] Log loaded for partition test005-124 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,035] INFO [Partition test004-652 broker=2] Log loaded for partition test004-652 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,035] INFO [Partition test004-718 broker=2] Log loaded for partition test004-718 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,036] INFO [Partition test004-388 broker=2] Log loaded for partition test004-388 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,036] INFO [Partition test004-454 broker=2] Log loaded for partition test004-454 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,036] INFO [Partition test004-586 broker=2] Log loaded for partition test004-586 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,037] INFO [Partition test005-189 broker=2] Log loaded for partition test005-189 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,038] INFO [Partition test004-255 broker=2] Log loaded for partition test004-255 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,038] INFO [Partition test004-321 broker=2] Log loaded for partition test004-321 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,039] INFO [Partition test005-321 broker=2] Log loaded for partition test005-321 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,039] INFO [Partition test004-57 broker=2] 
Log loaded for partition test004-57 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,039] INFO [Partition test005-57 broker=2] Log loaded for partition test005-57 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,040] INFO [Partition test004-123 broker=2] Log loaded for partition test004-123 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,040] INFO [Partition test123-24 broker=2] Log loaded for partition test123-24 with initial high watermark 185280 (kafka.cluster.Partition) [2023-08-08 16:07:51,040] INFO [Partition test004-589 broker=2] Log loaded for partition test004-589 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,041] INFO [Partition test004-655 broker=2] Log loaded for partition test004-655 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,041] INFO [Partition test004-325 broker=2] Log loaded for partition test004-325 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,042] INFO [Partition test004-391 broker=2] Log loaded for partition test004-391 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,042] INFO [Partition test004-457 broker=2] Log loaded for partition test004-457 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,042] INFO [Partition test004-523 broker=2] Log loaded for partition test004-523 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,043] INFO [Partition test005-126 broker=2] Log loaded for partition test005-126 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,043] INFO [Partition test004-60 broker=2] Log loaded for partition test004-60 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,044] INFO [Partition test004-126 broker=2] Log loaded for partition test004-126 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,044] INFO [Partition test005-258 broker=2] Log loaded for partition test005-258 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,044] INFO [Partition test004-192 broker=2] Log loaded for partition test004-192 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,045] INFO [Partition test005-324 broker=2] Log loaded for partition test005-324 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,045] INFO [Partition test004-258 broker=2] Log loaded for partition test004-258 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,045] INFO [Partition test005-60 broker=2] Log loaded for partition test005-60 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,046] INFO [Partition test004-654 broker=2] Log loaded for partition test004-654 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,046] INFO [Partition test004-390 broker=2] Log loaded for partition test004-390 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,046] INFO [Partition test004-456 broker=2] Log loaded for partition test004-456 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,047] INFO [Partition test004-522 broker=2] Log loaded for partition test004-522 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,047] INFO [Partition test005-125 broker=2] Log loaded for partition test005-125 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 
16:07:51,047] INFO [Partition test123-26 broker=2] Log loaded for partition test123-26 with initial high watermark 294030 (kafka.cluster.Partition) [2023-08-08 16:07:51,048] INFO [Partition test004-191 broker=2] Log loaded for partition test004-191 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,048] INFO [Partition test004-257 broker=2] Log loaded for partition test004-257 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,049] INFO [Partition test005-257 broker=2] Log loaded for partition test005-257 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,049] INFO [Partition test004-59 broker=2] Log loaded for partition test004-59 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,049] INFO [Partition test004-591 broker=2] Log loaded for partition test004-591 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,050] INFO [Partition test004-393 broker=2] Log loaded for partition test004-393 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,050] INFO [Partition test004-459 broker=2] Log loaded for partition test004-459 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,050] INFO [Partition test005-128 broker=2] Log loaded for partition test005-128 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,051] INFO [Partition test123-29 broker=2] Log loaded for partition test123-29 with initial high watermark 294075 (kafka.cluster.Partition) [2023-08-08 16:07:51,051] INFO [Partition test005-194 broker=2] Log loaded for partition test005-194 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,051] INFO [Partition test004-194 broker=2] Log loaded for partition test004-194 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,052] INFO [Partition __consumer_offsets-1 broker=2] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,052] INFO [Partition test004-590 broker=2] Log loaded for partition test004-590 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,053] INFO [Partition test005-326 broker=2] Log loaded for partition test005-326 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,053] INFO [Partition test004-260 broker=2] Log loaded for partition test004-260 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,053] INFO [Partition test004-326 broker=2] Log loaded for partition test004-326 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,054] INFO [Partition test004-61 broker=2] Log loaded for partition test004-61 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,054] INFO [Partition __consumer_offsets-0 broker=2] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,054] INFO [Partition test005-61 broker=2] Log loaded for partition test005-61 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,055] INFO [Partition test004-127 broker=2] Log loaded for partition test004-127 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,055] INFO [Partition test123-28 broker=2] Log loaded for partition test123-28 with initial high watermark 260246 (kafka.cluster.Partition) [2023-08-08 16:07:51,056] INFO [Partition test005-193 broker=2] Log 
loaded for partition test005-193 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,056] INFO [Partition test005-259 broker=2] Log loaded for partition test005-259 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,056] INFO [Partition test004-461 broker=2] Log loaded for partition test004-461 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,057] INFO [Partition test004-527 broker=2] Log loaded for partition test004-527 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,057] INFO [Partition test004-659 broker=2] Log loaded for partition test004-659 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,057] INFO [Partition test004-263 broker=2] Log loaded for partition test004-263 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,058] INFO [Partition test005-263 broker=2] Log loaded for partition test005-263 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,058] INFO [Partition test004-329 broker=2] Log loaded for partition test004-329 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,058] INFO [Partition test005-329 broker=2] Log loaded for partition test005-329 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,059] INFO [Partition test005-64 broker=2] Log loaded for partition test005-64 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,059] INFO [Partition test005-130 broker=2] Log loaded for partition test005-130 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,059] INFO [Partition test123-31 broker=2] Log loaded for partition test123-31 with initial high watermark 259134 (kafka.cluster.Partition) [2023-08-08 16:07:51,060] INFO [Partition test004-64 broker=2] Log loaded for partition test004-64 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,060] INFO [Partition test005-196 broker=2] Log loaded for partition test005-196 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,061] INFO [Partition __consumer_offsets-3 broker=2] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,061] INFO [Partition test004-526 broker=2] Log loaded for partition test004-526 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,061] INFO [Partition test004-592 broker=2] Log loaded for partition test004-592 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,062] INFO [Partition test004-658 broker=2] Log loaded for partition test004-658 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,062] INFO [Partition test005-262 broker=2] Log loaded for partition test005-262 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,063] INFO [Partition test004-196 broker=2] Log loaded for partition test004-196 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,063] INFO [Partition test005-328 broker=2] Log loaded for partition test005-328 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,063] INFO [Partition test004-262 broker=2] Log loaded for partition test004-262 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,064] INFO [Partition test004-328 broker=2] Log loaded for partition test004-328 with initial high watermark 0 
(kafka.cluster.Partition) [2023-08-08 16:07:51,064] INFO [Partition test004-394 broker=2] Log loaded for partition test004-394 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,064] INFO [Partition test004-129 broker=2] Log loaded for partition test004-129 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,065] INFO [Partition test005-129 broker=2] Log loaded for partition test005-129 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,065] INFO [Partition test123-30 broker=2] Log loaded for partition test123-30 with initial high watermark 293880 (kafka.cluster.Partition) [2023-08-08 16:07:51,065] INFO [Partition test004-195 broker=2] Log loaded for partition test004-195 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,066] INFO [Partition test005-195 broker=2] Log loaded for partition test005-195 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,066] INFO [Partition __consumer_offsets-2 broker=2] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,066] INFO [Partition test123-17 broker=2] Log loaded for partition test123-17 with initial high watermark 151470 (kafka.cluster.Partition) [2023-08-08 16:07:51,067] INFO [Partition test004-50 broker=2] Log loaded for partition test004-50 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,067] INFO [Partition test004-711 broker=2] Log loaded for partition test004-711 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,067] INFO [Partition test004-380 broker=2] Log loaded for partition test004-380 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,068] INFO [Partition test004-446 broker=2] Log loaded for partition test004-446 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,068] INFO [Partition test004-578 broker=2] Log loaded for partition test004-578 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,069] INFO [Partition test005-182 broker=2] Log loaded for partition test005-182 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,069] INFO [Partition test004-116 broker=2] Log loaded for partition test004-116 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,069] INFO [Partition __consumer_offsets-5 broker=2] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,070] INFO [Partition test005-248 broker=2] Log loaded for partition test005-248 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,070] INFO [Partition test004-182 broker=2] Log loaded for partition test004-182 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,070] INFO [Partition test005-314 broker=2] Log loaded for partition test005-314 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,071] INFO [Partition test004-314 broker=2] Log loaded for partition test004-314 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,071] INFO [Partition test004-49 broker=2] Log loaded for partition test004-49 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,071] INFO [Partition test005-49 broker=2] Log loaded for partition test005-49 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,072] INFO 
[Partition test004-115 broker=2] Log loaded for partition test004-115 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,072] INFO [Partition test005-115 broker=2] Log loaded for partition test005-115 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,072] INFO [Partition test123-16 broker=2] Log loaded for partition test123-16 with initial high watermark 294021 (kafka.cluster.Partition) [2023-08-08 16:07:51,073] INFO [Partition test004-710 broker=2] Log loaded for partition test004-710 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,073] INFO [Partition test004-511 broker=2] Log loaded for partition test004-511 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,074] INFO [Partition test004-643 broker=2] Log loaded for partition test004-643 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,074] INFO [Partition test005-181 broker=2] Log loaded for partition test005-181 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,074] INFO [Partition test004-247 broker=2] Log loaded for partition test004-247 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,075] INFO [Partition __consumer_offsets-4 broker=2] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,075] INFO [Partition test004-379 broker=2] Log loaded for partition test004-379 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,075] INFO [Partition test005-52 broker=2] Log loaded for partition test005-52 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,076] INFO [Partition test004-647 broker=2] Log loaded for partition test004-647 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,077] INFO [Partition test004-713 broker=2] Log loaded for partition test004-713 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,077] INFO [Partition test004-382 broker=2] Log loaded for partition test004-382 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,077] INFO [Partition test004-448 broker=2] Log loaded for partition test004-448 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,078] INFO [Partition test004-514 broker=2] Log loaded for partition test004-514 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,078] INFO [Partition __consumer_offsets-7 broker=2] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,079] INFO [Partition test005-118 broker=2] Log loaded for partition test005-118 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,079] INFO [Partition test123-19 broker=2] Log loaded for partition test123-19 with initial high watermark 186720 (kafka.cluster.Partition) [2023-08-08 16:07:51,080] INFO [Partition test004-52 broker=2] Log loaded for partition test004-52 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,080] INFO [Partition test005-184 broker=2] Log loaded for partition test005-184 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,080] INFO [Partition test004-118 broker=2] Log loaded for partition test004-118 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,081] INFO [Partition test005-250 broker=2] Log loaded for partition 
test005-250 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,081] INFO [Partition test005-316 broker=2] Log loaded for partition test005-316 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,082] INFO [Partition test004-51 broker=2] Log loaded for partition test004-51 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,082] INFO [Partition test004-580 broker=2] Log loaded for partition test004-580 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,082] INFO [Partition test004-646 broker=2] Log loaded for partition test004-646 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,083] INFO [Partition test004-447 broker=2] Log loaded for partition test004-447 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,083] INFO [Partition test004-513 broker=2] Log loaded for partition test004-513 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,084] INFO [Partition test004-579 broker=2] Log loaded for partition test004-579 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,084] INFO [Partition test004-117 broker=2] Log loaded for partition test004-117 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,084] INFO [Partition test005-117 broker=2] Log loaded for partition test005-117 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,085] INFO [Partition test123-18 broker=2] Log loaded for partition test123-18 with initial high watermark 293940 (kafka.cluster.Partition) [2023-08-08 16:07:51,085] INFO [Partition test004-183 broker=2] Log loaded for partition test004-183 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,085] INFO [Partition __consumer_offsets-6 broker=2] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,086] INFO [Partition test005-183 broker=2] Log loaded for partition test005-183 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,087] INFO [Partition test004-249 broker=2] Log loaded for partition test004-249 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,087] INFO [Partition test004-315 broker=2] Log loaded for partition test004-315 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,088] INFO [Partition test005-315 broker=2] Log loaded for partition test005-315 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,088] INFO [Partition __consumer_offsets-9 broker=2] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,088] INFO [Partition test004-517 broker=2] Log loaded for partition test004-517 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,089] INFO [Partition test004-583 broker=2] Log loaded for partition test004-583 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,089] INFO [Partition test004-649 broker=2] Log loaded for partition test004-649 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,089] INFO [Partition test004-252 broker=2] Log loaded for partition test004-252 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,090] INFO [Partition test004-318 broker=2] Log loaded for partition test004-318 with initial high watermark 0 
(kafka.cluster.Partition) [2023-08-08 16:07:51,090] INFO [Partition test004-384 broker=2] Log loaded for partition test004-384 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,091] INFO [Partition test004-450 broker=2] Log loaded for partition test004-450 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,091] INFO [Partition test005-54 broker=2] Log loaded for partition test005-54 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,091] INFO [Partition test005-120 broker=2] Log loaded for partition test005-120 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,092] INFO [Partition test123-21 broker=2] Log loaded for partition test123-21 with initial high watermark 189660 (kafka.cluster.Partition) [2023-08-08 16:07:51,092] INFO [Partition test005-186 broker=2] Log loaded for partition test005-186 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,092] INFO [Partition test004-120 broker=2] Log loaded for partition test004-120 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,093] INFO [Partition test005-252 broker=2] Log loaded for partition test005-252 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,093] INFO [Partition test004-186 broker=2] Log loaded for partition test004-186 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,093] INFO [Partition test004-516 broker=2] Log loaded for partition test004-516 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,094] INFO [Partition test004-714 broker=2] Log loaded for partition test004-714 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,094] INFO [Partition test004-317 broker=2] Log loaded for partition test004-317 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,095] INFO [Partition test004-383 broker=2] Log loaded for partition test004-383 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,095] INFO [Partition __consumer_offsets-8 broker=2] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,095] INFO [Partition test005-53 broker=2] Log loaded for partition test005-53 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,096] INFO [Partition test004-185 broker=2] Log loaded for partition test004-185 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,096] INFO [Partition test004-251 broker=2] Log loaded for partition test004-251 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,096] INFO [Partition test005-251 broker=2] Log loaded for partition test005-251 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,097] INFO [Partition __consumer_offsets-11 broker=2] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,097] INFO [Partition test004-453 broker=2] Log loaded for partition test004-453 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,097] INFO [Partition test004-519 broker=2] Log loaded for partition test004-519 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,097] INFO [Partition test004-585 broker=2] Log loaded for partition test004-585 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,098] INFO 
[Partition test004-188 broker=2] Log loaded for partition test004-188 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,098] INFO [Partition test005-320 broker=2] Log loaded for partition test005-320 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,099] INFO [Partition test004-320 broker=2] Log loaded for partition test004-320 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,099] INFO [Partition test004-386 broker=2] Log loaded for partition test004-386 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,099] INFO [Partition test005-122 broker=2] Log loaded for partition test005-122 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,100] INFO [Partition test123-23 broker=2] Log loaded for partition test123-23 with initial high watermark 293867 (kafka.cluster.Partition) [2023-08-08 16:07:51,100] INFO [Partition test004-56 broker=2] Log loaded for partition test004-56 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,100] INFO [Partition test004-122 broker=2] Log loaded for partition test004-122 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,101] INFO [Partition test004-716 broker=2] Log loaded for partition test004-716 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,101] INFO [Partition __consumer_offsets-10 broker=2] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,101] INFO [Partition test004-584 broker=2] Log loaded for partition test004-584 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,102] INFO [Partition test004-650 broker=2] Log loaded for partition test004-650 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,102] INFO [Partition test004-253 broker=2] Log loaded for partition test004-253 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,102] INFO [Partition test005-253 broker=2] Log loaded for partition test005-253 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,103] INFO [Partition test005-319 broker=2] Log loaded for partition test005-319 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,103] INFO [Partition test004-451 broker=2] Log loaded for partition test004-451 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,103] INFO [Partition test004-55 broker=2] Log loaded for partition test004-55 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,103] INFO [Partition test005-55 broker=2] Log loaded for partition test005-55 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,104] INFO [Partition test005-187 broker=2] Log loaded for partition test005-187 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,104] INFO [Partition test005-174 broker=2] Log loaded for partition test005-174 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,104] INFO [Partition test004-108 broker=2] Log loaded for partition test004-108 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,105] INFO [Partition test005-240 broker=2] Log loaded for partition test005-240 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,105] INFO [Partition __consumer_offsets-46 broker=2] Log loaded for partition 
__consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,105] INFO [Partition test005-108 broker=2] Log loaded for partition test005-108 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,106] INFO [Partition test004-636 broker=2] Log loaded for partition test004-636 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,106] INFO [Partition test004-702 broker=2] Log loaded for partition test004-702 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,106] INFO [Partition test004-372 broker=2] Log loaded for partition test004-372 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,107] INFO [Partition test004-438 broker=2] Log loaded for partition test004-438 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,107] INFO [Partition test005-173 broker=2] Log loaded for partition test005-173 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,107] INFO [Partition __consumer_offsets-45 broker=2] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,108] INFO [Partition test005-239 broker=2] Log loaded for partition test005-239 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,108] INFO [Partition test004-305 broker=2] Log loaded for partition test004-305 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,108] INFO [Partition test004-41 broker=2] Log loaded for partition test004-41 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,109] INFO [Partition test005-41 broker=2] Log loaded for partition test005-41 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,109] INFO [Partition test004-107 broker=2] Log loaded for partition test004-107 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,109] INFO [Partition test005-107 broker=2] Log loaded for partition test005-107 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,110] INFO [Partition test123-8 broker=2] Log loaded for partition test123-8 with initial high watermark 174702 (kafka.cluster.Partition) [2023-08-08 16:07:51,110] INFO [Partition test004-503 broker=2] Log loaded for partition test004-503 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,110] INFO [Partition test004-635 broker=2] Log loaded for partition test004-635 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,110] INFO [Partition test123-11 broker=2] Log loaded for partition test123-11 with initial high watermark 293925 (kafka.cluster.Partition) [2023-08-08 16:07:51,111] INFO [Partition __consumer_offsets-48 broker=2] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,111] INFO [Partition test004-44 broker=2] Log loaded for partition test004-44 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,111] INFO [Partition test005-242 broker=2] Log loaded for partition test005-242 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,112] INFO [Partition test004-176 broker=2] Log loaded for partition test004-176 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,112] INFO [Partition test005-308 broker=2] Log loaded for partition test005-308 with initial high watermark 0 
(kafka.cluster.Partition) [2023-08-08 16:07:51,112] INFO [Partition test004-242 broker=2] Log loaded for partition test004-242 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,113] INFO [Partition test005-44 broker=2] Log loaded for partition test005-44 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,113] INFO [Partition test004-572 broker=2] Log loaded for partition test004-572 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,113] INFO [Partition test004-704 broker=2] Log loaded for partition test004-704 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,114] INFO [Partition test004-308 broker=2] Log loaded for partition test004-308 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,114] INFO [Partition test004-374 broker=2] Log loaded for partition test004-374 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,114] INFO [Partition test004-506 broker=2] Log loaded for partition test004-506 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,115] INFO [Partition __consumer_offsets-47 broker=2] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,115] INFO [Partition test004-109 broker=2] Log loaded for partition test004-109 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,115] INFO [Partition test005-109 broker=2] Log loaded for partition test005-109 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,116] INFO [Partition test123-10 broker=2] Log loaded for partition test123-10 with initial high watermark 258042 (kafka.cluster.Partition) [2023-08-08 16:07:51,116] INFO [Partition test004-175 broker=2] Log loaded for partition test004-175 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,116] INFO [Partition test005-175 broker=2] Log loaded for partition test005-175 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,117] INFO [Partition test004-241 broker=2] Log loaded for partition test004-241 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,117] INFO [Partition test004-307 broker=2] Log loaded for partition test004-307 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,117] INFO [Partition test005-307 broker=2] Log loaded for partition test005-307 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,118] INFO [Partition test004-43 broker=2] Log loaded for partition test004-43 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,118] INFO [Partition test005-43 broker=2] Log loaded for partition test005-43 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,118] INFO [Partition test004-637 broker=2] Log loaded for partition test004-637 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,119] INFO [Partition test004-439 broker=2] Log loaded for partition test004-439 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,119] INFO [Partition test004-505 broker=2] Log loaded for partition test004-505 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,119] INFO [Partition test004-571 broker=2] Log loaded for partition test004-571 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,120] INFO [Partition test005-112 
broker=2] Log loaded for partition test005-112 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,120] INFO [Partition test123-13 broker=2] Log loaded for partition test123-13 with initial high watermark 259736 (kafka.cluster.Partition) [2023-08-08 16:07:51,120] INFO [Partition test004-46 broker=2] Log loaded for partition test004-46 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,121] INFO [Partition test005-178 broker=2] Log loaded for partition test005-178 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,121] INFO [Partition test004-178 broker=2] Log loaded for partition test004-178 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,121] INFO [Partition test004-640 broker=2] Log loaded for partition test004-640 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,122] INFO [Partition test004-706 broker=2] Log loaded for partition test004-706 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,122] INFO [Partition test004-244 broker=2] Log loaded for partition test004-244 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,122] INFO [Partition test004-310 broker=2] Log loaded for partition test004-310 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,122] INFO [Partition test004-376 broker=2] Log loaded for partition test004-376 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,123] INFO [Partition test004-45 broker=2] Log loaded for partition test004-45 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,123] INFO [Partition test005-45 broker=2] Log loaded for partition test005-45 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,123] INFO [Partition test004-111 broker=2] Log loaded for partition test004-111 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,124] INFO [Partition test005-111 broker=2] Log loaded for partition test005-111 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,124] INFO [Partition test004-177 broker=2] Log loaded for partition test004-177 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,125] INFO [Partition test005-177 broker=2] Log loaded for partition test005-177 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,125] INFO [Partition test005-243 broker=2] Log loaded for partition test005-243 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,125] INFO [Partition __consumer_offsets-49 broker=2] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,126] INFO [Partition test004-573 broker=2] Log loaded for partition test004-573 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,126] INFO [Partition test004-639 broker=2] Log loaded for partition test004-639 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,126] INFO [Partition test004-705 broker=2] Log loaded for partition test004-705 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,127] INFO [Partition test004-309 broker=2] Log loaded for partition test004-309 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,127] INFO [Partition test005-309 broker=2] Log loaded for partition test005-309 with initial high watermark 0 
(kafka.cluster.Partition) [2023-08-08 16:07:51,127] INFO [Partition test004-375 broker=2] Log loaded for partition test004-375 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,128] INFO [Partition test004-441 broker=2] Log loaded for partition test004-441 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,128] INFO [Partition test004-507 broker=2] Log loaded for partition test004-507 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,130] INFO [Partition test005-48 broker=2] Log loaded for partition test005-48 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,131] INFO [Partition test005-114 broker=2] Log loaded for partition test005-114 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,131] INFO [Partition test004-444 broker=2] Log loaded for partition test004-444 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,131] INFO [Partition test004-510 broker=2] Log loaded for partition test004-510 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,132] INFO [Partition test004-576 broker=2] Log loaded for partition test004-576 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,132] INFO [Partition test004-642 broker=2] Log loaded for partition test004-642 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,133] INFO [Partition test005-246 broker=2] Log loaded for partition test005-246 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,133] INFO [Partition test004-180 broker=2] Log loaded for partition test004-180 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,134] INFO [Partition test005-312 broker=2] Log loaded for partition test005-312 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,134] INFO [Partition test004-246 broker=2] Log loaded for partition test004-246 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,134] INFO [Partition test004-312 broker=2] Log loaded for partition test004-312 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,135] INFO [Partition test005-47 broker=2] Log loaded for partition test005-47 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,135] INFO [Partition test004-113 broker=2] Log loaded for partition test004-113 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,135] INFO [Partition test123-14 broker=2] Log loaded for partition test123-14 with initial high watermark 293850 (kafka.cluster.Partition) [2023-08-08 16:07:51,136] INFO [Partition test004-708 broker=2] Log loaded for partition test004-708 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,136] INFO [Partition test004-509 broker=2] Log loaded for partition test004-509 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,136] INFO [Partition test004-575 broker=2] Log loaded for partition test004-575 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,136] INFO [Partition test004-245 broker=2] Log loaded for partition test004-245 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,137] INFO [Partition test005-245 broker=2] Log loaded for partition test005-245 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,137] INFO [Partition test005-311 broker=2] Log loaded 
for partition test005-311 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,137] INFO [Partition test004-443 broker=2] Log loaded for partition test004-443 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,138] INFO [Partition test004-364 broker=2] Log loaded for partition test004-364 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,138] INFO [Partition test004-430 broker=2] Log loaded for partition test004-430 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,138] INFO [Partition test004-496 broker=2] Log loaded for partition test004-496 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,139] INFO [Partition test004-562 broker=2] Log loaded for partition test004-562 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,139] INFO [Partition test005-166 broker=2] Log loaded for partition test005-166 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,149] INFO [Partition test004-100 broker=2] Log loaded for partition test004-100 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,150] INFO [Partition test004-298 broker=2] Log loaded for partition test004-298 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,150] INFO [Partition test004-33 broker=2] Log loaded for partition test004-33 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,151] INFO [Partition test005-99 broker=2] Log loaded for partition test005-99 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,151] INFO [Partition test123-0 broker=2] Log loaded for partition test123-0 with initial high watermark 163200 (kafka.cluster.Partition) [2023-08-08 16:07:51,151] INFO [Partition test004-429 broker=2] Log loaded for partition test004-429 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,152] INFO [Partition test004-495 broker=2] Log loaded for partition test004-495 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,152] INFO [Partition test004-561 broker=2] Log loaded for partition test004-561 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,152] INFO [Partition test004-627 broker=2] Log loaded for partition test004-627 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,153] INFO [Partition test004-165 broker=2] Log loaded for partition test004-165 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,153] INFO [Partition test005-165 broker=2] Log loaded for partition test005-165 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,153] INFO [Partition test004-231 broker=2] Log loaded for partition test004-231 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,154] INFO [Partition test005-231 broker=2] Log loaded for partition test005-231 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,154] INFO [Partition test004-297 broker=2] Log loaded for partition test004-297 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,154] INFO [Partition test005-297 broker=2] Log loaded for partition test005-297 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,155] INFO [Partition test005-32 broker=2] Log loaded for partition test005-32 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 
16:07:51,155] INFO [Partition test004-32 broker=2] Log loaded for partition test004-32 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,155] INFO [Partition test004-98 broker=2] Log loaded for partition test004-98 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,156] INFO [Partition test004-693 broker=2] Log loaded for partition test004-693 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,156] INFO [Partition test004-432 broker=2] Log loaded for partition test004-432 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,156] INFO [Partition test004-498 broker=2] Log loaded for partition test004-498 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,156] INFO [Partition test005-102 broker=2] Log loaded for partition test005-102 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,157] INFO [Partition test123-3 broker=2] Log loaded for partition test123-3 with initial high watermark 259904 (kafka.cluster.Partition) [2023-08-08 16:07:51,157] INFO [Partition test004-36 broker=2] Log loaded for partition test004-36 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,157] INFO [Partition test004-102 broker=2] Log loaded for partition test004-102 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,158] INFO [Partition test005-234 broker=2] Log loaded for partition test005-234 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,158] INFO [Partition test005-300 broker=2] Log loaded for partition test005-300 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,158] INFO [Partition test004-234 broker=2] Log loaded for partition test004-234 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,159] INFO [Partition test005-35 broker=2] Log loaded for partition test005-35 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,159] INFO [Partition test004-696 broker=2] Log loaded for partition test004-696 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,159] INFO [Partition test004-365 broker=2] Log loaded for partition test004-365 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,160] INFO [Partition test004-101 broker=2] Log loaded for partition test004-101 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,160] INFO [Partition test123-2 broker=2] Log loaded for partition test123-2 with initial high watermark 293968 (kafka.cluster.Partition) [2023-08-08 16:07:51,160] INFO [Partition test004-167 broker=2] Log loaded for partition test004-167 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,161] INFO [Partition test004-233 broker=2] Log loaded for partition test004-233 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,161] INFO [Partition test005-233 broker=2] Log loaded for partition test005-233 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,161] INFO [Partition test005-299 broker=2] Log loaded for partition test005-299 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,162] INFO [Partition test005-34 broker=2] Log loaded for partition test005-34 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,162] INFO [Partition test005-100 broker=2] Log loaded for partition test005-100 with initial high 
watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,162] INFO [Partition test004-34 broker=2] Log loaded for partition test004-34 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,162] INFO [Partition test004-629 broker=2] Log loaded for partition test004-629 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,163] INFO [Partition test004-695 broker=2] Log loaded for partition test004-695 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,163] INFO [Partition test005-302 broker=2] Log loaded for partition test005-302 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,163] INFO [Partition test004-302 broker=2] Log loaded for partition test004-302 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,164] INFO [Partition test004-368 broker=2] Log loaded for partition test004-368 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,164] INFO [Partition test005-38 broker=2] Log loaded for partition test005-38 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,164] INFO [Partition test005-104 broker=2] Log loaded for partition test005-104 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,165] INFO [Partition test004-38 broker=2] Log loaded for partition test004-38 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,165] INFO [Partition test005-170 broker=2] Log loaded for partition test005-170 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,165] INFO [Partition test004-170 broker=2] Log loaded for partition test004-170 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,166] INFO [Partition test004-500 broker=2] Log loaded for partition test004-500 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,166] INFO [Partition test004-566 broker=2] Log loaded for partition test004-566 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,166] INFO [Partition test004-632 broker=2] Log loaded for partition test004-632 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,167] INFO [Partition test004-698 broker=2] Log loaded for partition test004-698 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,167] INFO [Partition test004-301 broker=2] Log loaded for partition test004-301 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,168] INFO [Partition test004-367 broker=2] Log loaded for partition test004-367 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,168] INFO [Partition test004-433 broker=2] Log loaded for partition test004-433 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,168] INFO [Partition test005-37 broker=2] Log loaded for partition test005-37 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,169] INFO [Partition test004-103 broker=2] Log loaded for partition test004-103 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,169] INFO [Partition test123-4 broker=2] Log loaded for partition test123-4 with initial high watermark 293715 (kafka.cluster.Partition) [2023-08-08 16:07:51,170] INFO [Partition test004-169 broker=2] Log loaded for partition test004-169 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,171] INFO [Partition test005-169 broker=2] Log 
loaded for partition test005-169 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,172] INFO [Partition test004-235 broker=2] Log loaded for partition test004-235 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,172] INFO [Partition test005-235 broker=2] Log loaded for partition test005-235 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,172] INFO [Partition test004-565 broker=2] Log loaded for partition test004-565 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,173] INFO [Partition test004-631 broker=2] Log loaded for partition test004-631 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,173] INFO [Partition test005-238 broker=2] Log loaded for partition test005-238 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,174] INFO [Partition test004-172 broker=2] Log loaded for partition test004-172 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,174] INFO [Partition test005-304 broker=2] Log loaded for partition test005-304 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,174] INFO [Partition test004-238 broker=2] Log loaded for partition test004-238 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,175] INFO [Partition test004-304 broker=2] Log loaded for partition test004-304 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,175] INFO [Partition test004-370 broker=2] Log loaded for partition test004-370 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,176] INFO [Partition test005-106 broker=2] Log loaded for partition test005-106 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,176] INFO [Partition test123-7 broker=2] Log loaded for partition test123-7 with initial high watermark 294016 (kafka.cluster.Partition) [2023-08-08 16:07:51,176] INFO [Partition test005-172 broker=2] Log loaded for partition test005-172 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,177] INFO [Partition test004-700 broker=2] Log loaded for partition test004-700 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,177] INFO [Partition test004-436 broker=2] Log loaded for partition test004-436 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,177] INFO [Partition test004-502 broker=2] Log loaded for partition test004-502 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,178] INFO [Partition test004-568 broker=2] Log loaded for partition test004-568 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,178] INFO [Partition test004-634 broker=2] Log loaded for partition test004-634 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,178] INFO [Partition test004-237 broker=2] Log loaded for partition test004-237 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,179] INFO [Partition test005-303 broker=2] Log loaded for partition test005-303 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,179] INFO [Partition test004-369 broker=2] Log loaded for partition test004-369 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,180] INFO [Partition test004-435 broker=2] Log loaded for partition test004-435 with initial high watermark 0 (kafka.cluster.Partition) 
[2023-08-08 16:07:51,180] INFO [Partition test004-39 broker=2] Log loaded for partition test004-39 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,181] INFO [Partition test005-39 broker=2] Log loaded for partition test005-39 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,181] INFO [Partition test004-105 broker=2] Log loaded for partition test004-105 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,182] INFO [Partition test004-171 broker=2] Log loaded for partition test004-171 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,182] INFO [Partition test004-567 broker=2] Log loaded for partition test004-567 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,182] INFO [Partition test004-699 broker=2] Log loaded for partition test004-699 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:07:51,185] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test004-620, test004-686, test004-356, test004-488, test004-554, test004-157, test005-157, test123-58, test004-223, test005-223, __consumer_offsets-30, test004-289, test004-355, test005-355, test008-25, test009-25, test004-25, test005-25, test004-91, test004-421, test004-487, test010-23, test005-288, __consumer_offsets-29, test004-222, test005-354, test004-288, test005-90, test004-24, test004-556, test004-622, test005-358, test004-292, test004-358, test004-424, test004-93, test005-93, __consumer_offsets-32, test004-159, test010-26, test005-159, test004-225, test005-225, test004-291, test005-291, test008-27, test009-27, test004-27, test005-27, test004-621, test004-687, test-0, test004-357, test005-357, test004-489, test005-158, test123-59, __consumer_offsets-31, test004-92, test005-224, test004-158, test010-25, test005-290, test009-26, test005-26, test005-92, test004-558, test004-624, test-3, test005-294, test004-228, test004-294, test004-426, test004-161, test010-28, test005-161, test004-227, __consumer_offsets-34, test004-557, test004-689, test-2, test005-293, test004-425, test004-491, test005-94, test004-28, test004-94, test005-226, test009-28, test005-28, __consumer_offsets-33, test004-494, test004-560, test004-230, test004-296, test004-362, test004-31, test005-31, test005-97, test004-163, test005-163, test004-692, __consumer_offsets-36, test004-493, test004-625, test004-691, test005-229, test005-295, test004-361, test004-427, test005-96, test005-162, test004-96, test005-228, test004-162, test010-29, test-4, __consumer_offsets-35, test004-17, test005-17, test004-83, test005-83, test004-612, test004-678, test004-413, test004-479, test004-545, test010-16, test004-215, test-7, test005-215, __consumer_offsets-38, test004-281, test005-281, test004-347, test005-347, test005-16, test004-16, test005-148, test123-49, test004-677, test009-16, test004-544, test004-610, test004-148, __consumer_offsets-37, test-6, test005-346, test004-346, test009-19, test005-19, test004-548, test004-349, test005-349, test004-85, __consumer_offsets-40, test004-151, test005-151, test-9, test005-217, test004-283, test009-18, test005-84, test004-18, test004-613, test004-679, test004-348, test004-414, test004-480, test004-546, test123-51, __consumer_offsets-39, test010-17, test005-282, test004-216, test009-21, __consumer_offsets-42, test004-484, test004-682, test004-285, test005-285, test004-351, test004-483, test004-21, test005-21, test005-87, test010-20, test123-54, test004-219, test-11, 
test005-219, test008-20, test005-20, __consumer_offsets-41, test004-549, test004-615, test005-350, test004-284, test004-416, test004-482, test005-86, test004-20, test005-152, test123-53, test004-86, test005-218, test004-152, test010-19, test005-284, test004-218, test004-684, __consumer_offsets-44, test008-23, test004-420, test004-552, test004-618, test004-221, test-13, test004-287, test005-287, test005-353, test004-419, test004-89, test004-155, test005-155, test123-56, test009-22, __consumer_offsets-43, test004-551, test004-617, test004-683, test-12, test005-352, test004-352, test004-418, test005-22, test005-88, test004-22, test005-154, test004-88, test005-220, test004-154, test010-21, test004-141, test010-8, test005-141, test004-207, __consumer_offsets-13, test005-207, test005-273, test005-339, test009-9, test004-9, test005-9, test004-75, test004-669, test009-8, test004-471, test005-206, test010-7, test005-272, test004-206, test004-272, __consumer_offsets-12, test004-338, test005-8, test005-74, test005-140, test004-668, test008-7, test009-7, test004-470, test004-536, test004-602, __consumer_offsets-15, test005-77, test004-143, test010-10, test123-44, test004-275, test004-11, test004-605, test004-671, test-17, test004-341, test005-341, test004-407, test004-539, test005-142, test123-43, test004-76, test005-208, test010-9, __consumer_offsets-14, test005-274, test004-208, test004-274, test009-10, test008-10, test005-10, test005-76, test004-604, test-16, test004-340, test004-406, test004-472, test004-538, test005-13, test004-79, test005-79, test004-145, test005-145, test004-211, test-20, test008-13, __consumer_offsets-17, test004-673, test-19, test004-277, test005-277, test004-343, test004-409, test004-475, test004-12, __consumer_offsets-16, test123-45, test004-78, test005-210, test004-144, test005-276, test009-12, test005-12, test004-540, test004-606, test005-342, test004-276, test004-342, test004-408, test004-474, test004-81, test005-81, test004-147, test010-14, test005-147, test123-48, __consumer_offsets-19, test008-15, test009-15, test004-477, test004-609, test004-675, test004-213, test005-213, test004-279, test005-279, test004-411, test005-80, test004-14, test005-146, test123-47, test005-212, test010-13, test-21, test009-14, __consumer_offsets-18, test004-542, test004-608, test004-674, test004-212, test005-344, test004-463, test004-595, test004-133, test005-133, test123-34, test004-199, __consumer_offsets-21, test004-265, test004-331, test005-331, test005-0, test005-66, test004-66, test004-661, test009-0, test004-396, test004-462, test004-528, test004-594, test005-198, test005-264, test-23, test004-264, __consumer_offsets-20, test004-330, test004-65, test005-65, test004-131, test004-660, test005-333, test004-399, __consumer_offsets-23, test004-135, test010-2, test004-201, test004-267, test-26, test005-68, test004-2, test004-663, test004-398, test004-530, test005-134, test123-35, test005-200, test004-134, test010-1, __consumer_offsets-22, test005-266, test-25, test004-200, test005-332, test008-1, test009-1, test004-1, test005-1, test004-67, test004-269, test005-269, test-28, test004-335, test005-335, test004-401, test004-467, test004-71, test005-71, test010-4, test005-137, test123-38, test005-203, test005-4, __consumer_offsets-26, test004-533, test004-599, test004-334, test004-466, test005-70, test004-4, __consumer_offsets-24, test005-136, test123-37, test004-70, test005-202, test004-136, test005-268, test-27, test004-202, test009-3, test004-3, __consumer_offsets-25, test005-3, 
test004-532, test004-598, test004-664, test004-205, test004-271, test005-271, test005-337, test004-403, test004-7, test004-73, test004-139, test123-40, __consumer_offsets-28, test005-336, test004-336, test004-402, test005-6, test005-72, test004-6, test005-138, test123-39, test004-72, test005-204, test004-138, test010-5, test008-5, __consumer_offsets-27, test009-5, test004-468, test004-534, test004-600, test004-666, test004-653, test004-719, test004-389, test004-521, test005-190, test004-124, test005-256, test004-190, test005-322, test004-322, test005-58, test005-124, test004-652, test004-718, test004-388, test004-454, test004-586, test005-189, test004-255, test004-321, test005-321, test004-57, test005-57, test004-123, test123-24, test004-589, test004-655, test004-325, test004-391, test004-457, test004-523, test005-126, test004-60, test004-126, test005-258, test004-192, test005-324, test004-258, test005-60, test004-654, test004-390, test004-456, test004-522, test005-125, test123-26, test004-191, test004-257, test005-257, test004-59, test004-591, test004-393, test004-459, test005-128, test123-29, test005-194, test004-194, __consumer_offsets-1, test004-590, test005-326, test004-260, test004-326, test004-61, test005-61, __consumer_offsets-0, test004-127, test123-28, test005-193, test005-259, test004-461, test004-527, test004-659, test004-263, test005-263, test004-329, test005-329, test005-64, test005-130, test123-31, test004-64, test005-196, __consumer_offsets-3, test004-526, test004-592, test004-658, test005-262, test004-196, test005-328, test004-262, test004-328, test004-394, test004-129, test005-129, test123-30, test004-195, test005-195, __consumer_offsets-2, test123-17, test004-50, test004-711, test004-380, test004-446, test004-578, test005-182, test004-116, test005-248, __consumer_offsets-5, test004-182, test005-314, test004-314, test004-49, test005-49, test004-115, test005-115, test123-16, test004-710, test004-511, test004-643, test005-181, test004-247, __consumer_offsets-4, test004-379, test005-52, test004-647, test004-713, test004-382, test004-448, test004-514, test005-118, test123-19, __consumer_offsets-7, test004-52, test005-184, test004-118, test005-250, test005-316, test004-51, test004-580, test004-646, test004-447, test004-513, test004-579, test004-117, test005-117, test123-18, test004-183, test005-183, __consumer_offsets-6, test004-249, test004-315, test005-315, __consumer_offsets-9, test004-517, test004-583, test004-649, test004-252, test004-318, test004-384, test004-450, test005-54, test005-120, test123-21, test005-186, test004-120, test005-252, test004-186, test004-516, test004-714, test004-317, test004-383, test005-53, __consumer_offsets-8, test004-185, test004-251, test005-251, __consumer_offsets-11, test004-453, test004-519, test004-585, test004-188, test005-320, test004-320, test004-386, test005-122, test123-23, test004-56, test004-122, test004-716, __consumer_offsets-10, test004-584, test004-650, test004-253, test005-253, test005-319, test004-451, test004-55, test005-55, test005-187, test005-174, test004-108, test005-240, __consumer_offsets-46, test005-108, test004-636, test004-702, test004-372, test004-438, test005-173, __consumer_offsets-45, test005-239, test004-305, test004-41, test005-41, test004-107, test005-107, test123-8, test004-503, test004-635, test123-11, test004-44, __consumer_offsets-48, test005-242, test004-176, test005-308, test004-242, test005-44, test004-572, test004-704, test004-308, test004-374, test004-506, test004-109, __consumer_offsets-47, 
test005-109, test123-10, test004-175, test005-175, test004-241, test004-307, test005-307, test004-43, test005-43, test004-637, test004-439, test004-505, test004-571, test005-112, test123-13, test004-46, test005-178, test004-178, test004-640, test004-706, test004-244, test004-310, test004-376, test004-45, test005-45, test004-111, test005-111, test004-177, test005-177, test005-243, __consumer_offsets-49, test004-573, test004-639, test004-705, test004-309, test005-309, test004-375, test004-441, test004-507, test005-48, test005-114, test004-444, test004-510, test004-576, test004-642, test005-246, test004-180, test005-312, test004-246, test004-312, test005-47, test004-113, test123-14, test004-708, test004-509, test004-575, test004-245, test005-245, test005-311, test004-443, test004-364, test004-430, test004-496, test004-562, test005-166, test004-100, test004-298, test004-33, test005-99, test123-0, test004-429, test004-495, test004-561, test004-627, test004-165, test005-165, test004-231, test005-231, test004-297, test005-297, test005-32, test004-32, test004-98, test004-693, test004-432, test004-498, test005-102, test123-3, test004-36, test004-102, test005-234, test005-300, test004-234, test005-35, test004-696, test004-365, test004-101, test123-2, test004-167, test004-233, test005-233, test005-299, test005-34, test005-100, test004-34, test004-629, test004-695, test005-302, test004-302, test004-368, test005-38, test005-104, test004-38, test005-170, test004-170, test004-500, test004-566, test004-632, test004-698, test004-301, test004-367, test004-433, test005-37, test004-103, test123-4, test004-169, test005-169, test004-235, test005-235, test004-565, test004-631, test005-238, test004-172, test005-304, test004-238, test004-304, test004-370, test005-106, test123-7, test005-172, test004-700, test004-436, test004-502, test004-568, test004-634, test004-237, test005-303, test004-369, test004-435, test004-39, test005-39, test004-105, test004-171, test004-567, test004-699) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:07:51,218] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 13 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,219] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-13 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,219] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 46 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,219] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,219] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 9 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,219] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,219] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 42 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,219] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,219] INFO [GroupCoordinator 2]: Resigned as the group 
coordinator for partition 21 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,219] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,223] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-13 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,223] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-46 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,223] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-9 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,223] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-42 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,225] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 17 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,225] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,225] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 30 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,225] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,225] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 26 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,225] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,225] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 5 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,225] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,225] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 38 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,225] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,225] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 1 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,225] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,225] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 34 in epoch OptionalInt[46] 
(kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-34 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 16 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 45 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 12 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 41 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 24 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 20 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 49 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 0 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 29 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as 
the group coordinator for partition 25 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 8 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 37 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-37 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 4 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 33 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 15 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 48 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 11 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 44 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 23 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-23 (kafka.coordinator.group.GroupMetadataManager) 
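The run of GroupCoordinator / GroupMetadataManager records above and below this point shows broker 2 resigning as group coordinator for individual __consumer_offsets partitions and unloading their cached offsets and group metadata. Each consumer group is assigned to one of the __consumer_offsets partitions (offsets.topic.num.partitions = 50 in the KafkaConfig dump further down) by hashing the group id modulo the partition count, and whichever broker currently leads that partition acts as the group's coordinator; when that leadership moves away during a restart like this one, the old coordinator resigns and drops its cache. Not part of the log: a minimal Java AdminClient sketch for checking where a group's coordinator currently lives, assuming a placeholder group name "example-group" and using this broker's advertised listener 10.58.12.165:9092 (printed later in the log) as the bootstrap address.

    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ConsumerGroupDescription;
    import org.apache.kafka.common.Node;

    public class FindGroupCoordinator {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Bootstrap address taken from this broker's advertised PLAINTEXT listener.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.58.12.165:9092");

            try (Admin admin = Admin.create(props)) {
                // "example-group" is a placeholder; substitute an actual consumer group id.
                ConsumerGroupDescription description = admin
                        .describeConsumerGroups(List.of("example-group"))
                        .describedGroups()
                        .get("example-group")
                        .get();

                // The coordinator is the broker leading this group's __consumer_offsets partition.
                Node coordinator = description.coordinator();
                System.out.printf("Group coordinator is broker %d at %s:%d%n",
                        coordinator.id(), coordinator.host(), coordinator.port());
            }
        }
    }

Run against the restarted cluster, this would report whichever of brokers 1, 2, or 3 picked up the corresponding __consumer_offsets leadership; the values in the resignation records (OptionalInt[42], [45], [46], [47]) are the per-partition coordinator epochs, which advance as leadership of the corresponding __consumer_offsets partitions changes.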
[2023-08-08 16:07:51,226] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 19 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,226] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-19 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 32 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-32 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 28 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 7 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-7 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 40 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-40 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 3 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 36 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-36 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 47 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 14 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-14 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 43 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from 
__consumer_offsets-43 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 10 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 22 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 18 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 31 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-31 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 27 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-27 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 39 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 6 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 35 in epoch OptionalInt[45] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,227] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 2 in epoch OptionalInt[42] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:51,227] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-21 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-17 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-30 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-26 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-5 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-38 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-1 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-34 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-16 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-45 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-12 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-41 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-24 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-20 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-49 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-0 for coordinator epoch OptionalInt[46]. 
Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-29 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-25 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-8 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,229] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-37 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-4 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-33 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-15 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-48 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-11 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-44 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-23 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-19 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-32 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-28 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-7 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-40 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-3 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-36 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-47 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-14 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-43 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-10 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-22 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-18 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-31 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-27 for coordinator epoch OptionalInt[45]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-39 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-6 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-35 for coordinator epoch OptionalInt[45]. 
Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,230] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-2 for coordinator epoch OptionalInt[42]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:51,233] INFO [DynamicConfigPublisher broker id=2] Updating topic test008 with new configuration : retention.ms -> 1800000 (kafka.server.metadata.DynamicConfigPublisher) [2023-08-08 16:07:51,241] INFO [DynamicConfigPublisher broker id=2] Updating topic test009 with new configuration : retention.ms -> 1800000 (kafka.server.metadata.DynamicConfigPublisher) [2023-08-08 16:07:51,242] INFO [DynamicConfigPublisher broker id=2] Updating topic test with new configuration : retention.ms -> 1800000 (kafka.server.metadata.DynamicConfigPublisher) [2023-08-08 16:07:51,243] INFO [DynamicConfigPublisher broker id=2] Updating topic test004 with new configuration : retention.ms -> 1800000 (kafka.server.metadata.DynamicConfigPublisher) [2023-08-08 16:07:51,245] INFO [DynamicConfigPublisher broker id=2] Updating topic test005 with new configuration : retention.ms -> 1800000 (kafka.server.metadata.DynamicConfigPublisher) [2023-08-08 16:07:51,246] INFO [DynamicConfigPublisher broker id=2] Updating topic test123 with new configuration : retention.ms -> 1800000 (kafka.server.metadata.DynamicConfigPublisher) [2023-08-08 16:07:51,247] INFO [DynamicConfigPublisher broker id=2] Updating topic __consumer_offsets with new configuration : compression.type -> producer,cleanup.policy -> compact,segment.bytes -> 104857600 (kafka.server.metadata.DynamicConfigPublisher) [2023-08-08 16:07:51,272] INFO [BrokerLifecycleManager id=2] The broker has caught up. Transitioning from STARTING to RECOVERY. 
(kafka.server.BrokerLifecycleManager) [2023-08-08 16:07:51,277] INFO [BrokerServer id=2] Finished waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer) [2023-08-08 16:07:51,277] INFO [BrokerServer id=2] Waiting for the initial broker metadata update to be published (kafka.server.BrokerServer) [2023-08-08 16:07:51,277] INFO [BrokerServer id=2] Finished waiting for the initial broker metadata update to be published (kafka.server.BrokerServer) [2023-08-08 16:07:51,279] INFO KafkaConfig values: advertised.listeners = PLAINTEXT://10.58.12.165:9092 alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = false auto.include.jmx.reporter = true auto.leader.rebalance.enable = true background.threads = 10 broker.heartbeat.interval.ms = 2000 broker.id = 2 broker.id.generation.enable = true broker.rack = null broker.session.timeout.ms = 9000 client.quota.callback.class = null compression.type = producer connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 control.plane.listener.name = null controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.listener.names = CONTROLLER controller.quorum.append.linger.ms = 25 controller.quorum.election.backoff.max.ms = 1000 controller.quorum.election.timeout.ms = 1000 controller.quorum.fetch.timeout.ms = 2000 controller.quorum.request.timeout.ms = 2000 controller.quorum.retry.backoff.ms = 20 controller.quorum.voters = [1@10.58.16.231:9093, 2@10.58.12.165:9093, 3@10.58.12.217:9093] controller.quota.window.num = 11 controller.quota.window.size.seconds = 1 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 2 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delegation.token.secret.key = null delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true early.start.listeners = null fetch.max.bytes = 57671680 fetch.purgatory.purge.interval.requests = 1000 group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] group.consumer.heartbeat.interval.ms = 5000 group.consumer.max.heartbeat.interval.ms = 15000 group.consumer.max.session.timeout.ms = 60000 group.consumer.max.size = 2147483647 group.consumer.min.heartbeat.interval.ms = 5000 group.consumer.min.session.timeout.ms = 45000 group.consumer.session.timeout.ms = 45000 group.coordinator.new.enable = false group.coordinator.threads = 1 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 1800000 group.max.size = 2147483647 group.min.session.timeout.ms = 6000 initial.broker.registration.timeout.ms = 60000 inter.broker.listener.name = PLAINTEXT inter.broker.protocol.version = 3.6-IV0 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = PLAINTEXT://10.58.12.165:9092,CONTROLLER://10.58.12.165:9093 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true 
log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.max.compaction.lag.ms = 9223372036854775807 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-logs log.dirs = /data01/kafka-logs-351 log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.local.retention.bytes = -2 log.local.retention.ms = -2 log.message.downconversion.enable = true log.message.format.version = 3.0-IV1 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 72 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connection.creation.rate = 2147483647 max.connections = 2147483647 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 52428800 metadata.log.dir = null metadata.log.max.record.bytes.between.snapshots = 20971520 metadata.log.max.snapshot.interval.ms = 3600000 metadata.log.segment.bytes = 1073741824 metadata.log.segment.min.bytes = 8388608 metadata.log.segment.ms = 604800000 metadata.max.idle.interval.ms = 500 metadata.max.retention.bytes = 104857600 metadata.max.retention.ms = 604800000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 node.id = 2 num.io.threads = 8 num.network.threads = 5 num.partitions = 3 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 4320 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 3 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder process.roles = [broker, controller] producer.id.expiration.check.interval.ms = 600000 producer.id.expiration.ms = 86400000 producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.window.num = 11 quota.window.size.seconds = 1 remote.log.index.file.cache.total.size.bytes = 1073741824 remote.log.manager.task.interval.ms = 30000 remote.log.manager.task.retry.backoff.max.ms = 30000 remote.log.manager.task.retry.backoff.ms = 500 remote.log.manager.task.retry.jitter = 0.2 remote.log.manager.thread.pool.size = 10 remote.log.metadata.custom.metadata.max.bytes = 128 remote.log.metadata.manager.class.name = null 
remote.log.metadata.manager.class.path = null remote.log.metadata.manager.impl.prefix = null remote.log.metadata.manager.listener.name = null remote.log.reader.max.pending.tasks = 100 remote.log.reader.threads = 10 remote.log.storage.manager.class.name = null remote.log.storage.manager.class.path = null remote.log.storage.manager.impl.prefix = null remote.log.storage.system.enable = false replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 52428800 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 30000 replica.selector.class = null replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [GSSAPI] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism.controller.protocol = GSSAPI sasl.mechanism.inter.broker.protocol = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null sasl.server.callback.handler.class = null sasl.server.max.receive.size = 524288 security.inter.broker.protocol = PLAINTEXT security.providers = null server.max.startup.time.ms = 9223372036854775807 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 socket.listen.backlog.size = 50 socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.principal.mapping.rules = DEFAULT ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 transaction.max.timeout.ms = 900000 transaction.partition.verification.enable = true transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 1 
transaction.state.log.num.partitions = 50 transaction.state.log.replication.factor = 3 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false unstable.api.versions.enable = false zookeeper.clientCnxnSocket = null zookeeper.connect = null zookeeper.connection.timeout.ms = null zookeeper.max.in.flight.requests = 10 zookeeper.metadata.migration.enable = false zookeeper.session.timeout.ms = 18000 zookeeper.set.acl = false zookeeper.ssl.cipher.suites = null zookeeper.ssl.client.enable = false zookeeper.ssl.crl.enable = false zookeeper.ssl.enabled.protocols = null zookeeper.ssl.endpoint.identification.algorithm = HTTPS zookeeper.ssl.keystore.location = null zookeeper.ssl.keystore.password = null zookeeper.ssl.keystore.type = null zookeeper.ssl.ocsp.enable = false zookeeper.ssl.protocol = TLSv1.2 zookeeper.ssl.truststore.location = null zookeeper.ssl.truststore.password = null zookeeper.ssl.truststore.type = null (kafka.server.KafkaConfig) [2023-08-08 16:07:51,284] INFO [BrokerServer id=2] Waiting for the broker to be unfenced (kafka.server.BrokerServer) [2023-08-08 16:07:51,358] INFO [BrokerLifecycleManager id=2] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager) [2023-08-08 16:07:51,358] INFO [BrokerServer id=2] Finished waiting for the broker to be unfenced (kafka.server.BrokerServer) [2023-08-08 16:07:51,359] INFO authorizerStart completed for endpoint PLAINTEXT. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures) [2023-08-08 16:07:51,360] INFO [SocketServer listenerType=BROKER, nodeId=2] Enabling request processing. (kafka.network.SocketServer) [2023-08-08 16:07:51,360] INFO Awaiting socket connections on 10.58.12.165:9092. 
(kafka.network.DataPlaneAcceptor) [2023-08-08 16:07:51,372] INFO [BrokerServer id=2] Waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer) [2023-08-08 16:07:51,372] INFO [BrokerServer id=2] Finished waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer) [2023-08-08 16:07:51,372] INFO [BrokerServer id=2] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer) [2023-08-08 16:07:51,372] INFO [BrokerServer id=2] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer) [2023-08-08 16:07:51,372] INFO [BrokerServer id=2] Transition from STARTING to STARTED (kafka.server.BrokerServer) [2023-08-08 16:07:51,373] INFO Kafka version: 3.6.0-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser) [2023-08-08 16:07:51,373] INFO Kafka commitId: 8dec3e66163420ee (org.apache.kafka.common.utils.AppInfoParser) [2023-08-08 16:07:51,373] INFO Kafka startTimeMs: 1691482071372 (org.apache.kafka.common.utils.AppInfoParser) [2023-08-08 16:07:51,375] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test010-8, test010-9, test008-20, test010-14, test008-23, test010-1, test008-10, test010-4, test008-13, test008-15, test008-1, test010-23, test010-26, test008-5, test008-7, test010-29, test010-16, test008-25, test008-27, test010-19) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:07:51,375] INFO [KafkaRaftServer nodeId=2] Kafka Server started (kafka.server.KafkaRaftServer) [2023-08-08 16:07:51,664] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test004-620, test004-719, test004-389, test004-521, test004-124, test005-256, test005-223, test005-322, test004-355, test009-25, test004-91, test004-652, test004-421, test004-487, test004-454, test004-586, test005-189, test004-222, test005-354, test004-321, test004-288, test004-24, test005-57, test004-556, test004-655, test004-622, test004-325, test004-292, test004-93, test005-93, test004-60, test004-159, test005-159, test004-126, test004-225, test005-225, test005-291, test004-258, test009-27, test005-27, test004-357, test005-357, test004-390, test004-456, test004-522, test005-158, test123-59, test005-125, test123-26, test004-191, test004-158, test005-290, test004-257, test005-26, test005-92, test004-59, test004-591, test005-294, test004-228, test004-294, test004-459, test005-128, test123-29, test005-194, test004-194, test004-590, test004-689, test-2, test005-326, test004-425, test004-491, test004-28, test005-61, test005-259, test004-527, test004-494, test004-560, test004-659, test004-263, test004-329, test004-362, test004-31, test005-31, test005-97, test004-64, test004-163, test005-163, test004-625, test004-691, test005-262, test005-328, test004-427, test004-394, test004-129, test005-129, test123-30, test004-195, test005-228, test005-195, test-4, test005-17, test004-50, test004-612, test004-678, test004-413, test004-578, test004-116, test004-182, test005-314, test005-281, test004-347, test005-347, test004-314, test004-16, test005-49, test005-148, test123-49, test123-16, test004-677, test009-16, test004-544, test004-643, test004-148, test005-181, test-6, test004-379, test009-19, test005-52, test004-548, test004-713, test004-349, test005-349, test004-85, test005-118, test004-52, test004-151, test005-184, test005-250, test004-283, test005-84, test004-646, test004-447, test004-414, test004-513, test004-480, test004-579, test123-51, test004-117, test123-18, test005-282, test004-216, test004-315, 
test005-315, test009-21, test004-484, test004-583, test004-252, test005-21, test005-186, test005-252, test-11, test005-20, test004-516, test004-615, test004-383, test004-20, test005-152, test004-185, test005-218, test004-251, test004-684, test004-552, test004-221, test004-188, test004-287, test005-287, test004-320, test004-386, test005-122, test123-23, test004-56, test004-155, test005-155, test123-56, test004-122, test004-716, test004-551, test004-683, test004-650, test-12, test005-352, test005-319, test004-352, test004-451, test004-418, test005-88, test004-22, test005-55, test004-88, test005-220, test004-141, test005-174, test004-207, test005-240, test005-207, test005-339, test004-9, test005-9, test004-669, test004-636, test004-702, test009-8, test004-372, test004-438, test005-206, test004-206, test004-305, test004-338, test004-41, test005-140, test004-668, test004-503, test004-602, test123-11, test005-77, test123-44, test004-176, test005-308, test004-242, test004-572, test004-341, test004-308, test004-407, test004-539, test004-506, test005-142, test004-109, test004-76, test005-109, test005-274, test004-274, test009-10, test004-43, test005-43, test004-604, test-16, test004-472, test005-13, test004-79, test004-145, test-20, test004-640, test005-277, test004-244, test004-310, test004-409, test004-475, test004-45, test123-45, test004-111, test005-111, test004-177, test005-210, test005-177, test005-243, test004-540, test004-705, test004-276, test005-309, test004-375, test004-342, test004-507, test004-81, test005-114, test005-81, test004-477, test004-444, test004-510, test004-609, test004-213, test005-246, test005-213, test004-279, test004-246, test005-80, test004-14, test005-47, test005-146, test123-14, test-21, test004-708, test009-14, test004-575, test004-608, test004-674, test004-212, test005-344, test004-443, test004-364, test004-463, test004-430, test004-496, test004-595, test004-562, test123-34, test004-100, test004-199, test004-265, test004-331, test005-331, test004-298, test005-0, test004-661, test009-0, test005-165, test005-231, test005-297, test005-65, test004-98, test004-693, test004-399, test004-432, test005-102, test004-135, test004-201, test005-234, test004-267, test005-68, test004-663, test004-696, test004-398, test004-530, test005-134, test123-2, test004-167, test005-200, test004-134, test005-266, test004-233, test-25, test004-1, test005-34, test004-67, test005-100, test004-34, test004-629, test005-302, test-28, test004-335, test005-335, test004-302, test004-467, test005-71, test004-38, test005-170, test005-137, test005-203, test005-4, test004-533, test004-500, test004-566, test004-367, test004-4, test005-37, test123-37, test004-103, test004-70, test123-4, test004-169, test005-268, test004-235, test009-3, test004-631, test004-598, test005-238, test004-172, test004-271, test005-304, test005-271, test004-238, test004-403, test005-106, test123-7, test004-139, test005-172, test004-700, test004-634, test005-336, test004-369, test004-435, test005-6, test005-72, test004-6, test005-39, test123-39, test004-105, test004-72, test004-468, test004-567, test004-534) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:07:51,689] INFO [ReplicaFetcherThread-0-1]: Starting (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,698] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-620 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,698] INFO [ReplicaFetcherManager on 
broker 2] Added fetcher to broker 1 for partitions HashMap(test004-620 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-719 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-389 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-521 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-124 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-256 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-223 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-322 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-355 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test009-25 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=1, host=10.58.16.231:9092),15,718242), test004-91 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-652 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-421 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-487 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-454 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-586 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-189 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-222 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-354 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-321 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-288 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-24 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-57 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-556 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-655 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-622 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-325 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-292 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-93 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-93 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-60 -> 
InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-159 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-159 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-126 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-225 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-225 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-291 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-258 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test009-27 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=1, host=10.58.16.231:9092),15,717640), test005-27 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-357 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-357 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-390 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-456 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-522 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-158 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test123-59 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),39,294075), test005-125 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test123-26 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,294030), test004-191 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-158 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-290 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-257 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-26 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-92 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-59 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-591 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-294 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-228 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-294 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-459 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-128 -> 
InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test123-29 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),45,294075), test005-194 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-194 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-590 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-689 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test-2 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),53,0), test005-326 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-425 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-491 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-28 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-61 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-259 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-527 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-494 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-560 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-659 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-263 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-329 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-362 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-31 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-31 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-97 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-64 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-163 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-163 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-625 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-691 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-262 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-328 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-427 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-394 -> 
InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-129 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-129 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test123-30 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),40,293880), test004-195 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-228 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-195 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test-4 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0), test005-17 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-50 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-612 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-678 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-413 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-578 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-116 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-182 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-314 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-281 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-347 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-347 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-314 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-16 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-49 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-148 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test123-49 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),39,293879), test123-16 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,294021), test004-677 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test009-16 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=1, host=10.58.16.231:9092),12,628455), test004-544 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-643 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-148 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-181 -> 
InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test-6 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),53,0), test004-379 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test009-19 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=1, host=10.58.16.231:9092),12,628455), test005-52 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-548 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-713 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-349 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-349 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-85 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-118 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-52 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-151 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-184 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-250 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-283 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-84 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-646 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-447 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-414 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-513 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-480 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-579 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test123-51 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),39,294130), test004-117 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test123-18 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,293940), test005-282 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-216 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-315 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-315 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test009-21 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=1, host=10.58.16.231:9092),15,717951), test004-484 -> 
InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-583 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-252 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-21 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-186 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-252 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test-11 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),53,0), test005-20 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-516 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-615 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-383 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-20 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-152 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-185 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-218 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-251 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-684 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-552 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-221 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-188 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-287 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-287 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-320 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-386 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-122 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test123-23 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,293867), test004-56 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-155 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-155 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test123-56 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),40,293670), test004-122 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-716 -> 
InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-551 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-683 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-650 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test-12 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),53,0), test005-352 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-319 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-352 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-451 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-418 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-88 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-22 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-55 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-88 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-220 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-141 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-174 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-207 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-240 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-207 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-339 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-9 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-9 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-669 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-636 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-702 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test009-8 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=1, host=10.58.16.231:9092),12,628425), test004-372 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-438 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-206 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-206 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-305 -> 
InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-338 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-41 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-140 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-668 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-503 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-602 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test123-11 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),45,293925), test005-77 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test123-44 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),39,293991), test004-176 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-308 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-242 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-572 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-341 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-308 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-407 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-539 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-506 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-142 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-109 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-76 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-109 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-274 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-274 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test009-10 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=1, host=10.58.16.231:9092),12,628485), test004-43 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-43 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-604 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test-16 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),53,0), test004-472 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-13 -> 
InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-79 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-145 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test-20 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0), test004-640 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-277 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-244 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-310 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-409 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-475 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-45 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test123-45 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),39,293995), test004-111 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-111 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-177 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-210 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-177 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-243 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-540 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-705 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-276 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-309 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-375 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-342 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-507 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-81 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-114 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-81 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-477 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-444 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-510 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-609 -> 
InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-213 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-246 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-213 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-279 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-246 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-80 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-14 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-47 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-146 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test123-14 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),45,293850), test-21 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0), test004-708 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test009-14 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=1, host=10.58.16.231:9092),14,717810), test004-575 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-608 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-674 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-212 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-344 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-443 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-364 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-463 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-430 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-496 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-595 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-562 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test123-34 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),40,293970), test004-100 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-199 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-265 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-331 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-331 -> 
InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-298 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-0 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-661 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test009-0 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=1, host=10.58.16.231:9092),15,718376), test005-165 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-231 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-297 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-65 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-98 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-693 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-399 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-432 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-102 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-135 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-201 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-234 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-267 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-68 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-663 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-696 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-398 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-530 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-134 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test123-2 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,293968), test004-167 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-200 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-134 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-266 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-233 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test-25 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),53,0), test004-1 -> 
InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-34 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-67 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-100 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-34 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-629 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-302 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test-28 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0), test004-335 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-335 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-302 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-467 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-71 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-38 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-170 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-137 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-203 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-4 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-533 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-500 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-566 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-367 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-4 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-37 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test123-37 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),40,294060), test004-103 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-70 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test123-4 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),45,293715), test004-169 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-268 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-235 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test009-3 -> 
InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=1, host=10.58.16.231:9092),12,628455), test004-631 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-598 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-238 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-172 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-271 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-304 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-271 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-238 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-403 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-106 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test123-7 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,294016), test004-139 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-172 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-700 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-634 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-336 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-369 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-435 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-6 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test005-72 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-6 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test005-39 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test123-39 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),40,293971), test004-105 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-72 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-468 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0), test004-567 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),30,0), test004-534 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=1, host=10.58.16.231:9092),26,0)) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:07:51,699] INFO [UnifiedLog partition=test004-620, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,701] INFO 
[ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-719 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,701] INFO [UnifiedLog partition=test004-719, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,701] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-389 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,701] INFO [UnifiedLog partition=test004-389, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,701] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-521 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,701] INFO [UnifiedLog partition=test004-521, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,701] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-124 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,701] INFO [UnifiedLog partition=test004-124, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,701] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-256 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,702] INFO [UnifiedLog partition=test005-256, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,702] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-223 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,702] INFO [UnifiedLog partition=test005-223, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,702] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-322 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,702] INFO [UnifiedLog partition=test005-322, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,702] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-355 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,702] INFO [UnifiedLog partition=test004-355, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,702] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test009-25 with TruncationState(offset=718242, completed=true) due to local high watermark 718242 (kafka.server.ReplicaFetcherThread) 
[2023-08-08 16:07:51,702] INFO [UnifiedLog partition=test009-25, dir=/data01/kafka-logs-351] Truncating to 718242 has no effect as the largest offset in the log is 718241 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,702] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-91 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,702] INFO [UnifiedLog partition=test004-91, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,702] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-652 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,702] INFO [UnifiedLog partition=test004-652, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,702] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-421 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,702] INFO [UnifiedLog partition=test004-421, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,702] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-487 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,702] INFO [UnifiedLog partition=test004-487, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,702] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-454 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,702] INFO [UnifiedLog partition=test004-454, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,703] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-586 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,703] INFO [UnifiedLog partition=test004-586, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,703] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-189 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,703] INFO [UnifiedLog partition=test005-189, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,703] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-222 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,703] INFO [UnifiedLog partition=test004-222, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,703] 
INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-354 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,703] INFO [UnifiedLog partition=test005-354, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,703] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-321 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,703] INFO [UnifiedLog partition=test004-321, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,703] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-288 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,703] INFO [UnifiedLog partition=test004-288, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,703] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-24 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,703] INFO [UnifiedLog partition=test004-24, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,703] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-57 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,703] INFO [UnifiedLog partition=test005-57, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,703] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-556 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,703] INFO [UnifiedLog partition=test004-556, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,703] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-655 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,704] INFO [UnifiedLog partition=test004-655, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,704] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-622 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,704] INFO [UnifiedLog partition=test004-622, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,704] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-325 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 
16:07:51,704] INFO [UnifiedLog partition=test004-325, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,704] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-292 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,704] INFO [UnifiedLog partition=test004-292, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,704] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-93 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,704] INFO [UnifiedLog partition=test004-93, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,704] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-93 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,704] INFO [UnifiedLog partition=test005-93, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,705] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-60 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,705] INFO [UnifiedLog partition=test004-60, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,705] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-159 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,705] INFO [UnifiedLog partition=test004-159, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,705] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-159 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,705] INFO [UnifiedLog partition=test005-159, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,705] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-126 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,705] INFO [UnifiedLog partition=test004-126, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,705] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-225 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,705] INFO [UnifiedLog partition=test004-225, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,705] INFO [ReplicaFetcher 
replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-225 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,705] INFO [UnifiedLog partition=test005-225, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,705] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-291 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,705] INFO [UnifiedLog partition=test005-291, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,705] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-258 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,705] INFO [UnifiedLog partition=test004-258, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,705] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test009-27 with TruncationState(offset=717640, completed=true) due to local high watermark 717640 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,705] INFO [UnifiedLog partition=test009-27, dir=/data01/kafka-logs-351] Truncating to 717640 has no effect as the largest offset in the log is 717639 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,706] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-27 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,706] INFO [UnifiedLog partition=test005-27, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,706] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-357 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,706] INFO [UnifiedLog partition=test004-357, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,706] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-357 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,706] INFO [UnifiedLog partition=test005-357, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,706] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-390 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,706] INFO [UnifiedLog partition=test004-390, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,706] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-456 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 
16:07:51,706] INFO [UnifiedLog partition=test004-456, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,706] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-522 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,706] INFO [UnifiedLog partition=test004-522, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,706] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-158 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,706] INFO [UnifiedLog partition=test005-158, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,706] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-59 with TruncationState(offset=294075, completed=true) due to local high watermark 294075 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,707] INFO [UnifiedLog partition=test123-59, dir=/data01/kafka-logs-351] Truncating to 294075 has no effect as the largest offset in the log is 294074 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,707] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-125 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,707] INFO [UnifiedLog partition=test005-125, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,707] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-26 with TruncationState(offset=294030, completed=true) due to local high watermark 294030 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,707] INFO [UnifiedLog partition=test123-26, dir=/data01/kafka-logs-351] Truncating to 294030 has no effect as the largest offset in the log is 294029 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,707] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-191 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,707] INFO [UnifiedLog partition=test004-191, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,707] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-158 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,707] INFO [UnifiedLog partition=test004-158, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,707] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-290 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,707] INFO [UnifiedLog partition=test005-290, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 
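
Note (not part of the log): the block above and below is hundreds of near-identical ReplicaFetcherThread / UnifiedLog notices emitted while broker 2 re-syncs its follower partitions from leader 1 at startup; most truncate to offset 0 on empty logs (largest offset -1), while a few (test009-*, test123-*) truncate to their local high watermark. A minimal sketch of one way to condense these notices, assuming only the line format visible here — the script name and grep pipeline are illustrative, not anything shipped with Kafka:

    # summarize_truncation.py (hypothetical helper, reads log lines on stdin)
    import re
    import sys
    from collections import Counter

    # Matches the truncation notices seen in this log, e.g.
    #   Truncating partition test004-60 with TruncationState(offset=0, completed=true)
    PATTERN = re.compile(
        r"Truncating partition (?P<topic>\S+)-(?P<part>\d+) with "
        r"TruncationState\(offset=(?P<offset>\d+), completed=(?P<done>\w+)\)"
    )

    per_topic = Counter()   # truncation notices per topic
    nonzero = []            # partitions truncated to a non-zero offset

    for line in sys.stdin:
        m = PATTERN.search(line)
        if not m:
            continue
        per_topic[m.group("topic")] += 1
        if m.group("offset") != "0":
            nonzero.append((m.group("topic") + "-" + m.group("part"), m.group("offset")))

    for topic, count in per_topic.most_common():
        print(topic + ": " + str(count) + " truncation notices")
    for partition, offset in nonzero:
        print("non-zero truncation target: " + partition + " -> offset " + offset)

Example usage (file name assumed): grep ReplicaFetcherThread server.log | python3 summarize_truncation.py
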
[2023-08-08 16:07:51,707] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-257 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,707] INFO [UnifiedLog partition=test004-257, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,708] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-26 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,708] INFO [UnifiedLog partition=test005-26, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,708] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-92 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,708] INFO [UnifiedLog partition=test005-92, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,708] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-59 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,708] INFO [UnifiedLog partition=test004-59, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,708] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-591 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,708] INFO [UnifiedLog partition=test004-591, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,708] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-294 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,708] INFO [UnifiedLog partition=test005-294, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,708] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-228 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,708] INFO [UnifiedLog partition=test004-228, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,708] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-294 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,708] INFO [UnifiedLog partition=test004-294, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,708] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-459 with TruncationState(offset=0, completed=true) due to local high watermark 0 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,708] INFO [UnifiedLog partition=test004-459, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,709] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-128 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,709] INFO [UnifiedLog partition=test005-128, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,709] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-29 with TruncationState(offset=294075, completed=true) due to local high watermark 294075 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,709] INFO [UnifiedLog partition=test123-29, dir=/data01/kafka-logs-351] Truncating to 294075 has no effect as the largest offset in the log is 294074 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,709] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-194 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,709] INFO [UnifiedLog partition=test005-194, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,709] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-194 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,709] INFO [UnifiedLog partition=test004-194, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,709] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-590 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,709] INFO [UnifiedLog partition=test004-590, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,709] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-689 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,709] INFO [UnifiedLog partition=test004-689, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,709] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test-2 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,709] INFO [UnifiedLog partition=test-2, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,709] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-326 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,709] INFO [UnifiedLog partition=test005-326, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 
(kafka.log.UnifiedLog) [2023-08-08 16:07:51,709] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-425 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,710] INFO [UnifiedLog partition=test004-425, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,710] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-491 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,710] INFO [UnifiedLog partition=test004-491, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,710] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-28 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,710] INFO [UnifiedLog partition=test004-28, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,710] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-61 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,710] INFO [UnifiedLog partition=test005-61, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,710] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-259 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,710] INFO [UnifiedLog partition=test005-259, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,710] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-527 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,710] INFO [UnifiedLog partition=test004-527, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,710] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-494 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,710] INFO [UnifiedLog partition=test004-494, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,710] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-560 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,710] INFO [UnifiedLog partition=test004-560, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,710] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-659 with TruncationState(offset=0, completed=true) due to local high watermark 0 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,710] INFO [UnifiedLog partition=test004-659, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,710] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-263 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,710] INFO [UnifiedLog partition=test004-263, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,711] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-329 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,711] INFO [UnifiedLog partition=test004-329, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,711] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-362 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,711] INFO [UnifiedLog partition=test004-362, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,711] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-31 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,711] INFO [UnifiedLog partition=test004-31, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,711] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-31 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,711] INFO [UnifiedLog partition=test005-31, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,711] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-97 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,711] INFO [UnifiedLog partition=test005-97, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,711] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-64 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,711] INFO [UnifiedLog partition=test004-64, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,711] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-163 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,711] INFO [UnifiedLog partition=test004-163, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 
[2023-08-08 16:07:51,711] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-163 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,711] INFO [UnifiedLog partition=test005-163, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,711] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-625 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,712] INFO [UnifiedLog partition=test004-625, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,712] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-691 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,712] INFO [UnifiedLog partition=test004-691, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,712] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-262 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,712] INFO [UnifiedLog partition=test005-262, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,712] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-328 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,712] INFO [UnifiedLog partition=test005-328, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,712] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-427 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,712] INFO [UnifiedLog partition=test004-427, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,712] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-394 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,712] INFO [UnifiedLog partition=test004-394, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,712] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-129 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,712] INFO [UnifiedLog partition=test004-129, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,712] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-129 with TruncationState(offset=0, completed=true) due to local high watermark 0 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,712] INFO [UnifiedLog partition=test005-129, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,712] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-30 with TruncationState(offset=293880, completed=true) due to local high watermark 293880 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,712] INFO [UnifiedLog partition=test123-30, dir=/data01/kafka-logs-351] Truncating to 293880 has no effect as the largest offset in the log is 293879 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,712] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-195 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,713] INFO [UnifiedLog partition=test004-195, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,713] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-228 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,713] INFO [UnifiedLog partition=test005-228, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,713] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-195 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,713] INFO [UnifiedLog partition=test005-195, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,713] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test-4 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,713] INFO [UnifiedLog partition=test-4, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,713] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-17 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,713] INFO [UnifiedLog partition=test005-17, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,713] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-50 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,713] INFO [UnifiedLog partition=test004-50, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,713] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-612 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,713] INFO [UnifiedLog partition=test004-612, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 
(kafka.log.UnifiedLog) [2023-08-08 16:07:51,713] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-678 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,713] INFO [UnifiedLog partition=test004-678, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,713] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-413 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,713] INFO [UnifiedLog partition=test004-413, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,714] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-578 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,714] INFO [UnifiedLog partition=test004-578, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,714] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-116 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,714] INFO [UnifiedLog partition=test004-116, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,714] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-182 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,714] INFO [UnifiedLog partition=test004-182, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,714] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-314 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,714] INFO [UnifiedLog partition=test005-314, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,714] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-281 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,714] INFO [UnifiedLog partition=test005-281, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,714] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-347 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,714] INFO [UnifiedLog partition=test004-347, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,714] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-347 with TruncationState(offset=0, completed=true) due to local high watermark 0 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,714] INFO [UnifiedLog partition=test005-347, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,714] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-314 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,714] INFO [UnifiedLog partition=test004-314, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,715] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-16 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,715] INFO [UnifiedLog partition=test004-16, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,715] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-49 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,715] INFO [UnifiedLog partition=test005-49, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,715] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-148 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,715] INFO [UnifiedLog partition=test005-148, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,715] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-49 with TruncationState(offset=293879, completed=true) due to local high watermark 293879 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,715] INFO [UnifiedLog partition=test123-49, dir=/data01/kafka-logs-351] Truncating to 293879 has no effect as the largest offset in the log is 293878 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,715] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-16 with TruncationState(offset=294021, completed=true) due to local high watermark 294021 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,715] INFO [UnifiedLog partition=test123-16, dir=/data01/kafka-logs-351] Truncating to 294021 has no effect as the largest offset in the log is 294020 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,715] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-677 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,715] INFO [UnifiedLog partition=test004-677, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,715] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test009-16 with TruncationState(offset=628455, completed=true) due to local high watermark 628455 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,715] INFO [UnifiedLog partition=test009-16, dir=/data01/kafka-logs-351] Truncating to 628455 has no effect as the largest 
offset in the log is 628454 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,715] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-544 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,716] INFO [UnifiedLog partition=test004-544, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,716] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-643 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,716] INFO [UnifiedLog partition=test004-643, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,716] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-148 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,716] INFO [UnifiedLog partition=test004-148, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,716] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-181 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,716] INFO [UnifiedLog partition=test005-181, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,716] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test-6 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,716] INFO [UnifiedLog partition=test-6, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,716] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-379 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,716] INFO [UnifiedLog partition=test004-379, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,716] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test009-19 with TruncationState(offset=628455, completed=true) due to local high watermark 628455 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,716] INFO [UnifiedLog partition=test009-19, dir=/data01/kafka-logs-351] Truncating to 628455 has no effect as the largest offset in the log is 628454 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,716] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-52 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,716] INFO [UnifiedLog partition=test005-52, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,716] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-548 with TruncationState(offset=0, 
completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,717] INFO [UnifiedLog partition=test004-548, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,717] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-713 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,717] INFO [UnifiedLog partition=test004-713, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,717] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-349 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,717] INFO [UnifiedLog partition=test004-349, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,717] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-349 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,717] INFO [UnifiedLog partition=test005-349, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,717] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-85 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,717] INFO [UnifiedLog partition=test004-85, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,717] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-118 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,717] INFO [UnifiedLog partition=test005-118, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,717] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-52 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,717] INFO [UnifiedLog partition=test004-52, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,717] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-151 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,717] INFO [UnifiedLog partition=test004-151, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,717] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-184 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,717] INFO [UnifiedLog partition=test005-184, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest 
offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,717] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-250 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,717] INFO [UnifiedLog partition=test005-250, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,718] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-283 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,718] INFO [UnifiedLog partition=test004-283, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,718] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-84 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,718] INFO [UnifiedLog partition=test005-84, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,718] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-646 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,718] INFO [UnifiedLog partition=test004-646, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,718] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-447 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,718] INFO [UnifiedLog partition=test004-447, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,718] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-414 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,718] INFO [UnifiedLog partition=test004-414, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,718] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-513 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,718] INFO [UnifiedLog partition=test004-513, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,718] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-480 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,718] INFO [UnifiedLog partition=test004-480, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,718] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-579 with TruncationState(offset=0, completed=true) due to 
local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,718] INFO [UnifiedLog partition=test004-579, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,718] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-51 with TruncationState(offset=294130, completed=true) due to local high watermark 294130 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,719] INFO [UnifiedLog partition=test123-51, dir=/data01/kafka-logs-351] Truncating to 294130 has no effect as the largest offset in the log is 294129 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,719] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-117 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,719] INFO [UnifiedLog partition=test004-117, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,719] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-18 with TruncationState(offset=293940, completed=true) due to local high watermark 293940 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,719] INFO [UnifiedLog partition=test123-18, dir=/data01/kafka-logs-351] Truncating to 293940 has no effect as the largest offset in the log is 293939 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,719] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-282 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,719] INFO [UnifiedLog partition=test005-282, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,719] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-216 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,719] INFO [UnifiedLog partition=test004-216, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,719] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-315 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,719] INFO [UnifiedLog partition=test004-315, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,719] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-315 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,719] INFO [UnifiedLog partition=test005-315, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,719] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test009-21 with TruncationState(offset=717951, completed=true) due to local high watermark 717951 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,720] INFO [UnifiedLog partition=test009-21, dir=/data01/kafka-logs-351] Truncating to 717951 
has no effect as the largest offset in the log is 717950 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,720] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-484 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,720] INFO [UnifiedLog partition=test004-484, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,720] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-583 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,720] INFO [UnifiedLog partition=test004-583, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,720] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-252 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,720] INFO [UnifiedLog partition=test004-252, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,720] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-21 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,720] INFO [UnifiedLog partition=test005-21, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,720] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-186 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,720] INFO [UnifiedLog partition=test005-186, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,720] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-252 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,720] INFO [UnifiedLog partition=test005-252, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,720] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test-11 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,720] INFO [UnifiedLog partition=test-11, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,721] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-20 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,721] INFO [UnifiedLog partition=test005-20, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,721] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-516 with TruncationState(offset=0, 
completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,721] INFO [UnifiedLog partition=test004-516, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,721] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-615 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,721] INFO [UnifiedLog partition=test004-615, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,721] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-383 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,721] INFO [UnifiedLog partition=test004-383, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,721] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-20 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,721] INFO [UnifiedLog partition=test004-20, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,721] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-152 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,721] INFO [UnifiedLog partition=test005-152, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,721] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-185 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,721] INFO [UnifiedLog partition=test004-185, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,721] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-218 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,721] INFO [UnifiedLog partition=test005-218, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,721] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-251 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,721] INFO [UnifiedLog partition=test004-251, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,722] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-684 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,722] INFO [UnifiedLog partition=test004-684, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest 
offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,722] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-552 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,722] INFO [UnifiedLog partition=test004-552, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,722] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-221 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,722] INFO [UnifiedLog partition=test004-221, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,722] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-188 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,722] INFO [UnifiedLog partition=test004-188, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,722] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-287 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,722] INFO [UnifiedLog partition=test004-287, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,722] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-287 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,722] INFO [UnifiedLog partition=test005-287, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,722] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-320 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,722] INFO [UnifiedLog partition=test004-320, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,722] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-386 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,722] INFO [UnifiedLog partition=test004-386, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,723] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-122 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,723] INFO [UnifiedLog partition=test005-122, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,723] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-23 with TruncationState(offset=293867, completed=true) 
due to local high watermark 293867 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,723] INFO [UnifiedLog partition=test123-23, dir=/data01/kafka-logs-351] Truncating to 293867 has no effect as the largest offset in the log is 293866 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,723] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-56 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,723] INFO [UnifiedLog partition=test004-56, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,723] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-155 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,723] INFO [UnifiedLog partition=test004-155, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,723] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-155 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,723] INFO [UnifiedLog partition=test005-155, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,723] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-56 with TruncationState(offset=293670, completed=true) due to local high watermark 293670 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,723] INFO [UnifiedLog partition=test123-56, dir=/data01/kafka-logs-351] Truncating to 293670 has no effect as the largest offset in the log is 293669 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,723] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-122 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,723] INFO [UnifiedLog partition=test004-122, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,724] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-716 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,724] INFO [UnifiedLog partition=test004-716, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,724] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-551 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,724] INFO [UnifiedLog partition=test004-551, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,724] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-683 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,724] INFO [UnifiedLog partition=test004-683, dir=/data01/kafka-logs-351] Truncating to 0 has no effect 
as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,724] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-650 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,724] INFO [UnifiedLog partition=test004-650, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,724] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test-12 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,724] INFO [UnifiedLog partition=test-12, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,724] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-352 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,724] INFO [UnifiedLog partition=test005-352, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,724] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-319 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,724] INFO [UnifiedLog partition=test005-319, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,724] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-352 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,724] INFO [UnifiedLog partition=test004-352, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,724] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-451 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,724] INFO [UnifiedLog partition=test004-451, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,724] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-418 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,724] INFO [UnifiedLog partition=test004-418, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,725] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-88 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,725] INFO [UnifiedLog partition=test005-88, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,725] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-22 with TruncationState(offset=0, completed=true) 
due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,725] INFO [UnifiedLog partition=test004-22, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,725] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-55 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,725] INFO [UnifiedLog partition=test005-55, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,725] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-88 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,725] INFO [UnifiedLog partition=test004-88, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,725] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-220 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,725] INFO [UnifiedLog partition=test005-220, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,725] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-141 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,725] INFO [UnifiedLog partition=test004-141, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,725] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-174 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,725] INFO [UnifiedLog partition=test005-174, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,725] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-207 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,725] INFO [UnifiedLog partition=test004-207, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,725] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-240 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,725] INFO [UnifiedLog partition=test005-240, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,726] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-207 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,726] INFO [UnifiedLog partition=test005-207, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log 
is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,726] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-339 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,726] INFO [UnifiedLog partition=test005-339, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,726] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-9 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,726] INFO [UnifiedLog partition=test004-9, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,726] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-9 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,726] INFO [UnifiedLog partition=test005-9, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,726] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-669 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,726] INFO [UnifiedLog partition=test004-669, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,726] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-636 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,726] INFO [UnifiedLog partition=test004-636, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,726] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-702 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,726] INFO [UnifiedLog partition=test004-702, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,726] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test009-8 with TruncationState(offset=628425, completed=true) due to local high watermark 628425 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,726] INFO [UnifiedLog partition=test009-8, dir=/data01/kafka-logs-351] Truncating to 628425 has no effect as the largest offset in the log is 628424 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,726] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-372 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,727] INFO [UnifiedLog partition=test004-372, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,727] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-438 with TruncationState(offset=0, completed=true) due to local high 
watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,727] INFO [UnifiedLog partition=test004-438, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,727] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-206 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,727] INFO [UnifiedLog partition=test005-206, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,727] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-206 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,727] INFO [UnifiedLog partition=test004-206, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,727] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-305 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,727] INFO [UnifiedLog partition=test004-305, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,727] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-338 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,727] INFO [UnifiedLog partition=test004-338, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,727] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-41 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,727] INFO [UnifiedLog partition=test004-41, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,727] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-140 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,727] INFO [UnifiedLog partition=test005-140, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,727] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-668 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,727] INFO [UnifiedLog partition=test004-668, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,727] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-503 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,728] INFO [UnifiedLog partition=test004-503, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 
(kafka.log.UnifiedLog) [2023-08-08 16:07:51,728] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-602 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,728] INFO [UnifiedLog partition=test004-602, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,728] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-11 with TruncationState(offset=293925, completed=true) due to local high watermark 293925 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,728] INFO [UnifiedLog partition=test123-11, dir=/data01/kafka-logs-351] Truncating to 293925 has no effect as the largest offset in the log is 293924 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,728] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-77 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,728] INFO [UnifiedLog partition=test005-77, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,728] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-44 with TruncationState(offset=293991, completed=true) due to local high watermark 293991 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,728] INFO [UnifiedLog partition=test123-44, dir=/data01/kafka-logs-351] Truncating to 293991 has no effect as the largest offset in the log is 293990 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,728] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-176 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,728] INFO [UnifiedLog partition=test004-176, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,728] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-308 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,728] INFO [UnifiedLog partition=test005-308, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,728] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-242 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,728] INFO [UnifiedLog partition=test004-242, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,728] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-572 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,728] INFO [UnifiedLog partition=test004-572, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,728] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-341 with TruncationState(offset=0, 
completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,728] INFO [UnifiedLog partition=test004-341, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,728] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-308 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,728] INFO [UnifiedLog partition=test004-308, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,728] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-407 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,728] INFO [UnifiedLog partition=test004-407, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,729] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-539 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,729] INFO [UnifiedLog partition=test004-539, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,729] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-506 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,729] INFO [UnifiedLog partition=test004-506, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,729] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-142 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,729] INFO [UnifiedLog partition=test005-142, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,729] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-109 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,729] INFO [UnifiedLog partition=test004-109, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,729] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-76 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,729] INFO [UnifiedLog partition=test004-76, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,729] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-109 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,729] INFO [UnifiedLog partition=test005-109, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest 
offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,729] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-274 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,729] INFO [UnifiedLog partition=test005-274, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,729] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-274 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,729] INFO [UnifiedLog partition=test004-274, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,729] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test009-10 with TruncationState(offset=628485, completed=true) due to local high watermark 628485 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,729] INFO [UnifiedLog partition=test009-10, dir=/data01/kafka-logs-351] Truncating to 628485 has no effect as the largest offset in the log is 628484 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,729] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-43 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,730] INFO [UnifiedLog partition=test004-43, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,730] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-43 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,730] INFO [UnifiedLog partition=test005-43, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,730] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-604 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,730] INFO [UnifiedLog partition=test004-604, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,730] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test-16 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,730] INFO [UnifiedLog partition=test-16, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,730] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-472 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,730] INFO [UnifiedLog partition=test004-472, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,730] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-13 with TruncationState(offset=0, completed=true) 
due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,730] INFO [UnifiedLog partition=test005-13, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,730] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-79 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,730] INFO [UnifiedLog partition=test004-79, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,730] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-145 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,730] INFO [UnifiedLog partition=test004-145, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,730] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test-20 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,730] INFO [UnifiedLog partition=test-20, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,730] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-640 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,730] INFO [UnifiedLog partition=test004-640, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,730] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-277 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,730] INFO [UnifiedLog partition=test005-277, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,731] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-244 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,731] INFO [UnifiedLog partition=test004-244, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,731] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-310 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,731] INFO [UnifiedLog partition=test004-310, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,731] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-409 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,731] INFO [UnifiedLog partition=test004-409, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 
(kafka.log.UnifiedLog) [2023-08-08 16:07:51,731] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-475 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,731] INFO [UnifiedLog partition=test004-475, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,731] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-45 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,731] INFO [UnifiedLog partition=test004-45, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,731] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-45 with TruncationState(offset=293995, completed=true) due to local high watermark 293995 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,731] INFO [UnifiedLog partition=test123-45, dir=/data01/kafka-logs-351] Truncating to 293995 has no effect as the largest offset in the log is 293994 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,731] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-111 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,731] INFO [UnifiedLog partition=test004-111, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,731] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-111 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,731] INFO [UnifiedLog partition=test005-111, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,731] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-177 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,731] INFO [UnifiedLog partition=test004-177, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,731] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-210 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,731] INFO [UnifiedLog partition=test005-210, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,731] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-177 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,732] INFO [UnifiedLog partition=test005-177, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,732] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-243 with TruncationState(offset=0, completed=true) due to local 
high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,732] INFO [UnifiedLog partition=test005-243, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,732] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-540 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,732] INFO [UnifiedLog partition=test004-540, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,732] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-705 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,732] INFO [UnifiedLog partition=test004-705, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,732] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-276 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,732] INFO [UnifiedLog partition=test004-276, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,732] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-309 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,732] INFO [UnifiedLog partition=test005-309, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,732] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-375 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,732] INFO [UnifiedLog partition=test004-375, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,732] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-342 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,732] INFO [UnifiedLog partition=test004-342, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,732] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-507 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,732] INFO [UnifiedLog partition=test004-507, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,732] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-81 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,732] INFO [UnifiedLog partition=test004-81, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 
(kafka.log.UnifiedLog) [2023-08-08 16:07:51,732] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-114 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,732] INFO [UnifiedLog partition=test005-114, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,732] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-81 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,732] INFO [UnifiedLog partition=test005-81, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,733] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-477 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,733] INFO [UnifiedLog partition=test004-477, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,733] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-444 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,733] INFO [UnifiedLog partition=test004-444, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,733] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-510 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,733] INFO [UnifiedLog partition=test004-510, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,733] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-609 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,733] INFO [UnifiedLog partition=test004-609, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,733] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-213 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,733] INFO [UnifiedLog partition=test004-213, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,733] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-246 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,733] INFO [UnifiedLog partition=test005-246, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,733] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-213 with TruncationState(offset=0, completed=true) due to local high watermark 0 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,733] INFO [UnifiedLog partition=test005-213, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,733] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-279 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,733] INFO [UnifiedLog partition=test004-279, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,733] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-246 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,733] INFO [UnifiedLog partition=test004-246, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,733] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-80 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,733] INFO [UnifiedLog partition=test005-80, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,733] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-14 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,733] INFO [UnifiedLog partition=test004-14, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,733] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-47 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,733] INFO [UnifiedLog partition=test005-47, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,734] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-146 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,734] INFO [UnifiedLog partition=test005-146, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,734] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-14 with TruncationState(offset=293850, completed=true) due to local high watermark 293850 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,734] INFO [UnifiedLog partition=test123-14, dir=/data01/kafka-logs-351] Truncating to 293850 has no effect as the largest offset in the log is 293849 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,734] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test-21 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,734] INFO [UnifiedLog partition=test-21, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 
(kafka.log.UnifiedLog) [2023-08-08 16:07:51,734] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-708 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,734] INFO [UnifiedLog partition=test004-708, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,734] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test009-14 with TruncationState(offset=717810, completed=true) due to local high watermark 717810 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,734] INFO [UnifiedLog partition=test009-14, dir=/data01/kafka-logs-351] Truncating to 717810 has no effect as the largest offset in the log is 717809 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,734] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-575 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,734] INFO [UnifiedLog partition=test004-575, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,734] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-608 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,734] INFO [UnifiedLog partition=test004-608, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,734] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-674 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,734] INFO [UnifiedLog partition=test004-674, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,734] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-212 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,734] INFO [UnifiedLog partition=test004-212, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,734] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-344 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,734] INFO [UnifiedLog partition=test005-344, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,734] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-443 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,735] INFO [UnifiedLog partition=test004-443, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,735] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-364 with TruncationState(offset=0, completed=true) due to local 
high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,735] INFO [UnifiedLog partition=test004-364, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,735] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-463 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,735] INFO [UnifiedLog partition=test004-463, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,735] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-430 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,735] INFO [UnifiedLog partition=test004-430, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,735] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-496 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,735] INFO [UnifiedLog partition=test004-496, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,735] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-595 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,735] INFO [UnifiedLog partition=test004-595, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,735] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-562 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,735] INFO [UnifiedLog partition=test004-562, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,735] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-34 with TruncationState(offset=293970, completed=true) due to local high watermark 293970 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,735] INFO [UnifiedLog partition=test123-34, dir=/data01/kafka-logs-351] Truncating to 293970 has no effect as the largest offset in the log is 293969 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,735] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-100 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,735] INFO [UnifiedLog partition=test004-100, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,735] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-199 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,735] INFO [UnifiedLog partition=test004-199, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in 
the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,735] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-265 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,735] INFO [UnifiedLog partition=test004-265, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,735] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-331 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,735] INFO [UnifiedLog partition=test004-331, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,735] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-331 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,736] INFO [UnifiedLog partition=test005-331, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,736] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-298 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,736] INFO [UnifiedLog partition=test004-298, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,736] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-0 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,736] INFO [UnifiedLog partition=test005-0, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,736] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-661 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,736] INFO [UnifiedLog partition=test004-661, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,736] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test009-0 with TruncationState(offset=718376, completed=true) due to local high watermark 718376 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,736] INFO [UnifiedLog partition=test009-0, dir=/data01/kafka-logs-351] Truncating to 718376 has no effect as the largest offset in the log is 718375 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,736] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-165 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,736] INFO [UnifiedLog partition=test005-165, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,736] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-231 with TruncationState(offset=0, completed=true) due 
to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,736] INFO [UnifiedLog partition=test005-231, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,736] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-297 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,736] INFO [UnifiedLog partition=test005-297, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,736] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-65 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,736] INFO [UnifiedLog partition=test005-65, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,736] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-98 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,736] INFO [UnifiedLog partition=test004-98, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,736] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-693 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,736] INFO [UnifiedLog partition=test004-693, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,736] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-399 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,736] INFO [UnifiedLog partition=test004-399, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,737] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-432 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,737] INFO [UnifiedLog partition=test004-432, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,737] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-102 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,737] INFO [UnifiedLog partition=test005-102, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,737] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-135 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,737] INFO [UnifiedLog partition=test004-135, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is 
-1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,737] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-201 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,737] INFO [UnifiedLog partition=test004-201, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,737] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-234 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,737] INFO [UnifiedLog partition=test005-234, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,737] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-267 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,737] INFO [UnifiedLog partition=test004-267, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,737] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-68 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,737] INFO [UnifiedLog partition=test005-68, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,737] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-663 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,737] INFO [UnifiedLog partition=test004-663, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,737] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-696 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,737] INFO [UnifiedLog partition=test004-696, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,737] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-398 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,737] INFO [UnifiedLog partition=test004-398, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,737] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-530 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,737] INFO [UnifiedLog partition=test004-530, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,737] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-134 with TruncationState(offset=0, completed=true) due to local high watermark 0 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,737] INFO [UnifiedLog partition=test005-134, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-2 with TruncationState(offset=293968, completed=true) due to local high watermark 293968 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,738] INFO [UnifiedLog partition=test123-2, dir=/data01/kafka-logs-351] Truncating to 293968 has no effect as the largest offset in the log is 293967 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-167 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,738] INFO [UnifiedLog partition=test004-167, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-200 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,738] INFO [UnifiedLog partition=test005-200, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-134 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,738] INFO [UnifiedLog partition=test004-134, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-266 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,738] INFO [UnifiedLog partition=test005-266, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-233 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,738] INFO [UnifiedLog partition=test004-233, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test-25 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,738] INFO [UnifiedLog partition=test-25, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-1 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,738] INFO [UnifiedLog partition=test004-1, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 
(kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-34 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,738] INFO [UnifiedLog partition=test005-34, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-67 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,738] INFO [UnifiedLog partition=test004-67, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-100 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,738] INFO [UnifiedLog partition=test005-100, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-34 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,738] INFO [UnifiedLog partition=test004-34, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,738] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-629 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test004-629, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-302 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test005-302, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test-28 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test-28, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-335 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test004-335, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-335 with TruncationState(offset=0, completed=true) due to local high watermark 0 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test005-335, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-302 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test004-302, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-467 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test004-467, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-71 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test005-71, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-38 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test004-38, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-170 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test005-170, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-137 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test005-137, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-203 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test005-203, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-4 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,739] INFO [UnifiedLog partition=test005-4, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 
[2023-08-08 16:07:51,739] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-533 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,740] INFO [UnifiedLog partition=test004-533, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,740] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-500 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,740] INFO [UnifiedLog partition=test004-500, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,740] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-566 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,740] INFO [UnifiedLog partition=test004-566, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,740] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-367 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,740] INFO [UnifiedLog partition=test004-367, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,740] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-4 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,740] INFO [UnifiedLog partition=test004-4, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,740] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-37 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,740] INFO [UnifiedLog partition=test005-37, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,740] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-37 with TruncationState(offset=294060, completed=true) due to local high watermark 294060 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,740] INFO [UnifiedLog partition=test123-37, dir=/data01/kafka-logs-351] Truncating to 294060 has no effect as the largest offset in the log is 294059 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,740] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-103 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,740] INFO [UnifiedLog partition=test004-103, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,740] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-70 with TruncationState(offset=0, completed=true) due to local high watermark 0 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,740] INFO [UnifiedLog partition=test004-70, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,740] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-4 with TruncationState(offset=293715, completed=true) due to local high watermark 293715 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,740] INFO [UnifiedLog partition=test123-4, dir=/data01/kafka-logs-351] Truncating to 293715 has no effect as the largest offset in the log is 293714 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,740] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-169 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,741] INFO [UnifiedLog partition=test004-169, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,741] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-268 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,741] INFO [UnifiedLog partition=test005-268, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,741] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-235 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,741] INFO [UnifiedLog partition=test004-235, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,741] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test009-3 with TruncationState(offset=628455, completed=true) due to local high watermark 628455 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,741] INFO [UnifiedLog partition=test009-3, dir=/data01/kafka-logs-351] Truncating to 628455 has no effect as the largest offset in the log is 628454 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,741] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-631 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,741] INFO [UnifiedLog partition=test004-631, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,741] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-598 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,741] INFO [UnifiedLog partition=test004-598, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,741] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-238 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,741] INFO [UnifiedLog partition=test005-238, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the 
log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,741] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-172 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,741] INFO [UnifiedLog partition=test004-172, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,741] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-271 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,741] INFO [UnifiedLog partition=test004-271, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,741] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-304 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,741] INFO [UnifiedLog partition=test005-304, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,741] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-271 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test005-271, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,742] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-238 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test004-238, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,742] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-403 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test004-403, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,742] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-106 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test005-106, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,742] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-7 with TruncationState(offset=294016, completed=true) due to local high watermark 294016 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test123-7, dir=/data01/kafka-logs-351] Truncating to 294016 has no effect as the largest offset in the log is 294015 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,742] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-139 with TruncationState(offset=0, completed=true) due 
to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test004-139, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,742] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-172 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test005-172, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,742] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-700 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test004-700, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,742] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-634 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test004-634, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,742] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-336 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test005-336, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,742] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-369 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test004-369, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,742] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-435 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test004-435, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,742] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-6 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,742] INFO [UnifiedLog partition=test005-6, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,743] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-72 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,743] INFO [UnifiedLog partition=test005-72, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 
(kafka.log.UnifiedLog) [2023-08-08 16:07:51,743] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-6 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,743] INFO [UnifiedLog partition=test004-6, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,743] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test005-39 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,743] INFO [UnifiedLog partition=test005-39, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,743] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test123-39 with TruncationState(offset=293971, completed=true) due to local high watermark 293971 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,743] INFO [UnifiedLog partition=test123-39, dir=/data01/kafka-logs-351] Truncating to 293971 has no effect as the largest offset in the log is 293970 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,743] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-105 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,743] INFO [UnifiedLog partition=test004-105, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,743] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-72 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,743] INFO [UnifiedLog partition=test004-72, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,743] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-468 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,743] INFO [UnifiedLog partition=test004-468, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,743] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-567 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,743] INFO [UnifiedLog partition=test004-567, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:51,743] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test004-534 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:51,743] INFO [UnifiedLog partition=test004-534, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:55,717] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test010-14) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:07:55,750] INFO 
[ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test010-29) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:07:55,752] INFO [Partition test010-29 broker=2] ISR updated to 2,1 and version updated to 13 (kafka.cluster.Partition) [2023-08-08 16:07:56,953] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test010-16) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:07:56,953] INFO [Partition test010-16 broker=2] ISR updated to 2,1 and version updated to 14 (kafka.cluster.Partition) [2023-08-08 16:07:58,564] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test004-653, test004-686, test004-356, test004-488, test004-554, test004-157, test005-190, test005-157, test123-58, test004-223, test004-190, __consumer_offsets-30, test004-289, test005-355, test004-322, test004-25, test005-58, test005-25, test005-124, test004-718, test004-388, test005-288, test004-255, __consumer_offsets-29, test005-321, test005-90, test004-57, test004-123, test123-24, test004-589, test005-358, test004-391, test004-358, test004-457, test004-424, test004-523, test005-126, __consumer_offsets-32, test005-258, test004-192, test004-291, test005-324, test004-27, test005-60, test004-621, test004-687, test004-654, test-0, test004-489, __consumer_offsets-31, test004-92, test005-224, test010-25, test005-257, test009-26, test004-558, test004-624, test-3, test004-393, test004-426, test004-161, test010-28, test005-161, test004-227, __consumer_offsets-1, __consumer_offsets-34, test004-557, test005-293, test004-260, test004-326, test005-94, test004-61, __consumer_offsets-0, test004-127, test004-94, test123-28, test005-226, test005-193, test009-28, test005-28, __consumer_offsets-33, test004-461, test004-230, test005-263, test004-296, test005-329, test005-64, test005-130, test123-31, test005-196, test004-692, __consumer_offsets-3, __consumer_offsets-36, test004-493, test004-526, test004-592, test004-658, test004-196, test005-229, test005-295, test004-262, test004-361, test004-328, test005-96, test005-162, test004-96, test004-162, __consumer_offsets-35, __consumer_offsets-2, test004-17, test004-83, test123-17, test005-83, test004-711, test004-380, test004-479, test004-446, test004-545, test005-182, test004-215, test005-248, __consumer_offsets-5, test-7, test005-215, __consumer_offsets-38, test004-281, test005-16, test004-49, test004-115, test005-115, test004-710, test004-511, test004-610, test004-247, __consumer_offsets-37, test005-346, __consumer_offsets-4, test004-346, test005-19, test004-647, test004-382, test004-448, test004-514, test123-19, __consumer_offsets-7, __consumer_offsets-40, test005-151, test004-118, test-9, test005-217, test005-316, test009-18, test004-51, test004-18, test004-613, test004-580, test004-679, test004-348, test004-546, __consumer_offsets-39, test005-117, test004-183, test005-183, test010-17, __consumer_offsets-6, test004-249, __consumer_offsets-9, __consumer_offsets-42, test004-517, test004-649, test004-682, test004-285, test005-285, test004-351, test004-318, test004-384, test004-483, test004-450, test004-21, test005-54, test005-120, test123-21, test005-87, test010-20, test123-54, test004-120, test004-219, test004-186, test005-219, __consumer_offsets-41, test004-549, test004-714, test005-350, test004-317, test004-284, test004-416, test004-482, test005-86, test005-53, __consumer_offsets-8, test123-53, test004-86, test004-152, test005-284, test004-218, test005-251, __consumer_offsets-11, __consumer_offsets-44, test004-453, 
test004-420, test004-519, test004-585, test004-618, test-13, test005-320, test005-353, test004-419, test004-89, test009-22, __consumer_offsets-43, __consumer_offsets-10, test004-617, test004-584, test004-253, test005-253, test005-22, test004-55, test005-154, test004-154, test005-187, test010-21, test005-141, test004-108, __consumer_offsets-13, __consumer_offsets-46, test005-273, test009-9, test004-75, test005-108, test004-471, test005-173, test010-7, test005-272, __consumer_offsets-45, test005-239, test004-272, __consumer_offsets-12, test005-8, test005-74, test005-41, test004-107, test005-107, test123-8, test009-7, test004-470, test004-536, test004-635, __consumer_offsets-15, test004-44, __consumer_offsets-48, test004-143, test010-10, test005-242, test004-275, test004-11, test005-44, test004-605, test004-671, test-17, test004-704, test005-341, test004-374, test123-43, __consumer_offsets-47, test123-10, test004-175, test005-208, test005-175, __consumer_offsets-14, test004-241, test004-208, test004-307, test005-307, test005-10, test005-76, test004-637, test004-340, test004-439, test004-406, test004-505, test004-571, test004-538, test005-112, test123-13, test005-79, test004-46, test005-178, test005-145, test004-211, test004-178, __consumer_offsets-17, test004-673, test-19, test004-706, test004-277, test004-343, test004-376, test004-12, test005-45, __consumer_offsets-16, test004-78, test004-144, test005-276, test009-12, test005-12, __consumer_offsets-49, test004-573, test004-639, test004-606, test005-342, test004-309, test004-441, test004-408, test004-474, test005-48, test004-147, test005-147, test123-48, __consumer_offsets-19, test009-15, test004-576, test004-675, test004-642, test004-180, test005-312, test005-279, test004-312, test004-411, test123-47, test004-113, test005-212, test010-13, __consumer_offsets-18, test004-509, test004-542, test004-245, test005-245, test005-311, test004-133, test005-166, test005-133, __consumer_offsets-21, test005-66, test004-33, test004-66, test005-99, test123-0, test004-429, test004-396, test004-495, test004-462, test004-561, test004-528, test004-627, test004-594, test004-165, test005-198, test005-264, test004-231, test-23, test004-297, test004-264, __consumer_offsets-20, test004-330, test005-32, test004-65, test004-32, test004-131, test004-660, test005-333, test004-498, test123-3, __consumer_offsets-23, test004-36, test010-2, test004-102, test005-300, test004-234, test-26, test004-2, test005-35, test004-365, test123-35, test004-101, __consumer_offsets-22, test004-200, test005-233, test005-332, test005-299, test009-1, test005-1, test004-695, test004-269, test005-269, test004-401, test004-368, test005-38, test004-71, test005-104, test123-38, test004-170, __consumer_offsets-26, test004-599, test004-632, test004-698, test004-301, test004-334, test004-433, test004-466, test005-70, __consumer_offsets-24, test005-136, test005-202, test004-136, test005-169, test-27, test004-202, test005-235, test004-3, __consumer_offsets-25, test005-3, test004-565, test004-532, test004-664, test004-205, test005-337, test004-304, test004-370, test004-7, test004-73, test123-40, __consumer_offsets-28, test004-436, test004-502, test004-568, test004-237, test005-303, test004-336, test004-402, test004-39, test005-138, test004-171, test005-204, test004-138, test010-5, __consumer_offsets-27, test009-5, test004-600, test004-699, test004-666) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:07:58,572] INFO [ReplicaFetcherThread-0-3]: Starting (kafka.server.ReplicaFetcherThread) [2023-08-08 
16:07:58,574] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-653 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,574] INFO [UnifiedLog partition=test004-653, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,574] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-686 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,574] INFO [UnifiedLog partition=test004-686, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,574] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-356 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,574] INFO [UnifiedLog partition=test004-356, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,574] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-488 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test004-488, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-554 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test004-554, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-157 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test004-157, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-190 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test005-190, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-157 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test005-157, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-58 with TruncationState(offset=257789, completed=true) due to local high watermark 257789 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test123-58, dir=/data01/kafka-logs-351] Truncating to 257789 has no effect as the largest offset in the log is 257788 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-223 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test004-223, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-190 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test004-190, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-30 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=__consumer_offsets-30, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-289 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test004-289, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-355 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test005-355, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-322 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test004-322, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-25 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test004-25, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-58 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test005-58, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log 
is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,575] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-25 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,575] INFO [UnifiedLog partition=test005-25, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,576] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-124 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,576] INFO [UnifiedLog partition=test005-124, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,576] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-718 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,576] INFO [UnifiedLog partition=test004-718, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,576] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-388 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,576] INFO [UnifiedLog partition=test004-388, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,576] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-288 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,576] INFO [UnifiedLog partition=test005-288, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,576] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-255 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,576] INFO [UnifiedLog partition=test004-255, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,576] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-29 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,576] INFO [UnifiedLog partition=__consumer_offsets-29, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,576] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-321 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,576] INFO [UnifiedLog partition=test005-321, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,576] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-90 with TruncationState(offset=0, completed=true) due to 
local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,576] INFO [UnifiedLog partition=test005-90, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,576] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-57 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,576] INFO [UnifiedLog partition=test004-57, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,576] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-123 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,576] INFO [UnifiedLog partition=test004-123, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,576] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-24 with TruncationState(offset=185280, completed=true) due to local high watermark 185280 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,576] INFO [UnifiedLog partition=test123-24, dir=/data01/kafka-logs-351] Truncating to 185280 has no effect as the largest offset in the log is 185279 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,576] INFO [ReplicaFetcherManager on broker 2] Added fetcher to broker 3 for partitions HashMap(test004-653 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-686 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-356 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-488 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-554 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-157 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-190 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test005-157 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-58 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),33,257789), test004-223 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-190 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-30 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),48,0), test004-289 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-355 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-322 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-25 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-58 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, 
host=10.58.12.217:9092),24,0), test005-25 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test005-124 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-718 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-388 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-288 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-255 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-29 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),47,0), test005-321 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test005-90 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-57 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-123 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-24 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),44,185280), test004-589 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-358 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-391 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-358 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-457 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-424 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-523 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-126 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-32 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test005-258 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test004-192 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-291 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-324 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-27 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-60 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test004-621 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-687 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-654 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test-0 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=3, 
host=10.58.12.217:9092),46,0), test004-489 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-31 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test004-92 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-224 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test010-25 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=3, host=10.58.12.217:9092),7,2696981), test005-257 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test009-26 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=3, host=10.58.12.217:9092),11,628301), test004-558 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-624 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test-3 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),44,0), test004-393 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-426 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-161 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test010-28 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=3, host=10.58.12.217:9092),7,2695048), test005-161 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-227 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-1 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), __consumer_offsets-34 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),47,0), test004-557 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-293 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test004-260 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-326 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-94 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-61 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-0 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),47,0), test004-127 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-94 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-28 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),42,260246), test005-226 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test005-193 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test009-28 -> 
InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=3, host=10.58.12.217:9092),11,628425), test005-28 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-33 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test004-461 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-230 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-263 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-296 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-329 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test005-64 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test005-130 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-31 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),35,259134), test005-196 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-692 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-3 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),48,0), __consumer_offsets-36 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test004-493 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-526 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-592 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-658 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-196 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-229 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test005-295 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-262 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-361 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-328 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-96 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-162 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-96 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-162 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-35 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), __consumer_offsets-2 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, 
host=10.58.12.217:9092),43,0), test004-17 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-83 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-17 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),44,151470), test005-83 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test004-711 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-380 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-479 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-446 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-545 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-182 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-215 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-248 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), __consumer_offsets-5 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test-7 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),47,0), test005-215 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), __consumer_offsets-38 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),47,0), test004-281 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-16 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test004-49 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-115 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-115 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-710 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-511 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-610 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-247 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-37 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test005-346 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), __consumer_offsets-4 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test004-346 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-19 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test004-647 -> 
InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-382 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-448 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-514 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-19 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),44,186720), __consumer_offsets-7 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), __consumer_offsets-40 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test005-151 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-118 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test-9 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test005-217 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test005-316 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test009-18 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=3, host=10.58.12.217:9092),13,648770), test004-51 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-18 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-613 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-580 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-679 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-348 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-546 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-39 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),48,0), test005-117 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-183 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-183 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test010-17 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=3, host=10.58.12.217:9092),7,2899399), __consumer_offsets-6 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test004-249 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-9 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),47,0), __consumer_offsets-42 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test004-517 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-649 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, 
host=10.58.12.217:9092),24,0), test004-682 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-285 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-285 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-351 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-318 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-384 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-483 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-450 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-21 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-54 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test005-120 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-21 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),44,189660), test005-87 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test010-20 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=3, host=10.58.12.217:9092),7,2899796), test123-54 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),35,184725), test004-120 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-219 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-186 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-219 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), __consumer_offsets-41 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test004-549 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-714 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-350 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-317 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-284 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-416 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-482 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-86 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test005-53 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-8 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),48,0), test123-53 -> 
InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),34,259806), test004-86 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-152 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-284 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test004-218 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-251 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), __consumer_offsets-11 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), __consumer_offsets-44 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test004-453 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-420 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-519 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-585 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-618 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test-13 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test005-320 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test005-353 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-419 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-89 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test009-22 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=3, host=10.58.12.217:9092),11,628500), __consumer_offsets-43 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),47,0), __consumer_offsets-10 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test004-617 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-584 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-253 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-253 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test005-22 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-55 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-154 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-154 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-187 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test010-21 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=3, 
host=10.58.12.217:9092),7,2900010), test005-141 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-108 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-13 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), __consumer_offsets-46 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test005-273 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test009-9 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=3, host=10.58.12.217:9092),13,648906), test004-75 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-108 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-471 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-173 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test010-7 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=3, host=10.58.12.217:9092),7,2695181), test005-272 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), __consumer_offsets-45 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test005-239 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-272 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-12 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),48,0), test005-8 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test005-74 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test005-41 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test004-107 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-107 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-8 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),44,174702), test009-7 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=3, host=10.58.12.217:9092),13,648968), test004-470 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-536 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-635 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-15 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test004-44 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-48 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),47,0), test004-143 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test010-10 -> 
InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=3, host=10.58.12.217:9092),7,2899538), test005-242 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test004-275 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-11 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-44 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-605 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-671 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test-17 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),47,0), test004-704 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-341 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test004-374 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-43 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),34,258970), __consumer_offsets-47 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),47,0), test123-10 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),42,258042), test004-175 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-208 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test005-175 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-14 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test004-241 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-208 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-307 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-307 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test005-10 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-76 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test004-637 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-340 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-439 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-406 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-505 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-571 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-538 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), 
test005-112 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-13 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),42,259736), test005-79 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test004-46 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-178 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-145 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-211 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-178 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-17 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),48,0), test004-673 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test-19 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),44,0), test004-706 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-277 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-343 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-376 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-12 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-45 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-16 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test004-78 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-144 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-276 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test009-12 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=3, host=10.58.12.217:9092),11,628230), test005-12 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), __consumer_offsets-49 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test004-573 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-639 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-606 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-342 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-309 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-441 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-408 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, 
host=10.58.12.217:9092),24,0), test004-474 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-48 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test004-147 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-147 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-48 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),33,258635), __consumer_offsets-19 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),48,0), test009-15 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=3, host=10.58.12.217:9092),13,648548), test004-576 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-675 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-642 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-180 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-312 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test005-279 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-312 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-411 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-47 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),34,259484), test004-113 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-212 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test010-13 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=3, host=10.58.12.217:9092),7,2698137), __consumer_offsets-18 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test004-509 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-542 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-245 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-245 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test005-311 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test004-133 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-166 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-133 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-21 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test005-66 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-33 -> 
InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-66 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-99 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-0 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),44,163200), test004-429 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-396 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-495 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-462 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-561 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-528 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-627 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-594 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-165 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-198 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test005-264 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-231 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test-23 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),44,0), test004-297 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-264 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-20 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test004-330 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-32 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-65 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-32 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-131 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-660 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-333 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-498 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-3 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),41,259904), __consumer_offsets-23 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test004-36 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test010-2 -> 
InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=3, host=10.58.12.217:9092),7,2899642), test004-102 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-300 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-234 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test-26 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test004-2 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-35 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-365 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-35 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),35,181620), test004-101 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-22 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),48,0), test004-200 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-233 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test005-332 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test005-299 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test009-1 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=3, host=10.58.12.217:9092),11,628425), test005-1 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-695 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-269 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-269 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-401 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-368 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-38 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-71 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-104 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-38 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),35,161655), test004-170 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-26 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test004-599 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-632 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-698 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), 
test004-301 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-334 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-433 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-466 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-70 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), __consumer_offsets-24 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),47,0), test005-136 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-202 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test004-136 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-169 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test-27 -> InitialFetchState(Some(HeEEmpDsSGeLVSIfaRiRqQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),44,0), test004-202 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-235 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-3 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), __consumer_offsets-25 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test005-3 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),25,0), test004-565 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-532 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-664 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-205 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-337 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-304 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-370 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-7 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-73 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test123-40 -> InitialFetchState(Some(xYxZQSYMRGWeuBKqTXlIgQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),35,141690), __consumer_offsets-28 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),43,0), test004-436 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-502 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-568 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-237 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, 
host=10.58.12.217:9092),24,0), test005-303 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),22,0), test004-336 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-402 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-39 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-138 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-171 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test005-204 -> InitialFetchState(Some(9RG-T8tRSXCazONSh51F7A),BrokerEndPoint(id=3, host=10.58.12.217:9092),23,0), test004-138 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test010-5 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=3, host=10.58.12.217:9092),7,2899269), __consumer_offsets-27 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=3, host=10.58.12.217:9092),46,0), test009-5 -> InitialFetchState(Some(g95oe921S86FCGM2NqB23w),BrokerEndPoint(id=3, host=10.58.12.217:9092),13,649110), test004-600 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-699 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0), test004-666 -> InitialFetchState(Some(EZpo1lPpS5G61Tn51H0vcA),BrokerEndPoint(id=3, host=10.58.12.217:9092),24,0)) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:07:58,576] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-589 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,579] INFO [UnifiedLog partition=test004-589, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,579] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-358 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,579] INFO [UnifiedLog partition=test005-358, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,579] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-391 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test004-391, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-358 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test004-358, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-457 with TruncationState(offset=0, completed=true) due to local high watermark 0 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test004-457, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-424 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test004-424, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-523 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test004-523, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-126 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test005-126, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-32 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=__consumer_offsets-32, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-258 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test005-258, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-192 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test004-192, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-291 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test004-291, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-324 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test005-324, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 
(kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-27 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test004-27, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-60 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test005-60, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,580] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-621 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,580] INFO [UnifiedLog partition=test004-621, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-687 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test004-687, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-654 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test004-654, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test-0 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test-0, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-489 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test004-489, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-31 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=__consumer_offsets-31, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-92 with TruncationState(offset=0, completed=true) due to local high 
watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test004-92, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-224 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test005-224, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-257 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test005-257, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test009-26 with TruncationState(offset=628301, completed=true) due to local high watermark 628301 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test009-26, dir=/data01/kafka-logs-351] Truncating to 628301 has no effect as the largest offset in the log is 628300 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-558 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test004-558, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-624 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test004-624, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test-3 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test-3, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-393 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test004-393, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-426 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test004-426, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 
(kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-161 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,581] INFO [UnifiedLog partition=test004-161, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,581] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-161 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=test005-161, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,582] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-227 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=test004-227, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,582] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-1 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=__consumer_offsets-1, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,582] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-34 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=__consumer_offsets-34, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,582] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-557 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=test004-557, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,582] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-293 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=test005-293, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,582] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-260 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=test004-260, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,582] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-326 with TruncationState(offset=0, 
completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=test004-326, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,582] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-94 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=test005-94, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,582] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-61 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=test004-61, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,582] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-0 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=__consumer_offsets-0, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,582] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-127 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=test004-127, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,582] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-94 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,582] INFO [UnifiedLog partition=test004-94, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-28 with TruncationState(offset=260246, completed=true) due to local high watermark 260246 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test123-28, dir=/data01/kafka-logs-351] Truncating to 260246 has no effect as the largest offset in the log is 260245 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-226 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test005-226, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-193 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test005-193, dir=/data01/kafka-logs-351] Truncating to 
0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test009-28 with TruncationState(offset=628425, completed=true) due to local high watermark 628425 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test009-28, dir=/data01/kafka-logs-351] Truncating to 628425 has no effect as the largest offset in the log is 628424 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-28 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test005-28, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-33 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=__consumer_offsets-33, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-461 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test004-461, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-230 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test004-230, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-263 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test005-263, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-296 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test004-296, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-329 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test005-329, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition 
test005-64 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test005-64, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-130 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test005-130, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-31 with TruncationState(offset=259134, completed=true) due to local high watermark 259134 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test123-31, dir=/data01/kafka-logs-351] Truncating to 259134 has no effect as the largest offset in the log is 259133 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-196 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test005-196, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-692 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test004-692, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-3 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=__consumer_offsets-3, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-36 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=__consumer_offsets-36, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,583] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-493 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,583] INFO [UnifiedLog partition=test004-493, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-526 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO 
[UnifiedLog partition=test004-526, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-592 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test004-592, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-658 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test004-658, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-196 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test004-196, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-229 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test005-229, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-295 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test005-295, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-262 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test004-262, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-361 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test004-361, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-328 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test004-328, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, 
leaderId=3, fetcherId=0] Truncating partition test005-96 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test005-96, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-162 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test005-162, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-96 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test004-96, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-162 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test004-162, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-35 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=__consumer_offsets-35, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-2 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=__consumer_offsets-2, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-17 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test004-17, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-83 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,584] INFO [UnifiedLog partition=test004-83, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,584] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-17 with TruncationState(offset=151470, completed=true) due to local high watermark 151470 (kafka.server.ReplicaFetcherThread) 
[2023-08-08 16:07:58,585] INFO [UnifiedLog partition=test123-17, dir=/data01/kafka-logs-351] Truncating to 151470 has no effect as the largest offset in the log is 151469 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-83 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=test005-83, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-711 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=test004-711, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-380 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=test004-380, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-479 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=test004-479, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-446 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=test004-446, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-545 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=test004-545, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-182 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=test005-182, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-215 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=test004-215, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] 
INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-248 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=test005-248, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-5 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=__consumer_offsets-5, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test-7 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=test-7, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-215 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=test005-215, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,585] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-38 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,585] INFO [UnifiedLog partition=__consumer_offsets-38, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-281 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test004-281, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-16 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test005-16, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-49 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test004-49, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-115 with TruncationState(offset=0, completed=true) due to local high watermark 0 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test004-115, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-115 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test005-115, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-710 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test004-710, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-511 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test004-511, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-610 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test004-610, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-247 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test004-247, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-37 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=__consumer_offsets-37, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-346 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test005-346, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-4 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=__consumer_offsets-4, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest 
offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-346 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test004-346, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-19 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test005-19, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-647 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test004-647, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-382 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test004-382, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-448 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test004-448, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-514 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test004-514, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,586] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-19 with TruncationState(offset=186720, completed=true) due to local high watermark 186720 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,586] INFO [UnifiedLog partition=test123-19, dir=/data01/kafka-logs-351] Truncating to 186720 has no effect as the largest offset in the log is 186719 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-7 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=__consumer_offsets-7, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-40 with 
TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=__consumer_offsets-40, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-151 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test005-151, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-118 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test004-118, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test-9 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test-9, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-217 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test005-217, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-316 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test005-316, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test009-18 with TruncationState(offset=648770, completed=true) due to local high watermark 648770 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test009-18, dir=/data01/kafka-logs-351] Truncating to 648770 has no effect as the largest offset in the log is 648769 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-51 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test004-51, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-18 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test004-18, dir=/data01/kafka-logs-351] 
Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-613 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test004-613, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-580 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test004-580, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-679 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test004-679, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-348 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test004-348, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-546 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test004-546, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-39 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=__consumer_offsets-39, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,587] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-117 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,587] INFO [UnifiedLog partition=test005-117, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-183 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test004-183, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition 
test005-183 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test005-183, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-6 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=__consumer_offsets-6, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-249 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test004-249, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-9 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=__consumer_offsets-9, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-42 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=__consumer_offsets-42, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-517 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test004-517, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-649 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test004-649, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-682 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test004-682, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-285 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO 
[UnifiedLog partition=test004-285, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-285 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test005-285, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-351 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test004-351, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-318 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test004-318, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-384 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test004-384, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-483 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test004-483, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-450 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test004-450, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-21 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,588] INFO [UnifiedLog partition=test004-21, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,588] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-54 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test005-54, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, 
leaderId=3, fetcherId=0] Truncating partition test005-120 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test005-120, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-21 with TruncationState(offset=189660, completed=true) due to local high watermark 189660 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test123-21, dir=/data01/kafka-logs-351] Truncating to 189660 has no effect as the largest offset in the log is 189659 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-87 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test005-87, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-54 with TruncationState(offset=184725, completed=true) due to local high watermark 184725 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test123-54, dir=/data01/kafka-logs-351] Truncating to 184725 has no effect as the largest offset in the log is 184724 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-120 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test004-120, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-219 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test004-219, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-186 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test004-186, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-219 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test005-219, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-41 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 
[2023-08-08 16:07:58,589] INFO [UnifiedLog partition=__consumer_offsets-41, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-549 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test004-549, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-714 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test004-714, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-350 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test005-350, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-317 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test004-317, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-284 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test004-284, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-416 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test004-416, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,589] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-482 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,589] INFO [UnifiedLog partition=test004-482, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-86 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test005-86, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 
16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-53 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test005-53, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-8 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=__consumer_offsets-8, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-53 with TruncationState(offset=259806, completed=true) due to local high watermark 259806 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test123-53, dir=/data01/kafka-logs-351] Truncating to 259806 has no effect as the largest offset in the log is 259805 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-86 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test004-86, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-152 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test004-152, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-284 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test005-284, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-218 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test004-218, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-251 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test005-251, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-11 with TruncationState(offset=0, completed=true) due to local high 
watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=__consumer_offsets-11, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-44 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=__consumer_offsets-44, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-453 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test004-453, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-420 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test004-420, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-519 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test004-519, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-585 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test004-585, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-618 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test004-618, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,590] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test-13 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,590] INFO [UnifiedLog partition=test-13, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-320 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test005-320, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in 
the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-353 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test005-353, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-419 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test004-419, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-89 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test004-89, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test009-22 with TruncationState(offset=628500, completed=true) due to local high watermark 628500 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test009-22, dir=/data01/kafka-logs-351] Truncating to 628500 has no effect as the largest offset in the log is 628499 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-43 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=__consumer_offsets-43, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-10 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=__consumer_offsets-10, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-617 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test004-617, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-584 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test004-584, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-253 with 
TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test004-253, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-253 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test005-253, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-22 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test005-22, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-55 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test004-55, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-154 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test005-154, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-154 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test004-154, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-187 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test005-187, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,591] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-141 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,591] INFO [UnifiedLog partition=test005-141, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-108 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=test004-108, dir=/data01/kafka-logs-351] Truncating to 0 has 
no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-13 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=__consumer_offsets-13, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-46 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=__consumer_offsets-46, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-273 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=test005-273, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test009-9 with TruncationState(offset=648906, completed=true) due to local high watermark 648906 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=test009-9, dir=/data01/kafka-logs-351] Truncating to 648906 has no effect as the largest offset in the log is 648905 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-75 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=test004-75, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-108 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=test005-108, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-471 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=test004-471, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-173 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=test005-173, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] 
Truncating partition test005-272 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=test005-272, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-45 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=__consumer_offsets-45, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-239 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=test005-239, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-272 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=test004-272, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-12 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=__consumer_offsets-12, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-8 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=test005-8, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-74 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,592] INFO [UnifiedLog partition=test005-74, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,592] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-41 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test005-41, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-107 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO 
[UnifiedLog partition=test004-107, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-107 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test005-107, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-8 with TruncationState(offset=174702, completed=true) due to local high watermark 174702 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test123-8, dir=/data01/kafka-logs-351] Truncating to 174702 has no effect as the largest offset in the log is 174701 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test009-7 with TruncationState(offset=648968, completed=true) due to local high watermark 648968 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test009-7, dir=/data01/kafka-logs-351] Truncating to 648968 has no effect as the largest offset in the log is 648967 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-470 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test004-470, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-536 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test004-536, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-635 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test004-635, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-15 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=__consumer_offsets-15, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-44 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test004-44, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 
16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-48 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=__consumer_offsets-48, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-143 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test004-143, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-242 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test005-242, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-275 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test004-275, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-11 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test004-11, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-44 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test005-44, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-605 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,593] INFO [UnifiedLog partition=test004-605, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,593] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-671 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test004-671, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test-17 with TruncationState(offset=0, completed=true) due to local high watermark 0 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test-17, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-704 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test004-704, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-341 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test005-341, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-374 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test004-374, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-43 with TruncationState(offset=258970, completed=true) due to local high watermark 258970 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test123-43, dir=/data01/kafka-logs-351] Truncating to 258970 has no effect as the largest offset in the log is 258969 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-47 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=__consumer_offsets-47, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-10 with TruncationState(offset=258042, completed=true) due to local high watermark 258042 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test123-10, dir=/data01/kafka-logs-351] Truncating to 258042 has no effect as the largest offset in the log is 258041 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-175 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test004-175, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-208 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test005-208, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the 
largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-175 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test005-175, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-14 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=__consumer_offsets-14, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-241 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test004-241, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-208 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test004-208, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-307 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test004-307, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-307 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test005-307, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-10 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test005-10, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-76 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test005-76, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,594] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-637 with 
TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,594] INFO [UnifiedLog partition=test004-637, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-340 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-340, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-439 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-439, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-406 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-406, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-505 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-505, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-571 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-571, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-538 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-538, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-112 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test005-112, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-13 with TruncationState(offset=259736, completed=true) due to local high watermark 259736 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test123-13, dir=/data01/kafka-logs-351] 
Truncating to 259736 has no effect as the largest offset in the log is 259735 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-79 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test005-79, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-46 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-46, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-178 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test005-178, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-145 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test005-145, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-211 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-211, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-178 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-178, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-17 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=__consumer_offsets-17, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-673 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-673, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating 
partition test-19 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test-19, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-706 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-706, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-277 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-277, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-343 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-343, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-376 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-376, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-12 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,595] INFO [UnifiedLog partition=test004-12, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,595] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-45 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test005-45, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-16 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=__consumer_offsets-16, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-78 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test004-78, 
dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-144 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test004-144, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-276 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test005-276, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test009-12 with TruncationState(offset=628230, completed=true) due to local high watermark 628230 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test009-12, dir=/data01/kafka-logs-351] Truncating to 628230 has no effect as the largest offset in the log is 628229 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-12 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test005-12, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-49 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=__consumer_offsets-49, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-573 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test004-573, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-639 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test004-639, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-606 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test004-606, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, 
leaderId=3, fetcherId=0] Truncating partition test005-342 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test005-342, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-309 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test004-309, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-441 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test004-441, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-408 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test004-408, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-474 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test004-474, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,596] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-48 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,596] INFO [UnifiedLog partition=test005-48, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-147 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test004-147, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-147 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test005-147, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-48 with TruncationState(offset=258635, completed=true) due to local high watermark 258635 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO 
[UnifiedLog partition=test123-48, dir=/data01/kafka-logs-351] Truncating to 258635 has no effect as the largest offset in the log is 258634 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-19 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=__consumer_offsets-19, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test009-15 with TruncationState(offset=648548, completed=true) due to local high watermark 648548 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test009-15, dir=/data01/kafka-logs-351] Truncating to 648548 has no effect as the largest offset in the log is 648547 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-576 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test004-576, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-675 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test004-675, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-642 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test004-642, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-180 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test004-180, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-312 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test005-312, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-279 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test005-279, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 
16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-312 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test004-312, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-411 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test004-411, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-47 with TruncationState(offset=259484, completed=true) due to local high watermark 259484 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test123-47, dir=/data01/kafka-logs-351] Truncating to 259484 has no effect as the largest offset in the log is 259483 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-113 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test004-113, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-212 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=test005-212, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-18 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,597] INFO [UnifiedLog partition=__consumer_offsets-18, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,597] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-509 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-509, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-542 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-542, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-245 with TruncationState(offset=0, completed=true) due to local high watermark 
0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-245, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-245 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test005-245, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-311 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test005-311, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-133 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-133, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-166 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test005-166, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-133 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test005-133, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-21 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=__consumer_offsets-21, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-66 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test005-66, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-33 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-33, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 
(kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-66 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-66, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-99 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test005-99, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-0 with TruncationState(offset=163200, completed=true) due to local high watermark 163200 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test123-0, dir=/data01/kafka-logs-351] Truncating to 163200 has no effect as the largest offset in the log is 163199 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-429 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-429, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-396 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-396, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-495 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-495, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-462 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-462, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-561 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-561, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-528 with TruncationState(offset=0, completed=true) due to local high 
watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-528, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-627 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-627, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-594 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-594, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-165 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test004-165, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-198 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,598] INFO [UnifiedLog partition=test005-198, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,598] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-264 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test005-264, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-231 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-231, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test-23 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test-23, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-297 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-297, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 
(kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-264 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-264, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-20 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=__consumer_offsets-20, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-330 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-330, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-32 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test005-32, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-65 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-65, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-32 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-32, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-131 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-131, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-660 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-660, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-333 with TruncationState(offset=0, completed=true) due to local 
high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test005-333, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-498 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-498, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-3 with TruncationState(offset=259904, completed=true) due to local high watermark 259904 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test123-3, dir=/data01/kafka-logs-351] Truncating to 259904 has no effect as the largest offset in the log is 259903 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-23 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=__consumer_offsets-23, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-36 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-36, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-102 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-102, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-300 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test005-300, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-234 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-234, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test-26 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test-26, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest 
offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,599] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-2 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,599] INFO [UnifiedLog partition=test004-2, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-35 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test005-35, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-365 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test004-365, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-35 with TruncationState(offset=181620, completed=true) due to local high watermark 181620 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test123-35, dir=/data01/kafka-logs-351] Truncating to 181620 has no effect as the largest offset in the log is 181619 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-101 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test004-101, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-22 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=__consumer_offsets-22, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-200 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test004-200, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-233 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test005-233, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-332 with 
TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test005-332, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-299 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test005-299, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test009-1 with TruncationState(offset=628425, completed=true) due to local high watermark 628425 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test009-1, dir=/data01/kafka-logs-351] Truncating to 628425 has no effect as the largest offset in the log is 628424 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-1 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test005-1, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-695 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test004-695, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-269 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test004-269, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-269 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test005-269, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-401 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test004-401, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-368 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test004-368, dir=/data01/kafka-logs-351] 
Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-38 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test005-38, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-71 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test004-71, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-104 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test005-104, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-38 with TruncationState(offset=161655, completed=true) due to local high watermark 161655 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,600] INFO [UnifiedLog partition=test123-38, dir=/data01/kafka-logs-351] Truncating to 161655 has no effect as the largest offset in the log is 161654 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,600] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-170 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test004-170, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-26 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=__consumer_offsets-26, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-599 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test004-599, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-632 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test004-632, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] 
Truncating partition test004-698 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test004-698, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-301 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test004-301, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-334 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test004-334, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-433 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test004-433, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-466 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test004-466, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-70 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test005-70, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-24 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=__consumer_offsets-24, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-136 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test005-136, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-202 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog 
partition=test005-202, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-136 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test004-136, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-169 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test005-169, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test-27 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test-27, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-202 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test004-202, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-235 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test005-235, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-3 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=test004-3, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,601] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-25 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,601] INFO [UnifiedLog partition=__consumer_offsets-25, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-3 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test005-3, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, 
fetcherId=0] Truncating partition test004-565 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-565, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-532 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-532, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-664 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-664, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-205 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-205, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-337 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test005-337, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-304 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-304, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-370 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-370, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-7 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-7, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-73 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-73, 
dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test123-40 with TruncationState(offset=141690, completed=true) due to local high watermark 141690 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test123-40, dir=/data01/kafka-logs-351] Truncating to 141690 has no effect as the largest offset in the log is 141689 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-28 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=__consumer_offsets-28, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-436 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-436, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-502 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-502, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-568 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-568, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-237 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-237, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-303 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test005-303, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-336 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-336, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, 
leaderId=3, fetcherId=0] Truncating partition test004-402 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-402, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-39 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-39, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-138 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test005-138, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-171 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test004-171, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,602] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test005-204 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,602] INFO [UnifiedLog partition=test005-204, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,603] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-138 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,603] INFO [UnifiedLog partition=test004-138, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,603] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition __consumer_offsets-27 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,603] INFO [UnifiedLog partition=__consumer_offsets-27, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,603] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test009-5 with TruncationState(offset=649110, completed=true) due to local high watermark 649110 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,603] INFO [UnifiedLog partition=test009-5, dir=/data01/kafka-logs-351] Truncating to 649110 has no effect as the largest offset in the log is 649109 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,603] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-600 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 
16:07:58,603] INFO [UnifiedLog partition=test004-600, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,603] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-699 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,603] INFO [UnifiedLog partition=test004-699, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,603] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test004-666 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:07:58,603] INFO [UnifiedLog partition=test004-666, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:07:58,606] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 13 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,606] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-13 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,606] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 46 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,606] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,606] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 9 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,606] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 42 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 21 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 17 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 30 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 
16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 26 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-13 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-46 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-9 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-42 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 5 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-21 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-17 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 38 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 1 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 34 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-34 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 16 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 45 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 12 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 41 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 24 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 20 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned 
as the group coordinator for partition 49 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 0 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 29 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 25 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 8 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 37 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-37 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 4 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 33 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 15 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 48 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager) 
[2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 11 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 44 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 23 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-23 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 19 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-19 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 32 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-32 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 28 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 7 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-7 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 40 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-40 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 3 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 36 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from 
__consumer_offsets-36 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 47 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 14 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-14 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 43 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-43 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,607] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-30 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-26 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 10 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-5 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-38 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-1 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-34 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-16 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-45 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-12 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-41 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-24 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-20 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-49 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-0 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-29 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-25 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-8 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-37 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-4 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,609] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-33 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-15 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-48 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-11 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-44 for coordinator epoch OptionalInt[46]. 
Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-23 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-19 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-32 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-28 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-7 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-40 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-3 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-36 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-47 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-14 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-43 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,608] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 22 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 18 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-10 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 31 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-22 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,610] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-31 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,611] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-18 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,611] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 27 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,611] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-31 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,611] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-27 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,611] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 39 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,611] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,611] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 6 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,611] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,611] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 35 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,611] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,611] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 2 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:58,611] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,611] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-27 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,611] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-39 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,611] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-6 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,611] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-35 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:58,611] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-2 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,047] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 30 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,047] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,048] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-30 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. 
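The stretch above is dominated by three repeating record types from broker 2: "Resigned as the group coordinator for partition N in epoch OptionalInt[E]", "Scheduling unloading of offsets and group metadata from __consumer_offsets-N", and "Finished unloading __consumer_offsets-N for coordinator epoch OptionalInt[E]. Removed 0 cached offsets and 0 cached groups." Together they record group-coordinator responsibility for the __consumer_offsets partitions moving off this broker as coordinator epochs advance; the "Removed 0 ... 0 ..." counters show that no cached offsets or group metadata had to be dropped. A capture like this is easier to skim with a small script than by eye. The sketch below is illustrative only, not anything Kafka ships: the path server.log and the regular expressions are assumptions, and it simply tallies resignations and completed unloads per partition and reports the last coordinator epoch seen.

import re
from collections import defaultdict

LOG_FILE = "server.log"  # hypothetical path; point this at the captured broker log

RESIGN_RE = re.compile(
    r"Resigned as the group coordinator for partition (\d+) in epoch OptionalInt\[(\d+)\]"
)
UNLOAD_RE = re.compile(
    r"Finished unloading __consumer_offsets-(\d+) for coordinator epoch OptionalInt\[(\d+)\]"
)

def summarize(path):
    resigns = defaultdict(int)   # partition -> resignation count
    unloads = defaultdict(int)   # partition -> completed unload count
    last_epoch = {}              # partition -> last coordinator epoch seen
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            # finditer copes with several records sharing one physical line, as in this capture
            for m in RESIGN_RE.finditer(line):
                part, epoch = int(m.group(1)), int(m.group(2))
                resigns[part] += 1
                last_epoch[part] = epoch
            for m in UNLOAD_RE.finditer(line):
                part, epoch = int(m.group(1)), int(m.group(2))
                unloads[part] += 1
                last_epoch[part] = epoch
    for part in sorted(set(resigns) | set(unloads)):
        print(f"__consumer_offsets-{part}: {resigns[part]} resignation(s), "
              f"{unloads[part]} unload(s), last epoch {last_epoch.get(part)}")

if __name__ == "__main__":
    summarize(LOG_FILE)

Run against the 16:07:58 to 16:08:01 window shown here, it would report most partitions resigning more than once, epochs ranging from 43 to 48, and every completed unload removing nothing.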
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,136] INFO [Partition test010-8 broker=2] ISR updated to 2,1 and version updated to 13 (kafka.cluster.Partition) [2023-08-08 16:07:59,136] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test010-8) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:07:59,265] INFO [Partition test010-26 broker=2] ISR updated to 2,1 and version updated to 13 (kafka.cluster.Partition) [2023-08-08 16:07:59,265] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test010-26) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:07:59,818] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 15 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 13 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-13 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 44 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 42 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-15 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 21 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 19 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-19 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-13 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 17 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-44 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 32 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-32 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-42 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 30 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-21 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 40 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-40 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-19 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 38 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-17 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-32 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,818] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-30 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 36 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-36 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-40 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 34 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-34 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-38 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 16 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-36 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 14 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-14 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-34 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 43 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-43 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-16 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 12 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 41 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 18 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 31 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-31 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 29 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 39 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 37 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-37 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 35 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 33 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,819] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-14 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,820] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-43 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,820] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-12 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,820] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-41 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,820] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-18 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,820] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-31 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,821] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-29 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,821] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-39 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,821] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-37 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,821] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-35 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,821] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-33 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:07:59,882] INFO [Partition test010-9 broker=2] ISR updated to 2,1 and version updated to 14 (kafka.cluster.Partition) [2023-08-08 16:07:59,882] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test010-9) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:08:00,104] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 15 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,104] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,104] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 13 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,104] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-13 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,104] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 44 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,104] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,104] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 42 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,104] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-15 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,104] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,104] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-13 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,104] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-44 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. 
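Interleaved with the coordinator traffic are pairs such as "[Partition test010-8 broker=2] ISR updated to 2,1 and version updated to 13" followed immediately by "[ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test010-8)". That combination is consistent with broker 2 acting as leader for those test010 partitions: it records an ISR of {2,1} (replica 1 is in sync) and drops the replica fetcher, since a leader does not fetch from itself. The sketch below, under the same assumptions as the previous one (standard-library Python over an assumed server.log), pulls out the latest ISR and partition-state version reported for each topic-partition.

import re

ISR_RE = re.compile(
    r"\[Partition (\S+) broker=\d+\] ISR updated to ([\d,]+) and version updated to (\d+)"
)

def latest_isr(path):
    """Map each topic-partition to the most recent ISR and state version seen in the log."""
    state = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            # finditer copes with several records sharing one physical line
            for m in ISR_RE.finditer(line):
                tp, isr, version = m.groups()
                state[tp] = ([int(r) for r in isr.split(",")], int(version))
    return state

if __name__ == "__main__":
    for tp, (isr, version) in latest_isr("server.log").items():  # hypothetical path
        print(f"{tp}: ISR={isr} (version {version})")

Over this excerpt it would list each test010 partition touched here with ISR [2, 1], at version 13 or 14.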
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,104] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 23 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,104] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-23 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,104] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 21 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,104] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,104] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 19 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-19 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 17 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 32 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-32 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 28 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 26 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 40 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-40 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 38 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 36 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling 
unloading of offsets and group metadata from __consumer_offsets-36 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 34 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-34 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 16 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 14 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,104] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-42 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-23 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-14 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-21 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 43 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-43 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-19 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 12 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-17 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-32 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 41 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-28 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 24 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-26 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-40 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-38 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-36 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-34 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-16 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-14 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-43 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-12 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-41 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-24 for coordinator epoch OptionalInt[47]. 
Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,105] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 22 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 20 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 18 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 31 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-31 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 29 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 27 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-27 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 25 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 39 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 37 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-37 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 35 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,106] INFO 
[GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 33 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,106] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-22 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,107] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-20 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,107] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-18 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,107] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-31 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,107] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-29 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,107] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-27 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,107] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-25 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,107] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-39 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,107] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-37 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,107] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-35 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,107] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-33 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,510] INFO [Partition test010-1 broker=2] ISR updated to 2,1 and version updated to 14 (kafka.cluster.Partition) [2023-08-08 16:08:00,510] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test010-1) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:08:00,542] INFO [Partition test010-4 broker=2] ISR updated to 2,1 and version updated to 14 (kafka.cluster.Partition) [2023-08-08 16:08:00,542] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test010-4) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:08:00,873] INFO [Partition test010-19 broker=2] ISR updated to 2,1 and version updated to 14 (kafka.cluster.Partition) [2023-08-08 16:08:00,882] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test010-19) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:08:00,923] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 48 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,923] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,923] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 46 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,923] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,923] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 11 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,923] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,923] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 9 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,923] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,923] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 23 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,923] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-23 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,923] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 28 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,923] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,923] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 26 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,923] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,923] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 7 in epoch 
OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,923] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-7 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 5 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 3 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 1 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 47 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 45 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 10 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 24 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 22 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 20 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: 
Resigned as the group coordinator for partition 49 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 0 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 27 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-27 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 25 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 8 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 6 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 4 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 2 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,924] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-48 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-46 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-11 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-9 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-23 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-28 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-26 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-7 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-5 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-3 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-1 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-47 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-45 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-10 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-24 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-22 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-20 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-49 for coordinator epoch OptionalInt[46]. 
Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-0 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-27 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-25 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-8 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,925] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-6 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,926] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-4 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:00,926] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-2 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,451] INFO [Partition test010-23 broker=2] ISR updated to 2,1 and version updated to 14 (kafka.cluster.Partition) [2023-08-08 16:08:01,452] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test010-23) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:08:01,518] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 47 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 48 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 45 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 46 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupCoordinator 2]: Resigned as 
the group coordinator for partition 11 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 9 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-47 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 10 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-48 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 49 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-45 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 0 in epoch OptionalInt[47] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-46 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-11 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 7 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-7 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 8 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,518] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-9 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 5 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 6 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-10 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 3 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-49 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 4 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 1 in epoch OptionalInt[46] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 2 in epoch OptionalInt[43] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-0 for coordinator epoch OptionalInt[47]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-7 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-8 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-5 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-6 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-3 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-4 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-1 for coordinator epoch OptionalInt[46]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:08:01,519] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-2 for coordinator epoch OptionalInt[43]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:09:32,064] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-25, test011-27, test011-10, test011-12, test011-15, test011-0, test011-19, test011-21, test011-4, test011-6) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:09:32,073] INFO [LogLoader partition=test011-25, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,076] INFO Created log for partition test011-25 in /data01/kafka-logs-351/test011-25 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,078] INFO [Partition test011-25 broker=2] No checkpointed highwatermark is found for partition test011-25 (kafka.cluster.Partition) [2023-08-08 16:09:32,079] INFO [Partition test011-25 broker=2] Log loaded for partition test011-25 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,083] INFO [LogLoader partition=test011-27, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,084] INFO Created log for partition test011-27 in /data01/kafka-logs-351/test011-27 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,084] INFO [Partition test011-27 broker=2] No checkpointed highwatermark is found for partition test011-27 (kafka.cluster.Partition) [2023-08-08 16:09:32,084] INFO [Partition test011-27 broker=2] Log loaded for partition test011-27 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,089] INFO [LogLoader partition=test011-10, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,089] INFO Created log for partition test011-10 in /data01/kafka-logs-351/test011-10 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,089] INFO [Partition test011-10 broker=2] No checkpointed highwatermark is found for partition test011-10 (kafka.cluster.Partition) [2023-08-08 16:09:32,089] INFO [Partition test011-10 broker=2] Log loaded for partition test011-10 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,093] INFO [LogLoader partition=test011-12, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,093] INFO Created log for partition test011-12 in /data01/kafka-logs-351/test011-12 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,093] INFO [Partition test011-12 broker=2] No checkpointed highwatermark is found for partition test011-12 (kafka.cluster.Partition) [2023-08-08 16:09:32,093] INFO [Partition test011-12 broker=2] Log loaded for partition test011-12 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,097] INFO [LogLoader partition=test011-15, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,098] INFO Created log for partition test011-15 in /data01/kafka-logs-351/test011-15 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,098] INFO [Partition test011-15 broker=2] No checkpointed highwatermark is found for partition test011-15 (kafka.cluster.Partition) [2023-08-08 16:09:32,098] INFO [Partition test011-15 broker=2] Log loaded for partition test011-15 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,101] INFO [LogLoader 
partition=test011-0, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,102] INFO Created log for partition test011-0 in /data01/kafka-logs-351/test011-0 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,102] INFO [Partition test011-0 broker=2] No checkpointed highwatermark is found for partition test011-0 (kafka.cluster.Partition) [2023-08-08 16:09:32,102] INFO [Partition test011-0 broker=2] Log loaded for partition test011-0 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,105] INFO [LogLoader partition=test011-19, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,106] INFO Created log for partition test011-19 in /data01/kafka-logs-351/test011-19 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,106] INFO [Partition test011-19 broker=2] No checkpointed highwatermark is found for partition test011-19 (kafka.cluster.Partition) [2023-08-08 16:09:32,106] INFO [Partition test011-19 broker=2] Log loaded for partition test011-19 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,109] INFO [LogLoader partition=test011-21, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,110] INFO Created log for partition test011-21 in /data01/kafka-logs-351/test011-21 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,110] INFO [Partition test011-21 broker=2] No checkpointed highwatermark is found for partition test011-21 (kafka.cluster.Partition) [2023-08-08 16:09:32,110] INFO [Partition test011-21 broker=2] Log loaded for partition test011-21 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,113] INFO [LogLoader partition=test011-4, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,113] INFO Created log for partition test011-4 in /data01/kafka-logs-351/test011-4 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,113] INFO [Partition test011-4 broker=2] No checkpointed highwatermark is found for partition test011-4 (kafka.cluster.Partition) [2023-08-08 16:09:32,114] INFO [Partition test011-4 broker=2] Log loaded for partition test011-4 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,117] INFO [LogLoader partition=test011-6, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,117] INFO Created log for partition test011-6 in /data01/kafka-logs-351/test011-6 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,117] INFO [Partition test011-6 broker=2] No checkpointed highwatermark is found for partition test011-6 (kafka.cluster.Partition) [2023-08-08 16:09:32,117] INFO [Partition test011-6 broker=2] Log loaded for partition test011-6 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,121] INFO [LogLoader partition=test011-9, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,121] INFO Created log for partition test011-9 in /data01/kafka-logs-351/test011-9 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,121] INFO [Partition test011-9 broker=2] No checkpointed highwatermark is found 
for partition test011-9 (kafka.cluster.Partition) [2023-08-08 16:09:32,121] INFO [Partition test011-9 broker=2] Log loaded for partition test011-9 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,123] INFO [LogLoader partition=test011-8, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,123] INFO Created log for partition test011-8 in /data01/kafka-logs-351/test011-8 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,123] INFO [Partition test011-8 broker=2] No checkpointed highwatermark is found for partition test011-8 (kafka.cluster.Partition) [2023-08-08 16:09:32,123] INFO [Partition test011-8 broker=2] Log loaded for partition test011-8 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,124] INFO [LogLoader partition=test011-24, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,125] INFO Created log for partition test011-24 in /data01/kafka-logs-351/test011-24 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,125] INFO [Partition test011-24 broker=2] No checkpointed highwatermark is found for partition test011-24 (kafka.cluster.Partition) [2023-08-08 16:09:32,125] INFO [Partition test011-24 broker=2] Log loaded for partition test011-24 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,126] INFO [LogLoader partition=test011-29, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,127] INFO Created log for partition test011-29 in /data01/kafka-logs-351/test011-29 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,127] INFO [Partition test011-29 broker=2] No checkpointed highwatermark is found for partition test011-29 (kafka.cluster.Partition) [2023-08-08 16:09:32,127] INFO [Partition test011-29 broker=2] Log loaded for partition test011-29 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,129] INFO [LogLoader partition=test011-14, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,129] INFO Created log for partition test011-14 in /data01/kafka-logs-351/test011-14 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,129] INFO [Partition test011-14 broker=2] No checkpointed highwatermark is found for partition test011-14 (kafka.cluster.Partition) [2023-08-08 16:09:32,129] INFO [Partition test011-14 broker=2] Log loaded for partition test011-14 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,132] INFO [LogLoader partition=test011-17, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,132] INFO Created log for partition test011-17 in /data01/kafka-logs-351/test011-17 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,132] INFO [Partition test011-17 broker=2] No checkpointed highwatermark is found for partition test011-17 (kafka.cluster.Partition) [2023-08-08 16:09:32,132] INFO [Partition test011-17 broker=2] Log loaded for partition test011-17 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,134] INFO [LogLoader partition=test011-3, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 
(kafka.log.UnifiedLog$) [2023-08-08 16:09:32,135] INFO Created log for partition test011-3 in /data01/kafka-logs-351/test011-3 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,135] INFO [Partition test011-3 broker=2] No checkpointed highwatermark is found for partition test011-3 (kafka.cluster.Partition) [2023-08-08 16:09:32,135] INFO [Partition test011-3 broker=2] Log loaded for partition test011-3 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,137] INFO [LogLoader partition=test011-2, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,137] INFO Created log for partition test011-2 in /data01/kafka-logs-351/test011-2 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,137] INFO [Partition test011-2 broker=2] No checkpointed highwatermark is found for partition test011-2 (kafka.cluster.Partition) [2023-08-08 16:09:32,137] INFO [Partition test011-2 broker=2] Log loaded for partition test011-2 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,139] INFO [LogLoader partition=test011-18, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,139] INFO Created log for partition test011-18 in /data01/kafka-logs-351/test011-18 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,139] INFO [Partition test011-18 broker=2] No checkpointed highwatermark is found for partition test011-18 (kafka.cluster.Partition) [2023-08-08 16:09:32,139] INFO [Partition test011-18 broker=2] Log loaded for partition test011-18 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,142] INFO [LogLoader partition=test011-23, dir=/data01/kafka-logs-351] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [2023-08-08 16:09:32,142] INFO Created log for partition test011-23 in /data01/kafka-logs-351/test011-23 with properties {} (kafka.log.LogManager) [2023-08-08 16:09:32,142] INFO [Partition test011-23 broker=2] No checkpointed highwatermark is found for partition test011-23 (kafka.cluster.Partition) [2023-08-08 16:09:32,142] INFO [Partition test011-23 broker=2] Log loaded for partition test011-23 with initial high watermark 0 (kafka.cluster.Partition) [2023-08-08 16:09:32,143] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-9, test011-24, test011-8, test011-29, test011-14, test011-17, test011-3, test011-18, test011-2, test011-23) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:09:32,145] INFO [ReplicaFetcherManager on broker 2] Added fetcher to broker 3 for partitions HashMap(test011-14 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=3, host=10.58.12.217:9092),0,0)) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:09:32,146] INFO [ReplicaFetcherManager on broker 2] Added fetcher to broker 1 for partitions HashMap(test011-9 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),0,0), test011-24 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),0,0), test011-8 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),0,0), test011-29 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),0,0), test011-17 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, 
host=10.58.16.231:9092),0,0), test011-3 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),0,0), test011-18 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),0,0), test011-2 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),0,0), test011-23 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),0,0)) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:09:32,305] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test011-9 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,305] INFO [UnifiedLog partition=test011-9, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:09:32,306] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test011-24 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,306] INFO [UnifiedLog partition=test011-24, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:09:32,306] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test011-8 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,306] INFO [UnifiedLog partition=test011-8, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:09:32,306] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test011-29 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,306] INFO [UnifiedLog partition=test011-29, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:09:32,306] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test011-17 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,306] INFO [UnifiedLog partition=test011-17, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:09:32,306] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test011-3 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,306] INFO [UnifiedLog partition=test011-3, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:09:32,306] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test011-18 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,306] INFO [UnifiedLog partition=test011-18, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:09:32,306] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test011-2 with 
TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,306] INFO [UnifiedLog partition=test011-2, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:09:32,306] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test011-23 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,306] INFO [UnifiedLog partition=test011-23, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:09:32,640] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Truncating partition test011-14 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,640] INFO [UnifiedLog partition=test011-14, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:09:32,822] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition test011-24. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,822] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition test011-23. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,822] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition test011-29. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,822] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition test011-18. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,822] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition test011-17. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,822] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition test011-8. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,822] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition test011-9. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,822] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition test011-2. 
This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:09:32,822] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Received UNKNOWN_TOPIC_ID from the leader for partition test011-3. This error may be returned transiently when the partition is being created or deleted, but it is not expected to persist. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:10:20,648] INFO [Partition test011-21 broker=2] Shrinking ISR from 2,3 to 2. Leader: (highWatermark: 117000, endOffset: 209415). Out of sync replicas: (brokerId: 3, endOffset: 117000, lastCaughtUpTimeMs: 1691482190049). (kafka.cluster.Partition) [2023-08-08 16:10:21,318] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-21) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:10:21,740] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-24) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:10:21,743] INFO [ReplicaFetcherManager on broker 2] Added fetcher to broker 1 for partitions HashMap(test011-24 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),1,177266)) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:10:22,405] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-9, test011-8, test011-29, test011-17, test011-3, test011-18, test011-2, test011-23) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:10:22,407] INFO [ReplicaFetcherManager on broker 2] Added fetcher to broker 1 for partitions HashMap(test011-9 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),1,187335), test011-8 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),1,183915), test011-29 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),1,190800), test011-17 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),1,173895), test011-3 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),1,177217), test011-18 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),1,184069), test011-2 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),1,183975), test011-23 -> InitialFetchState(Some(P_EcwzSoTo2D12qx-WIdRQ),BrokerEndPoint(id=1, host=10.58.16.231:9092),1,187305)) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:10:35,642] INFO [Partition test011-25 broker=2] Shrinking ISR from 2,3 to 2. Leader: (highWatermark: 191610, endOffset: 263820). Out of sync replicas: (brokerId: 3, endOffset: 191610, lastCaughtUpTimeMs: 1691482202758). (kafka.cluster.Partition) [2023-08-08 16:10:35,644] INFO [Partition test011-15 broker=2] Shrinking ISR from 2,3 to 2. Leader: (highWatermark: 147135, endOffset: 263775). Out of sync replicas: (brokerId: 3, endOffset: 147135, lastCaughtUpTimeMs: 1691482191477). (kafka.cluster.Partition) [2023-08-08 16:10:35,645] INFO [Partition test011-6 broker=2] Shrinking ISR from 2,3 to 2. Leader: (highWatermark: 162330, endOffset: 263790). Out of sync replicas: (brokerId: 3, endOffset: 162330, lastCaughtUpTimeMs: 1691482194391). 
(kafka.cluster.Partition) [2023-08-08 16:10:35,949] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-25) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:10:36,140] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-15, test011-6) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:10:50,642] INFO [Partition test011-27 broker=2] Shrinking ISR from 2,3 to 2. Leader: (highWatermark: 280275, endOffset: 360360). Out of sync replicas: (brokerId: 3, endOffset: 280275, lastCaughtUpTimeMs: 1691482218098). (kafka.cluster.Partition) [2023-08-08 16:10:50,890] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-27) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:11:20,642] INFO [Partition test011-19 broker=2] Shrinking ISR from 2,3 to 2. Leader: (highWatermark: 399315, endOffset: 538140). Out of sync replicas: (brokerId: 3, endOffset: 399315, lastCaughtUpTimeMs: 1691482236465). (kafka.cluster.Partition) [2023-08-08 16:11:20,644] INFO [Partition test011-10 broker=2] Shrinking ISR from 2,3 to 2. Leader: (highWatermark: 404280, endOffset: 538065). Out of sync replicas: (brokerId: 3, endOffset: 404280, lastCaughtUpTimeMs: 1691482237506). (kafka.cluster.Partition) [2023-08-08 16:11:20,645] INFO [Partition test011-0 broker=2] Shrinking ISR from 2,3 to 2. Leader: (highWatermark: 406230, endOffset: 538095). Out of sync replicas: (brokerId: 3, endOffset: 406230, lastCaughtUpTimeMs: 1691482235805). (kafka.cluster.Partition) [2023-08-08 16:11:20,645] INFO [Partition test011-4 broker=2] Shrinking ISR from 2,3 to 2. Leader: (highWatermark: 407505, endOffset: 538050). Out of sync replicas: (brokerId: 3, endOffset: 407505, lastCaughtUpTimeMs: 1691482236244). (kafka.cluster.Partition) [2023-08-08 16:11:23,343] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-19) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:11:23,385] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1894, voters=[1, 2, 3], electionTimeoutMs=1827) from Leader(localId=2, epoch=1893, epochStartOffset=1962807, highWatermark=Optional[LogOffsetMetadata(offset=1965937, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62362119)])], voterStates={1=ReplicaState(nodeId=1, endOffset=Optional[LogOffsetMetadata(offset=1965933, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62361920)])], lastFetchTimestamp=1691482283283, lastCaughtUpTimestamp=1691482280052, hasAcknowledgedLeader=true), 2=ReplicaState(nodeId=2, endOffset=Optional[LogOffsetMetadata(offset=1965940, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62362335)])], lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), 3=ReplicaState(nodeId=3, endOffset=Optional[LogOffsetMetadata(offset=1965937, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62362119)])], lastFetchTimestamp=1691482283338, lastCaughtUpTimestamp=1691482281575, hasAcknowledgedLeader=true)}) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:11:23,386] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1894, candidateId=1, lastOffsetEpoch=1893, lastOffset=1965933)])]) with epoch 1894 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:11:23,387] INFO [QuorumController id=2] failAll(NotControllerException): 
failing writeNoOpRecord(753178555). (org.apache.kafka.deferred.DeferredEventQueue) [2023-08-08 16:11:23,388] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(821067758). (org.apache.kafka.deferred.DeferredEventQueue) [2023-08-08 16:11:23,388] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(613336585). (org.apache.kafka.deferred.DeferredEventQueue) [2023-08-08 16:11:23,388] INFO [QuorumController id=2] failAll(NotControllerException): failing alterPartition(1599560153). (org.apache.kafka.deferred.DeferredEventQueue) [2023-08-08 16:11:23,389] INFO [BrokerToControllerChannelManager id=2 name=alter-partition] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:23,982] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1895, voters=[1, 2, 3], electionTimeoutMs=1381) from Unattached(epoch=1894, voters=[1, 2, 3], electionTimeoutMs=1827) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:11:23,983] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1895, candidateId=1, lastOffsetEpoch=1893, lastOffset=1965933)])]) with epoch 1895 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:11:24,840] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1896, voters=[1, 2, 3], electionTimeoutMs=458) from Unattached(epoch=1895, voters=[1, 2, 3], electionTimeoutMs=1381) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:11:24,840] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1896, candidateId=1, lastOffsetEpoch=1893, lastOffset=1965933)])]) with epoch 1896 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:11:25,173] INFO [RaftManager id=2] Completed transition to CandidateState(localId=2, epoch=1897, retries=1, voteStates={1=UNRECORDED, 2=GRANTED, 3=UNRECORDED}, highWatermark=Optional[LogOffsetMetadata(offset=1965937, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62362119)])], electionTimeoutMs=1570) from Unattached(epoch=1896, voters=[1, 2, 3], electionTimeoutMs=458) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:11:25,346] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,232] INFO [RaftManager id=2] Completed transition to Leader(localId=2, epoch=1897, epochStartOffset=1965940, highWatermark=Optional.empty, voterStates={1=ReplicaState(nodeId=1, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false), 2=ReplicaState(nodeId=2, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), 3=ReplicaState(nodeId=3, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false)}) from CandidateState(localId=2, epoch=1897, retries=1, voteStates={1=UNRECORDED, 2=GRANTED, 3=GRANTED}, highWatermark=Optional[LogOffsetMetadata(offset=1965937, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62362119)])], electionTimeoutMs=1570) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:11:26,249] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new 
controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,259] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,259] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,297] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,308] INFO [BrokerToControllerChannelManager id=2 name=alter-partition] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,308] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,309] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,311] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,311] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,358] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,361] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,363] INFO [BrokerToControllerChannelManager id=2 name=alter-partition] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,363] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,364] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,364] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,413] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,414] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 
rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,417] INFO [BrokerToControllerChannelManager id=2 name=alter-partition] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,417] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,417] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,417] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,467] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,468] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,471] INFO [BrokerToControllerChannelManager id=2 name=alter-partition] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,471] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,472] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,472] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,522] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,522] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,524] INFO [BrokerToControllerChannelManager id=2 name=alter-partition] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,524] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,524] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,524] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,574] INFO 
[broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,574] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,576] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,577] INFO [BrokerToControllerChannelManager id=2 name=alter-partition] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,577] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,577] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,627] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,627] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,629] INFO [BrokerToControllerChannelManager id=2 name=alter-partition] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,629] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,630] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:11:26,630] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,671] INFO [RaftManager id=2] High watermark set to LogOffsetMetadata(offset=1965941, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62362441)]) for the first time for epoch 1897 based on indexOfHw 1 and voters [ReplicaState(nodeId=2, endOffset=Optional[LogOffsetMetadata(offset=1965941, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62362441)])], lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), ReplicaState(nodeId=3, endOffset=Optional[LogOffsetMetadata(offset=1965941, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62362441)])], lastFetchTimestamp=1691482286671, lastCaughtUpTimestamp=1691482286671, hasAcknowledgedLeader=true), ReplicaState(nodeId=1, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false)] (org.apache.kafka.raft.LeaderState) [2023-08-08 16:11:26,680] INFO 
[broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:26,680] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-dev-d-010058012165.hz.td:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:11:27,296] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-0, test011-10, test011-4) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:12:14,314] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1898, voters=[1, 2, 3], electionTimeoutMs=1343) from Leader(localId=2, epoch=1897, epochStartOffset=1965940, highWatermark=Optional[LogOffsetMetadata(offset=1966038, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62369341)])], voterStates={1=ReplicaState(nodeId=1, endOffset=Optional[LogOffsetMetadata(offset=1966038, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62369341)])], lastFetchTimestamp=1691482333924, lastCaughtUpTimestamp=1691482333924, hasAcknowledgedLeader=true), 2=ReplicaState(nodeId=2, endOffset=Optional[LogOffsetMetadata(offset=1966038, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62369341)])], lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), 3=ReplicaState(nodeId=3, endOffset=Optional[LogOffsetMetadata(offset=1966033, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62369042)])], lastFetchTimestamp=1691482333452, lastCaughtUpTimestamp=1691482331289, hasAcknowledgedLeader=true)}) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:12:14,315] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1898, candidateId=3, lastOffsetEpoch=1897, lastOffset=1966033)])]) with epoch 1898 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:12:14,316] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(677820119). 
(org.apache.kafka.deferred.DeferredEventQueue) [2023-08-08 16:12:15,059] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1899, voters=[1, 2, 3], electionTimeoutMs=517) from Unattached(epoch=1898, voters=[1, 2, 3], electionTimeoutMs=1343) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:12:15,059] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1899, candidateId=3, lastOffsetEpoch=1897, lastOffset=1966033)])]) with epoch 1899 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:12:15,214] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1900, voters=[1, 2, 3], electionTimeoutMs=309) from Unattached(epoch=1899, voters=[1, 2, 3], electionTimeoutMs=517) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:12:15,333] INFO [RaftManager id=2] Completed transition to Voted(epoch=1900, votedId=1, voters=[1, 2, 3], electionTimeoutMs=1237) from Unattached(epoch=1900, voters=[1, 2, 3], electionTimeoutMs=309) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:12:15,333] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1900, candidateId=1, lastOffsetEpoch=1897, lastOffset=1966038)])]) with epoch 1900 is granted (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:12:15,487] INFO [RaftManager id=2] Completed transition to FollowerState(fetchTimeoutMs=2000, epoch=1900, leaderId=1, voters=[1, 2, 3], highWatermark=Optional[LogOffsetMetadata(offset=1966038, metadata=Optional[(segmentBaseOffset=1084546,relativePositionInSegment=62369341)])], fetchingSnapshot=Optional.empty) from Voted(epoch=1900, votedId=1, voters=[1, 2, 3], electionTimeoutMs=1237) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:12:15,495] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:12:15,569] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.16.231:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:12:35,315] INFO [LocalLog partition=test011-14, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035352 in 8 ms. (kafka.log.LocalLog) [2023-08-08 16:12:36,038] INFO [ProducerStateManager partition=test011-14]Wrote producer snapshot at offset 1035352 with 0 producer ids in 722 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:13:07,158] INFO [LocalLog partition=test011-12, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035355 in 3 ms. (kafka.log.LocalLog) [2023-08-08 16:13:07,158] INFO [LocalLog partition=test011-4, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035360 in 4 ms. (kafka.log.LocalLog) [2023-08-08 16:13:07,305] INFO [ProducerStateManager partition=test011-12]Wrote producer snapshot at offset 1035355 with 0 producer ids in 100 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:13:07,305] INFO [ProducerStateManager partition=test011-4]Wrote producer snapshot at offset 1035360 with 0 producer ids in 100 ms. 
(org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:13:07,341] INFO [LocalLog partition=test011-0, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035362 in 1 ms. (kafka.log.LocalLog) [2023-08-08 16:13:07,456] INFO [ProducerStateManager partition=test011-0]Wrote producer snapshot at offset 1035362 with 0 producer ids in 115 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:13:07,712] INFO [LocalLog partition=test011-19, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035355 in 3 ms. (kafka.log.LocalLog) [2023-08-08 16:13:07,900] INFO [ProducerStateManager partition=test011-19]Wrote producer snapshot at offset 1035355 with 0 producer ids in 187 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:13:07,916] INFO [LocalLog partition=test011-21, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035357 in 2 ms. (kafka.log.LocalLog) [2023-08-08 16:13:07,967] INFO [ProducerStateManager partition=test011-21]Wrote producer snapshot at offset 1035357 with 0 producer ids in 52 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:13:07,974] INFO [LocalLog partition=test011-10, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035360 in 2 ms. (kafka.log.LocalLog) [2023-08-08 16:13:08,368] INFO [ProducerStateManager partition=test011-10]Wrote producer snapshot at offset 1035360 with 0 producer ids in 394 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:13:08,389] INFO [LocalLog partition=test011-27, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035358 in 3 ms. (kafka.log.LocalLog) [2023-08-08 16:13:08,754] INFO [ProducerStateManager partition=test011-27]Wrote producer snapshot at offset 1035358 with 0 producer ids in 364 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:13:08,769] INFO [LocalLog partition=test011-15, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035355 in 2 ms. (kafka.log.LocalLog) [2023-08-08 16:13:08,935] INFO [ProducerStateManager partition=test011-15]Wrote producer snapshot at offset 1035355 with 0 producer ids in 166 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:13:08,958] INFO [LocalLog partition=test011-6, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035360 in 2 ms. (kafka.log.LocalLog) [2023-08-08 16:13:09,050] INFO [ProducerStateManager partition=test011-6]Wrote producer snapshot at offset 1035360 with 0 producer ids in 91 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:13:09,085] INFO [LocalLog partition=test011-25, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035361 in 2 ms. (kafka.log.LocalLog) [2023-08-08 16:13:09,153] INFO [ProducerStateManager partition=test011-25]Wrote producer snapshot at offset 1035361 with 0 producer ids in 68 ms. 
(org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:13:36,737] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1901, voters=[1, 2, 3], electionTimeoutMs=1925) from FollowerState(fetchTimeoutMs=2000, epoch=1900, leaderId=1, voters=[1, 2, 3], highWatermark=Optional[LogOffsetMetadata(offset=1966196, metadata=Optional.empty)], fetchingSnapshot=Optional.empty) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:36,738] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1901, candidateId=3, lastOffsetEpoch=1900, lastOffset=1966196)])]) with epoch 1901 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:36,994] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1902, voters=[1, 2, 3], electionTimeoutMs=1756) from Unattached(epoch=1901, voters=[1, 2, 3], electionTimeoutMs=1925) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:36,994] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1902, candidateId=3, lastOffsetEpoch=1900, lastOffset=1966196)])]) with epoch 1902 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:38,386] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1903, voters=[1, 2, 3], electionTimeoutMs=592) from Unattached(epoch=1902, voters=[1, 2, 3], electionTimeoutMs=1756) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:38,386] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1903, candidateId=3, lastOffsetEpoch=1900, lastOffset=1966196)])]) with epoch 1903 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:39,258] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1904, voters=[1, 2, 3], electionTimeoutMs=212) from Unattached(epoch=1903, voters=[1, 2, 3], electionTimeoutMs=592) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:39,956] INFO [RaftManager id=2] Completed transition to Voted(epoch=1904, votedId=1, voters=[1, 2, 3], electionTimeoutMs=1645) from Unattached(epoch=1904, voters=[1, 2, 3], electionTimeoutMs=212) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:39,956] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1904, candidateId=1, lastOffsetEpoch=1900, lastOffset=1966200)])]) with epoch 1904 is granted (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:40,073] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1905, voters=[1, 2, 3], electionTimeoutMs=946) from Voted(epoch=1904, votedId=1, voters=[1, 2, 3], electionTimeoutMs=1645) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:40,092] INFO [RaftManager id=2] Completed transition to Voted(epoch=1905, votedId=3, voters=[1, 2, 3], electionTimeoutMs=1270) from Unattached(epoch=1905, voters=[1, 2, 3], electionTimeoutMs=946) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:40,092] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', 
partitions=[PartitionData(partitionIndex=0, candidateEpoch=1905, candidateId=3, lastOffsetEpoch=1904, lastOffset=1966201)])]) with epoch 1905 is granted (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:40,095] INFO [RaftManager id=2] Completed transition to FollowerState(fetchTimeoutMs=2000, epoch=1905, leaderId=3, voters=[1, 2, 3], highWatermark=Optional[LogOffsetMetadata(offset=1966196, metadata=Optional.empty)], fetchingSnapshot=Optional.empty) from Voted(epoch=1905, votedId=3, voters=[1, 2, 3], electionTimeoutMs=1270) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:40,606] INFO [RaftManager id=2] Become candidate due to fetch timeout (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:40,618] INFO [RaftManager id=2] Completed transition to CandidateState(localId=2, epoch=1906, retries=1, voteStates={1=UNRECORDED, 2=GRANTED, 3=UNRECORDED}, highWatermark=Optional[LogOffsetMetadata(offset=1966196, metadata=Optional.empty)], electionTimeoutMs=1698) from FollowerState(fetchTimeoutMs=2000, epoch=1905, leaderId=3, voters=[1, 2, 3], highWatermark=Optional[LogOffsetMetadata(offset=1966196, metadata=Optional.empty)], fetchingSnapshot=Optional.empty) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:40,632] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 1 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:41,258] INFO [RaftManager id=2] Insufficient remaining votes to become leader (rejected by [1, 3]). We will backoff before retrying election again (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:41,258] INFO [RaftManager id=2] Re-elect as candidate after election backoff has completed (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:41,479] INFO [RaftManager id=2] Completed transition to CandidateState(localId=2, epoch=1907, retries=2, voteStates={1=UNRECORDED, 2=GRANTED, 3=UNRECORDED}, highWatermark=Optional[LogOffsetMetadata(offset=1966196, metadata=Optional.empty)], electionTimeoutMs=1497) from CandidateState(localId=2, epoch=1906, retries=1, voteStates={1=REJECTED, 2=GRANTED, 3=REJECTED}, highWatermark=Optional[LogOffsetMetadata(offset=1966196, metadata=Optional.empty)], electionTimeoutMs=1698) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:42,265] INFO [RaftManager id=2] Insufficient remaining votes to become leader (rejected by [1, 3]). We will backoff before retrying election again (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:42,365] INFO [RaftManager id=2] Re-elect as candidate after election backoff has completed (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:42,434] INFO [RaftManager id=2] Completed transition to CandidateState(localId=2, epoch=1908, retries=3, voteStates={1=UNRECORDED, 2=GRANTED, 3=UNRECORDED}, highWatermark=Optional[LogOffsetMetadata(offset=1966196, metadata=Optional.empty)], electionTimeoutMs=1033) from CandidateState(localId=2, epoch=1907, retries=2, voteStates={1=REJECTED, 2=GRANTED, 3=REJECTED}, highWatermark=Optional[LogOffsetMetadata(offset=1966196, metadata=Optional.empty)], electionTimeoutMs=1497) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:42,917] INFO [RaftManager id=2] Insufficient remaining votes to become leader (rejected by [1, 3]). 
We will backoff before retrying election again (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:43,617] INFO [RaftManager id=2] Re-elect as candidate after election backoff has completed (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:44,131] INFO [RaftManager id=2] Completed transition to CandidateState(localId=2, epoch=1909, retries=4, voteStates={1=UNRECORDED, 2=GRANTED, 3=UNRECORDED}, highWatermark=Optional[LogOffsetMetadata(offset=1966196, metadata=Optional.empty)], electionTimeoutMs=1537) from CandidateState(localId=2, epoch=1908, retries=3, voteStates={1=REJECTED, 2=GRANTED, 3=REJECTED}, highWatermark=Optional[LogOffsetMetadata(offset=1966196, metadata=Optional.empty)], electionTimeoutMs=1033) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:44,132] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1909, candidateId=1, lastOffsetEpoch=1906, lastOffset=1966203)])]) with epoch 1909 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:44,183] INFO [RaftManager id=2] Insufficient remaining votes to become leader (rejected by [1, 3]). We will backoff before retrying election again (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:44,184] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1909, candidateId=3, lastOffsetEpoch=1905, lastOffset=1966202)])]) with epoch 1909 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:44,904] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1910, voters=[1, 2, 3], electionTimeoutMs=866) from CandidateState(localId=2, epoch=1909, retries=4, voteStates={1=REJECTED, 2=GRANTED, 3=REJECTED}, highWatermark=Optional[LogOffsetMetadata(offset=1966196, metadata=Optional.empty)], electionTimeoutMs=1537) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:45,150] INFO [BrokerLifecycleManager id=2] Unable to send a heartbeat because the RPC got timed out before it could be sent. 
(kafka.server.BrokerLifecycleManager) [2023-08-08 16:13:45,547] INFO [RaftManager id=2] Completed transition to Voted(epoch=1910, votedId=1, voters=[1, 2, 3], electionTimeoutMs=1139) from Unattached(epoch=1910, voters=[1, 2, 3], electionTimeoutMs=866) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:45,547] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1910, candidateId=1, lastOffsetEpoch=1906, lastOffset=1966203)])]) with epoch 1910 is granted (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:45,547] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1910, candidateId=3, lastOffsetEpoch=1905, lastOffset=1966202)])]) with epoch 1910 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:46,362] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1911, voters=[1, 2, 3], electionTimeoutMs=169) from Voted(epoch=1910, votedId=1, voters=[1, 2, 3], electionTimeoutMs=1139) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:46,833] INFO [RaftManager id=2] Completed transition to Voted(epoch=1911, votedId=3, voters=[1, 2, 3], electionTimeoutMs=1851) from Unattached(epoch=1911, voters=[1, 2, 3], electionTimeoutMs=169) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:46,833] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1911, candidateId=3, lastOffsetEpoch=1905, lastOffset=1966202)])]) with epoch 1911 is granted (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:47,777] INFO [RaftManager id=2] Completed transition to FollowerState(fetchTimeoutMs=2000, epoch=1911, leaderId=3, voters=[1, 2, 3], highWatermark=Optional[LogOffsetMetadata(offset=1966196, metadata=Optional.empty)], fetchingSnapshot=Optional.empty) from Voted(epoch=1911, votedId=3, voters=[1, 2, 3], electionTimeoutMs=1851) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:47,855] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:47,859] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:47,859] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:47,910] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:47,916] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:47,916] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:47,966] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new 
controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:47,973] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:47,973] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:48,024] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:48,031] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:48,032] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:48,081] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,395] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:50,395] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,445] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,451] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:50,451] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,502] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,505] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:50,505] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,555] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,560] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:50,560] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,613] INFO 
[broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,619] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:50,619] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,669] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,672] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:50,672] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,722] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,726] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:50,726] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,776] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,780] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:50,781] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,830] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,833] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:50,833] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,883] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,886] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 3 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:13:50,886] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) 
(kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:13:50,927] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1912, voters=[1, 2, 3], electionTimeoutMs=1903) from FollowerState(fetchTimeoutMs=2000, epoch=1911, leaderId=3, voters=[1, 2, 3], highWatermark=Optional[LogOffsetMetadata(offset=1966206, metadata=Optional.empty)], fetchingSnapshot=Optional.empty) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:50,928] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1912, candidateId=1, lastOffsetEpoch=1905, lastOffset=1966202)])]) with epoch 1912 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:52,173] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1913, voters=[1, 2, 3], electionTimeoutMs=1116) from Unattached(epoch=1912, voters=[1, 2, 3], electionTimeoutMs=1903) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:52,174] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1913, candidateId=1, lastOffsetEpoch=1905, lastOffset=1966202)])]) with epoch 1913 is rejected (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:52,601] INFO [RaftManager id=2] Completed transition to Unattached(epoch=1914, voters=[1, 2, 3], electionTimeoutMs=292) from Unattached(epoch=1913, voters=[1, 2, 3], electionTimeoutMs=1116) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:53,164] INFO [RaftManager id=2] Completed transition to Voted(epoch=1914, votedId=3, voters=[1, 2, 3], electionTimeoutMs=1992) from Unattached(epoch=1914, voters=[1, 2, 3], electionTimeoutMs=292) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:53,164] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='VTx-f_krQviH03igQw0AVw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1914, candidateId=3, lastOffsetEpoch=1911, lastOffset=1966207)])]) with epoch 1914 is granted (org.apache.kafka.raft.KafkaRaftClient) [2023-08-08 16:13:53,418] INFO [RaftManager id=2] Completed transition to FollowerState(fetchTimeoutMs=2000, epoch=1914, leaderId=3, voters=[1, 2, 3], highWatermark=Optional[LogOffsetMetadata(offset=1966206, metadata=Optional.empty)], fetchingSnapshot=Optional.empty) from Voted(epoch=1914, votedId=3, voters=[1, 2, 3], electionTimeoutMs=1992) (org.apache.kafka.raft.QuorumState) [2023-08-08 16:13:53,441] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:14:30,607] INFO [LocalLog partition=test011-14, dir=/data01/kafka-logs-351] Rolled new log segment at offset 2070715 in 3 ms. (kafka.log.LocalLog) [2023-08-08 16:14:30,964] INFO [ProducerStateManager partition=test011-14]Wrote producer snapshot at offset 2070715 with 0 producer ids in 356 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:15:36,459] INFO [LocalLog partition=test011-24, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035362 in 2 ms. (kafka.log.LocalLog) [2023-08-08 16:15:36,974] INFO [ProducerStateManager partition=test011-24]Wrote producer snapshot at offset 1035362 with 0 producer ids in 515 ms. 
(org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:15:40,143] INFO [LocalLog partition=test011-29, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035358 in 3 ms. (kafka.log.LocalLog) [2023-08-08 16:15:40,158] INFO [ProducerStateManager partition=test011-29]Wrote producer snapshot at offset 1035358 with 0 producer ids in 14 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:15:40,726] INFO [LocalLog partition=test011-3, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035354 in 1 ms. (kafka.log.LocalLog) [2023-08-08 16:15:40,748] INFO [ProducerStateManager partition=test011-3]Wrote producer snapshot at offset 1035354 with 0 producer ids in 21 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:15:41,153] INFO [LocalLog partition=test011-23, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035360 in 2 ms. (kafka.log.LocalLog) [2023-08-08 16:15:41,189] INFO [ProducerStateManager partition=test011-23]Wrote producer snapshot at offset 1035360 with 0 producer ids in 35 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:15:41,680] INFO [LocalLog partition=test011-9, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035353 in 2 ms. (kafka.log.LocalLog) [2023-08-08 16:15:41,682] INFO [ProducerStateManager partition=test011-9]Wrote producer snapshot at offset 1035353 with 0 producer ids in 1 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:15:42,455] INFO [LocalLog partition=test011-8, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035360 in 1 ms. (kafka.log.LocalLog) [2023-08-08 16:15:42,468] INFO [ProducerStateManager partition=test011-8]Wrote producer snapshot at offset 1035360 with 0 producer ids in 13 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:15:43,031] INFO [LocalLog partition=test011-2, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035352 in 1 ms. (kafka.log.LocalLog) [2023-08-08 16:15:43,033] INFO [ProducerStateManager partition=test011-2]Wrote producer snapshot at offset 1035352 with 0 producer ids in 2 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:15:44,526] INFO [LocalLog partition=test011-17, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035350 in 2 ms. (kafka.log.LocalLog) [2023-08-08 16:15:44,528] INFO [ProducerStateManager partition=test011-17]Wrote producer snapshot at offset 1035350 with 0 producer ids in 2 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:15:45,023] INFO [LocalLog partition=test011-18, dir=/data01/kafka-logs-351] Rolled new log segment at offset 1035364 in 1 ms. (kafka.log.LocalLog) [2023-08-08 16:15:45,026] INFO [ProducerStateManager partition=test011-18]Wrote producer snapshot at offset 1035364 with 0 producer ids in 3 ms. 
(org.apache.kafka.storage.internals.log.ProducerStateManager) [2023-08-08 16:18:39,481] INFO [BrokerToControllerChannelManager id=2 name=alter-partition] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) [2023-08-08 16:18:39,488] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node 10.58.12.217:9093 (id: 3 rack: null) (kafka.server.BrokerToControllerRequestThread) [2023-08-08 16:18:39,559] INFO [Partition test011-0 broker=2] ISR updated to 2,3 and version updated to 2 (kafka.cluster.Partition) [2023-08-08 16:18:39,612] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-0) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:18:39,651] INFO [Partition test011-4 broker=2] ISR updated to 2,3 and version updated to 2 (kafka.cluster.Partition) [2023-08-08 16:18:40,112] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-4) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:18:40,526] INFO [Partition test011-10 broker=2] ISR updated to 2,3 and version updated to 2 (kafka.cluster.Partition) [2023-08-08 16:18:40,701] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-10) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:18:42,834] INFO [Partition test011-19 broker=2] ISR updated to 2,3 and version updated to 2 (kafka.cluster.Partition) [2023-08-08 16:18:43,113] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-19) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:18:44,452] INFO [Partition test011-27 broker=2] ISR updated to 2,3 and version updated to 2 (kafka.cluster.Partition) [2023-08-08 16:18:44,529] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-27) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:18:44,531] INFO [Partition test011-21 broker=2] ISR updated to 2,3 and version updated to 2 (kafka.cluster.Partition) [2023-08-08 16:18:44,670] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-21) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:18:46,491] INFO [Partition test011-25 broker=2] ISR updated to 2,3 and version updated to 2 (kafka.cluster.Partition) [2023-08-08 16:18:46,653] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-25) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:18:46,939] INFO [Partition test011-6 broker=2] ISR updated to 2,3 and version updated to 2 (kafka.cluster.Partition) [2023-08-08 16:18:47,167] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-6) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:18:47,646] INFO [Partition test011-15 broker=2] ISR updated to 2,3 and version updated to 2 (kafka.cluster.Partition) [2023-08-08 16:18:48,123] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test011-15) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:18:52,293] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-356 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,295] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-356 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,295] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-488 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,295] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-488 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,295] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-223 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,295] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-223 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,295] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-30 has an older epoch (48) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,296] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-30 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,296] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-289 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,296] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-289 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,296] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-355 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,296] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-355 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,296] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-25 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,297] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-25 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,297] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-288 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,297] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-288 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,297] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-29 has an older epoch (47) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,297] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-29 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,297] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-90 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,297] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-90 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,297] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-358 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,297] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-358 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,297] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-358 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,297] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-358 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,297] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-32 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-32 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-687 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-687 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test-0 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test-0 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-489 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-489 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-92 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-92 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-224 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-224 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test009-26 has an older epoch (11) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test009-26 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,298] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-558 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-558 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-34 has an older epoch (47) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-34 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-557 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-557 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-94 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-94 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-94 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-94 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-226 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-226 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test009-28 has an older epoch (11) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test009-28 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-28 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-28 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,299] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-692 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,300] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-692 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,300] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-36 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,300] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-36 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,300] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-229 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,300] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-229 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,300] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-295 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,300] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-295 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,300] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-96 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,300] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-96 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,300] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-35 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,300] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-35 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-17 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-17 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-545 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-545 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test-7 has an older epoch (47) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test-7 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-38 has an older epoch (47) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-38 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-610 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-610 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test-9 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test-9 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-18 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,301] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-18 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-613 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-613 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-679 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-679 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-546 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-546 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-39 has an older epoch (48) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-39 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test010-17 has an older epoch (7) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test010-17 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-285 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-285 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,302] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-285 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,303] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-285 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,303] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test010-20 has an older epoch (7) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,303] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test010-20 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,303] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-54 has an older epoch (35) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,303] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-54 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,303] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-219 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,303] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-219 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,303] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-41 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,303] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-41 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,303] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-549 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,303] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-549 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,303] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-350 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-350 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-284 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-284 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-86 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-86 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-152 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-152 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-44 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-44 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-618 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-618 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test-13 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,304] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test-13 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-353 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-353 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-419 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-419 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-89 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-89 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test009-22 has an older epoch (11) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test009-22 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-43 has an older epoch (47) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-43 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-22 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-22 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test010-21 has an older epoch (7) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,305] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test010-21 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-272 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-272 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-272 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-272 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-12 has an older epoch (48) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-12 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test010-10 has an older epoch (7) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test010-10 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-275 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-275 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-605 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-605 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test-17 has an older epoch (47) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test-17 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-208 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,306] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-208 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-14 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-14 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-208 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-208 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-10 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-10 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-17 has an older epoch (48) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-17 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-277 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-277 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-343 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-343 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-12 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-12 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,307] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-16 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-16 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test009-12 has an older epoch (11) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test009-12 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-606 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-606 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-342 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-342 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-19 has an older epoch (48) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-19 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-675 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-675 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-279 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-279 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-411 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,308] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-411 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-18 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-18 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-21 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-21 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-66 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-66 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-396 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-396 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-528 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-528 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-198 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-198 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-264 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-264 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,309] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-653 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-653 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-190 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-190 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-322 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-322 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-58 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-58 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-173 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-173 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-45 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-45 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-239 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-239 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-107 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-107 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,310] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-8 has an older epoch (44) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-8 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-255 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-255 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-57 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-57 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-635 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-635 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-24 has an older epoch (44) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-24 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-44 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-44 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-48 has an older epoch (47) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-48 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-391 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-391 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,311] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-457 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-457 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-523 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-523 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-44 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-44 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-126 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-126 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-192 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-192 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-324 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-324 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-47 has an older epoch (47) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-47 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-175 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-175 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-637 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-637 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,312] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-257 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-257 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-439 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-439 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-112 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-112 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-46 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-46 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-178 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-178 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-178 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-178 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-706 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-706 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-376 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-376 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,313] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-1 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-1 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-45 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-45 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-326 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-326 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-49 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-49 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-573 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-573 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-61 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-61 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-0 has an older epoch (47) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-0 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-127 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-127 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-441 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-441 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-263 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-263 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,314] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-329 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-329 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-130 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-130 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-31 has an older epoch (35) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-31 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-576 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-576 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-196 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-196 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-180 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-180 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-312 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-312 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-3 has an older epoch (48) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-3 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-312 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-312 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-592 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-592 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,315] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-196 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-196 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-245 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-245 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-17 has an older epoch (44) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-17 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-166 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-166 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-711 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-711 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-380 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-380 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-0 has an older epoch (44) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-0 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-182 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-182 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-5 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-5 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-627 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-627 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,316] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-115 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-115 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-165 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-165 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-231 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-231 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-511 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-511 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-32 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-32 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-65 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-65 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-32 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-32 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-247 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-247 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-333 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-333 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,317] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-498 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-498 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-36 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-36 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-647 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-647 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test010-2 has an older epoch (7) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test010-2 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-300 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-300 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test-26 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test-26 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-448 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-448 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-514 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-514 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-2 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-2 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,318] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-35 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-35 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-19 has an older epoch (44) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-19 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-7 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-7 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-118 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-118 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-316 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-316 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-35 has an older epoch (35) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-35 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-580 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-580 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,319] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-22 has an older epoch (48) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-22 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-200 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-200 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-332 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-332 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test009-1 has an older epoch (11) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test009-1 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-1 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-1 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-183 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-183 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-249 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-249 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,320] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-269 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-269 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-9 has an older epoch (47) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-9 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-517 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-517 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-71 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-71 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-38 has an older epoch (35) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-38 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-318 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-318 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-384 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,321] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-384 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-599 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-599 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-120 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-120 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-21 has an older epoch (44) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-21 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-120 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-120 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-186 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-186 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-24 has an older epoch (47) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-24 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-136 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-136 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-714 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-714 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-202 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-202 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,322] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-25 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-25 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-53 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-53 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-8 has an older epoch (48) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-8 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-664 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-664 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-251 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-251 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-337 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-337 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-519 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-519 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-7 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-7 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-73 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-73 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-40 has an older epoch (35) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test123-40 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,323] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-320 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-320 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-336 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-336 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-10 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-10 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-138 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-138 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-584 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-584 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-204 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-204 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test010-5 has an older epoch (7) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test010-5 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-253 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-253 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-253 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-253 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,324] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-27 has an older epoch (46) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition __consumer_offsets-27 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-600 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-600 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-666 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-666 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-187 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-187 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-368 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-368 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-370 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-370 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-38 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-38 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-170 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-170 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-436 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-436 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,325] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-632 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-632 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-568 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-568 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-365 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-365 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-433 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-433 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-101 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-101 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-39 has an older epoch (24) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test004-39 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] INFO [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-235 has an older epoch (23) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,326] WARN [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Partition test005-235 marked as failed (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:52,723] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(test004-653, test004-719, test004-389, test004-356, test004-488, test005-190, test004-124, test004-223, test005-322, test004-289, test005-355, test004-322, test005-58, test004-25, test004-421, test004-454, test004-586, test005-288, test004-255, test005-90, test004-57, test123-24, test004-655, test004-622, test005-358, test004-292, test004-391, test004-358, test004-457, test004-523, test005-126, __consumer_offsets-32, test004-159, test005-159, test004-225, test004-192, test005-324, test005-291, test004-258, test004-687, test-0, test004-489, test005-158, test123-59, test005-125, test004-92, test005-224, test004-191, test004-158, test005-257, test009-26, test005-26, test004-558, test004-228, test004-294, test004-459, test123-29, test005-194, __consumer_offsets-1, test004-557, test004-590, test004-326, test004-425, test005-94, test004-61, test005-61, test004-28, test004-127, test004-94, test005-226, test005-259, test009-28, test005-28, test004-527, test004-494, test004-659, test004-263, test005-263, test004-329, test005-329, test004-362, test005-130, test123-31, test005-97, test005-196, test004-163, test005-163, test004-692, __consumer_offsets-36, test004-625, test004-592, test005-229, test004-196, test005-295, test004-427, test004-394, test004-129, test004-96, test-4, __consumer_offsets-35, test004-17, test005-17, test123-17, test004-50, test004-711, test004-380, test004-545, test005-182, test004-116, __consumer_offsets-5, test-7, test004-347, test005-347, test005-49, test005-148, test123-49, test005-115, test009-16, test004-511, test004-643, test004-610, test004-148, test004-247, test009-19, test004-647, test004-349, test004-448, test004-514, test005-118, __consumer_offsets-7, test123-19, test004-52, test005-184, test004-118, test-9, test005-316, test005-84, test004-18, test004-613, test004-580, test004-679, test004-414, test004-480, test004-546, test123-51, test004-183, test010-17, test005-282, test004-249, test004-216, test004-315, test004-517, test004-484, test004-285, test005-285, test004-318, test004-384, test005-120, test123-21, test010-20, test004-120, test123-54, test004-219, test004-186, test005-20, __consumer_offsets-41, test004-549, test004-615, test004-714, test005-350, test004-284, test004-383, test005-53, test005-152, test004-86, test005-218, test004-152, test005-251, test004-684, __consumer_offsets-44, test004-519, test004-552, test004-618, test-13, test005-320, test005-353, test004-419, test004-89, test004-56, test004-155, test005-155, test009-22, __consumer_offsets-10, test004-584, test004-683, test004-650, test004-253, test005-253, test004-352, test004-451, test005-22, test005-88, test005-55, test004-22, test005-220, test005-187, test010-21, test004-141, test005-240, test005-339, test004-9, test004-669, test004-702, test009-8, test004-372, test005-173, test005-272, 
__consumer_offsets-45, test005-239, test004-206, test004-305, test004-272, test005-107, test123-8, test004-503, test004-635, test123-11, test005-77, test004-44, test010-10, test123-44, test004-176, test005-308, test004-275, test004-242, test005-44, test004-605, test004-572, test-17, test004-341, test004-308, test004-407, test004-539, test004-506, test005-142, test004-109, test005-109, test004-76, test005-208, test005-175, __consumer_offsets-14, test005-274, test004-208, test009-10, test005-10, test004-637, test004-439, test004-472, test005-13, test005-112, test004-79, test004-46, test005-178, test004-145, test004-178, test-20, test004-640, test004-706, test004-277, test005-277, test004-343, test004-310, test004-409, test004-376, test004-475, test005-45, __consumer_offsets-16, test004-12, test004-111, test123-45, test005-210, test005-243, test009-12, __consumer_offsets-49, test004-573, test004-540, test004-606, test005-342, test005-309, test004-441, test004-507, test004-81, test005-81, test004-477, test004-444, test004-576, test004-675, test005-246, test004-213, test005-213, test004-180, test005-312, test004-279, test005-279, test004-312, test004-411, test005-80, test005-146, test123-14, test-21, test004-708, __consumer_offsets-18, test004-674, test004-245, test004-212, test004-463, test004-430, test004-496, test004-595, test004-562, test005-166, __consumer_offsets-21, test004-265, test004-331, test004-298, test005-66, test123-0, test004-661, test004-396, test004-528, test004-627, test005-198, test004-165, test005-264, test004-231, test005-231, test005-297, test005-32, test004-65, test005-65, test004-32, test004-693, test005-333, test004-399, test004-498, test005-102, test004-36, test010-2, test005-300, test004-267, test-26, test005-35, test004-2, test004-696, test004-365, test005-134, test004-101, test123-35, test004-134, test004-200, test005-332, test009-1, test005-1, test005-100, test004-67, test004-34, test005-269, test-28, test004-335, test004-302, test004-368, test004-467, test005-38, test004-71, test005-71, test005-170, test005-137, test123-38, test005-203, test004-170, test005-4, test004-533, test004-599, test004-566, test004-632, test004-433, test004-4, test004-103, test123-4, test004-136, test004-235, test005-235, test004-202, test009-3, __consumer_offsets-25, test004-664, test004-172, test005-304, test004-238, test005-337, test004-403, test004-370, test004-7, test004-73, test004-139, test123-40, test004-700, test004-436, test004-568, test004-336, test005-6, test005-72, test004-39, test005-39, test005-138, test004-105, test005-204, test010-5, __consumer_offsets-27, test004-468, test004-534, test004-600, test004-666) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:18:54,219] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(__consumer_offsets-47, __consumer_offsets-48, test010-9, __consumer_offsets-43, __consumer_offsets-12, __consumer_offsets-9, __consumer_offsets-24, test010-1, __consumer_offsets-22, test010-4, __consumer_offsets-19, __consumer_offsets-17, test010-23, __consumer_offsets-0, __consumer_offsets-29, __consumer_offsets-30, test010-16, __consumer_offsets-39, __consumer_offsets-8, __consumer_offsets-38, __consumer_offsets-3, test010-19, __consumer_offsets-34) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:18:54,221] INFO [ReplicaFetcherManager on broker 2] Added fetcher to broker 1 for partitions HashMap(__consumer_offsets-47 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0), 
__consumer_offsets-48 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0), test010-9 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=1, host=10.58.16.231:9092),10,2723772), __consumer_offsets-43 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0), __consumer_offsets-12 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),49,0), __consumer_offsets-9 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0), __consumer_offsets-24 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0), test010-1 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=1, host=10.58.16.231:9092),10,2724990), __consumer_offsets-22 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),49,0), test010-4 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=1, host=10.58.16.231:9092),10,2723838), __consumer_offsets-19 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),49,0), __consumer_offsets-17 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),49,0), test010-23 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=1, host=10.58.16.231:9092),10,2723139), __consumer_offsets-0 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0), __consumer_offsets-29 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0), __consumer_offsets-30 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),49,0), test010-16 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=1, host=10.58.16.231:9092),10,2723705), __consumer_offsets-39 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),49,0), __consumer_offsets-8 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),49,0), __consumer_offsets-38 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0), __consumer_offsets-3 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),49,0), test010-19 -> InitialFetchState(Some(KRrkky6_Qwi605E4lIfOgw),BrokerEndPoint(id=1, host=10.58.16.231:9092),10,2724227), __consumer_offsets-34 -> InitialFetchState(Some(VTTnHOjHS1i07Zhb99_tfg),BrokerEndPoint(id=1, host=10.58.16.231:9092),48,0)) (kafka.server.ReplicaFetcherManager) [2023-08-08 16:18:54,226] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 16 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,227] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,229] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 45 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,229] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,229] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 14 in 
epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,229] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,229] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 44 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,229] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,229] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 41 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,229] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,229] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 10 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,229] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,229] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 21 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,229] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,229] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 49 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,229] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,229] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 18 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,229] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,229] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 32 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,229] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,229] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 27 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,229] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,229] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 25 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,229] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,229] INFO [GroupCoordinator 2]: Elected as the 
group coordinator for partition 7 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 5 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 35 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 36 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 1 in epoch 47 (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 47 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 48 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 43 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-43 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 12 in epoch OptionalInt[49] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 9 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] 
INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 24 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 22 in epoch OptionalInt[49] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 19 in epoch OptionalInt[49] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-19 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 17 in epoch OptionalInt[49] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 0 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 29 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 30 in epoch OptionalInt[49] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 39 in epoch OptionalInt[49] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 8 in epoch OptionalInt[49] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 38 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-38 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 3 in epoch OptionalInt[49] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,230] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 34 in epoch OptionalInt[48] (kafka.coordinator.group.GroupCoordinator) [2023-08-08 16:18:54,230] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-34 (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,239] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-16 in 9 milliseconds for epoch 47, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,239] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-45 in 10 milliseconds for epoch 47, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,239] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-14 in 10 milliseconds for epoch 47, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,239] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-44 in 10 milliseconds for epoch 47, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,239] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 47, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,240] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-10 in 11 milliseconds for epoch 47, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,240] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-21 in 11 milliseconds for epoch 47, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,240] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-49 in 11 milliseconds for epoch 47, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,240] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-18 in 11 milliseconds for epoch 47, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,240] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-32 in 11 milliseconds for epoch 47, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,240] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-27 in 11 milliseconds for epoch 47, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,240] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 47, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,241] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-7 in 11 milliseconds for epoch 47, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,241] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-5 in 11 milliseconds for epoch 47, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,241] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-35 in 11 milliseconds for epoch 47, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,241] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-36 in 11 milliseconds for epoch 47, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,241] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-1 in 11 milliseconds for epoch 47, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,241] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-47 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,241] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-48 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,241] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-43 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,241] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-12 for coordinator epoch OptionalInt[49]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,241] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-9 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,241] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-24 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,242] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-22 for coordinator epoch OptionalInt[49]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,242] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-19 for coordinator epoch OptionalInt[49]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,242] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-17 for coordinator epoch OptionalInt[49]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,242] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-0 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,242] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-29 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,242] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-30 for coordinator epoch OptionalInt[49]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,242] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-39 for coordinator epoch OptionalInt[49]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,242] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-8 for coordinator epoch OptionalInt[49]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,242] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-38 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,242] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-3 for coordinator epoch OptionalInt[49]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,242] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-34 for coordinator epoch OptionalInt[48]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) [2023-08-08 16:18:54,568] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-47 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,568] INFO [UnifiedLog partition=__consumer_offsets-47, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,568] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-48 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,568] INFO [UnifiedLog partition=__consumer_offsets-48, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,568] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-43 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,568] INFO [UnifiedLog partition=__consumer_offsets-43, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,568] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-12 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,568] INFO [UnifiedLog partition=__consumer_offsets-12, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,568] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-9 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,568] INFO [UnifiedLog partition=__consumer_offsets-9, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,568] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-24 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,568] INFO [UnifiedLog partition=__consumer_offsets-24, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,568] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-22 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,568] INFO [UnifiedLog partition=__consumer_offsets-22, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,568] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-19 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,568] INFO [UnifiedLog partition=__consumer_offsets-19, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 
16:18:54,568] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-17 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,568] INFO [UnifiedLog partition=__consumer_offsets-17, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,568] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-0 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,568] INFO [UnifiedLog partition=__consumer_offsets-0, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,569] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-29 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,569] INFO [UnifiedLog partition=__consumer_offsets-29, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,569] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-30 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,569] INFO [UnifiedLog partition=__consumer_offsets-30, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,569] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-39 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,569] INFO [UnifiedLog partition=__consumer_offsets-39, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,569] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-8 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,569] INFO [UnifiedLog partition=__consumer_offsets-8, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,569] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-38 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,569] INFO [UnifiedLog partition=__consumer_offsets-38, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,569] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-3 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,569] INFO [UnifiedLog partition=__consumer_offsets-3, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,569] INFO [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Truncating partition __consumer_offsets-34 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,569] INFO [UnifiedLog partition=__consumer_offsets-34, dir=/data01/kafka-logs-351] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,575] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test010-23 with TruncationState(offset=2723139, completed=true) due to leader epoch and offset EpochEndOffset(errorCode=0, partition=23, leaderEpoch=6, endOffset=2723139) (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,575] INFO [UnifiedLog partition=test010-23, dir=/data01/kafka-logs-351] Truncating to 2723139 has no effect as the largest offset in the log is 2723138 (kafka.log.UnifiedLog) [2023-08-08 16:18:54,579] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test010-9 with TruncationState(offset=2723772, completed=true) due to leader epoch and offset EpochEndOffset(errorCode=0, partition=9, leaderEpoch=6, endOffset=2723772) (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:54,579] INFO [UnifiedLog partition=test010-9, dir=/data01/kafka-logs-351] Truncating to 2723772 has no effect as the largest offset in the log is 2723771 (kafka.log.UnifiedLog) [2023-08-08 16:18:55,616] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test010-16 with TruncationState(offset=2723705, completed=true) due to leader epoch and offset EpochEndOffset(errorCode=0, partition=16, leaderEpoch=6, endOffset=2723705) (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:55,617] INFO [UnifiedLog partition=test010-16, dir=/data01/kafka-logs-351] Truncating to 2723705 has no effect as the largest offset in the log is 2723704 (kafka.log.UnifiedLog) [2023-08-08 16:18:55,630] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test010-1 with TruncationState(offset=2724990, completed=true) due to leader epoch and offset EpochEndOffset(errorCode=0, partition=1, leaderEpoch=6, endOffset=2724990) (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:55,630] INFO [UnifiedLog partition=test010-1, dir=/data01/kafka-logs-351] Truncating to 2724990 has no effect as the largest offset in the log is 2724989 (kafka.log.UnifiedLog) [2023-08-08 16:18:55,640] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test010-4 with TruncationState(offset=2723838, completed=true) due to leader epoch and offset EpochEndOffset(errorCode=0, partition=4, leaderEpoch=6, endOffset=2723838) (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:55,641] INFO [UnifiedLog partition=test010-4, dir=/data01/kafka-logs-351] Truncating to 2723838 has no effect as the largest offset in the log is 2723837 (kafka.log.UnifiedLog) [2023-08-08 16:18:55,646] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition test010-19 with TruncationState(offset=2724227, completed=true) due to leader epoch and offset EpochEndOffset(errorCode=0, partition=19, leaderEpoch=6, endOffset=2724227) (kafka.server.ReplicaFetcherThread) [2023-08-08 16:18:55,646] INFO [UnifiedLog partition=test010-19, dir=/data01/kafka-logs-351] Truncating to 2724227 has no effect as the largest offset in the log is 2724226 (kafka.log.UnifiedLog)