Details
- Type: Improvement
- Status: Open
- Priority: Minor
- Resolution: Unresolved
- Affects Version/s: 2.1.1
- Fix Version/s: None
- Environment: Kafka 2.1.1
Description
I added two brokers (broker IDs 4 and 5) to a 3-node cluster (broker IDs 1, 2, 3) holding 32 topics with 64 partitions each, replication factor 3.
I then ran a partition reassignment.
On each run I see the WARN messages below, but once the reassignment finishes everything looks OK: the ISR count is 3 for every partition.
Still, I get the following message types, one per partition:
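For context, the reassignment plan was of the usual JSON shape fed to kafka-reassign-partitions.sh. A minimal sketch of building such a plan, assuming a round-robin spread over the five brokers (the topic name and partition count are illustrative, not the exact cluster layout):

```python
import json

# Brokers after expansion; original cluster was 1,2,3 and 4,5 were added.
brokers = [1, 2, 3, 4, 5]
replication_factor = 3

def build_plan(topic, num_partitions):
    """Round-robin replica placement over all brokers (illustrative only)."""
    partitions = []
    for p in range(num_partitions):
        replicas = [brokers[(p + i) % len(brokers)]
                    for i in range(replication_factor)]
        partitions.append({"topic": topic, "partition": p, "replicas": replicas})
    return {"version": 1, "partitions": partitions}

plan = build_plan("visitors-0.0.1", 64)
print(json.dumps(plan["partitions"][0]))
# -> {"topic": "visitors-0.0.1", "partition": 0, "replicas": [1, 2, 3]}
```

The resulting JSON is what `kafka-reassign-partitions.sh --reassignment-json-file ... --execute` consumes.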
[2019-06-27 12:42:03,946] WARN [LeaderEpochCache visitors-0.0.1-10] New epoch entry EpochEntry(epoch=24, startOffset=51540) caused truncation of conflicting entries ListBuffer(EpochEntry(epoch=22, startOffset=51540)). Cache now contains 5 entries. (kafka.server.epoch.LeaderEpochFileCache)
-> This one relates to the epoch cache, so I suppose it's fairly safe.
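Reading the message literally, the new epoch entry's start offset collides with an existing entry's, so the conflicting entries are dropped before the new one is appended. A simplified model of that behavior, assuming (not verified against Kafka's source) that entries starting at or after the new entry's offset count as conflicting:

```python
from collections import namedtuple

EpochEntry = namedtuple("EpochEntry", ["epoch", "start_offset"])

def assign(cache, new_entry):
    """Simplified model of the truncation the WARN reports: existing entries
    whose start offset is >= the new entry's start offset conflict and are
    removed, then the new entry is appended. Not Kafka's actual code."""
    removed = [e for e in cache if e.start_offset >= new_entry.start_offset]
    kept = [e for e in cache if e.start_offset < new_entry.start_offset]
    kept.append(new_entry)
    return kept, removed

# Mirrors the logged case: epoch 24 at 51540 evicts epoch 22 at 51540.
cache = [EpochEntry(20, 40000), EpochEntry(22, 51540)]
cache, removed = assign(cache, EpochEntry(24, 51540))
print(removed)  # the "conflicting entries" the log mentions
```

Since only cache entries are truncated, not log data, this matches my reading that it's harmless.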
[2019-06-27 12:42:04,250] WARN [ReplicaManager broker=1] Leader 1 failed to record follower 3's position 47981 since the replica is not recognized to be one of the assigned replicas 1,2,5 for partition visitors-0.0.1-28. Empty records will be returned for this partition. (kafka.server.ReplicaManager)
-> This one is scary. I'm not sure how severe it is, but it sounds as if records might be missing?
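My reading of the message is that the leader simply refuses to serve a fetch from a broker that is no longer in its current assigned-replica set (broker 3 was dropped from {1, 2, 5} mid-reassignment), rather than dropping any data. A hypothetical sketch of that check, not Kafka's actual code:

```python
def handle_fetch(assigned_replicas, follower_id, fetch_offset):
    """Simplified model of the check behind the WARN: the leader only records
    fetch positions for brokers in its assigned-replica set; an unrecognized
    follower gets an empty response instead of records. The leader's log is
    untouched; the stale follower just isn't served until metadata catches up."""
    if follower_id not in assigned_replicas:
        return []  # mirrors "Empty records will be returned for this partition"
    return [f"<records from offset {fetch_offset}>"]

# Broker 3 fetches after being removed from the assignment {1, 2, 5}:
print(handle_fetch({1, 2, 5}, follower_id=3, fetch_offset=47981))  # -> []
```

If that reading is right, the leader-side data is intact and the message is about a transient membership mismatch during reassignment.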
[2019-06-27 12:42:03,709] WARN [ReplicaManager broker=1] While recording the replica LEO, the partition visitors-0.0.1-58 hasn't been created. (kafka.server.ReplicaManager)
-> Here the partitions in question do exist, despite what the message says.
First of all: am I actually losing data here? I assume I'm not, and so far I see no trace of losing anything.
If that's the case, I'm not sure what these messages are trying to tell me. Should they really be at WARN level? And if so, shouldn't the messages explain the different risks involved more clearly?