Details
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
Description
Kafka requires that all reads happen on the partition leader for consistency.
For a high-volume topic with many consumers, some consumers may be latency-insensitive but data-loss sensitive, and may fall behind. When they lag, they read from disk and pollute the page cache, so the latency-sensitive consumers on the same broker suffer page-cache misses.
If reads could optionally be served from replicas with eventual consistency (likely version/offset based), then consumers marked as "read from replica" could be isolated from the others.
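For reference, later Kafka releases (2.4+, via KIP-392 "Allow consumers to fetch from closest replica") implement a variant of this idea as follower fetching. A minimal configuration sketch, assuming rack-aware placement (the rack names below are placeholders):

```properties
# Broker side: tag each broker with its location and enable the
# rack-aware selector so fetches can be served from a nearby replica.
broker.rack=rack-a
replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector

# Consumer side: declare the client's location; the broker may then
# direct this consumer's fetches to a replica in the same rack.
client.rack=rack-a
```

Follower fetching trades read-your-writes consistency on the follower for isolation: a lagging consumer pinned to a replica no longer evicts hot pages on the leader.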