Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 0.7
- Fix Version/s: None
- Component/s: None
Description
The max.message.size check is not performed on the compressed message as a whole, but only on each individual message that makes up the compressed message. As a result, even if max.message.size is set to 1MB, the producer can send n 1MB messages wrapped as a single compressed message. This can cause memory problems on the server as well as deserialization problems on the consumer. The consumer's fetch.size has to be greater than max.message.size in order to read data; if a single message is larger than fetch.size, the consumer throws an exception and cannot proceed until fetch.size is increased.
Because of this bug, even when fetch.size > max.message.size, the consumer can still get stuck on a compressed message that is larger than max.message.size.
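To make the distinction concrete, here is a minimal, hypothetical sketch (not Kafka's actual code; the class, method, and constant names are invented for illustration) contrasting the current per-inner-message check with a check on the full compressed wrapper message:

import java.util.List;

public class MessageSizeCheck {

    // Assumed limit for illustration; stands in for max.message.size = 1MB.
    static final int MAX_MESSAGE_SIZE = 1000000;

    // Current (buggy) behavior: each inner payload is checked on its own, so
    // n payloads just under the limit can still form one oversized compressed message.
    static void checkEachInnerMessage(List<byte[]> innerPayloads) {
        for (byte[] payload : innerPayloads) {
            if (payload.length > MAX_MESSAGE_SIZE) {
                throw new IllegalArgumentException("inner message exceeds max.message.size");
            }
        }
    }

    // Intended behavior: check the size of the compressed wrapper message that is
    // actually written to the log and that the consumer must fit into fetch.size.
    static void checkCompressedWrapper(byte[] compressedWrapper) {
        if (compressedWrapper.length > MAX_MESSAGE_SIZE) {
            throw new IllegalArgumentException("compressed message exceeds max.message.size");
        }
    }
}

With a check like checkCompressedWrapper in place, a consumer whose fetch.size is at least max.message.size should always be able to fetch any message the broker accepted.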
Issue Links
- is part of: KAFKA-273 Occasional GZIP errors on the server while writing compressed data to disk (Resolved)
1. Enforce max.message.size on the total message size, not just on payload size (Resolved, Unassigned)