Affects Version/s: 0.7, 0.7.1
Fix Version/s: None
We currently consume multiple topics through ZooKeeper by first acquiring a ConsumerConnector and then fetching message streams for the desired topics. Once messages have been consumed, the current consumption state is committed with ConsumerConnector#commitOffsets().
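For reference, the consumption pattern described above looks roughly like the following. This is a sketch against the 0.7-era high-level consumer; configuration property names, package paths, and the stream's generic types varied between releases, so treat the details as illustrative rather than exact:

```
Properties props = new Properties();
props.put("zk.connect", "localhost:2181"); // 0.7-style property names (assumed)
props.put("groupid", "piping-proxy");      // hypothetical group id
ConsumerConnector connector =
    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

// One stream per topic; all ~20 topics share the same connector.
Map<String, Integer> topicCountMap = new HashMap<>();
topicCountMap.put("high-volume-topic", 1); // invented topic names
topicCountMap.put("low-volume-topic", 1);
Map<String, List<KafkaStream<Message>>> streams =
    connector.createMessageStreams(topicCountMap);

// ... consume, repackage, persist ...

// The only commit available is connector-wide: it records the current
// position for EVERY topic/partition, not just the one we persisted.
connector.commitOffsets();
```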
This scheme has a flaw when the consuming application acts as a kind of data-piping proxy rather than a final consuming sink. In our case we read data from Kafka, repackage it, and only then move it to persistent storage. The repackaging step is relatively long-running (usually a few minutes, but it may span several hours), and our topic throughputs are highly asymmetric: of roughly 20 topics in total, one receives about 80% of the total throughput. As an unwanted side effect, committing offsets whenever the persistence step completes for one topic also commits offsets for all the other topics, which can eventually manifest as data loss if the consuming application or the machine it runs on crashes.
So, while this data loss can be alleviated to some extent with, for example, local temporary storage, it would be cleaner if KafkaStream itself allowed committing offsets at the partition level.
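To make the failure mode concrete, here is a small self-contained simulation. It is a toy model, not the Kafka API; the topic names and message counts are invented. It shows how a connector-wide commit, taken right after one topic's messages are persisted, also advances the position of a topic whose messages are still only in memory:

```java
import java.util.*;

// Toy model of the connector-wide commit problem described in this issue.
public class CommitGranularitySim {
    public static int lostMessagesAfterCrash() {
        // Consumed-but-not-yet-persisted messages, tracked per topic.
        Map<String, List<String>> inFlight = new HashMap<>();
        inFlight.put("fast-topic", new ArrayList<>(List.of("a1", "a2")));
        inFlight.put("slow-topic", new ArrayList<>(List.of("b1", "b2", "b3")));

        // "fast-topic" finishes its repackaging step and is persisted.
        inFlight.get("fast-topic").clear();

        // A connector-wide commit now records offsets for BOTH topics,
        // even though slow-topic's messages exist only in memory.
        Map<String, Boolean> committed = new HashMap<>();
        for (String topic : inFlight.keySet()) {
            committed.put(topic, true);
        }

        // Crash: in-memory state is gone. On restart the consumer resumes
        // from the committed offsets, so slow-topic's in-flight messages
        // are never redelivered.
        int lost = 0;
        for (Map.Entry<String, List<String>> e : inFlight.entrySet()) {
            if (committed.get(e.getKey())) {
                lost += e.getValue().size();
            }
        }
        return lost; // slow-topic's 3 unpersisted messages are lost
    }

    public static void main(String[] args) {
        System.out.println("Messages lost after crash: " + lostMessagesAfterCrash());
    }
}
```

With per-partition (or at least per-topic) commits, the proxy could commit only the positions it has actually persisted, and the slow topic's messages would be redelivered after a restart.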
|Field||Original Value||New Value|
|Priority||Major [ 3 ]||Minor [ 4 ]|
|Workflow||no-reopen-closed, patch-avail [ 12725828 ]||Apache Kafka Workflow [ 13052596 ]|
|Workflow||Apache Kafka Workflow [ 13052596 ]||no-reopen-closed, patch-avail [ 13054888 ]|