Description
It would be valuable to us to have the encoding format used in a Kafka topic decoupled from the encoding format used to cache the data locally in a Kafka Streams app.
We would like to use the `range()` function in the Interactive Queries API to look up a series of results, but we can't with our encoding scheme because our serialized keys are variable length: range scans compare the raw serialized key bytes, and our variable-length encoding does not preserve the logical key order in those bytes.
We use protobuf, but based on what I've read, Avro, FlatBuffers, and Cap'n Proto have similar problems.
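To make the ordering problem concrete, here is a minimal, Kafka-free sketch. The `varint` helper is written out here for illustration and follows protobuf's base-128 varint rules; the point is that numerically larger keys can serialize to byte-wise smaller values, while RocksDB-backed stores compare exactly those serialized bytes during a `range()` scan:

```java
import java.util.Arrays;

public class VarintOrder {
    // Base-128 varint encoding, as protobuf uses for integer fields.
    static byte[] varint(long v) {
        byte[] buf = new byte[10];
        int i = 0;
        while ((v & ~0x7FL) != 0) {
            buf[i++] = (byte) ((v & 0x7F) | 0x80);
            v >>>= 7;
        }
        buf[i++] = (byte) v;
        return Arrays.copyOf(buf, i);
    }

    public static void main(String[] args) {
        byte[] a = varint(200); // encodes as 0xC8 0x01
        byte[] b = varint(300); // encodes as 0xAC 0x02
        // Unsigned lexicographic comparison, the same ordering a byte-wise
        // store comparator applies to serialized keys.
        int cmp = Arrays.compareUnsigned(a, b);
        // Prints a positive number: 200 sorts AFTER 300 byte-wise, so a
        // range scan over serialized keys returns the wrong results.
        System.out.println(cmp);
    }
}
```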
Currently we use the following code to work around this problem:
```java
// Re-encode the input into an intermediate topic, using serdes whose
// serialized keys are suitable for range scans on the local store.
builder
    .stream("input-topic", Consumed.with(inputKeySerde, inputValueSerde))
    .to("intermediate-topic", Produced.with(intermediateKeySerde, intermediateValueSerde));

// Materialize the table from the re-encoded topic.
t1 = builder.table(
    "intermediate-topic",
    Consumed.with(intermediateKeySerde, intermediateValueSerde),
    t1Materialized);
```
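For reference, this is the kind of order-preserving encoding the intermediate serdes rely on. The snippet below is a hypothetical sketch, assuming the key is a non-negative long ID: fixed-width big-endian bytes sort the same way the numbers do (Kafka's built-in `Serdes.Long()` has the same property).

```java
import java.nio.ByteBuffer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

// Hypothetical intermediateKeySerde: fixed-width big-endian encoding of a
// non-negative long ID, so byte-wise order matches numeric order and
// range() scans over the local store behave correctly.
Serde<Long> intermediateKeySerde = Serdes.serdeFrom(
    (topic, id) -> ByteBuffer.allocate(Long.BYTES).putLong(id).array(),
    (topic, bytes) -> ByteBuffer.wrap(bytes).getLong());
```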
With the encoding formats decoupled, the code above could be reduced to a single `table()` call, with no intermediate topic required.
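For illustration, the single-step version might look like the sketch below. `MyKey` and `MyValue` are hypothetical placeholder types, and the key assumption is that both serde pairs handle the same Java types with different byte encodings: the `Consumed` serdes would define the topic format while the `Materialized` serdes define the local store format. Today the `Materialized` serdes are effectively overwritten by the `Consumed` ones (see the linked KAFKA-9321), which is why the intermediate topic is needed.

```java
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

// Hypothetical single-step table: topic format (protobuf) on Consumed,
// order-preserving store format on Materialized. Not how Streams behaves
// today; the Materialized serdes currently get replaced by the Consumed ones.
KTable<MyKey, MyValue> t1 = builder.table(
    "input-topic",
    Consumed.with(inputKeySerde, inputValueSerde),
    Materialized.<MyKey, MyValue, KeyValueStore<Bytes, byte[]>>as("t1-store")
        .withKeySerde(intermediateKeySerde)
        .withValueSerde(intermediateValueSerde));
```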
Based on feedback on my Stack Overflow question, a change that introduces this would impact state restoration when an input topic is used for recovery: restoration currently copies record bytes from the topic into the store verbatim, so decoupled formats would force a deserialize/re-serialize step during restore.
Issue Links
- is duplicated by KAFKA-9321: StreamsBuilder table method overwrites the materialized parameter (Resolved)