Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version: 1.0.0
Description
I have implemented a StreamsBuilder#addGlobalStore call, supplying a custom processor responsible for transforming a K,V record from the input topic into a V,K record. It works fine, and my store.all() does print the correct persisted V,K records. However, if I clean the local store and restart the streams app, the global table is reloaded, but without going through the supplied processor; instead, restoration calls GlobalStateManagerImpl#restoreState, which simply writes the input topic's K,V records into RocksDB (hence bypassing the mapping function of my custom processor). I believe this is not the expected behavior?
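A minimal sketch of the setup described above, assuming String serdes and the hypothetical names "input-topic" and "swapped-store" (neither appears in the report). The custom processor swaps each K,V record into V,K before writing it to the global store; on restore, this processor is bypassed and the raw K,V records are written instead.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

import java.util.AbstractMap;
import java.util.Map;

public class SwapGlobalStoreExample {

    // Pure helper: the stateless K,V -> V,K mapping described in the issue.
    static Map.Entry<String, String> swap(String key, String value) {
        return new AbstractMap.SimpleEntry<>(value, key);
    }

    // Custom processor that writes the swapped record into the global store.
    // This is the step that restoration bypasses: GlobalStateManagerImpl#restoreState
    // writes the source topic's K,V records into the store directly.
    static class SwapProcessor extends AbstractProcessor<String, String> {
        @Override
        public void process(String key, String value) {
            @SuppressWarnings("unchecked")
            KeyValueStore<String, String> store =
                (KeyValueStore<String, String>) context().getStateStore("swapped-store");
            Map.Entry<String, String> swapped = swap(key, value);
            store.put(swapped.getKey(), swapped.getValue());
        }
    }

    public static StreamsBuilder buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();
        StoreBuilder<KeyValueStore<String, String>> storeBuilder =
            Stores.keyValueStoreBuilder(
                    Stores.persistentKeyValueStore("swapped-store"),
                    Serdes.String(), Serdes.String())
                // Global stores must not have a changelog; the source topic is the changelog.
                .withLoggingDisabled();
        builder.addGlobalStore(
            storeBuilder,
            "input-topic",
            Consumed.with(Serdes.String(), Serdes.String()),
            SwapProcessor::new);
        return builder;
    }
}
```

On a fresh run, process() populates the store with V,K pairs; after wiping local state and restarting, the store is rebuilt straight from "input-topic" as K,V pairs, which is the inconsistency being reported.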
This is a follow-up on a Stack Overflow discussion about storing a K,V topic as a global table with some stateless transformations, based on a "custom" processor added on the global store:
If we address this issue, we should also apply the `default.deserialization.exception.handler` during restore (cf. KAFKA-8037).
Attachments
Issue Links
- is duplicated by
  - KAFKA-8143 Kafka-Streams GlobalStore cannot be read after application restart (Resolved)
- is related to
  - KAFKA-8037 KTable restore may load bad data (Open)
- relates to
  - KAFKA-10199 Separate state restoration into separate threads (Resolved)
- links to