I have implemented `StreamsBuilder#addGlobalStore`, supplying a custom processor responsible for transforming a K,V record from the input topic into a V,K record. It works fine, and `store.all()` does print the correct persisted V,K records. However, if I wipe the local state and restart the streams app, the global table is reloaded, but without going through the supplied processor; instead, `GlobalStateManagerImpl#restoreState` is called, which simply writes the input topic's K,V records into RocksDB (hence bypassing the mapping function of my custom processor). I don't believe this is the expected behavior.
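To make the discrepancy concrete without pulling in the Kafka Streams runtime, here is a minimal plain-Java sketch (all names hypothetical) where one path writes through a swapping "processor" and the other mimics `restoreState` writing the raw record straight into the store:

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalStoreRestoreSketch {

    // Stand-in for the custom processor: inverts the record to V,K before storing.
    static void processAndPut(Map<String, String> store, String key, String value) {
        store.put(value, key);
    }

    // Stand-in for GlobalStateManagerImpl#restoreState as described above:
    // writes the raw K,V record into the store, bypassing the processor.
    static void restorePut(Map<String, String> store, String key, String value) {
        store.put(key, value);
    }

    public static void main(String[] args) {
        Map<String, String> viaProcessor = new HashMap<>();
        Map<String, String> viaRestore = new HashMap<>();

        processAndPut(viaProcessor, "k1", "v1");
        restorePut(viaRestore, "k1", "v1");

        // Normal run: store holds v1 -> k1; after restore: k1 -> v1.
        System.out.println(viaProcessor); // prints {v1=k1}
        System.out.println(viaRestore);   // prints {k1=v1}
    }
}
```

The same input record thus produces different store contents depending on whether it arrived via the processor or via restore, which is the inconsistency reported here.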
This is a follow-up to a Stack Overflow discussion about storing a K,V topic as a global table with some stateless transformations, based on a "custom" processor added on the global store:
If we address this issue, we should also apply `default.deserialization.exception.handler` during restore (cf. KAFKA-8037).
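For reference, that handler is an ordinary Streams configuration property; a minimal sketch using the built-in `LogAndContinueExceptionHandler` (the choice of handler here is illustrative):

```properties
# Skip records that fail deserialization and log them, instead of failing the app
default.deserialization.exception.handler=org.apache.kafka.streams.errors.LogAndContinueExceptionHandler
```

Today this setting is only honored on the regular processing path, which is why restore would need the same treatment.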