In the DSL, Kafka Streams applies an optimization for non-materialized tables: when such a table is queried, the lookup accesses the closest upstream materialized state store instead. To ensure that the correct value is returned, all intermediate processors between that materialized store and the processor that triggers the lookup are re-applied on the fly (cf. `KTableValueGetter`).
Re-applying DSL operators such as `filter()` or `mapValues()` works fine. However, `transformValues()` is executed with an incorrect `RecordContext` (note that DSL operators like `filter()` have no access to the `RecordContext` and are therefore not affected by this bug). Instead of seeing the record context of the value fetched from the upstream state store (i.e., the record being re-processed), the transformer sees the context of the record that triggered the lookup.
As a result, the timestamp, offset, partition, topic name, and headers reported to the transformer are incorrect.
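The behavior above can be illustrated with a simplified, self-contained model of the mechanism. This is a hedged sketch, not Kafka's actual implementation: `RecordContext`, `ProcessorContext`, the store maps, and the lambda standing in for a re-applied transformer are all simplified stand-ins. It shows how a transformer that reads the task's *current* context on re-application observes the metadata of the triggering record rather than that of the stored record.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ValueGetterContextBug {
    // Simplified stand-in for Kafka's RecordContext (not the real class).
    record RecordContext(String topic, int partition, long offset, long timestamp) {}

    // Simplified stand-in for the ProcessorContext: holds the context of the
    // record the task is currently processing.
    static class ProcessorContext {
        RecordContext current;
    }

    public static void main(String[] args) {
        ProcessorContext ctx = new ProcessorContext();

        // Upstream materialized store: the value, plus (separately tracked here)
        // the context of the record that originally wrote it.
        Map<String, String> store = new HashMap<>();
        Map<String, RecordContext> storedContexts = new HashMap<>();
        store.put("k", "v-old");
        storedContexts.put("k", new RecordContext("input-topic", 0, 5L, 100L));

        // Model of a transformer re-applied during a value-getter lookup: it reads
        // the task's current context (as a ValueTransformer would via context()),
        // instead of the context attached to the stored record.
        Function<String, String> reappliedTransformer =
            key -> store.get(key) + "@ts=" + ctx.current.timestamp();

        // A record from a different topic/partition/offset triggers the lookup.
        ctx.current = new RecordContext("other-topic", 3, 42L, 999L);
        String result = reappliedTransformer.apply("k");

        // The transformer sees timestamp 999 (triggering record), although the
        // stored record it re-processes carries timestamp 100.
        System.out.println(result);
        System.out.println(storedContexts.get("k").timestamp());
    }
}
```

Running the sketch prints `v-old@ts=999` followed by `100`, making the mismatch concrete: the value comes from the store, but the metadata comes from the unrelated record that triggered the lookup.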