Details
- Type: Bug
- Status: Resolved
- Priority: P2
- Resolution: Fixed
- Fix Version: 2.23.0
- Labels: None
Description
When using KafkaIO with ConfluentSchemaRegistryDeserializerProvider, an exception can be thrown when consuming a topic whose schema has evolved.
This happens because, when the DeserializerProvider is initialized, it creates an AvroCoder instance using either the latest Avro schema by default or a specific version if one is provided.
If the Kafka topic contains records written with multiple schema versions, the AvroCoder fails to encode records whose schema differs from the one it was built with. The specific exception depends on the schema change; for example, I have encountered both a class cast error and a null pointer error. A typical setup that triggers this is sketched below.
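For context, here is a minimal sketch of how such a pipeline is usually wired; the broker address, topic name, and registry subject are hypothetical, and the key point is that the provider resolves one schema version when the pipeline is constructed:

import org.apache.avro.generic.GenericRecord;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.ConfluentSchemaRegistryDeserializerProvider;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.kafka.common.serialization.LongDeserializer;

public class SchemaRegistryReadSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();
    p.apply(
        KafkaIO.<Long, GenericRecord>read()
            .withBootstrapServers("localhost:9092") // hypothetical broker
            .withTopic("my_topic")                  // hypothetical topic
            .withKeyDeserializer(LongDeserializer.class)
            // The provider fetches one schema version from the registry
            // (latest by default) and builds an AvroCoder from it here,
            // at pipeline construction time.
            .withValueDeserializer(
                ConfluentSchemaRegistryDeserializerProvider.of(
                    "http://localhost:8081", "my_topic-value"))
            .withoutMetadata());
    // Records written with a different (evolved) schema version can then
    // fail inside this fixed-schema AvroCoder when elements are encoded.
  }
}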
To fix this issue, we can make use of Avro's writer/reader schema resolution to deserialize every Kafka record into the same schema the AvroCoder was built with. A suitable overload is available in io.confluent.kafka.serializers.KafkaAvroDeserializer:

public Object deserialize(String s, byte[] bytes, Schema readerSchema) {
  return this.deserialize(bytes, readerSchema);
}
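To illustrate the proposed direction (this is a sketch, not the actual Beam patch), a Kafka Deserializer could pin every record to one reader schema; the FixedReaderSchemaDeserializer wrapper below is hypothetical:

import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.common.serialization.Deserializer;

/** Hypothetical wrapper that resolves every record into a fixed reader schema. */
public class FixedReaderSchemaDeserializer implements Deserializer<GenericRecord> {
  private final KafkaAvroDeserializer inner;
  private final Schema readerSchema;

  public FixedReaderSchemaDeserializer(KafkaAvroDeserializer inner, Schema readerSchema) {
    this.inner = inner;
    this.readerSchema = readerSchema;
  }

  @Override
  public GenericRecord deserialize(String topic, byte[] data) {
    // Avro decodes with the record's writer schema, then resolves the
    // result into the fixed reader schema, so every element matches the
    // schema the AvroCoder was built with.
    return (GenericRecord) inner.deserialize(topic, data, readerSchema);
  }
}

Resolving into a fixed reader schema means the coder's schema and the deserialized records always agree, regardless of which schema version each record was written with.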