FLINK-5583: Support flexible error handling in the Kafka consumer

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.3.0
    • Component/s: Kafka Connector
    • Labels: None

      Description

      We have found it valuable to allow applications to handle errors and exceptions in the Kafka consumer, in order to build robust applications in production.

      The context is the following:

      (1) We have schematized Avro records flowing through Kafka.
      (2) The decoder implements DeserializationSchema to decode the records.
      (3) Occasionally there are corrupted records (e.g., due to schema issues). Depending on the application, the streaming pipeline might want to bail out (which is the current behavior) or skip the corrupted records; a wrapper sketch illustrating the skip case follows this list.
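
      As a concrete illustration of the skip case, below is a minimal sketch of a wrapper around an existing DeserializationSchema. It assumes the Kafka consumer treats a null return from deserialize() as "skip this record" (the behavior discussed in FLINK-3679); the class name, the skipCorrupted flag, and the import path (which differs across Flink versions) are illustrative, not an existing Flink API.

      import java.io.IOException;

      import org.apache.flink.api.common.typeinfo.TypeInformation;
      import org.apache.flink.streaming.util.serialization.DeserializationSchema;

      /**
       * Hypothetical wrapper: delegates to an inner DeserializationSchema and, when
       * skipCorrupted is set, turns decode failures into a null return so that the
       * corrupted record can be dropped instead of failing the pipeline.
       */
      public class SkipCorruptedRecordsSchema<T> implements DeserializationSchema<T> {

          private final DeserializationSchema<T> inner;
          private final boolean skipCorrupted;

          public SkipCorruptedRecordsSchema(DeserializationSchema<T> inner, boolean skipCorrupted) {
              this.inner = inner;
              this.skipCorrupted = skipCorrupted;
          }

          @Override
          public T deserialize(byte[] message) throws IOException {
              try {
                  return inner.deserialize(message);
              } catch (Exception e) {
                  if (skipCorrupted) {
                      // Assumes the consumer interprets null as "no element" and drops
                      // the record; otherwise this needs explicit consumer support.
                      return null;
                  }
                  throw new IOException("Failed to deserialize Kafka record", e);
              }
          }

          @Override
          public boolean isEndOfStream(T nextElement) {
              return inner.isEndOfStream(nextElement);
          }

          @Override
          public TypeInformation<T> getProducedType() {
              return inner.getProducedType();
          }
      }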

      Two options are available:

      (1) Have a variant of DeserializationSchema that returns a FlatMap-like structure, as suggested in FLINK-3679 (see the interface sketch after this list).
      (2) Allow applications to catch and handle the exception by exposing APIs similar to the ExceptionProxy.
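
      To make option (1) more concrete, here is a rough sketch of what a FlatMap-like (Collector-based) deserialization variant could look like. The interface name and shape are illustrative only, not an existing Flink API: emitting zero elements for a corrupted record skips it, while throwing fails the job.

      import java.io.Serializable;

      import org.apache.flink.api.java.typeutils.ResultTypeQueryable;
      import org.apache.flink.util.Collector;

      /**
       * Illustrative Collector-based variant of DeserializationSchema (option 1).
       * Mirrors the FlatMapFunction pattern: one input record, zero or more outputs.
       */
      public interface CollectingDeserializationSchema<T> extends Serializable, ResultTypeQueryable<T> {

          /**
           * Deserializes one Kafka record and emits zero or more elements to the collector.
           * A corrupted record can simply emit nothing (skip), or the implementation can
           * throw to fail the job, depending on the application's error-handling policy.
           */
          void deserialize(byte[] message, Collector<T> out) throws Exception;
      }

      With such an interface, the skip-vs-fail decision becomes a per-application choice made inside deserialize(), and the existing one-in/one-out DeserializationSchema could be adapted with a thin wrapper.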

      Thoughts?

    People

    • Assignee: wheat9 Haohui Mai
    • Reporter: wheat9 Haohui Mai
    • Votes: 0
    • Watchers: 3
