KAFKA-966

Allow high level consumer to 'nak' a message and force Kafka to close the KafkaStream without losing that message


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Auto Closed
    • Affects Version/s: 0.8.0
    • Fix Version/s: None
    • Component/s: consumer
    • Labels: None

    Description

      Enhancement request.

      The high level consumer is very close to handling the situations a 'typical' client needs. The exception is when the message received from Kafka is valid, but the business logic that wants to consume it has a problem.

      For example, suppose I want to write the value to a MongoDB or Cassandra database and the database is not available. I won't know the database is down until I go to do the write, but by then it is too late to NOT read the message from Kafka. Thus, if I call shutdown() to stop reading, that message is lost, since the offset Kafka writes to ZooKeeper is the next offset.
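      For illustration, here is a minimal sketch of the 0.8 high level consumer loop where this bites. The topic, group id, and the writeToDatabase helper are made up for the example:

          import java.util.HashMap;
          import java.util.List;
          import java.util.Map;
          import java.util.Properties;

          import kafka.consumer.Consumer;
          import kafka.consumer.ConsumerConfig;
          import kafka.consumer.ConsumerIterator;
          import kafka.consumer.KafkaStream;
          import kafka.javaapi.consumer.ConsumerConnector;
          import kafka.message.MessageAndMetadata;

          public class DbWritingConsumer {
              public static void main(String[] args) {
                  Properties props = new Properties();
                  props.put("zookeeper.connect", "localhost:2181");
                  props.put("group.id", "db-writer");

                  ConsumerConnector connector =
                      Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

                  Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
                  topicCountMap.put("events", 1);
                  Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                      connector.createMessageStreams(topicCountMap);
                  ConsumerIterator<byte[], byte[]> it = streams.get("events").get(0).iterator();

                  while (it.hasNext()) {
                      MessageAndMetadata<byte[], byte[]> record = it.next();
                      try {
                          writeToDatabase(record.message());   // hypothetical DB write
                      } catch (Exception e) {
                          // Too late: next() already advanced the consumed offset, so
                          // shutting down here can lose this message on the next restart.
                          connector.shutdown();
                          return;
                      }
                  }
              }

              private static void writeToDatabase(byte[] value) { /* hypothetical */ }
          }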

      Ideally I'd like to be able to tell Kafka: close the KafkaStream, but set the next offset to read for this partition to this message, so it is re-delivered when I start up again. And if there are any messages in the BlockingQueue for other partitions, find the lowest offset per partition and use it as that partition's next offset, since I haven't consumed them yet.
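      Something along these lines is what I have in mind. This is purely hypothetical, not an existing API; the interface and method name are just for illustration:

          // Hypothetical extension to the connector: not part of Kafka today.
          public interface OffsetResettingConnector extends kafka.javaapi.consumer.ConsumerConnector {
              /**
               * Shut down the streams, but record 'offset' as the next offset to read
               * for the given topic/partition instead of the offset after the last
               * message handed out by the iterator. Unconsumed messages still sitting
               * in the BlockingQueue for other partitions would be rewound similarly.
               */
              void shutdownAndSetNextOffset(String topic, int partition, long offset);
          }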

      Thus I can cleanly shut down my processing, resolve whatever the issue is, and restart the process.

      Another idea might be to allow a 'peek' at the next message and, if I succeed in writing it to the database, call 'next' to remove it from the queue.
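      In that model the consuming loop might look like this. Again hypothetical: peek() does not exist on ConsumerIterator today, and the it/connector names are reused from the sketch above:

          MessageAndMetadata<byte[], byte[]> candidate = it.peek(); // hypothetical: look without consuming
          try {
              writeToDatabase(candidate.message());  // hypothetical DB write
              it.next();                             // success: only now remove it from the queue
          } catch (Exception e) {
              connector.shutdown();                  // failure: the message was never consumed, so it is not lost
          }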

      I understand this won't deal with a 'kill -9' or hard failure of the JVM that leaves the latest offsets unwritten to ZooKeeper, but it addresses what is likely a common scenario for consumers. Nor will it add true transactional support, since the ZooKeeper update could still fail.
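      For reference, the closest approximation I'm aware of with the existing 0.8 API is to disable auto-commit and call commitOffsets() only after a successful write. That avoids losing the message, but it commits offsets for all partitions at once and can re-deliver already-written messages after a restart (at-least-once rather than exactly-once). A sketch, building on the loop above:

          // Same setup as the sketch above, but with auto-commit turned off:
          props.put("auto.commit.enable", "false");

          while (it.hasNext()) {
              MessageAndMetadata<byte[], byte[]> record = it.next();
              try {
                  writeToDatabase(record.message());  // hypothetical DB write
                  connector.commitOffsets();          // commit only after the write succeeded
              } catch (Exception e) {
                  // Nothing committed for this message; after shutdown and restart the
                  // group resumes from the last committed offset, so the message is
                  // re-read rather than lost.
                  connector.shutdown();
                  return;
              }
          }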


      People

          Assignee: Neha Narkhede
          Reporter: Chris Curtin
          Votes: 2
          Watchers: 5
