  Kafka / KAFKA-2260

Allow specifying expected offset on produce


Details

    • Type: Improvement
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: producer
    • Labels: None

    Description

      I'd like to propose a change that adds a simple CAS-like mechanism to the Kafka producer. This update has a small footprint, but enables a bunch of interesting uses in stream processing or as a commit log for process state.

      Proposed Change

      In short:

      • Allow the user to attach a specific offset to each message produced.
      • The server assigns offsets to messages in the usual way. However, if the expected offset doesn't match the actual offset, the server should fail the produce request instead of completing the write.

      This is a form of optimistic concurrency control, like the ubiquitous check-and-set – but instead of checking the current value of some state, it checks the current offset of the log.
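
      To pin down that contract, here's a minimal sketch of the semantics from the client's point of view. The types below (ConditionalProducer, OffsetMismatchException) are made up purely for illustration; as noted under Implementation below, the actual proposal reuses the existing offset field in the produce request rather than adding any new API.

        // Illustrative only: none of these types exist in Kafka. They just spell
        // out the contract of a produce request that carries an expected offset.
        interface ConditionalProducer {
            // Append 'value' to the given partition, but only if the offset the
            // broker is about to assign equals 'expectedOffset'. Returns that
            // offset on success; fails without writing anything otherwise.
            long send(String topic, int partition, byte[] value, long expectedOffset)
                throws OffsetMismatchException;
        }

        class OffsetMismatchException extends Exception {
            // (Hypothetical) the offset the broker would actually have assigned,
            // so the caller can catch up on the messages it missed before retrying.
            final long actualNextOffset;

            OffsetMismatchException(long actualNextOffset) {
                this.actualNextOffset = actualNextOffset;
            }
        }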

      Motivation

      Much like check-and-set, this feature is only useful when there's very low contention. Happily, when Kafka is used as a commit log or as a stream-processing transport, it's common to have just one producer (or a small number) for a given partition – and in many of these cases, predicting offsets turns out to be quite useful.

      • We get the same benefits as the 'idempotent producer' proposal: a producer can retry a write indefinitely and be sure that at most one of those attempts will succeed; and if two producers accidentally write to the end of the partition at once, we can be certain that at least one of them will fail.
      • It's possible to 'bulk load' Kafka this way – you can write a list of n messages consecutively to a partition, even if the list is much larger than the buffer size or the producer has to be restarted.
      • If a process is using Kafka as a commit log – reading from a partition to bootstrap, then writing any updates to that same partition – it can be sure that it's seen all of the messages in that partition at the moment it does its first (successful) write.
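
      To make that last, commit-log case concrete, here's a sketch of the bootstrap-then-write loop, built on the hypothetical ConditionalProducer above. readToEndAndApply() is an assumed helper, and 'state-log' is a made-up topic name.

        // Commit-log pattern sketch. readToEndAndApply() is assumed to consume the
        // partition, apply each message to local state, and return the next offset
        // the broker would assign.
        abstract class CommitLogWriter {
            abstract long readToEndAndApply() throws Exception;

            long writeUpdate(ConditionalProducer producer, byte[] update) throws Exception {
                while (true) {
                    long endOffset = readToEndAndApply();
                    try {
                        // Succeeds only if nothing was appended after we finished
                        // reading, so our local state reflects every prior message.
                        return producer.send("state-log", 0, update, endOffset);
                    } catch (OffsetMismatchException e) {
                        // Something was appended first (possibly an earlier attempt of
                        // ours that actually landed): loop, read the new messages, retry.
                    }
                }
            }
        }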

      There's a bunch of other similar use-cases here, but they all have roughly the same flavour.

      Implementation

      The major advantage of this proposal over other suggested transaction / idempotency mechanisms is its minimality: it gives the 'obvious' meaning to a currently-unused field, adds no new APIs, and requires very little new code or additional work from the server.

      • Produced messages already carry an offset field, which is currently ignored by the server. This field could be used for the 'expected offset', with a sentinel value for the current behaviour. (-1 is a natural choice, since it's already used to mean 'next available offset'.)
      • We'd need a new error and error code for a 'CAS failure'.
      • The server assigns offsets to produced messages in ByteBufferMessageSet.validateMessagesAndAssignOffsets. After this change, that method would assign offsets in the same way as before, but if an assigned offset doesn't match the expected offset carried in the message, it would return an error instead of completing the write (sketched below).
      • To avoid breaking existing clients, this behaviour would need to live behind some config flag. (Possibly global, but probably more useful per-topic?)
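
      For concreteness, here's a toy model of that check in plain Java rather than the broker's actual Scala; the names (NO_EXPECTATION, checkEnabled, assignOffsets) are illustrative stand-ins for the -1 sentinel, the config flag, and the offset-assignment step described above.

        // Toy model of the proposed broker-side check; not the actual code in
        // ByteBufferMessageSet.validateMessagesAndAssignOffsets.
        class OffsetCheckModel {
            static final long NO_EXPECTATION = -1L;  // existing 'next available offset' sentinel

            // nextOffset: the offset the broker would assign to the first message in
            // the batch. expectedOffsets: the offset field carried by each message.
            // checkEnabled: models the proposed (global or per-topic) config flag.
            // Returns the next offset after the batch; throws to model failing the
            // produce request with the proposed 'CAS failure' error code.
            static long assignOffsets(long nextOffset, long[] expectedOffsets,
                                      boolean checkEnabled) {
                for (long expected : expectedOffsets) {
                    if (checkEnabled && expected != NO_EXPECTATION && expected != nextOffset) {
                        throw new IllegalStateException(
                            "expected offset " + expected + ", but next offset is " + nextOffset);
                    }
                    nextOffset++;  // this message is assigned offset (nextOffset - 1)
                }
                return nextOffset;
            }
        }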

      I understand all this is unsolicited and possibly strange: happy to answer questions, and if this seems interesting, I'd be glad to flesh this out into a full KIP or patch. (And apologies if this is the wrong venue for this sort of thing!)

      Attachments

        1. KAFKA-2260.patch (39 kB), Flavio Paiva Junqueira
        2. expected-offsets.patch (45 kB), Ben Kirwin


          People

            Assignee: Unassigned
            Reporter: Ben Kirwin (bkirwi)
            Votes: 7
            Watchers: 32

            Dates

              Created:
              Updated: