    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.8.1
    • Fix Version/s: None
    • Component/s: core
    • Labels:


      Currently Kafka has only one way to bound the space of the log, namely by deleting old segments. The policy that controls which segments are deleted can be configured based either on the number of bytes to retain or on the age of the messages. This makes sense for event or log data, which has no notion of a primary key. However, much data has a primary key and consists of updates by that key. For this data it would be nice to ensure that the log contains at least the last version of every key.

      As an example, say that a Kafka topic contains a sequence of User Account messages, each capturing the current state of a given user account. Since the set of user accounts is finite, rather than simply discarding old segments it might make more sense to delete individual records that have been made obsolete by a more recent update for the same key. This would ensure that the topic contains at least the current state of each record.
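      The retention idea described above can be sketched in a few lines. This is a minimal illustration of key-based compaction, not the implementation in the attached patches: given a log of (key, value) records, keep only the most recent record for each key, preserving the offset order of the survivors.

      ```python
      # Illustrative sketch of key-based log compaction: retain only the
      # latest value for each key, in offset order of the surviving records.

      def compact(log):
          """Return the compacted log: the last record per key, in offset order."""
          last_offset = {}  # key -> offset of its most recent record
          for offset, (key, _value) in enumerate(log):
              last_offset[key] = offset
          keep = set(last_offset.values())
          return [record for offset, record in enumerate(log) if offset in keep]

      # Example: three updates to "alice" and one to "bob"; only the latest
      # state of each account survives compaction.
      log = [("alice", "v1"), ("bob", "v1"), ("alice", "v2"), ("alice", "v3")]
      print(compact(log))  # [('bob', 'v1'), ('alice', 'v3')]
      ```

      The real cleaner must of course work incrementally against on-disk segments rather than rewriting the whole log in memory, but the invariant it preserves is the one shown here.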


        1. KAFKA-631-v1.patch
          166 kB
          Jay Kreps
        2. KAFKA-631-v2.patch
          169 kB
          Jay Kreps
        3. KAFKA-631-v3.patch
          170 kB
          Jay Kreps
        4. KAFKA-631-v4.patch
          170 kB
          Jay Kreps
        5. KAFKA-631-v5.patch
          171 kB
          Jay Kreps
        6. KAFKA-631-v6.patch
          173 kB
          Jay Kreps
        7. KAFKA-631-v7.patch
          175 kB
          Jay Kreps
        8. KAFKA-631-v8.patch
          178 kB
          Jay Kreps
        9. KAFKA-631-v9.patch
          178 kB
          Jay Kreps

              • Assignee:
                jkreps Jay Kreps
              • Votes:
                0
              • Watchers:
                7


                • Created: