
SAMZA-489: Support Amazon Kinesis


Details

    • Type: New Feature
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved

    Description

      AWS Kinesis is a publish-subscribe message broker service quite similar to Kafka, provided as a hosted service by Amazon. I have spoken to a few people who are interested in using Kinesis with Samza, since the options for stateful stream processing with Kinesis are currently quite limited. Samza's local state support would be great for Kinesis users.

      I've looked a little into what it would take to support Kinesis in Samza. Useful resources:

      Kinesis is similar to Kafka in that it has total ordering of messages within a partition (which Kinesis calls a "shard"). The most notable differences I noticed are:

      • Kinesis does not support compaction by key, and only keeps messages for 24 hours (the "trim horizon"). Thus it cannot be used for checkpointing and state store changelogging. Another service must be used for durable storage (Amazon recommends DynamoDB).
      • It is common for the number of shards in a stream to change ("resharding"), because a Kinesis shard is a unit of resourcing, not a logical grouping. A Kinesis shard is more like a Kafka broker node, not like a Kafka partition.

      The second point suggests that Kinesis shards should not be mapped 1:1 to Samza StreamTasks like we do for Kafka, because whenever the number of shards changes, any state associated with a StreamTask would no longer be in the right place.

      Kinesis assigns a message to a shard based on the MD5 hash of the message's partition key (so all messages with the same partition key are guaranteed to be in the same shard). Each shard owns a contiguous range of the MD5 hash space. When the number of shards is increased by one, a shard's hash range is subdivided into two sub-ranges. When the number of shards is decreased by one, two adjacent shards' hash ranges are merged into a single range.
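
      For concreteness, here is a minimal Java sketch of that hashing scheme. The helper names are mine, but the mapping of a partition key's MD5 digest onto an unsigned 128-bit integer, and the per-shard StartingHashKey/EndingHashKey range, are how Kinesis describes shards (e.g. in the DescribeStream response):

      import java.math.BigInteger;
      import java.nio.charset.StandardCharsets;
      import java.security.MessageDigest;

      /** Sketch of the Kinesis partition-key hashing described above. */
      public class KinesisHashing {
        /** The partition key's MD5 digest, read as an unsigned 128-bit integer.
            This is the value Kinesis compares against each shard's hash key range. */
        static BigInteger hashKey(String partitionKey) throws Exception {
          byte[] md5 = MessageDigest.getInstance("MD5")
              .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
          return new BigInteger(1, md5); // in [0, 2^128 - 1]
        }

        /** A shard owns a contiguous, inclusive slice of that space; Kinesis
            reports it as StartingHashKey / EndingHashKey in DescribeStream. */
        static boolean shardOwns(BigInteger startingHashKey, BigInteger endingHashKey,
                                 BigInteger hashKey) {
          return hashKey.compareTo(startingHashKey) >= 0
              && hashKey.compareTo(endingHashKey) <= 0;
        }
      }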

      I think the nicest way of modelling this in Samza would be to create a fixed number of StreamTasks (e.g. 256, but make it configurable), and to assign each task a fixed slice of this MD5 hash space. Each Kinesis shard then corresponds to a subset of these StreamTasks, and the SystemConsumer implementation routes messages from a shard to the appropriate StreamTask based on the hash of the message's partition key. This implies that all the StreamTasks for a particular Kinesis shard should be processed within the same container. This is not unlike the Kafka consumer in Samza, which fetches messages for all of a container's Kafka partitions in one go.
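
      A rough sketch of what the consumer-side routing could look like, assuming a configurable fixed partition count (the class and method names below are hypothetical, not an existing Samza or Kinesis API): each record's hash key is mapped onto one of N evenly sized slices of the 128-bit hash space.

      import java.math.BigInteger;

      /** Hypothetical router: slices the 128-bit hash space into a fixed,
          configurable number of Samza partitions (one per StreamTask). */
      public class HashSpaceRouter {
        private static final BigInteger HASH_SPACE = BigInteger.ONE.shiftLeft(128); // 2^128
        private final int numPartitions; // e.g. 256, read from config

        public HashSpaceRouter(int numPartitions) {
          this.numPartitions = numPartitions;
        }

        /** Maps a hash key to a partition id in [0, numPartitions). Records with
            the same partition key always map to the same partition, no matter
            which Kinesis shard they were read from. */
        public int partitionFor(BigInteger hashKey) {
          return hashKey.multiply(BigInteger.valueOf(numPartitions))
                        .divide(HASH_SPACE)
                        .intValue();
        }
      }

      The Kinesis SystemConsumer would call something like partitionFor for every record it reads from a shard, and hand the record to Samza as an IncomingMessageEnvelope on the SystemStreamPartition with that partition id.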

      This removes the semantic problem of resharding: we can ensure that messages with the same partition key are always routed to the same StreamTask, even across shard splits and merges.

      However, there are still some tricky edge cases to handle. For example, if Kinesis decides to merge two shards that are currently processed by two different Samza containers, what should Samza do? A simple (but perhaps a bit wasteful) solution would be for both containers to continue consuming the merged shard. Alternatively, Samza could reassign some StreamTasks from one container to another, but that would require any state to be moved or rebuilt. Probably double-consuming would make most sense for a first implementation.

      In summary, it looks like Kinesis support is feasible, and would be a fun challenge for someone to take on. Contributions welcome.


          People

            Assignee: Martin Kleppmann (martinkl)
            Reporter: Martin Kleppmann (martinkl)

