Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 0.9.0.0
    • Fix Version/s: 0.10.2.0
    • Component/s: consumer
    • Labels:
      None

      Description

      Add functionality to the offsetsBeforeTime() API to load offsets corresponding to a particular timestamp, including earliest and latest offsets
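The requested lookup eventually shipped in 0.10.2.0 as {{KafkaConsumer.offsetsForTimes()}} (KIP-79, per the resolution below). A minimal sketch of seeking to the first offset at or after a timestamp; the broker address and topic name are assumptions for illustration:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class SeekByTimestamp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // hypothetical topic
            consumer.assign(Collections.singletonList(tp));

            // Look up the earliest offset whose timestamp is >= one hour ago.
            long oneHourAgo = System.currentTimeMillis() - 3_600_000L;
            Map<TopicPartition, OffsetAndTimestamp> result =
                    consumer.offsetsForTimes(Collections.singletonMap(tp, oneHourAgo));

            OffsetAndTimestamp oat = result.get(tp);
            if (oat != null) { // null if no message at or after the timestamp
                consumer.seek(tp, oat.offset());
            }
        }
    }
}
```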

        Issue Links

          Activity

          Neha Narkhede added a comment -

          Will be easier to add once the metadata request/response works using the refactored Sender.

          Jay Kreps added a comment -

          It will be good to rethink this API, but for now I think we can just expose seekToEnd and seekToBeginning in the consumer which are useful helpers and cover 99% of what you would want. So this issue shouldn't actually block releasing the consumer.

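The {{seekToBeginning}}/{{seekToEnd}} helpers described above can be sketched as follows, using the Collection-based signatures from later clients; the broker address and topic are assumptions:

```java
import java.util.Collections;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekHelpers {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions =
                    Collections.singletonList(new TopicPartition("my-topic", 0)); // hypothetical topic
            consumer.assign(partitions);

            // Replay everything from the start of the log...
            consumer.seekToBeginning(partitions);
            // ...or, alternatively, skip straight to the latest offsets:
            consumer.seekToEnd(partitions);
        }
    }
}
```

Both calls only mark the positions for reset; the actual offset lookup happens lazily on the next poll() or position() call.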
          Robert Metzger added a comment -

          A Flink user recently requested support for handling situations where Flink's KafkaConsumer cannot keep up with the data volume (it starts lagging behind).
          Ideally, we want users to decide for themselves how to handle the situation (by setting a custom new offset for the next poll() call). For our 0.8 connector, we can use the {{offsetsBeforeTime()}} call; for the 0.9 connector, there is no API yet.
          We are probably not going to address the issue immediately, but it would be nice to properly expose this for Kafka 0.9 as well.

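One way to implement the lag handling described in the comment above, sketched against the later consumer API ({{endOffsets()}} arrived in 0.10.1.0); the threshold and helper name are hypothetical:

```java
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;

public class LagHandler {
    private static final long MAX_LAG = 100_000L; // hypothetical threshold

    // If a partition has fallen too far behind, jump to its latest offset
    // before the next poll(), deliberately dropping the backlog.
    public static void skipBacklog(Consumer<byte[], byte[]> consumer) {
        Map<TopicPartition, Long> end = consumer.endOffsets(consumer.assignment());
        for (Map.Entry<TopicPartition, Long> e : end.entrySet()) {
            long lag = e.getValue() - consumer.position(e.getKey());
            if (lag > MAX_LAG) {
                consumer.seek(e.getKey(), e.getValue());
            }
        }
    }
}
```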
          Jiangjie Qin added a comment -

          This ticket seems to be a subset of KAFKA-2076, where we had some discussion about how to expose the offset(s) within a time range.

          Jason Gustafson added a comment -

          Resolving this as a duplicate of KAFKA-4148, which is for KIP-79.


            People

            • Assignee:
              Jason Gustafson
              Reporter:
              Neha Narkhede
            • Votes:
              1
              Watchers:
              7

              Dates

              • Created:
                Updated:
                Resolved:

                Development