Flink / FLINK-11501

Add a ratelimiting feature to the FlinkKafkaConsumer


Details

    Description

      There are instances when a Flink job that reads from Kafka can read at a significantly high throughput (particularly while processing a backlog) and degrade the underlying Kafka cluster.

      While Kafka quotas are perhaps the best way to enforce this ratelimiting, there are cases where such a setup is not available or easily enabled. In such a scenario, ratelimiting on the FlinkKafkaConsumer is a useful feature. The approach essentially involves using Guava's RateLimiter to ratelimit the bytes read from Kafka (in the KafkaConsumerThread).
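
      The idea above can be sketched as follows. This is a simplified, self-contained illustration, not Flink's actual implementation: the `ByteRateLimiter` class below is a hypothetical stand-in for Guava's `RateLimiter` (so the example has no external dependency), and the fetch loop is a stand-in for the real KafkaConsumerThread. The key point it shows is acquiring permits for the number of bytes fetched before handing records downstream, which blocks the consumer thread once the configured bytes-per-second budget is exhausted.

      ```java
      /** Simplified token-bucket limiter standing in for Guava's RateLimiter. */
      class ByteRateLimiter {
          private final double bytesPerSecond;
          private double availableBytes;
          private long lastRefillNanos;

          ByteRateLimiter(double bytesPerSecond) {
              this.bytesPerSecond = bytesPerSecond;
              this.availableBytes = bytesPerSecond; // allow up to one second's burst
              this.lastRefillNanos = System.nanoTime();
          }

          /** Blocks until {@code bytes} permits are available, then consumes them. */
          synchronized void acquire(long bytes) throws InterruptedException {
              refill();
              while (availableBytes < bytes) {
                  double deficitBytes = bytes - availableBytes;
                  long sleepMillis = (long) Math.ceil(deficitBytes / bytesPerSecond * 1000.0);
                  Thread.sleep(Math.max(1, sleepMillis));
                  refill();
              }
              availableBytes -= bytes;
          }

          private void refill() {
              long now = System.nanoTime();
              double elapsedSec = (now - lastRefillNanos) / 1e9;
              availableBytes = Math.min(bytesPerSecond, availableBytes + elapsedSec * bytesPerSecond);
              lastRefillNanos = now;
          }
      }

      public class RateLimitedFetchSketch {
          // Exposed so the timing effect of the limiter can be inspected after main() runs.
          static double elapsedSec;

          public static void main(String[] args) throws InterruptedException {
              // Hypothetical limit: 1 MB/s of Kafka record bytes.
              ByteRateLimiter limiter = new ByteRateLimiter(1_000_000);
              long start = System.nanoTime();
              // Simulate the consumer thread fetching three 500 KB record batches;
              // the third batch exceeds the 1 MB burst and must wait ~0.5 s.
              for (int i = 0; i < 3; i++) {
                  long batchBytes = 500_000;
                  limiter.acquire(batchBytes); // throttle before emitting records downstream
              }
              elapsedSec = (System.nanoTime() - start) / 1e9;
              System.out.printf("fetched 1.5 MB in %.2f s%n", elapsedSec);
          }
      }
      ```

      With this placement, backpressure from the rate limiter naturally slows the poll loop during backlog processing, which is exactly the degradation scenario described above.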

      More discussion here: https://lists.apache.org/thread.html/8140b759ba83f33a22d809887fd2d711f5ffe7069c888eb9b1142272@%3Cdev.flink.apache.org%3E 

      Attachments

        1. RateLimiting-1.png
          153 kB
          Lakshmi Rao
        2. Ratelimiting-2.png
          125 kB
          Lakshmi Rao


            People

              Assignee: glaksh100 Lakshmi Rao
              Reporter: glaksh100 Lakshmi Rao
              Votes: 0
              Watchers: 6


                Time Tracking

                  Original Estimate: Not Specified
                  Remaining Estimate: 0h
                  Time Spent: 0.5h