Apache Storm / STORM-2014

New Kafka spout duplicates checking if failed messages have reached max retries

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.0.0, 1.1.0
    • Component/s: storm-kafka-client
    • Labels: None

      Description

      The new Kafka spout has a RetryService interface that is meant to make the logic for retrying tuples pluggable. The RetryServiceExponentialBackoff class supports setting a max retry count and dropping messages once they reach that limit. The spout duplicates this functionality in its fail method, which means the user must set different maxRetries values for the RetryService and the spout in order for the RetryService's drop logic ever to be hit.

      I think the retry logic belongs in the RetryService interface and should be removed from the spout. It would also be useful if the RetryService could indicate whether a message will be retried or not.
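      A minimal sketch of what the proposal could look like, with the retry-limit check living only in the RetryService. All names here (RetryService.schedule, ExponentialBackoffRetryService) are hypothetical illustrations, not the actual storm-kafka-client API; the point is that schedule's boolean return value lets the spout learn whether a message will be retried without duplicating the maxRetries comparison.

      ```java
      import java.util.HashMap;
      import java.util.Map;

      // Hypothetical interface: the single place where retry policy lives.
      interface RetryService {
          /**
           * Schedules msgId for retry. Returns false when the retry limit has
           * been reached, i.e. the message will be dropped, not retried.
           */
          boolean schedule(long msgId);
      }

      // Hypothetical implementation holding the ONLY maxRetries check.
      class ExponentialBackoffRetryService implements RetryService {
          private final int maxRetries;
          private final Map<Long, Integer> retryCounts = new HashMap<>();

          ExponentialBackoffRetryService(int maxRetries) {
              this.maxRetries = maxRetries;
          }

          @Override
          public boolean schedule(long msgId) {
              int count = retryCounts.merge(msgId, 1, Integer::sum);
              if (count > maxRetries) {
                  retryCounts.remove(msgId); // limit reached: drop the message
                  return false;
              }
              // (exponential backoff delay computation omitted for brevity)
              return true;
          }
      }

      class SpoutFailSketch {
          public static void main(String[] args) {
              RetryService retryService = new ExponentialBackoffRetryService(2);
              long msgId = 42L;
              // The spout's fail() delegates entirely to the RetryService and
              // performs no second maxRetries check of its own.
              System.out.println(retryService.schedule(msgId)); // true: 1st retry
              System.out.println(retryService.schedule(msgId)); // true: 2nd retry
              System.out.println(retryService.schedule(msgId)); // false: dropped
          }
      }
      ```

      With this shape there is only one maxRetries setting to configure, and the spout can react (e.g. ack and commit the offset) whenever schedule returns false.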


              People

              • Assignee: Stig Rohde Døssing (Srdo)
              • Reporter: Stig Rohde Døssing (Srdo)
              • Votes: 0
              • Watchers: 3

                Dates

                • Created:
                • Updated:
                • Resolved:

                  Time Tracking

                  • Original Estimate: Not Specified
                  • Remaining Estimate: 0h
                  • Time Spent: 1.5h