Samza / SAMZA-1392

KafkaSystemProducer performance and correctness with concurrent sends and flushes


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.14.0
    • Component/s: None
    • Labels: None

    Description

      There are two issues we need to fix in the KafkaSystemProducer when sends and flushes are called concurrently:
      1. Concurrent sends contend for the send lock, especially when producer compression is enabled. The fix is to use the producer.flush() API, which Kafka has supported since at least version 0.9.x. With flush(), we no longer need to track the latest send future, so we no longer need the lock (see the first sketch at the end of this description).

      2. When task.async.commit is enabled, the threads calling send() could set exceptionInCallback to null before the exception is handled in user code or in flush(). This could allow us to checkpoint offsets whose corresponding output was not successfully sent.
      The short-term solution is to handle callback exceptions only in flush(), and to let users configure those exceptions as ignorable if they don't want flush to fail (see the second sketch at the end of this description).
      The long-term solution is to support a fully asynchronous SystemProducer; that is tracked in SAMZA-1393.

      I found issue #2 while working on issue #1; although they are separate issues, it is easier to fix them with a single ticket and patch.
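
      For issue #1, here is a minimal sketch of the flush-based approach, assuming a hypothetical wrapper class (FlushBasedSender) rather than the actual KafkaSystemProducer code. Because KafkaProducer.flush() blocks until every buffered record completes, send() no longer has to save the latest Future under a lock for flush() to wait on:

      {code:java}
      import java.util.concurrent.atomic.AtomicReference;
      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.ProducerRecord;

      public class FlushBasedSender {
        private final KafkaProducer<byte[], byte[]> producer;
        // First failure reported by any send callback; examined only in flush() (see issue #2).
        final AtomicReference<Exception> sendFailure = new AtomicReference<>();

        public FlushBasedSender(KafkaProducer<byte[], byte[]> producer) {
          this.producer = producer;
        }

        // No send lock and no tracked Future: concurrent sends only enqueue records.
        public void send(String topic, byte[] key, byte[] value) {
          producer.send(new ProducerRecord<>(topic, key, value), (metadata, exception) -> {
            if (exception != null) {
              sendFailure.compareAndSet(null, exception);
            }
          });
        }

        // flush() delegates to the producer, which blocks until all in-flight sends complete.
        public void flush() {
          producer.flush();
          // Exception handling at this point is sketched separately below.
        }
      }
      {code}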
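
      For issue #2's short-term fix, here is a sketch of handling callback exceptions only in flush(), with an illustrative (not Samza's actual) configuration of ignorable exception class names. Because only flush() clears the recorded exception, a concurrent send can no longer wipe it out before the commit path observes it:

      {code:java}
      import java.util.Set;
      import java.util.concurrent.atomic.AtomicReference;

      public class FlushFailureCheck {
        private final Set<String> ignorableExceptionClasses;

        public FlushFailureCheck(Set<String> ignorableExceptionClasses) {
          this.ignorableExceptionClasses = ignorableExceptionClasses;
        }

        // Called at the end of flush(), after producer.flush() has returned.
        public void check(AtomicReference<Exception> sendFailure, String source) {
          Exception failure = sendFailure.getAndSet(null);
          if (failure == null) {
            return; // every send since the last flush succeeded
          }
          if (ignorableExceptionClasses.contains(failure.getClass().getName())) {
            return; // the user configured this exception as ignorable, so flush still succeeds
          }
          // Fail the flush so offsets for the unsent output are not checkpointed.
          throw new RuntimeException("flush failed for source " + source, failure);
        }
      }
      {code}

      If the recorded exception is not configured as ignorable, flush() fails and the offsets for the lost output are never checkpointed.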

      Attachments

        Activity


          People

            Assignee: Jake Maes
            Reporter: Jake Maes
            Votes: 0
            Watchers: 2

            Dates

              Created:
              Updated:
              Resolved:
