Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: producer
    • Labels:
      None
    1. KAFKA-1252.patch
      66 kB
      Jay Kreps
    2. KAFKA-1252_2014-02-18_17:14:38.patch
      80 kB
      Jay Kreps

      Activity

      Jay Kreps added a comment -

      Created reviewboard https://reviews.apache.org/r/18199/
      against branch trunk

      Jay Kreps added a comment -

      One question to consider is how we want to handle retry backoff. Currently this code retries immediately. There are some exceptions, like TimeoutException and NotLeaderForPartition, that should actually retry immediately, and others, like LeaderNotAvailableException, where you might want to wait a bit.

      If you think about it, I'm not really sure the user actually cares about the number of retries. Currently they care whether retries are possible, because retries imply possible duplicates/loss. If we fixed that, then I suspect you would always want to retry within the timeout you had specified. To avoid hammering the server it might be good to back off, but that is ultimately not really the concern of the client.

      The current implementation is simple, though. We might want to just take that as is.

      The ideal implementation would be to (1) make retries idempotent, (2) always retry, (3) use an "exponential" backoff of something like 0, 1, 2, 4, 8, 16, etc. to avoid killing the server, and (4) bound this by the produce time limit the client specifies.
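      The four-point scheme above could be sketched roughly as follows. This is an illustrative sketch only, not the actual producer code: the class name, method names, and the 1-second backoff cap are all made up for the example.

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Sketch of retry with exponential backoff (0, 1, 2, 4, 8, ... ms),
// bounded by an overall produce time limit, per the comment above.
public class BackoffSketch {
    /** Backoff in ms before the given retry attempt: 0, 1, 2, 4, 8, ... capped at capMs. */
    static long backoffMs(int attempt, long capMs) {
        if (attempt == 0) return 0;              // first retry is immediate
        long backoff = 1L << (attempt - 1);      // 1, 2, 4, 8, ...
        return Math.min(backoff, capMs);
    }

    /** Retry op (assumed idempotent) until it succeeds or the time limit elapses. */
    static boolean retryWithinDeadline(BooleanSupplier op, long timeLimitMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeLimitMs;
        for (int attempt = 0; ; attempt++) {
            if (op.getAsBoolean()) return true;                    // success
            long wait = backoffMs(attempt, 1000);                  // hypothetical 1 s cap
            if (System.currentTimeMillis() + wait >= deadline) {
                return false;                                      // time limit reached
            }
            TimeUnit.MILLISECONDS.sleep(wait);
        }
    }
}
```

      Because the backoff grows exponentially, the number of attempts stays small even for long time limits, which is why the client-visible knob can be the time bound rather than a retry count.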

      Jay Kreps added a comment -

      Updated reviewboard https://reviews.apache.org/r/18199/
      against branch trunk

      Jay Kreps added a comment -

      Cool, I posted an update that addresses your comments and will check it in. If you do have a chance, I would appreciate a post-review on the metadata fetch node selection logic, as that is a bit complex.

      Guozhang Wang added a comment -

      Sure, will do.

      Jun Rao added a comment -

      There is a compilation error after the checkin.

      /Users/jrao/Intellij/kafka_gradle/core/src/main/scala/kafka/tools/newproducer/MirrorMaker.scala:87: value BLOCK_ON_BUFFER_FULL is not a member of object org.apache.kafka.clients.producer.ProducerConfig
      producerProps.setProperty(ProducerConfig.BLOCK_ON_BUFFER_FULL, "true")
      ^
      four warnings found
      one error found
      :core:compileScala FAILED

      Guozhang Wang added a comment -

      It needs to be BLOCK_ON_BUFFER_FULL_CONFIG now. Jay Kreps, if you are OK with it, I am going to include the fix in KAFKA-1260 as well, since I am rebasing anyway.

      Jay Kreps added a comment -

      I actually checked in a fix last night.

      Guozhang Wang added a comment -

      Cool, I will only change the integration test configs then.


        People

        • Assignee:
          Jay Kreps
          Reporter:
          Jay Kreps
        • Votes:
          0
          Watchers:
          3

          Dates

          • Created:
            Updated:
            Resolved:

            Development