Kafka / KAFKA-8904

Reduce metadata lookups when producing to a large number of topics



    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.5.0
    • Component/s: controller, producer
    • Labels: None


      Per Lucas Bradstreet:
      "The problem was that the producer starts with no knowledge of topic metadata. So they start the producer up, and then they start sending messages to any of the thousands of topics that exist. Each time a message is sent to a new topic, it'll trigger a metadata request if the producer doesn't know about it. These metadata requests are done in serial such that if you send 2000 messages to 2000 topics, it will trigger 2000 new metadata requests.
      Each successive metadata request will include every topic seen so far, so the first metadata request will include 1 topic, the second will include 2 topics, etc.
      An additional problem is that this can take a while, and metadata expiry (for metadata that has not been recently used) is hard coded to 5 mins, so if the initial fetches take long enough you can end up evicting the metadata before you send another message to a topic.

      So the approaches above are:
      1. We can linger for a bit before making a metadata request, allow more sends to go through, and then batch the metadata request for topics we need in a single metadata request.
      2. We can allow pre-seeding the producer with metadata for a list of topics you care about.

      I prefer 1 if we can make it work."
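The request pattern described in the quote can be sketched with a small simulation (plain Python, not Kafka client code; the function names and batch size are illustrative assumptions). It contrasts the serial per-topic behavior — where request *i* carries all *i* topics seen so far — with the proposed linger-and-batch approach:

```python
# Hypothetical model of the metadata-request pattern described above.
# Not Kafka client code: function names and batch_size are illustrative.

def serial_metadata_requests(num_topics):
    """Current behavior: each send to an unseen topic triggers one
    metadata request, and each request includes every topic seen so far."""
    seen = []
    requests = []
    for i in range(num_topics):
        topic = f"topic-{i}"
        if topic not in seen:
            seen.append(topic)
            requests.append(list(seen))  # request i carries i+1 topics
    return requests

def lingered_metadata_requests(num_topics, batch_size):
    """Approach 1: linger briefly so unknown topics accumulate, then
    batch them into a single metadata request."""
    requests = []
    pending = []
    for i in range(num_topics):
        pending.append(f"topic-{i}")
        if len(pending) == batch_size:
            requests.append(list(pending))
            pending = []
    if pending:
        requests.append(list(pending))
    return requests

serial = serial_metadata_requests(2000)
batched = lingered_metadata_requests(2000, batch_size=50)
print(len(serial))                   # 2000 requests
print(sum(len(r) for r in serial))   # 2,001,000 topic entries on the wire
print(len(batched))                  # 40 requests
```

Sending to 2000 topics thus generates 2000 serial requests carrying roughly two million topic entries in total, while a modest linger collapses the same work into a few dozen requests.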





            Assignee: Brian Byrne (bbyrne)
            Reporter: Brian Byrne (bbyrne)
            Rajini Sivaram
            Votes: 0
            Watchers: 6


