Kafka / KAFKA-1182

Topic not created if number of live brokers less than # replicas


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.8.0
    • Fix Version/s: None
    • Component/s: producer
    • Labels: None
    • Environment: Centos 6.3

    Description

      We have a Kafka cluster of 2 nodes (using Kafka 0.8.0).
      Replication Factor: 2
      Number of partitions: 2

      Actual Behaviour:

      If either of the two nodes goes down, the topic is not created in Kafka.

      Steps to Reproduce:

      1. Create a 2 node kafka cluster with replication factor 2
      2. Start the Kafka cluster
      3. Kill any one node
      4. Start the producer to write on a new topic
      5. Observe the exception stated below:

      2013-12-12 19:37:19 0 [WARN ] ClientUtils$ - Fetching topic metadata with correlation id 3 for topics [Set(test-topic)] from broker [id:0,host:122.98.12.11,port:9092] failed
      java.net.ConnectException: Connection refused
        at sun.nio.ch.Net.connect(Native Method)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:500)
        at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
        at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
        at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
        at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
        at kafka.producer.BrokerPartitionInfo.getBrokerPartitionInfo(BrokerPartitionInfo.scala:49)
        at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$getPartitionListForTopic(DefaultEventHandler.scala:186)
        at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:150)
        at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:149)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
        at kafka.producer.async.DefaultEventHandler.partitionAndCollate(DefaultEventHandler.scala:149)
        at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:95)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
        at kafka.producer.Producer.send(Producer.scala:76)
        at kafka.javaapi.producer.Producer.send(Producer.scala:33)
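      The behaviour reported above is consistent with topic creation being rejected whenever the requested replication factor exceeds the number of live brokers. A minimal sketch of that check (illustrative only; names and round-robin placement are assumptions, not Kafka's actual AdminUtils code):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not Kafka's actual code): replica assignment
// is rejected outright when the requested replication factor exceeds
// the number of live brokers, so no topic gets created at all.
public class AssignReplicasSketch {

    static List<List<Integer>> assign(List<Integer> liveBrokers,
                                      int partitions, int replicationFactor) {
        if (replicationFactor > liveBrokers.size()) {
            throw new IllegalArgumentException(
                "replication factor " + replicationFactor
                + " larger than available brokers " + liveBrokers.size());
        }
        List<List<Integer>> assignment = new ArrayList<>();
        for (int p = 0; p < partitions; p++) {
            List<Integer> replicas = new ArrayList<>();
            for (int r = 0; r < replicationFactor; r++) {
                // simple round-robin placement across live brokers
                replicas.add(liveBrokers.get((p + r) % liveBrokers.size()));
            }
            assignment.add(replicas);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Both brokers up: assignment succeeds.
        System.out.println(assign(List.of(0, 1), 2, 2));  // [[0, 1], [1, 0]]
        // One broker down: the failure reported in this issue.
        try {
            assign(List.of(0), 2, 2);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```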

      Expected Behaviour:

      When the number of live brokers is less than the replication factor:

      The topic should still be created, so that at least the live brokers can receive the data.

      The live brokers can replicate the data to the other brokers once any down broker comes back up. As it stands, when there are fewer live brokers than replicas, the data is lost entirely.
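      One way to get the requested behaviour would be to cap the effective replication factor at the live broker count instead of failing topic creation; the under-replicated partitions could then be repaired when the missing broker returns. A hedged sketch of that relaxation (illustrative only, not a proposed patch):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative relaxation (not a Kafka patch): assign as many replicas
// as there are live brokers, so the topic is still created and can be
// re-replicated once the missing broker rejoins the cluster.
public class RelaxedAssignSketch {

    static List<List<Integer>> assign(List<Integer> liveBrokers,
                                      int partitions, int replicationFactor) {
        // Cap the replication factor at the live broker count rather
        // than rejecting topic creation outright.
        int effective = Math.min(replicationFactor, liveBrokers.size());
        List<List<Integer>> assignment = new ArrayList<>();
        for (int p = 0; p < partitions; p++) {
            List<Integer> replicas = new ArrayList<>();
            for (int r = 0; r < effective; r++) {
                replicas.add(liveBrokers.get((p + r) % liveBrokers.size()));
            }
            assignment.add(replicas);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // One broker down: the topic is still created with one replica
        // per partition instead of failing entirely.
        System.out.println(assign(List.of(0), 2, 2));  // [[0], [0]]
    }
}
```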


          People

            Assignee: Jun Rao (junrao)
            Reporter: Hanish Bansal (hanish.bansal.agarwal)
            Votes: 1
            Watchers: 9

            Dates

              Created:
              Updated: