KAFKA-876: Produce request: Leader not local for partition [test,0] on broker 0

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Duplicate
    • Affects Version/s: 0.8.0
    • Fix Version/s: 0.8.0
    • Component/s: clients, replication
    • Labels:
      None
    • Environment:
      Windows

      Description

      Following the quick start, I started ZooKeeper, one broker, one producer, and one consumer. In the producer console there is a LeaderNotAvailableException for the first message, and the broker logs "Produce request: Leader not local for partition [test,0] on broker 0" for every subsequent message.

      kafka-list-topic shows "[2013-04-25 10:21:24,689] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient) topic: test partition: 0 leader: 0 replicas: 0 isr: 0". With the --unavailable-partitions option, it doesn't list any topic.
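
      For reference, the producing step above can also be reproduced with a minimal standalone client. This is only an illustrative sketch: it assumes the 0.8 producer API and reuses the console producer settings shown in the producer log below (broker.list, StringEncoder, sync mode); the object name is made up.

      import java.util.Properties
      import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

      object QuickStartRepro {
        def main(args: Array[String]) {
          val props = new Properties()
          // Settings mirroring the console producer output further down.
          props.put("broker.list", "localhost:9092")
          props.put("serializer.class", "kafka.serializer.StringEncoder")
          props.put("producer.type", "sync")
          props.put("request.required.acks", "0")

          val producer = new Producer[String, String](new ProducerConfig(props))
          // The first send auto-creates the topic; with this bug it fails with
          // LeaderNotAvailableException and later sends hit "Leader not local".
          producer.send(new KeyedMessage[String, String]("test", "hello"))
          producer.close()
        }
      }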

      =========================Broker Log=============================

      Set JMX_PORT to default value : 9999
      C:\Projects\Kafka\kafka\bin\..
      log4j:ERROR Failed to rename [server.log] to [server.log.2013-04-25-09].
      [2013-04-25 10:08:49,531] INFO Verifying properties (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property socket.request.max.bytes is overridden to 104857600 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property log.dir is overridden to /tmp/kafka-logs (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property log.cleanup.interval.mins is overridden to 1 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property log.retention.hours is overridden to 168 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property num.io.threads is overridden to 2 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property broker.id is overridden to 0 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] WARN Property kafka.csv.metrics.reporter.enabled is not valid (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property port is overridden to 9092 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property log.flush.interval.messages is overridden to 10000 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property zk.connection.timeout.ms is overridden to 1000000 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] WARN Property kafka.metrics.reporters is not valid (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] WARN Property kafka.csv.metrics.dir is not valid (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property log.flush.interval.ms is overridden to 1000 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] WARN Property kafka.metrics.polling.interval.secs is not valid (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property num.network.threads is overridden to 2 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property socket.receive.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property log.segment.bytes is overridden to 536870912 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,578] INFO Property zk.connect is overridden to localhost:2181 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,594] INFO Property num.partitions is overridden to 1 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:49,609] INFO [Kafka Server 0], starting (kafka.server.KafkaServer)
      [2013-04-25 10:08:49,625] INFO [Log Manager on Broker 0] Log directory 'C:\tmp\kafka-logs' not found, creating it. (kafka.log.LogManager)
      [2013-04-25 10:08:49,625] INFO [Log Manager on Broker 0] Starting log cleaner every 60000 ms (kafka.log.LogManager)
      [2013-04-25 10:08:49,640] INFO [Log Manager on Broker 0] Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager)
      [2013-04-25 10:08:49,656] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
      [2013-04-25 10:08:49,656] INFO [Socket Server on Broker 0], started (kafka.network.SocketServer)
      [2013-04-25 10:08:49,672] INFO connecting to ZK: localhost:2181 (kafka.server.KafkaZooKeeper)
      [2013-04-25 10:08:49,672] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
      [2013-04-25 10:08:49,672] INFO Client environment:zookeeper.version=3.3.3-1203054, built on 11/17/2011 05:47 GMT (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,672] INFO Client environment:host.name=YIYIN-MAIN.redmond.corp.microsoft.com (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,672] INFO Client environment:java.version=1.7.0_21 (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,672] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,672] INFO Client environment:java.home=C:\Program Files (x86)\Java\jdk1.7.0_21\jre (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,687] INFO Client environment:java.io.tmpdir=C:\Users\yiyin\AppData\Local\Temp\ (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,687] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,687] INFO Client environment:os.name=Windows 8 (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,687] INFO Client environment:os.arch=x86 (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,687] INFO Client environment:os.version=6.2 (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,687] INFO Client environment:user.name=yiyin (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,687] INFO Client environment:user.home=C:\Users\yiyin (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,687] INFO Client environment:user.dir=C:\Projects\Kafka\kafka\bin (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,687] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@16e73e3 (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:49,703] INFO Opening socket connection to server localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
      [2013-04-25 10:08:49,703] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
      [2013-04-25 10:08:49,781] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x13e422b2f620000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
      [2013-04-25 10:08:49,781] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
      [2013-04-25 10:08:49,812] INFO Registered broker 0 at path /brokers/ids/0 with address YIYIN-MAIN.redmond.corp.microsoft.com:9092. (kafka.utils.ZkUtils$)
      [2013-04-25 10:08:49,812] INFO [Kafka Server 0], Connecting to ZK: localhost:2181 (kafka.server.KafkaServer)
      [2013-04-25 10:08:49,859] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
      [2013-04-25 10:08:49,875] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
      [2013-04-25 10:08:49,984] INFO No state transitions triggered since no partitions are assigned to brokers 0 (kafka.utils.ZkUtils$)
      [2013-04-25 10:08:49,984] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
      [2013-04-25 10:08:50,000] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
      [2013-04-25 10:08:55,969] INFO Closing socket connection to /10.123.11.25. (kafka.network.Processor)
      [2013-04-25 10:09:03,234] INFO [KafkaApi-0] Auto creation of topic test with 1 partitions and replication factor 1 is successful! (kafka.server.KafkaApis)
      [2013-04-25 10:09:03,281] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor)
      [2013-04-25 10:09:03,281] INFO [Replica Manager on Broker 0]: Handling LeaderAndIsr request Name:LeaderAndIsrRequest;Version:0;Controller:0;ControllerEpoch:1;CorrelationId:5;ClientId:id_0-host_null-port_9092;AckTimeoutMs:1000 ms;PartitionState:(test,0) -> (LeaderAndIsrInfo:(Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1);Leaders:id:0,host:YIYIN-MAIN.redmond.corp.microsoft.com,port:9092 (kafka.server.ReplicaManager)
      [2013-04-25 10:09:03,281] INFO [ReplicaFetcherManager on broker 0] Removing fetcher for partition [test,0] (kafka.server.ReplicaFetcherManager)
      [2013-04-25 10:09:03,297] INFO [Kafka Log on Broker 0], Completed load of log test-0 with log end offset 0 (kafka.log.Log)
      [2013-04-25 10:09:03,297] INFO [Log Manager on Broker 0] Created log for topic test partition 0 in C:\tmp\kafka-logs. (kafka.log.LogManager)
      [2013-04-25 10:09:03,297] INFO [Replica Manager on Broker 0]: Handled leader and isr request Name:LeaderAndIsrRequest;Version:0;Controller:0;ControllerEpoch:1;CorrelationId:5;ClientId:id_0-host_null-port_9092;AckTimeoutMs:1000 ms;PartitionState:(test,0) -> (LeaderAndIsrInfo:(Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1);Leaders:id:0,host:YIYIN-MAIN.redmond.corp.microsoft.com,port:9092 (kafka.server.ReplicaManager)
      [2013-04-25 10:09:03,438] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor)
      [2013-04-25 10:09:03,453] WARN [KafkaApi-0] Produce request: Leader not local for partition [test,0] on broker 0 (kafka.server.KafkaApis)
      [2013-04-25 10:09:05,453] WARN [KafkaApi-0] Produce request: Leader not local for partition [test,0] on broker 0 (kafka.server.KafkaApis)
      [2013-04-25 10:09:07,250] WARN [KafkaApi-0] Produce request: Leader not local for partition [test,0] on broker 0 (kafka.server.KafkaApis)
      [2013-04-25 10:09:08,891] WARN [KafkaApi-0] Produce request: Leader not local for partition [test,0] on broker 0 (kafka.server.KafkaApis)

      =============================Producer Log==============================

      C:\Projects\Kafka\kafka\bin\..
      [2013-04-25 10:08:52,531] INFO Verifying properties (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:52,547] INFO Property queue.buffering.max.messages is overridden to 10000 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:52,547] INFO Property key.serializer.class is overridden to kafka.serializer.StringEncoder (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:52,547] INFO Property compression.codec is overridden to 0 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:52,547] INFO Property serializer.class is overridden to kafka.serializer.StringEncoder (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:52,547] INFO Property request.timeout.ms is overridden to 1500 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:52,547] INFO Property broker.list is overridden to localhost:9092 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:52,547] INFO Property send.buffer.bytes is overridden to 102400 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:52,547] INFO Property request.required.acks is overridden to 0 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:52,547] INFO Property producer.type is overridden to sync (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:52,547] INFO Property queue.buffering.max.ms is overridden to 1000 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:52,547] INFO Property queue.enqueue.timeout.ms is overridden to 0 (kafka.utils.VerifiableProperties)
      hello
      [2013-04-25 10:09:03,203] INFO Fetching metadata with correlation id 0 for 1 topic(s) Set(test) (kafka.client.ClientUtils$)
      [2013-04-25 10:09:03,203] INFO Connected to localhost:9092 for producing (kafka.producer.SyncProducer)
      [2013-04-25 10:09:03,281] INFO Disconnecting from localhost:9092 (kafka.producer.SyncProducer)
      [2013-04-25 10:09:03,281] WARN Error while fetching metadata partition 0 leader: none replicas: isr:
      isUnderReplicated: false for topic partition [test,0]: [class kafka.common.LeaderNotAvailableException] (kafka.producer.BrokerPartitionInfo)
      [2013-04-25 10:09:03,297] WARN Failed to collate messages by topic,partition due to (kafka.producer.async.DefaultEventHandler)
      kafka.common.LeaderNotAvailableException: No leader for any partition
      at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$getPartition(DefaultEventHandler.scala:212)
      at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:150)
      at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:148)
      at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
      at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
      at kafka.producer.async.DefaultEventHandler.partitionAndCollate(DefaultEventHandler.scala:148)
      at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:94)
      at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
      at kafka.producer.Producer.send(Producer.scala:74)
      at kafka.producer.ConsoleProducer$.main(ConsoleProducer.scala:159)
      at kafka.producer.ConsoleProducer.main(ConsoleProducer.scala)
      [2013-04-25 10:09:03,406] INFO Fetching metadata with correlation id 2 for 1 topic(s) Set(test) (kafka.client.ClientUtils$)
      [2013-04-25 10:09:03,406] INFO Connected to localhost:9092 for producing (kafka.producer.SyncProducer)
      [2013-04-25 10:09:03,438] INFO Disconnecting from localhost:9092 (kafka.producer.SyncProducer)
      [2013-04-25 10:09:03,453] INFO Connected to YIYIN-MAIN.redmond.corp.microsoft.com:9092 for producing (kafka.producer.SyncProducer)
      hello
      hello
      hello

      =================================ZooKeeper Log====================================

      C:\Projects\Kafka\kafka\bin\..
      [2013-04-25 10:08:46,500] INFO Reading configuration from: ..\config\zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
      [2013-04-25 10:08:46,500] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
      [2013-04-25 10:08:46,515] INFO Reading configuration from: ..\config\zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
      [2013-04-25 10:08:46,515] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
      [2013-04-25 10:08:46,531] INFO Server environment:zookeeper.version=3.3.3-1203054, built on 11/17/2011 05:47 GMT (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO Server environment:host.name=YIYIN-MAIN.redmond.corp.microsoft.com (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO Server environment:java.version=1.7.0_21 (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO Server environment:java.home=C:\Program Files (x86)\Java\jdk1.7.0_21\jre (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO Server environment:java.io.tmpdir=C:\Users\yiyin\AppData\Local\Temp\ (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO Server environment:os.name=Windows 8 (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO Server environment:os.arch=x86 (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO Server environment:os.version=6.2 (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO Server environment:user.name=yiyin (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO Server environment:user.home=C:\Users\yiyin (org.apache.zookeeper.server.ZooKeeperServer)

      [2013-04-25 10:08:46,531] INFO Server environment:user.dir=C:\Projects\Kafka\kafka\bin (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO tickTime set to 3000 (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,531] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
      [2013-04-25 10:08:46,547] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxn)
      [2013-04-25 10:08:46,562] INFO Snapshotting: 0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
      [2013-04-25 10:08:49,703] INFO Accepted socket connection from /127.0.0.1:8001 (org.apache.zookeeper.server.NIOServerCnxn)
      [2013-04-25 10:08:49,703] INFO Client attempting to establish new session at /127.0.0.1:8001 (org.apache.zookeeper.server.NIOServerCnxn)
      [2013-04-25 10:08:49,703] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
      [2013-04-25 10:08:49,781] INFO Established session 0x13e422b2f620000 with negotiated timeout 6000 for client /127.0.0.1:8001 (org.apache.zookeeper.server.NIOServerCnxn)
      [2013-04-25 10:08:49,812] INFO Got user-level KeeperException when processing sessionid:0x13e422b2f620000 type:create cxid:0x1 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NoNode for /brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)
      [2013-04-25 10:08:49,812] INFO Got user-level KeeperException when processing sessionid:0x13e422b2f620000 type:create cxid:0x2 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
      [2013-04-25 10:08:49,875] INFO Got user-level KeeperException when processing sessionid:0x13e422b2f620000 type:setData cxid:0xb zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch (org.apache.zookeeper.server.PrepRequestProcessor)
      [2013-04-25 10:08:49,984] INFO Got user-level KeeperException when processing sessionid:0x13e422b2f620000 type:delete cxid:0x19 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin (org.apache.zookeeper.server.PrepRequestProcessor)
      [2013-04-25 10:08:55,594] INFO Accepted socket connection from /127.0.0.1:8007 (org.apache.zookeeper.server.NIOServerCnxn)
      [2013-04-25 10:08:55,609] INFO Client attempting to establish new session at /127.0.0.1:8007 (org.apache.zookeeper.server.NIOServerCnxn)
      [2013-04-25 10:08:55,672] INFO Established session 0x13e422b2f620001 with negotiated timeout 6000 for client /127.0.0.1:8007 (org.apache.zookeeper.server.NIOServerCnxn)
      [2013-04-25 10:08:55,687] INFO Accepted socket connection from /127.0.0.1:8010 (org.apache.zookeeper.server.NIOServerCnxn)
      [2013-04-25 10:08:55,687] INFO Client attempting to establish new session at /127.0.0.1:8010 (org.apache.zookeeper.server.NIOServerCnxn)
      [2013-04-25 10:08:55,687] INFO Established session 0x13e422b2f620002 with negotiated timeout 30000 for client /127.0.0.1:8010 (org.apache.zookeeper.server.NIOServerCnxn)
      [2013-04-25 10:08:55,703] INFO Processed session termination for sessionid: 0x13e422b2f620002 (org.apache.zookeeper.server.PrepRequestProcessor)
      [2013-04-25 10:08:55,703] INFO Closed socket connection for client /127.0.0.1:8010 which had sessionid 0x13e422b2f620002 (org.apache.zookeeper.server.NIOServerCnxn)
      [2013-04-25 10:08:55,750] INFO Got user-level KeeperException when processing sessionid:0x13e422b2f620001 type:create cxid:0x2 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/consumers/console-consumer-70983/ids Error:KeeperErrorCode = NoNode for /consumers/console-consumer-70983/ids (org.apache.zookeeper.server.PrepRequestProcessor)
      [2013-04-25 10:08:55,750] INFO Got user-level KeeperException when processing sessionid:0x13e422b2f620001 type:create cxid:0x3 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/consumers/console-consumer-70983 Error:KeeperErrorCode = NoNode for /consumers/console-consumer-70983 (org.apache.zookeeper.server.PrepRequestProcessor)
      [2013-04-25 10:08:55,750] INFO Got user-level KeeperException when processing sessionid:0x13e422b2f620001 type:create cxid:0x4 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/consumers Error:KeeperErrorCode = NoNode for /consumers (org.apache.zookeeper.server.PrepRequestProcessor)
      [2013-04-25 10:09:03,219] INFO Got user-level KeeperException when processing sessionid:0x13e422b2f620000 type:create cxid:0x21 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/brokers/topics Error:KeeperErrorCode = NoNode for /brokers/topics (org.apache.zookeeper.server.PrepRequestProcessor)
      [2013-04-25 10:09:03,266] INFO Got user-level KeeperException when processing sessionid:0x13e422b2f620000 type:create cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/brokers/topics/test/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/test/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor)
      [2013-04-25 10:09:03,266] INFO Got user-level KeeperException when processing sessionid:0x13e422b2f620000 type:create cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/brokers/topics/test/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/test/partitions (org.apache.zookeeper.server.PrepRequestProcessor)

      ==============================Consumer Log=======================================

      C:\Projects\Kafka\kafka\bin\..
      [2013-04-25 10:08:55,516] INFO Verifying properties (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:55,547] INFO Property zk.connect is overridden to localhost:2181 (kafka.utils.VerifiablePro
      [2013-04-25 10:08:55,547] INFO Property group.id is overridden to console-consumer-70983 (kafka.utils.Verifia
      es)
      [2013-04-25 10:08:55,547] INFO Property fetch.message.max.bytes is overridden to 1048576 (kafka.utils.Verifia
      es)
      [2013-04-25 10:08:55,547] INFO Property consumer.timeout.ms is overridden to -1 (kafka.utils.VerifiableProper
      [2013-04-25 10:08:55,547] INFO Property socket.timeout.ms is overridden to 30000 (kafka.utils.VerifiablePrope
      [2013-04-25 10:08:55,547] INFO Property auto.offset.reset is overridden to smallest (kafka.utils.VerifiablePr
      [2013-04-25 10:08:55,547] INFO Property socket.receive.buffer.bytes is overridden to 2097152 (kafka.utils.Ver
      erties)
      [2013-04-25 10:08:55,547] INFO Property fetch.wait.max.ms is overridden to 100 (kafka.utils.VerifiablePropert
      [2013-04-25 10:08:55,547] INFO Property auto.commit.enable is overridden to true (kafka.utils.VerifiablePrope
      [2013-04-25 10:08:55,547] INFO Property auto.commit.interval.ms is overridden to 10000 (kafka.utils.Verifiabl
      )
      [2013-04-25 10:08:55,547] INFO Property fetch.min.bytes is overridden to 1 (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:55,562] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], Connecting to zook
      nce at localhost:2181 (kafka.consumer.ZookeeperConsumerConnector)
      [2013-04-25 10:08:55,562] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
      [2013-04-25 10:08:55,578] INFO Client environment:zookeeper.version=3.3.3-1203054, built on 11/17/2011 05:47
      ache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:55,578] INFO Client environment:host.name=YIYIN-MAIN.redmond.corp.microsoft.com (org.apache
      ZooKeeper)
      [2013-04-25 10:08:55,578] INFO Client environment:java.version=1.7.0_21 (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:55,578] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKee
      [2013-04-25 10:08:55,578] INFO Client environment:java.home=C:\Program Files (x86)\Java\jdk1.7.0_21\jre (org.
      eeper.ZooKeeper)
      [2013-04-25 10:08:55,578] INFO Client environment:java.io.tmpdir=C:\Users\yiyin\AppData\Local\Temp\ (org.apac
      r.ZooKeeper)
      [2013-04-25 10:08:55,578] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:55,578] INFO Client environment:os.name=Windows 8 (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:55,578] INFO Client environment:os.arch=x86 (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:55,578] INFO Client environment:os.version=6.2 (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:55,578] INFO Client environment:user.name=yiyin (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:55,578] INFO Client environment:user.home=C:\Users\yiyin (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:55,578] INFO Client environment:user.dir=C:\Projects\Kafka\kafka\bin (org.apache.zookeeper.
      [2013-04-25 10:08:55,578] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000
      g.I0Itec.zkclient.ZkClient@1124f52 (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:55,594] INFO Opening socket connection to server localhost/127.0.0.1:2181 (org.apache.zooke
      Cnxn)
      [2013-04-25 10:08:55,594] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session
      .zookeeper.ClientCnxn)
      [2013-04-25 10:08:55,672] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid =
      f620001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
      [2013-04-25 10:08:55,672] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
      [2013-04-25 10:08:55,672] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], starting auto comm
      10000 ms (kafka.consumer.ZookeeperConsumerConnector)
      [2013-04-25 10:08:55,687] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=3000
      rg.I0Itec.zkclient.ZkClient@13033a (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:55,687] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
      [2013-04-25 10:08:55,687] INFO Opening socket connection to server localhost/127.0.0.1:2181 (org.apache.zooke
      Cnxn)
      [2013-04-25 10:08:55,687] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session
      .zookeeper.ClientCnxn)
      [2013-04-25 10:08:55,687] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid =
      f620002, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn)
      [2013-04-25 10:08:55,687] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
      [2013-04-25 10:08:55,703] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
      [2013-04-25 10:08:55,703] INFO Session: 0x13e422b2f620002 closed (org.apache.zookeeper.ZooKeeper)
      [2013-04-25 10:08:55,703] INFO EventThread shut down (org.apache.zookeeper.ClientCnxn)
      [2013-04-25 10:08:55,734] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], begin registering
      nsole-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c in ZK (kafka.consumer.ZookeeperConsumerConnector)
      [2013-04-25 10:08:55,750] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], end registering co
      ole-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c in ZK (kafka.consumer.ZookeeperConsumerConnector)
      [2013-04-25 10:08:55,750] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], starting watcher e
      ead for consumer console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c (kafka.consumer.ZookeeperConsumerCo
      [2013-04-25 10:08:55,766] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], begin rebalancing
      nsole-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c try #0 (kafka.consumer.ZookeeperConsumerConnector)
      [2013-04-25 10:08:55,891] INFO Verifying properties (kafka.utils.VerifiableProperties)
      [2013-04-25 10:08:55,906] INFO Property broker.list is overridden to YIYIN-MAIN.redmond.corp.microsoft.com:90
      tils.VerifiableProperties)
      [2013-04-25 10:08:55,906] INFO Property request.timeout.ms is overridden to 30000 (kafka.utils.VerifiableProp
      [2013-04-25 10:08:55,906] INFO Property client.id is overridden to console-consumer-70983 (kafka.utils.Verifi
      ies)
      [2013-04-25 10:08:55,922] INFO Fetching metadata with correlation id 0 for 0 topic(s) Set() (kafka.client.Cli
      [2013-04-25 10:08:55,922] INFO Connected to YIYIN-MAIN.redmond.corp.microsoft.com:9092 for producing (kafka.p
      cProducer)
      [2013-04-25 10:08:55,969] INFO Disconnecting from YIYIN-MAIN.redmond.corp.microsoft.com:9092 (kafka.producer.
      r)
      [2013-04-25 10:08:55,969] INFO [ConsumerFetcherManager-1366909735672] Stopping leader finder thread (kafka.co
      umerFetcherManager)
      [2013-04-25 10:08:55,969] INFO [ConsumerFetcherManager-1366909735672] Stopping all fetchers (kafka.consumer.C
      herManager)
      [2013-04-25 10:08:55,969] INFO [ConsumerFetcherManager-1366909735672] All connections stopped (kafka.consumer
      tcherManager)
      [2013-04-25 10:08:55,969] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], Cleared all releva
      or this fetcher (kafka.consumer.ZookeeperConsumerConnector)
      [2013-04-25 10:08:55,969] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], Cleared the data c
      l the consumer message iterators (kafka.consumer.ZookeeperConsumerConnector)
      [2013-04-25 10:08:55,969] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], Committing all off
      clearing the fetcher queues (kafka.consumer.ZookeeperConsumerConnector)
      [2013-04-25 10:08:55,969] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], Releasing partitio
      (kafka.consumer.ZookeeperConsumerConnector)
      [2013-04-25 10:08:55,984] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], Updating the cache
      sumer.ZookeeperConsumerConnector)
      [2013-04-25 10:08:55,984] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], Consumer console-c
      83_YIYIN-MAIN-1366909735562-a3d9410c selected partitions : (kafka.consumer.ZookeeperConsumerConnector)
      [2013-04-25 10:08:55,984] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c-leader-finder-thread
      (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
      [2013-04-25 10:08:55,984] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], end rebalancing co
      ole-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c try #0 (kafka.consumer.ZookeeperConsumerConnector)
      [2013-04-25 10:08:55,984] INFO [console-consumer-70983_YIYIN-MAIN-1366909735562-a3d9410c], Not creating event
      r trivial whitelist test (kafka.consumer.ZookeeperConsumerConnector)


          Activity

          Yin Yin created issue -
          Jun Rao added a comment -

          Hmm, this is weird. The broker log shows that broker 0 has successfully become the leader of topic test partition 0 after handling the LeaderAndIsr request. However, afterwards it still has the warning "Leader not local for partition [test,0]". Wondering if this is somehow a Windows-specific issue.

          Swapnil Ghike added a comment -

          Starting from 10:08:55,750, the zookeeper log shows user-level KeeperExceptions, one of which is:

          [2013-04-25 10:09:03,219] INFO Got user-level KeeperException when processing sessionid:0x13e422b2f620000 type:create cxid:0x21 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/brokers/topics Error:KeeperErrorCode = NoNode for /brokers/topics (org.apache.zookeeper.server.PrepRequestProcessor)

          Could you check if your zookeeper is healthy (using zkClient, say) after you start seeing the "Leader not local" messages?
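
          One way to do that check is with the same zkclient library the broker already uses; the sketch below is illustrative only and just reads the paths that appear in the ZooKeeper log above.

          import org.I0Itec.zkclient.ZkClient

          object ZkHealthCheck {
            def main(args: Array[String]) {
              val zk = new ZkClient("localhost:2181", 6000, 6000)
              // Registered brokers and the auto-created topic path from the log above.
              println("broker ids: " + zk.getChildren("/brokers/ids"))
              println("topic test exists: " + zk.exists("/brokers/topics/test"))
              println("partition 0 node exists: " + zk.exists("/brokers/topics/test/partitions/0"))
              zk.close()
            }
          }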

          Jun Rao added a comment -

          That ZK log entry is OK; it's at INFO level, not an error. Our ZK client auto-creates certain ZK paths when they don't exist.
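
          For illustration, this is roughly the client-side pattern behind those benign server-side NoNode entries (a sketch only, not Kafka's actual code): creating a node with createParents enabled first fails on the server with NoNode, after which the missing ancestors are created and the create is retried.

          import org.I0Itec.zkclient.ZkClient

          object AutoCreatePaths {
            def main(args: Array[String]) {
              val zk = new ZkClient("localhost:2181", 6000, 6000)
              // Each missing ancestor of this path can show up in the ZooKeeper server log as a
              // user-level "KeeperErrorCode = NoNode" INFO entry before being created.
              val path = "/brokers/topics/test/partitions/0"
              if (!zk.exists(path))
                zk.createPersistent(path, true) // second argument: create missing parents
              zk.close()
            }
          }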

          Jun Rao added a comment -

          Could you add the following line to config/log4j.properties after the line "# Turn on all our debugging info"?
          log4j.logger.kafka.cluster.Partition=TRACE, kafkaAppender

          Could you rerun quickstart and see if there is any trace logging in the broker that looks like the following?
          "Broker 0 discarded the become-leader request ..."

          Rob Withers made changes -
          Comment:
          I did what you requested (TRACE on Partition) and this is what I see in the state-change.log: an ERROR processing the LeaderAndIsr request...


          [2013-04-26 16:03:18,135] TRACE Controller 0 epoch 1 changed partition [unittest,0] state from NotExists to New with assigned replicas 0 (state.change.logger)
          [2013-04-26 16:03:18,141] TRACE Controller 0 epoch 1 changed state of replica 0 for partition [unittest,0] to NewReplica (state.change.logger)
          [2013-04-26 16:03:18,251] TRACE Controller 0 epoch 1 changed partition [unittest,0] from OnlinePartition to OnlinePartition with leader 0 (state.change.logger)
          [2013-04-26 16:03:18,256] TRACE Controller 0 epoch 1 sending become-leader LeaderAndIsr request with correlationId 5 to broker 0 for partition [unittest,0] (state.change.logger)
          [2013-04-26 16:03:18,257] TRACE Controller 0 epoch 1 changed state of replica 0 for partition [unittest,0] to OnlineReplica (state.change.logger)
          [2013-04-26 16:03:18,263] TRACE Broker 0 handling LeaderAndIsr request correlation id 5 received from controller 0 epoch 1 for partition [unittest,0] (state.change.logger)
          [2013-04-26 16:03:18,266] TRACE Broker 0 received LeaderAndIsr request correlationId 5 from controller 0 epoch 1 starting the become-leader transition for partition [unittest,0] (state.change.logger)
          [2013-04-26 16:03:18,324] ERROR Error on broker 0 while processing LeaderAndIsr request correlationId 5 received from controller 0 epoch 1 for partition (unittest,0) (state.change.logger)
          java.util.NoSuchElementException: key not found: \tmp\kafka-logs
          at scala.collection.MapLike$class.default(MapLike.scala:223)
          at scala.collection.immutable.Map$Map1.default(Map.scala:93)
          at scala.collection.MapLike$class.apply(MapLike.scala:134)
          at scala.collection.immutable.Map$Map1.apply(Map.scala:93)
          at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:83)
          at kafka.cluster.Partition$$anonfun$1.apply(Partition.scala:149)
          at kafka.cluster.Partition$$anonfun$1.apply(Partition.scala:149)
          at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
          at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
          at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:61)
          at scala.collection.immutable.List.foreach(List.scala:45)
          at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
          at scala.collection.immutable.List.map(List.scala:45)
          at kafka.cluster.Partition.makeLeader(Partition.scala:149)
          at kafka.server.ReplicaManager.kafka$server$ReplicaManager$$makeLeader(ReplicaManager.scala:256)
          at kafka.server.ReplicaManager$$anonfun$becomeLeaderOrFollower$3.apply(ReplicaManager.scala:220)
          at kafka.server.ReplicaManager$$anonfun$becomeLeaderOrFollower$3.apply(ReplicaManager.scala:212)
          at scala.collection.immutable.Map$Map1.foreach(Map.scala:105)
          at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:212)
          at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:79)
          at kafka.server.KafkaApis.handle(KafkaApis.scala:64)
          at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
          at java.lang.Thread.run(Thread.java:662)
          [2013-04-26 16:03:18,324] TRACE Broker 0 handled LeaderAndIsr request correlationId 5 received from controller 0 epoch 1 for partition [unittest,0] (state.change.logger)
          [2013-04-26 16:03:18,329] TRACE Controller 0 epoch 1 received response correlationId 5 for a request sent to broker 0 (state.change.logger)
          Rob Withers added a comment (edited) -

          I redid what you wanted (I switched to kafka8-tmp to differentiate it from the tmp directory used for Kafka 0.7, but forgot to point the broker there), and here are the results, primarily this ERROR in the state-change.log. It seems it may be a file-path issue on Windows:

          [2013-04-26 16:14:38,209] ERROR Error on broker 0 while processing LeaderAndIsr request correlationId 5 received from controller 0 epoch 1 for partition (unittest,0) (state.change.logger)
          java.util.NoSuchElementException: key not found: \kafka8-tmp\kafka-logs

          in the server.log:
          [2013-04-26 16:14:49,113] WARN [KafkaApi-0] Leader not local for partition [unittest,0] on broker 0 (kafka.server.KafkaApis)

          ***********************

          in the state-change.log:

          [2013-04-26 16:14:38,034] TRACE Controller 0 epoch 1 changed partition [unittest,0] state from NotExists to New with assigned replicas 0 (state.change.logger)
          [2013-04-26 16:14:38,039] TRACE Controller 0 epoch 1 changed state of replica 0 for partition [unittest,0] to NewReplica (state.change.logger)
          [2013-04-26 16:14:38,137] TRACE Controller 0 epoch 1 changed partition [unittest,0] from OnlinePartition to OnlinePartition with leader 0 (state.change.logger)
          [2013-04-26 16:14:38,141] TRACE Controller 0 epoch 1 sending become-leader LeaderAndIsr request with correlationId 5 to broker 0 for partition [unittest,0] (state.change.logger)
          [2013-04-26 16:14:38,143] TRACE Controller 0 epoch 1 changed state of replica 0 for partition [unittest,0] to OnlineReplica (state.change.logger)
          [2013-04-26 16:14:38,149] TRACE Broker 0 handling LeaderAndIsr request correlation id 5 received from controller 0 epoch 1 for partition [unittest,0] (state.change.logger)
          [2013-04-26 16:14:38,152] TRACE Broker 0 received LeaderAndIsr request correlationId 5 from controller 0 epoch 1 starting the become-leader transition for partition [unittest,0] (state.change.logger)
          [2013-04-26 16:14:38,209] ERROR Error on broker 0 while processing LeaderAndIsr request correlationId 5 received from controller 0 epoch 1 for partition (unittest,0) (state.change.logger)
          java.util.NoSuchElementException: key not found: \kafka8-tmp\kafka-logs
          at scala.collection.MapLike$class.default(MapLike.scala:223)
          at scala.collection.immutable.Map$Map1.default(Map.scala:93)
          at scala.collection.MapLike$class.apply(MapLike.scala:134)
          at scala.collection.immutable.Map$Map1.apply(Map.scala:93)
          at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:83)
          at kafka.cluster.Partition$$anonfun$1.apply(Partition.scala:149)
          at kafka.cluster.Partition$$anonfun$1.apply(Partition.scala:149)
          at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
          at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
          at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:61)
          at scala.collection.immutable.List.foreach(List.scala:45)
          at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
          at scala.collection.immutable.List.map(List.scala:45)
          at kafka.cluster.Partition.makeLeader(Partition.scala:149)
          at kafka.server.ReplicaManager.kafka$server$ReplicaManager$$makeLeader(ReplicaManager.scala:256)
          at kafka.server.ReplicaManager$$anonfun$becomeLeaderOrFollower$3.apply(ReplicaManager.scala:220)
          at kafka.server.ReplicaManager$$anonfun$becomeLeaderOrFollower$3.apply(ReplicaManager.scala:212)
          at scala.collection.immutable.Map$Map1.foreach(Map.scala:105)
          at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:212)
          at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:79)
          at kafka.server.KafkaApis.handle(KafkaApis.scala:64)
          at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
          at java.lang.Thread.run(Thread.java:662)
          [2013-04-26 16:14:38,210] TRACE Broker 0 handled LeaderAndIsr request correlationId 5 received from controller 0 epoch 1 for partition [unittest,0] (state.change.logger)
          [2013-04-26 16:14:38,215] TRACE Controller 0 epoch 1 received response correlationId 5 for a request sent to broker 0 (state.change.logger)
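
          For what it's worth, the "key not found" message is consistent with a path-normalization mismatch rather than a missing directory. The sketch below is not Kafka's code; it only illustrates how a map keyed by the configured log.dir string can miss a lookup done with the java.io.File (backslash) form of the same directory on Windows.

          import java.io.File

          object PathKeyMismatch {
            def main(args: Array[String]) {
              val configuredLogDir = "/kafka8-tmp/kafka-logs"            // as written in server.properties
              val logDirs = Map(configuredLogDir -> "log manager state") // keyed by the raw config string

              // On Windows, java.io.File normalizes the separators to backslashes,
              // so the lookup key becomes "\kafka8-tmp\kafka-logs" and is not found.
              val lookupKey = new File(configuredLogDir).getPath
              println(logDirs(lookupKey)) // NoSuchElementException: key not found: \kafka8-tmp\kafka-logs (on Windows)
            }
          }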

          Sriram Subramanian added a comment -

          1. Is this the latest kafka checkout? If not, can you try on a more recent build?
          2. Could you attach the server config that you are using?

          Rob Withers added a comment (edited) -

          It was current as of yesterday. I had trouble pulling tonight, but did so; the top of the git log is below. Now I cannot run the broker; it fails with the exception below. My server.properties is attached below:

          D:\dish\workspace\kafka>git log
          commit e37a4644209de00f265a5361b31ca63cafc895e9
          Author: Joel Koshy <jjkoshy@gmail.com>
          Date: Fri Apr 26 14:13:00 2013 -0700

          Trivial commit - warn on consumer fetch request errors.

          commit 6e640e355632d97e322e462c2bbc379c05d407a4
          Author: Jun Rao <junrao@gmail.com>
          Date: Thu Apr 25 19:03:21 2013 -0700

          kafka-880; NoLeaderPartitionSet should be cleared before leader finder threa

          broker ERROR:

          [2013-04-26 17:23:39,076] FATAL Fatal error during KafkaServerStable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
          java.lang.NullPointerException
          at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:361)
          at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:332)
          at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:383)
          at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:64)
          at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:872)
          at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
          at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
          at kafka.common.KafkaZookeeperClient$.getZookeeperClient(KafkaZookeperClient.scala:31)
          at kafka.server.KafkaZooKeeper.startup(KafkaZooKeeper.scala:39)
          at kafka.server.KafkaServer.startup(KafkaServer.scala:76)
          at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
          at kafka.Kafka$.main(Kafka.scala:46)
          at kafka.Kafka.main(Kafka.scala)

          server.properties:

          # Licensed to the Apache Software Foundation (ASF) under one or more
          # contributor license agreements. See the NOTICE file distributed with
          # this work for additional information regarding copyright ownership.
          # The ASF licenses this file to You under the Apache License, Version 2.0
          # (the "License"); you may not use this file except in compliance with
          # the License. You may obtain a copy of the License at
          #
          #    http://www.apache.org/licenses/LICENSE-2.0
          #
          # Unless required by applicable law or agreed to in writing, software
          # distributed under the License is distributed on an "AS IS" BASIS,
          # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
          # See the License for the specific language governing permissions and
          # limitations under the License.
          # see kafka.server.KafkaConfig for additional details and defaults

          ############################# Server Basics #############################

          # The id of the broker. This must be set to a unique integer for each broker.
          broker.id=0

          ############################# Socket Server Settings #############################

          # The port the socket server listens on
          port=9092

          # Hostname the broker will bind to and advertise to producers and consumers.
          # If not set, the server will bind to all interfaces and advertise the value returned from
          # from java.net.InetAddress.getCanonicalHostName().
          #host.name=localhost

          # The number of threads handling network requests
          num.network.threads=2

          # The number of threads doing disk I/O
          num.io.threads=2

          # The send buffer (SO_SNDBUF) used by the socket server
          socket.send.buffer.bytes=2096576

          # The receive buffer (SO_RCVBUF) used by the socket server
          socket.receive.buffer.bytes=2096576

          # The maximum size of a request that the socket server will accept (protection against OOM)
          socket.request.max.bytes=104857600

          ############################# Log Basics #############################

          # The directory under which to store log files
          log.dir=/kafka8-tmp/kafka-logs

          # The number of logical partitions per topic per server. More partitions allow greater parallelism
          # for consumption, but also mean more files.
          num.partitions=1

          ############################# Log Flush Policy #############################

          # The following configurations control the flush of data to disk. This is the most
          # important performance knob in kafka.
          # There are a few important trade-offs here:
          #    1. Durability: Unflushed data is at greater risk of loss in the event of a crash.
          #    2. Latency: Data is not made available to consumers until it is flushed (which adds latency).
          #    3. Throughput: The flush is generally the most expensive operation.
          # The settings below allow one to configure the flush policy to flush data after a period of time or
          # every N messages (or both). This can be done globally and overridden on a per-topic basis.

          # The number of messages to accept before forcing a flush of data to disk
          log.flush.interval.messages=10000

          # The maximum amount of time a message can sit in a log before we force a flush
          log.flush.interval.ms=1000

          # Per-topic overrides for log.flush.interval.ms
          #log.flush.intervals.ms.per.topic=topic1:1000, topic2:3000

          ############################# Log Retention Policy #############################

          # The following configurations control the disposal of log segments. The policy can
          # be set to delete segments after a period of time, or after a given size has accumulated.
          # A segment will be deleted whenever either of these criteria are met. Deletion always happens
          # from the end of the log.

          # The minimum age of a log file to be eligible for deletion
          log.retention.hours=168

          # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
          # segments don't drop below log.retention.bytes.
          #log.retention.bytes=1073741824

          # The maximum size of a log segment file. When this size is reached a new log segment will be created.
          log.segment.bytes=536870912

          # The interval at which log segments are checked to see if they can be deleted according
          # to the retention policies
          log.cleanup.interval.mins=1

          ############################# Zookeeper #############################

          # Zookeeper connection string (see zookeeper docs for details).
          # This is a comma separated host:port pairs, each corresponding to a zk
          # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
          # You can also append an optional chroot string to the urls to specify the
          # root directory for all kafka znodes.
          zookeeper.connect=localhost:2191

          # Timeout in ms for connecting to zookeeper
          zookeeper.connection.timeout.ms=1000000

          # metrics reporter properties
          kafka.metrics.polling.interval.secs=5
          kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
          kafka.csv.metrics.dir=/tmp/kafka_metrics
          # Disable csv reporting by default.
          kafka.csv.metrics.reporter.enabled=false
          Sriram Subramanian added a comment -

          Is this a valid path on Windows? /kafka8-tmp/kafka-logs

          Rob Withers added a comment -

          It should resolve to D:\kafka8-tmp\kafka-logs. My Kafka 0.7 works under Cygwin, but I am running Kafka 0.8 natively under Windows. When I ran before pulling, a D:\kafka8-tmp\kafka-logs directory was created, but I did not check whether there were any index files inside. I have since deleted the dir to reset the indexes.

          After pulling this afternoon, the new error does not explicitly state that it is a file-path issue.

          [2013-04-26 17:23:39,076] FATAL Fatal error during KafkaServerStable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
          java.lang.NullPointerException
          at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:361)
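          For reference, a minimal sketch of why a leading-slash path resolves to the current drive on Windows. This is plain java.io.File behaviour, not anything Kafka-specific, and the object name is just for illustration:

          import java.io.File

          object DriveRelativePath extends App {
            // "/kafka8-tmp/kafka-logs" has no drive letter, so on Windows it is drive-relative:
            // java.io.File resolves it against the drive of the current working directory.
            val logDir = new File("/kafka8-tmp/kafka-logs")
            println(logDir.getAbsolutePath) // e.g. D:\kafka8-tmp\kafka-logs when the broker is started from drive D:
          }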

          Jun Rao added a comment -

          Could you do the following and retry?

          ./sbt +package
          ./sbt assembly-package-dependency

          Rob Withers added a comment -

          I may try to get into work tomorrow and check it out, or it will be Monday. I cannot run Kafka 0.8 under Cygwin or DOS here at home. Here is the inane error I get under DOS:

          c:\rob\comp\workspace\kafka\bin>windows\zookeeper-server-start.bat ..\config\zookeeper.properties
          c:\rob\comp\workspace\kafka\bin\..
          The system cannot find the path specified.

          I thought it was kidding, the first time I saw it.

          thanks,
          rob

          Jun Rao added a comment -

          It seems that the exception was caused by the following statement in Partition:
          replicaManager.highWatermarkCheckpoints(log.dir.getParent)

          It tries to look up the key "\kafka8-tmp\kafka-logs" in a map of highWatermarkCheckpoint files, but the key doesn't exist. We register the keys using the property value in log.dirs. What property value did you set log.dirs to? It seems to be different from "\kafka8-tmp\kafka-logs".
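          A minimal sketch of the mismatch being described, assuming (hypothetically) that the checkpoint map is keyed by the raw log.dir/log.dirs string from server.properties; this is not the actual Partition/ReplicaManager code:

          import java.io.File

          object CheckpointKeyMismatch extends App {
            val configuredLogDir = "/kafka8-tmp/kafka-logs"             // value as written in server.properties
            val partitionDir     = new File(configuredLogDir, "test-0") // e.g. the directory for [test,0]

            // Hypothetical map keyed by the configured string, forward slashes and all.
            val highWatermarkCheckpoints = Map(configuredLogDir -> "highwatermark-checkpoint")

            // On Windows, getParent comes back normalized to "\kafka8-tmp\kafka-logs",
            // so the lookup finds nothing, which is the mismatch described above.
            println(highWatermarkCheckpoints.get(partitionDir.getParent)) // None
          }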

          Rob Withers added a comment -

          The property is named "log.dir", not "log.dirs", in my properties file. Here is what I have:

          ############################# Log Basics #############################
          # The directory under which to store log files
          log.dir=/kafka8-tmp/kafka-logs
          Jun Rao added a comment -

          It seems that log.dir.getParent() returns a path with backslashes (\kafka8-tmp\kafka-logs). Could you use backslashes in log.dir instead of forward slashes?

          Rob Withers added a comment -

          Jun, I changed the properties for the brokers to use backslashes, as you suggested (as in \tmp\kafka0-logs). This worked and I was able to produce data for a while. It seems the bug this issue is about is fixed and could be closed.

          However, ZooKeeper accepts forward slashes (/tmp/zookeeper) and works fine with them. Kafka should be path-separator agnostic as well, I think.

          After producing for a while, it logs this FATAL error in one of the brokers, and both brokers shut down:

          [2013-05-10 18:23:57,689] FATAL [highwatermark-checkpoint-thread1] (Logging.scala:109) - Attempt to swap the new high watermark file with the old one failed

          I will log a new issue.
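          For the follow-on failure, a hedged sketch of one likely mechanism, assuming the checkpoint swap is done with File.renameTo (an assumption, not the actual checkpoint code in Kafka, and the file names below are made up): on Windows, renameTo returns false when the target file already exists.

          import java.io.{File, PrintWriter}

          object SwapOnWindows extends App {
            val dir = new File("/kafka8-tmp/kafka-logs")
            dir.mkdirs()

            val tmp = new File(dir, "highwatermark.tmp") // hypothetical file names for illustration
            val cur = new File(dir, "highwatermark")
            new PrintWriter(tmp) { write("0"); close() }
            new PrintWriter(cur) { write("0"); close() }

            // Typically false on Windows because `cur` already exists; deleting `cur` first,
            // or using java.nio.file.Files.move(..., StandardCopyOption.REPLACE_EXISTING), avoids the failure.
            println(tmp.renameTo(cur))
          }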

          Jun Rao added a comment -

          This is fixed in KAFKA-903.

          Jun Rao made changes -
          Link: This issue is duplicated by KAFKA-903
          Jun Rao made changes -
          Status: Open → Resolved
          Fix Version/s: 0.8
          Resolution: Duplicate

            People

            • Assignee: Neha Narkhede
            • Reporter: Yin Yin
            • Votes: 0
            • Watchers: 5
