Kafka / KAFKA-2627

Kafka heap size increase impacts performance badly


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: 0.8.2.1
    • Fix Version/s: None
    • Component/s: core
    • Labels: None

    Description

      The Kafka server was initially configured with

      KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"

      As we had ample resources available, we increased it to the value below:

      KAFKA_HEAP_OPTS="-Xmx16G -Xms8G"
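
      For reference, this override is typically applied by exporting the variable before the broker start script runs — a minimal sketch, assuming the standard distribution layout (the paths are not from this report):

      ```shell
      # Minimal sketch (paths assumed): export the heap override so that
      # kafka-server-start.sh does not fall back to its built-in default.
      export KAFKA_HEAP_OPTS="-Xmx16G -Xms8G"
      bin/kafka-server-start.sh config/server.properties
      ```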

      The change severely impacted both Kafka and ZooKeeper; we started seeing various issues on both ends.

      We were not getting all replicas into the ISR, and there was an issue with leader election, which in turn threw socket connection errors.

      To debug, we checked kafkaServer-gc.log and saw GC (Allocation Failure) entries even though plenty of memory was available.

      ============== GC Error ===============
      2015-10-08T09:43:08.796+0000: 4.651: [GC (Allocation Failure) 4.651: [ParNew: 272640K->7265K(306688K), 0.0277514 secs] 272640K->7265K(1014528K), 0.0281243 secs] [Times: user=0.03 sys=0.05, real=0.03 secs]
      2015-10-08T09:43:11.317+0000: 7.172: [GC (Allocation Failure) 7.172: [ParNew: 279905K->3793K(306688K), 0.0157898 secs] 279905K->3793K(1014528K), 0.0159913 secs] [Times: user=0.03 sys=0.01, real=0.02 secs]
      2015-10-08T09:43:13.522+0000: 9.377: [GC (Allocation Failure) 9.377: [ParNew: 276433K->2827K(306688K), 0.0064236 secs] 276433K->2827K(1014528K), 0.0066834 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
      2015-10-08T09:43:15.518+0000: 11.372: [GC (Allocation Failure) 11.373: [ParNew: 275467K->3090K(306688K), 0.0055454 secs] 275467K->3090K(1014528K), 0.0057979 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
      2015-10-08T09:43:17.558+0000: 13.412: [GC (Allocation Failure) 13.412: [ParNew: 275730K->3346K(306688K), 0.0053757 secs] 275730K->3346K(1014528K), 0.0055039 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]

      ====================================================
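
      To see whether the collections above actually paused the broker for long, the real (wall-clock) pause time can be pulled out of each line — a quick sketch against the format of the excerpt above (the log file name is assumed):

      ```shell
      # Hedged sketch: extract the real (wall-clock) pause in seconds from each
      # ParNew collection line; the log format matches the excerpt above.
      grep 'GC (Allocation Failure)' kafkaServer-gc.log \
        | sed -n 's/.*real=\([0-9.]*\) secs.*/\1/p'
      ```

      On the excerpt above this prints pauses of 0.01–0.03 s, i.e. short minor collections rather than long stop-the-world pauses.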

      ============= Other Kafka Errors =========================
      [2015-10-01 15:35:19,039] INFO conflict in /brokers/ids/3 data:

      {"jmx_port":-1,"timestamp":"1443709506024","host":"<HOST>","version":1,"port":9092}

      stored data:

      {"jmx_port":-1,"timestamp":"1443702430352","host":"<HOST>","version":1,"port":9092}

      (kafka.utils.ZkUtils$)
      [2015-10-01 15:35:19,042] INFO I wrote this conflicted ephemeral node [

      {"jmx_port":-1,"timestamp":"1443709506024","host":"<HOST>","version":1,"port":9092}

      ] at /brokers/ids/3 a while back in a different session, hence I will backoff for this node to be deleted by Zookeeper and retry (kafka.utils.ZkUtils$)

      [2015-10-01 15:23:12,378] INFO Closing socket connection to /172.28.72.162. (kafka.network.Processor)
      [2015-10-01 15:23:12,378] INFO Closing socket connection to /172.28.72.162. (kafka.network.Processor)

      [2015-10-01 15:21:53,831] ERROR [ReplicaFetcherThread-4-1], Error for partition [workorder-topic,1] to broker 1:class kafka.common.NotLeaderForPartitionException (kafka.server.ReplicaFetcherThread)
      [2015-10-01 15:21:53,834] ERROR [ReplicaFetcherThread-4-1], Error for partition [workorder-topic,1] to broker 1:class kafka.common.NotLeaderForPartitionException (kafka.server.ReplicaFetcherThread)
      [2015-10-01 15:21:53,835] ERROR [ReplicaFetcherThread-4-1], Error for partition [workorder-topic,1] to broker 1:class kafka.common.NotLeaderForPartitionException (kafka.server.ReplicaFetcherThread)
      [2015-10-01 15:21:53,837] ERROR [ReplicaFetcherThread-4-1], Error for partition [workorder-topic,1] to broker 1:class kafka.common.NotLeaderForPartitionException (kafka.server.ReplicaFetcherThread)

      [2015-10-01 15:20:36,210] WARN [ReplicaFetcherThread-0-2], Error in fetch Name: FetchRequest; Version: 0; CorrelationId: 9; ClientId: ReplicaFetcherThread-0-2; ReplicaId: 3; MaxWait: 500 ms; MinBytes: 1 bytes; RequestInfo: [__consumer_offsets,17] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,23] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,29] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,35] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,41] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,5] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,11] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,47] -> PartitionFetchInfo(0,1048576). Possible cause: java.net.SocketTimeoutException (kafka.server.ReplicaFetcherThread)
      [2015-10-01 15:20:36,210] INFO Reconnect due to socket error: java.nio.channels.ClosedChannelException (kafka.consumer.SimpleConsumer)
      [2015-10-01 15:20:38,238] WARN [ReplicaFetcherThread-1-2], Error in fetch Name: FetchRequest; Version: 0; CorrelationId: 9; ClientId: ReplicaFetcherThread-1-2; ReplicaId: 3; MaxWait: 500 ms; MinBytes: 1 bytes; RequestInfo: [tech-topic,1] -> PartitionFetchInfo(6966109,1048576). Possible cause: java.net.SocketTimeoutException (kafka.server.ReplicaFetcherThread)
      =======================================================

      I have replaced the actual hostname with <HOST>.

      Once we reverted KAFKA_HEAP_OPTS to 1G, everything went back to normal.
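
      Notably, the GC excerpt above reports a total heap of roughly 1 GB (1014528K), so it may be worth confirming which heap flags the running broker JVM actually picked up — a hedged sketch using standard JDK tools (the 'Kafka' process-name match is an assumption):

      ```shell
      # Hedged sketch: query the heap flags of the running Kafka JVM.
      # jps and jcmd ship with the JDK; the 'Kafka' pattern is an assumption.
      PID=$(jps | awk '/Kafka/ {print $1}')
      jcmd "$PID" VM.flags | tr ' ' '\n' | grep -E 'InitialHeapSize|MaxHeapSize'
      ```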

      We would appreciate your assistance with this.


          People

            Assignee: Unassigned
            Reporter: Mihir Pandya
            Votes: 0
            Watchers: 6
