KAFKA-1980

Console consumer throws OutOfMemoryError with large max-messages


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Won't Fix
    • Affects Version/s: 0.8.1.1, 0.8.2.0
    • Fix Version/s: None
    • Component/s: tools
    • Labels: None

    Description

      Tested on kafka_2.11-0.8.2.0
      Steps to reproduce:

      • Have any topic with at least 1 GB of data.
      • Run kafka-console-consumer.sh against the topic, passing a large number to --max-messages, e.g.:
        $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large --from-beginning --max-messages 99999999 | head -n 40

      Expected result:
      The consumer should stream messages up to the --max-messages limit.

      Actual result:
      The consumer fails with an out-of-memory error:
      [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 (kafka.network.BoundedByteBufferReceive)
      java.lang.OutOfMemoryError: Java heap space
      at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
      at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
      at kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
      at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
      at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
      at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
      at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
      at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
      at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
      at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
      at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
      at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
      at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
      at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
      at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
      at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
      at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
      at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
      at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
      at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
      at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)

      As a first guess, I'd say this is caused by slice() taking more memory than expected, perhaps because it is called on an Iterable rather than an Iterator.
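      To illustrate that hypothesis, here is a minimal standalone Scala sketch (not the console consumer's actual code; SliceDemo, source and produced are made-up names). It contrasts slice() on a plain Iterable, which eagerly builds the sliced collection, with slice() on an Iterator, which stays lazy:

        // Standalone sketch, not Kafka code: compare slice() on Iterable vs Iterator.
        object SliceDemo {
          def main(args: Array[String]): Unit = {
            val n = 1000000
            var produced = 0

            // An Iterable whose iterator counts how many elements are pulled
            // from the (conceptually unbounded) underlying source.
            def source: Iterable[Int] = new Iterable[Int] {
              def iterator: Iterator[Int] = Iterator.from(0).map { i => produced += 1; i }
            }

            // Iterable.slice is strict: it builds the whole sliced collection up front,
            // so all n elements are realized before the caller reads a single one.
            produced = 0
            val eager = source.slice(0, n)
            println(s"Iterable.slice: realized $produced elements before reading head = ${eager.head}")

            // Iterator.slice is lazy: it only wraps the underlying iterator,
            // so elements are realized one at a time as they are consumed.
            produced = 0
            val lazyIt = source.iterator.slice(0, n)
            val first = lazyIt.next()
            println(s"Iterator.slice: realized $produced element(s) to read first = $first")
          }
        }

      If the console consumer's message stream behaves like the eager case, a --max-messages value of 99999999 would ask it to buffer up to that many messages before printing anything, which would be consistent with the OOM above.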

    Attachments

      1. kafka-1980.patch (0.6 kB, Håkon Hitland)

    People

      Assignee: Unassigned
      Reporter: Håkon Hitland (hakon)
      Votes: 0
      Watchers: 5

    Dates

      Created:
      Updated:
      Resolved: