KAFKA-5150: LZ4 decompression is 4-5x slower than Snappy on small batches / messages


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.8.2.2, 0.9.0.1, 0.10.2.1, 0.11.0.0
    • Fix Version/s: 0.10.2.2, 0.11.0.0
    • Component/s: consumer
    • Labels: None

    Description

      I benchmarked RecordsIterator.DeepRecordsIterator instantiation on small batch sizes with small messages after observing some performance bottlenecks in the consumer.

      For batch sizes of 1 with messages of 100 bytes, LZ4 heavily underperforms compared to Snappy (see benchmark below). Most of our time is currently spent allocating memory blocks in KafkaLZ4BlockInputStream, because we default to the larger 64 KB block size. Some quick testing shows we could improve performance by almost an order of magnitude for small batches and messages if we reused buffers between instantiations of the input stream.
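
      The buffer-reuse idea could look roughly like the sketch below: a thread-local cache hands successive decompression streams the same 64 KB block instead of allocating a fresh one for every KafkaLZ4BlockInputStream instance. This is illustrative only, not the patch that resolved this issue; the class name ReusableBlockBuffer and its methods are hypothetical.

      import java.nio.ByteBuffer;

      // Illustrative sketch (not Kafka's actual fix): reuse one 64 KB block per thread
      // instead of allocating a new block for every decompression stream.
      public final class ReusableBlockBuffer {

          // 64 KB matches the default LZ4 block size mentioned above.
          private static final int BLOCK_SIZE = 64 * 1024;

          private static final ThreadLocal<ByteBuffer> CACHE =
                  ThreadLocal.withInitial(() -> ByteBuffer.allocate(BLOCK_SIZE));

          private ReusableBlockBuffer() { }

          // Returns a cleared, reusable buffer instead of allocating a new one per stream.
          public static ByteBuffer acquire() {
              ByteBuffer buffer = CACHE.get();
              buffer.clear();
              return buffer;
          }
      }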

      Benchmark Results

      Benchmark                                              (compressionType)  (messageSize)   Mode  Cnt       Score       Error  Units
      DeepRecordsIteratorBenchmark.measureSingleMessage                    LZ4            100  thrpt   20   84802.279 ±  1983.847  ops/s
      DeepRecordsIteratorBenchmark.measureSingleMessage                 SNAPPY            100  thrpt   20  407585.747 ±  9877.073  ops/s
      DeepRecordsIteratorBenchmark.measureSingleMessage                   NONE            100  thrpt   20  579141.634 ± 18482.093  ops/s
      
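      For reference, a throughput measurement of this shape can be set up with JMH roughly as follows. This is a minimal standalone sketch, not the DeepRecordsIteratorBenchmark used above: it assumes the lz4-java and snappy-java libraries are on the classpath and exercises them directly rather than going through Kafka's RecordsIterator and KafkaLZ4BlockInputStream.

      import java.io.ByteArrayInputStream;
      import java.io.ByteArrayOutputStream;
      import java.io.IOException;
      import java.util.Random;
      import java.util.concurrent.TimeUnit;

      import net.jpountz.lz4.LZ4BlockInputStream;
      import net.jpountz.lz4.LZ4BlockOutputStream;
      import org.openjdk.jmh.annotations.Benchmark;
      import org.openjdk.jmh.annotations.BenchmarkMode;
      import org.openjdk.jmh.annotations.Mode;
      import org.openjdk.jmh.annotations.OutputTimeUnit;
      import org.openjdk.jmh.annotations.Scope;
      import org.openjdk.jmh.annotations.Setup;
      import org.openjdk.jmh.annotations.State;
      import org.xerial.snappy.Snappy;

      @State(Scope.Thread)
      @BenchmarkMode(Mode.Throughput)
      @OutputTimeUnit(TimeUnit.SECONDS)
      public class SmallMessageDecompressionBenchmark {

          private static final int MESSAGE_SIZE = 100; // bytes, matching the numbers above

          private byte[] lz4Payload;
          private byte[] snappyPayload;

          @Setup
          public void prepare() throws IOException {
              byte[] message = new byte[MESSAGE_SIZE];
              new Random(42).nextBytes(message);

              // LZ4: framed with lz4-java's default 64 KB block size.
              ByteArrayOutputStream lz4Out = new ByteArrayOutputStream();
              try (LZ4BlockOutputStream lz4 = new LZ4BlockOutputStream(lz4Out)) {
                  lz4.write(message);
              }
              lz4Payload = lz4Out.toByteArray();

              snappyPayload = Snappy.compress(message);
          }

          @Benchmark
          public int lz4SingleMessage() throws IOException {
              // Each invocation builds a new input stream, mirroring the per-batch
              // allocation pattern that shows up in the consumer.
              try (LZ4BlockInputStream in =
                      new LZ4BlockInputStream(new ByteArrayInputStream(lz4Payload))) {
                  byte[] out = new byte[MESSAGE_SIZE];
                  return in.read(out);
              }
          }

          @Benchmark
          public byte[] snappySingleMessage() throws IOException {
              return Snappy.uncompress(snappyPayload);
          }
      }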


      People

        Assignee: xvrl Xavier Léauté
        Reporter: xvrl Xavier Léauté
