With this feature, we can enable end-to-end block compression in Kafka. The idea is to enable compression on the producer for some or all topics, store the data on the broker in compressed form, and make the consumers compression-aware; the data is decompressed only on the consumer side. Ideally, the producer should have a choice of compression codecs, which requires a change to the message header as well as the network byte format. On the consumer side, the state-maintenance behavior of the zookeeper consumer changes: for compressed data, the consumed offset is advanced one compressed message at a time, while for uncompressed data it is advanced one message at a time.
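The offset-advancement rule described above can be sketched as follows. This is a hypothetical illustration, not Kafka's actual code: `MessageSet` and `advance_consumed_offset` are invented names, and real Kafka offsets in this era were byte positions rather than simple counts; the point is only that a compressed wrapper is treated as a single consumable unit.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MessageSet:
    # Hypothetical wrapper: one on-the-wire unit that may hold several
    # logical messages if it was compressed by the producer.
    messages: List[str]
    compressed: bool

def advance_consumed_offset(offset: int, message_set: MessageSet) -> int:
    # Compressed data: the consumed offset moves past the whole
    # compressed wrapper at once, i.e. one compressed message at a time.
    if message_set.compressed:
        return offset + 1
    # Uncompressed data: the offset advances one message at a time,
    # so consuming the whole set advances it by the message count.
    return offset + len(message_set.messages)
```

One consequence of this scheme is that after a consumer restart, delivery resumes at a compressed-wrapper boundary, so messages inside a partially consumed wrapper may be redelivered.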