  Kafka / KAFKA-4815 Idempotent/transactional Producer (KIP-98) / KAFKA-5032

Think through implications of max.message.size affecting record batches in message format V2


Details

    Description

      It's worth noting that in message format V2 uncompressed records are also grouped into record batches, so the size limit now applies to the whole batch; the new behaviour for uncompressed messages is therefore the same as the existing behaviour for compressed messages.

      A few things to think about:

      1. Do the producer settings max.request.size and batch.size still make sense, and does the documentation need updating? My conclusion is that the settings are still fine, but we may need to revise the docs.
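
      As a quick reference, here is a minimal sketch of the two producer settings in question (the broker address and the values are illustrative, not recommendations):

      import java.util.Properties
      import org.apache.kafka.clients.producer.ProducerConfig

      val props = new Properties()
      props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
      // batch.size: the target size of a single record batch, which is the unit the
      // broker's size limit applies to under message format V2
      props.put(ProducerConfig.BATCH_SIZE_CONFIG, "16384")
      // max.request.size: the upper bound on a whole produce request (which may contain
      // several record batches) and hence also a cap on any single batch
      props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "1048576")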

      2. (It seems we don't need to do this.) Consider changing the default maximum message set size to include the record batch overhead. It is currently defined as:

      val MessageMaxBytes = 1000000 + MessageSet.LogOverhead
      

      We should consider changing it to (I haven't thought it through though):

      val MessageMaxBytes = 1000000 + DefaultRecordBatch.RECORD_BATCH_OVERHEAD
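
      For context, the numeric difference between the two definitions is small. A quick sketch, assuming the usual constant values (MessageSet.LogOverhead = 12 bytes, DefaultRecordBatch.RECORD_BATCH_OVERHEAD = 61 bytes):

      // current default: 1000000 + 12 bytes of legacy log overhead = 1000012
      val currentDefault = 1000000 + 12
      // alternative: 1000000 + 61 bytes of record batch header overhead = 1000061
      val alternativeDefault = 1000000 + 61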
      

      3. When a record batch is too large, we throw RecordTooLargeException, which is confusing because there's also a RecordBatchTooLargeException. We should consider renaming these exceptions to make the behaviour clearer.
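
      As a rough illustration of how the two exception types are used today (the mapping in the comments reflects my understanding of current behaviour, not a proposal):

      import org.apache.kafka.common.errors.{RecordBatchTooLargeException, RecordTooLargeException}

      // RecordTooLargeException: a record (or, with format V2, a record batch) exceeds
      // max.request.size on the producer or message.max.bytes on the broker.
      // RecordBatchTooLargeException: a record batch exceeds the broker's segment size.
      def describe(t: Throwable): String = t match {
        case _: RecordTooLargeException      => "over the configured message size limit"
        case _: RecordBatchTooLargeException => "over the configured segment size"
        case other                           => other.toString
      }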

      4. We should consider deprecating max.message.bytes (topic config) and message.max.bytes (broker config) in favour of configs that make it clear that we are talking about record batches instead of individual messages.
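
      To make the naming issue concrete, here is a minimal sketch of overriding the topic-level config via the AdminClient (the topic name and value are examples only):

      import java.util.Collections
      import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, Config, ConfigEntry}
      import org.apache.kafka.common.config.ConfigResource

      val adminProps = new java.util.Properties()
      adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
      val admin = AdminClient.create(adminProps)
      // The topic-level override is "max.message.bytes", while the broker-level default
      // is "message.max.bytes"; with format V2 both limit the size of a record batch.
      val resource = new ConfigResource(ConfigResource.Type.TOPIC, "example-topic")
      val config = new Config(Collections.singletonList(new ConfigEntry("max.message.bytes", "1048588")))
      admin.alterConfigs(Collections.singletonMap(resource, config)).all().get()
      admin.close()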

      Part of the work in this JIRA is working out what should be done for 0.11.0.0 and what can be done later.

            People

              Assignee: Apurva Mehta (apurva)
              Reporter: Ismael Juma (ijuma)
              Votes: 0
              Watchers: 4

              Dates

                Created:
                Updated:
                Resolved: