More details about server configuration can be found in the Scala class kafka.server.KafkaConfig.

| Property | Default | Description |
|---|---|---|
| broker.id | | The broker id for this server |
| log.dirs | "/tmp/kafka-logs" | The directories in which the log data is kept |
| zookeeper.connect | null | Zookeeper host string |
| message.max.bytes | 1000000 | The maximum size of a message that the server can receive |
| num.network.threads | 3 | The number of network threads that the server uses for handling network requests |
| num.io.threads | 8 | The number of io threads that the server uses for carrying out network requests |
| queued.max.requests | 500 | The number of queued requests allowed before blocking the network threads |
| port | 6667 | The port to listen and accept connections on |
| host.name | null | Hostname of the broker. If this is set, it will only bind to this address. If it is not set, it will bind to all interfaces and publish one to ZK |
| socket.send.buffer.bytes | 100 * 1024 | The SO_SNDBUF buffer of the socket server sockets |
| socket.receive.buffer.bytes | 100 * 1024 | The SO_RCVBUF buffer of the socket server sockets |
| socket.request.max.bytes | 100 * 1024 * 1024 | The maximum number of bytes in a socket request |
| num.partitions | 1 | The default number of log partitions per topic |
| log.segment.bytes | 1024 * 1024 * 1024 | The maximum size of a single log file |
| log.segment.bytes.per.topic | "" | The maximum size of a single log file for specific topics |
| log.roll.hours | 24 * 7 | The maximum time before a new log segment is rolled out |
| log.roll.hours.per.topic | "" | The number of hours before rolling out a new log segment for specific topics |
| log.retention.hours | 24 * 7 | The number of hours to keep a log file before deleting it |
| log.retention.hours.per.topic | "" | The number of hours to keep a log file before deleting it for specific topics |
| log.retention.bytes | -1 | The maximum size of the log before deleting it |
| log.retention.bytes.per.topic | "" | The maximum size of the log for specific topics before deleting it |
| log.cleanup.interval.mins | 10 | The frequency in minutes that the log cleaner checks whether any log is eligible for deletion |
| log.index.size.max.bytes | 10 * 1024 * 1024 | The maximum size in bytes of the offset index |
| log.index.interval.bytes | 4096 | The interval with which we add an entry to the offset index |
| log.flush.interval.messages | 10000 | The number of messages accumulated on a log partition before messages are flushed to disk |
| log.flush.interval.ms.per.topic | "" | The maximum time in ms that a message in selected topics is kept in memory before being flushed to disk, e.g., topic1:3000,topic2:6000 |
| log.flush.scheduler.interval.ms | 3000 | The frequency in ms that the log flusher checks whether any log needs to be flushed to disk |
| log.flush.interval.ms | ${log.flush.scheduler.interval.ms} | The maximum time in ms that a message in any topic is kept in memory before being flushed to disk |
| auto.create.topics.enable | true | Enable auto creation of topics on the server |
| controller.socket.timeout.ms | 30000 | The socket timeout for controller-to-broker channels |
| controller.message.queue.size | 10 | The buffer size for controller-to-broker channels |
| default.replication.factor | 1 | The default replication factor for automatically created topics |
| replica.lag.time.max.ms | 10000 | If a follower hasn't sent any fetch requests during this time, the leader will remove the follower from the ISR |
| replica.lag.max.messages | 4000 | If the lag in messages between a leader and a follower exceeds this number, the leader will remove the follower from the ISR |
| replica.socket.timeout.ms | 30 * 1000 | The socket timeout for network requests |
| replica.socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests |
| replica.fetch.max.bytes | 1024 * 1024 | The number of bytes of messages to attempt to fetch |
| replica.fetch.wait.max.ms | 500 | Max wait time for each fetcher request issued by follower replicas |
| replica.fetch.min.bytes | 1 | Minimum bytes expected for each fetch response. If not enough bytes are available, wait up to replica.fetch.wait.max.ms |
| num.replica.fetchers | 1 | Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O parallelism in the follower broker |
| replica.high.watermark.checkpoint.interval.ms | 5000 | The frequency with which the high watermark is saved out to disk |
| fetch.purgatory.purge.interval.requests | 10000 | The purge interval (in number of requests) of the fetch request purgatory |
| producer.purgatory.purge.interval.requests | 10000 | The purge interval (in number of requests) of the producer request purgatory |
| zookeeper.session.timeout.ms | 6000 | Zookeeper session timeout |
| zookeeper.connection.timeout.ms | ${zookeeper.session.timeout.ms} | The max time that the client waits to establish a connection to Zookeeper |
| zookeeper.sync.time.ms | 2000 | How far a ZK follower can be behind a ZK leader |
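As a quick illustration of how these keys fit together, here is a minimal Scala sketch that loads a few of them into a java.util.Properties and hands the result to the broker, assuming the 0.8-era kafka.server.KafkaConfig and KafkaServerStartable classes that accept Properties; the ZooKeeper address and log directory are placeholders, and most deployments simply put the same keys in server.properties.

```scala
import java.util.Properties
import kafka.server.{KafkaConfig, KafkaServerStartable}

object BrokerStartupSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // broker.id has no default and must be unique per broker.
    props.put("broker.id", "0")
    // Zookeeper host string, as documented above (placeholder address).
    props.put("zookeeper.connect", "localhost:2181")
    // Override a few of the defaults listed in the table (placeholder values).
    props.put("log.dirs", "/var/kafka-logs")
    props.put("num.partitions", "2")
    props.put("log.retention.hours", "72")

    // KafkaConfig parses and validates the properties; KafkaServerStartable
    // wraps the broker's startup/shutdown lifecycle.
    val config = new KafkaConfig(props)
    val broker = new KafkaServerStartable(config)
    broker.startup()
    broker.awaitShutdown()
  }
}
```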
More details about consumer configuration can be found in the Scala class kafka.consumer.ConsumerConfig.

| Property | Default | Description |
|---|---|---|
| group.id | | A string that uniquely identifies a set of consumers within the same consumer group |
| zookeeper.connect | null | Zookeeper host string |
| consumer.id | null | Generated automatically if not set |
| socket.timeout.ms | 30 * 1000 | The socket timeout for network requests. The actual timeout set will be max.fetch.wait + socket.timeout.ms |
| socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests |
| fetch.message.max.bytes | 1024 * 1024 | The number of bytes of messages to attempt to fetch |
| auto.commit.enable | true | If true, periodically commit to Zookeeper the offset of messages already fetched by the consumer |
| auto.commit.interval.ms | 60 * 1000 | The frequency in ms that the consumer offsets are committed to Zookeeper |
| queued.max.messages | 10 | Max number of messages buffered for consumption |
| rebalance.max.retries | 4 | Max number of retries during rebalance |
| fetch.min.bytes | 1 | The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request will block |
| fetch.wait.max.ms | 100 | The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes |
| rebalance.backoff.ms | ${zookeeper.sync.time.ms} | Backoff time between retries during rebalance |
| refresh.leader.backoff.ms | 200 | Backoff time to refresh the leader of a partition after it loses the current leader |
| auto.offset.reset | "smallest" | What to do if an offset is out of range: smallest: automatically reset the offset to the smallest offset; largest: automatically reset the offset to the largest offset; anything else: throw an exception to the consumer |
| consumer.timeout.ms | -1 | Throw a timeout exception to the consumer if no message is available for consumption after the specified interval |
| client.id | ${group.id} | The client id is specified by the Kafka consumer client and is used to distinguish different clients |
| zookeeper.session.timeout.ms | 6000 | Zookeeper session timeout |
| zookeeper.connection.timeout.ms | ${zookeeper.session.timeout.ms} | The max time that the client waits to establish a connection to Zookeeper |
| zookeeper.sync.time.ms | 2000 | How far a ZK follower can be behind a ZK leader |
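For context, here is a minimal sketch of these properties used through the 0.8 high-level consumer, assuming the kafka.consumer.ConsumerConfig and Consumer.create APIs referenced above; the group name, ZooKeeper address, and topic name are placeholders.

```scala
import java.util.Properties
import kafka.consumer.{Consumer, ConsumerConfig}

object ConsumerConfigSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // group.id has no default; it names the consumer group (placeholder name).
    props.put("group.id", "example-group")
    props.put("zookeeper.connect", "localhost:2181")
    // Commit offsets to Zookeeper every 10 seconds instead of the 60-second default.
    props.put("auto.commit.enable", "true")
    props.put("auto.commit.interval.ms", "10000")
    // Reset to the smallest offset when the current offset is out of range.
    props.put("auto.offset.reset", "smallest")

    val connector = Consumer.create(new ConsumerConfig(props))
    // One stream for the placeholder topic "example-topic".
    val streams = connector.createMessageStreams(Map("example-topic" -> 1))
    // Iteration blocks waiting for messages unless consumer.timeout.ms is set.
    for (msg <- streams("example-topic").head) {
      println(new String(msg.message)) // payload bytes decoded as a String
    }
    connector.shutdown()
  }
}
```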
More details about producer configuration can be found in the Scala class kafka.producer.ProducerConfig.

| Property | Default | Description |
|---|---|---|
| metadata.broker.list | | This is for bootstrapping, and the producer will only use it for getting metadata (topics, partitions and replicas). The socket connections for sending the actual data will be established based on the broker information returned in the metadata. The format is host1:port1,host2:port2, and the list can be a subset of brokers or a VIP pointing to a subset of brokers |
| partitioner.class | "kafka.producer.DefaultPartitioner" | The partitioner class for partitioning messages amongst partitions |
| producer.type | "sync" | Specifies whether messages are sent asynchronously or synchronously. Valid values are async for asynchronous send and sync for synchronous send |
| compression.codec | "none" | Specifies the compression codec for all data generated by this producer. The default is NoCompressionCodec |
| compressed.topics | null | Sets whether compression should be turned on for particular topics. If the compression codec is anything other than NoCompressionCodec, enable compression only for the specified topics, if any. If the list of compressed topics is empty, enable the specified compression codec for all topics. If the compression codec is NoCompressionCodec, compression is disabled for all topics |
| message.send.max.retries | 3 | The leader may be transiently unavailable, which can cause a send to fail. This property specifies the number of retries when such failures occur |
| retry.backoff.ms | 100 | Before each retry, the producer refreshes the metadata of relevant topics. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata |
| topic.metadata.refresh.interval.ms | 600 * 1000 | The producer generally refreshes the topic metadata from brokers when there is a failure (partition missing, leader not available, ...). It will also poll regularly (default: every 10 minutes, i.e. 600000 ms). If you set this to a negative value, metadata will only be refreshed on failure. If you set this to zero, the metadata will be refreshed after each message sent (not recommended). Important note: the refresh happens only AFTER the message is sent, so if the producer never sends a message the metadata is never refreshed |
| queue.buffering.max.ms | 5000 | Maximum time, in milliseconds, for buffering data on the producer queue |
| queue.buffering.max.messages | 10000 | The maximum size of the blocking queue for buffering on the producer |
| queue.enqueue.timeout.ms | -1 | Timeout for event enqueue: 0 means events are enqueued immediately or dropped if the queue is full; a negative value means the enqueue will block indefinitely if the queue is full; a positive value means the enqueue will block up to this many milliseconds if the queue is full |
| batch.num.messages | 200 | The number of messages batched at the producer |
| serializer.class | "kafka.serializer.DefaultEncoder" | The serializer class for values |
| key.serializer.class | ${serializer.class} | The serializer class for keys (defaults to the same as for values) |
| send.buffer.bytes | 100 * 1024 | Socket write buffer size |
| client.id | "" | A user-specified string identifying the client application sending the producer requests |
| request.required.acks | 0 | The required acks of the producer requests. A negative value means the request is acknowledged only after the replicas in the ISR have caught up to the leader's offset corresponding to this produce request |
| request.timeout.ms | 1500 | The ack timeout of the producer requests. The value must be non-negative and non-zero |
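By way of example, here is a minimal sketch of a producer built from a few of the properties above, assuming the 0.8 Scala producer API (kafka.producer.Producer, KeyedMessage, ProducerConfig); the broker list and topic name are placeholders.

```scala
import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

object ProducerConfigSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // metadata.broker.list has no default; placeholder host:port pairs used only for bootstrapping metadata.
    props.put("metadata.broker.list", "broker1:6667,broker2:6667")
    // Encode keys and values as Strings instead of the default raw-byte encoder.
    props.put("serializer.class", "kafka.serializer.StringEncoder")
    // Wait for an acknowledgement from the partition leader (the default of 0 does not wait).
    props.put("request.required.acks", "1")
    // Buffer and batch sends on a background thread rather than the default synchronous mode.
    props.put("producer.type", "async")
    props.put("batch.num.messages", "200")

    val producer = new Producer[String, String](new ProducerConfig(props))
    // Topic, key, and value; the key is what the DefaultPartitioner hashes to pick a partition.
    producer.send(new KeyedMessage[String, String]("example-topic", "key-1", "hello"))
    producer.close()
  }
}
```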