The essential broker configurations are the following:
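At a minimum, a broker needs broker.id, log.dirs, and zookeeper.connect set. A minimal server.properties sketch (values are illustrative):

```
# Minimal broker configuration sketch; values are illustrative
broker.id=0
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
```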

Topic-level configurations and their defaults are discussed in more detail below.

More details about broker configuration can be found in the Scala class kafka.server.KafkaConfig.

Topic-level configuration

Configurations pertinent to topics have both a global default as well as an optional per-topic override. If no per-topic configuration is given, the global default is used. The override can be set at topic creation time by giving one or more --config options. This example creates a topic named my-topic with a custom max message size and flush rate:
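For example, a sketch using the kafka-topics.sh tool (partition and replication counts are illustrative):

```
> bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic --partitions 1 \
    --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1
```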

Overrides can also be changed or set later using the alter topic command. This example updates the max message size for my-topic:
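For example (values are illustrative):

```
> bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic \
    --config max.message.bytes=128000
```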

The following are the topic-level configurations. The server’s default configuration for each property is given under the Server Default Property heading; setting that default in the server config changes the default applied to topics that have no override specified.

| Property | Default | Server Default Property | Description |
| --- | --- | --- | --- |
| cleanup.policy | delete | log.cleanup.policy | A string that is either “delete” or “compact”. This string designates the retention policy to use on old log segments. The default policy (“delete”) will discard old segments when their retention time or size limit has been reached. The “compact” setting will enable log compaction on the topic. |
| delete.retention.ms | 86400000 (24 hours) | log.cleaner.delete.retention.ms | The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if it begins from offset 0, to ensure that it gets a valid snapshot of the final state (otherwise delete tombstones may be collected before it completes its scan). |
| flush.messages | None | log.flush.interval.messages | This setting allows specifying an interval at which we will force an fsync of data written to the log. For example, if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you not set this and instead use replication for durability and allow the operating system’s background flush capabilities, as it is more efficient. This setting can be overridden on a per-topic basis. |
| flush.ms | None | log.flush.interval.ms | This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example, if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend you not set this and instead use replication for durability and allow the operating system’s background flush capabilities, as it is more efficient. |
| index.interval.bytes | 4096 | log.index.interval.bytes | This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don’t need to change this. |
| max.message.bytes | 1,000,000 | message.max.bytes | This is the largest message size Kafka will allow to be appended to this topic. Note that if you increase this size you must also increase your consumer’s fetch size so they can fetch messages this large. |
| min.cleanable.dirty.ratio | 0.5 | log.cleaner.min.cleanable.ratio | This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50%, at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but more wasted space in the log. |
| min.insync.replicas | 1 | min.insync.replicas | When a producer sets acks to “all”, min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of “all”. This ensures that the producer raises an exception if a majority of replicas do not receive a write. (A worked example follows this table.) |
| retention.bytes | None | log.retention.bytes | This configuration controls the maximum size a log can grow to before we will discard old log segments to free up space, if we are using the “delete” retention policy. By default there is no size limit, only a time limit. |
| retention.ms | 7 days | log.retention.minutes | This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space, if we are using the “delete” retention policy. This represents an SLA on how soon consumers must read their data. |
| segment.bytes | 1 GB | log.segment.bytes | This configuration controls the segment file size for the log. Retention and cleaning are always done a file at a time, so a larger segment size means fewer files but less granular control over retention. |
| segment.index.bytes | 10 MB | log.index.size.max.bytes | This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting. |
| segment.ms | 7 days | log.roll.hours | This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn’t full, to ensure that retention can delete or compact old data. |
| segment.jitter.ms | 0 | log.roll.jitter.{ms,hours} | The maximum jitter to subtract from logRollTimeMillis. |
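As a sketch of the durability scenario described under min.insync.replicas (the topic name durable-topic is hypothetical, values are illustrative):

```
> bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic durable-topic \
    --partitions 3 --replication-factor 3 --config min.insync.replicas=2
```

The producer side of this setup would then set acks=all, so that a write is acknowledged only once at least two of the three replicas have it.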

Below is the configuration of the Java producer:
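As a minimal sketch of how these settings are supplied to the Java producer (assuming the standard org.apache.kafka.clients.producer API; topic name and values are illustrative):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerConfigSketch {
    public static void main(String[] args) {
        // Core producer configuration; values are illustrative
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all"); // pairs with min.insync.replicas for stronger durability
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}
```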

For those interested in the legacy Scala producer configs, information can be found in the Scala class kafka.producer.ProducerConfig.

We cover both the old 0.8 consumer configs and the new consumer configs below.

3.3.1 Old Consumer Configs

The essential old consumer configurations are the following:

| Property | Default | Description |
| --- | --- | --- |
| group.id | | A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id, multiple processes indicate that they are all part of the same consumer group. |
| zookeeper.connect | | Specifies the ZooKeeper connection string in the form hostname:port, where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down, you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3. The server may also have a ZooKeeper chroot path as part of its ZooKeeper connection string, which puts its data under some path in the global ZooKeeper namespace. If so, the consumer should use the same chroot path in its connection string. For example, to give a chroot path of /chroot/path you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path. |
| consumer.id | null | Generated automatically if not set. |
| socket.timeout.ms | 30 * 1000 | The socket timeout for network requests. The actual timeout set will be max.fetch.wait + socket.timeout.ms. |
| socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests. |
| fetch.message.max.bytes | 1024 * 1024 | The number of bytes of messages to attempt to fetch for each topic-partition in each fetch request. These bytes will be read into memory for each partition, so this helps control the memory used by the consumer. The fetch request size must be at least as large as the maximum message size the server allows, or else it is possible for the producer to send messages larger than the consumer can fetch. |
| num.consumer.fetchers | 1 | The number of fetcher threads used to fetch data. |
| auto.commit.enable | true | If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. This committed offset will be used when the process fails as the position from which the new consumer will begin. |
| auto.commit.interval.ms | 60 * 1000 | The frequency in ms with which the consumer offsets are committed to ZooKeeper. |
| queued.max.message.chunks | 2 | Max number of message chunks buffered for consumption. Each chunk can be up to fetch.message.max.bytes. |
| rebalance.max.retries | 4 | When a new consumer joins a consumer group, the set of consumers attempt to “rebalance” the load to assign partitions to each consumer. If the set of consumers changes while this assignment is taking place, the rebalance will fail and retry. This setting controls the maximum number of attempts before giving up. |
| fetch.min.bytes | 1 | The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request will wait for that much data to accumulate before answering the request. |
| fetch.wait.max.ms | 100 | The maximum amount of time the server will block before answering the fetch request if there isn’t sufficient data to immediately satisfy fetch.min.bytes. |
| rebalance.backoff.ms | 2000 | Backoff time between retries during rebalance. If not set explicitly, the value in zookeeper.sync.time.ms is used. |
| refresh.leader.backoff.ms | 200 | Backoff time to wait before trying to determine the leader of a partition that has just lost its leader. |
| auto.offset.reset | largest | What to do when there is no initial offset in ZooKeeper or if an offset is out of range: smallest: automatically reset the offset to the smallest offset; largest: automatically reset the offset to the largest offset; anything else: throw an exception to the consumer. |
| consumer.timeout.ms | -1 | Throw a timeout exception to the consumer if no message is available for consumption after the specified interval. |
| exclude.internal.topics | true | Whether messages from internal topics (such as offsets) should be exposed to the consumer. |
| client.id | group id value | The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request. |
| zookeeper.session.timeout.ms | 6000 | ZooKeeper session timeout. If the consumer fails to heartbeat to ZooKeeper for this period of time, it is considered dead and a rebalance will occur. |
| zookeeper.connection.timeout.ms | 6000 | The max time that the client waits while establishing a connection to ZooKeeper. |
| zookeeper.sync.time.ms | 2000 | How far a ZK follower can be behind a ZK leader. |
| offsets.storage | zookeeper | Select where offsets should be stored (zookeeper or kafka). |
| offsets.channel.backoff.ms | 1000 | The backoff period when reconnecting the offsets channel or retrying failed offset fetch/commit requests. |
| offsets.channel.socket.timeout.ms | 10000 | Socket timeout when reading responses for offset fetch/commit requests. This timeout is also used for ConsumerMetadata requests that are used to query for the offset manager. |
| offsets.commit.max.retries | 5 | Retry the offset commit up to this many times on failure. This retry count only applies to offset commits during shut-down. It does not apply to commits originating from the auto-commit thread. It also does not apply to attempts to query for the offset coordinator before committing offsets; i.e., if a consumer metadata request fails for any reason, it will be retried, and that retry does not count toward this limit. |
| dual.commit.enabled | true | If you are using “kafka” as offsets.storage, you can dual commit offsets to ZooKeeper (in addition to Kafka). This is required during migration from ZooKeeper-based offset storage to Kafka-based offset storage. With respect to any given consumer group, it is safe to turn this off after all instances within that group have been migrated to the new version that commits offsets to the broker (instead of directly to ZooKeeper). |
| partition.assignment.strategy | range | Select between the “range” or “roundrobin” strategy for assigning partitions to consumer streams. The round-robin partition assignor lays out all the available partitions and all the available consumer threads. It then proceeds to do a round-robin assignment from partition to consumer thread. If the subscriptions of all consumer instances are identical, then the partitions will be uniformly distributed (i.e., the partition ownership counts will be within a delta of exactly one across all consumer threads). Round-robin assignment is permitted only if: (a) every topic has the same number of streams within a consumer instance, and (b) the set of subscribed topics is identical for every consumer instance within the group. Range partitioning works on a per-topic basis. For each topic, we lay out the available partitions in numeric order and the consumer threads in lexicographic order. We then divide the number of partitions by the total number of consumer streams (threads) to determine the number of partitions to assign to each consumer. If it does not evenly divide, then the first few consumers will have one extra partition. |
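As a minimal sketch of passing these properties to the old high-level consumer (assuming the kafka.consumer / kafka.javaapi.consumer API; values are illustrative):

```java
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class OldConsumerConfigSketch {
    public static void main(String[] args) {
        // Essential old-consumer properties; values are illustrative
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "my-group");
        props.put("auto.offset.reset", "smallest");
        ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create message streams and consume, then:
        consumer.shutdown();
    }
}
```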

More details about consumer configuration can be found in the Scala class kafka.consumer.ConsumerConfig.

3.3.2 New Consumer Configs

Since 0.9.0.0 we have been working on a replacement for our existing simple and high-level consumers. The code is considered beta quality. Below is the configuration for the new consumer:
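As a minimal sketch, assuming the standard org.apache.kafka.clients.consumer API (topic name and values are illustrative):

```java
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerConfigSketch {
    public static void main(String[] args) {
        // Core new-consumer properties; values are illustrative
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // brokers, not ZooKeeper
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("my-topic"));
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        }
    }
}
```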

Below is the configuration of the Kafka Connect framework.

| Name | Description | Type | Default | Valid Values | Importance |
| --- | --- | --- | --- | --- | --- |
| config.storage.topic | kafka topic to store configs | string | | | high |
| group.id | A unique string that identifies the Connect cluster group this worker belongs to. | string | | | high |
| internal.key.converter | Converter class for internal key Connect data that implements the Converter interface. Used for converting data like offsets and configs. | class | | | high |
| internal.value.converter | Converter class for internal value Connect data that implements the Converter interface. Used for converting data like offsets and configs. | class | | | high |
| key.converter | Converter class for key Connect data that implements the Converter interface. | class | | | high |
| offset.storage.topic | kafka topic to store connector offsets in | string | | | high |
| status.storage.topic | kafka topic to track connector and task status | string | | | high |
| value.converter | Converter class for value Connect data that implements the Converter interface. | class | | | high |
| bootstrap.servers | A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). | list | [localhost:9092] | | high |
| cluster | ID for this cluster, which is used to provide a namespace so multiple Kafka Connect clusters or instances may co-exist while sharing a single Kafka cluster. | string | connect | | high |
| heartbeat.interval.ms | The expected time between heartbeats to the group coordinator when using Kafka’s group management facilities. Heartbeats are used to ensure that the worker’s session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. | int | 3000 | | high |
| session.timeout.ms | The timeout used to detect failures when using Kafka’s group management facilities. | int | 30000 | | high |
| ssl.key.password | The password of the private key in the key store file. This is optional for client. | password | null | | high |
| ssl.keystore.location | The location of the key store file. This is optional for client and can be used for two-way authentication for client. | string | null | | high |
| ssl.keystore.password | The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. | password | null | | high |
| ssl.truststore.location | The location of the trust store file. | string | null | | high |
| ssl.truststore.password | The password for the trust store file. | password | null | | high |
| connections.max.idle.ms | Close idle connections after the number of milliseconds specified by this config. | long | 540000 | | medium |
| receive.buffer.bytes | The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. | int | 32768 | [0,…] | medium |
| request.timeout.ms | The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary or fail the request if retries are exhausted. | int | 40000 | [0,…] | medium |
| sasl.kerberos.service.name | The Kerberos principal name that Kafka runs as. This can be defined either in Kafka’s JAAS config or in Kafka’s config. | string | null | | medium |
| sasl.mechanism | SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. | string | GSSAPI | | medium |
| security.protocol | Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. | string | PLAINTEXT | | medium |
| send.buffer.bytes | The size of the TCP send buffer (SO_SNDBUF) to use when sending data. | int | 131072 | [0,…] | medium |
| ssl.enabled.protocols | The list of protocols enabled for SSL connections. | list | [TLSv1.2, TLSv1.1, TLSv1] | | medium |
| ssl.keystore.type | The file format of the key store file. This is optional for client. | string | JKS | | medium |
| ssl.protocol | The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. | string | TLS | | medium |
| ssl.provider | The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. | string | null | | medium |
| ssl.truststore.type | The file format of the trust store file. | string | JKS | | medium |
| worker.sync.timeout.ms | When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining. | int | 3000 | | medium |
| worker.unsync.backoff.ms | When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining. | int | 300000 | | medium |
| access.control.allow.methods | Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD. | string | “” | | low |
| access.control.allow.origin | Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or ‘*’ to allow access from any domain. The default value only allows access from the domain of the REST API. | string | “” | | low |
| client.id | An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. | string | “” | | low |
| metadata.max.age.ms | The period of time in milliseconds after which we force a refresh of metadata even if we haven’t seen any partition leadership changes to proactively discover any new brokers or partitions. | long | 300000 | [0,…] | low |
| metric.reporters | A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. | list | [] | | low |
| metrics.num.samples | The number of samples maintained to compute metrics. | int | 2 | [1,…] | low |
| metrics.sample.window.ms | The window of time a metrics sample is computed over. | long | 30000 | [0,…] | low |
| offset.flush.interval.ms | Interval at which to try committing offsets for tasks. | long | 60000 | | low |
| offset.flush.timeout.ms | Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. | long | 5000 | | low |
| reconnect.backoff.ms | The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker. | long | 50 | [0,…] | low |
| rest.advertised.host.name | If this is set, this is the hostname that will be given out to other workers to connect to. | string | null | | low |
| rest.advertised.port | If this is set, this is the port that will be given out to other workers to connect to. | int | null | | low |
| rest.host.name | Hostname for the REST API. If this is set, it will only bind to this interface. | string | null | | low |
| rest.port | Port for the REST API to listen on. | int | 8083 | | low |
| retry.backoff.ms | The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. | long | 100 | [0,…] | low |
| sasl.kerberos.kinit.cmd | Kerberos kinit command path. | string | /usr/bin/kinit | | low |
| sasl.kerberos.min.time.before.relogin | Login thread sleep time between refresh attempts. | long | 60000 | | low |
| sasl.kerberos.ticket.renew.jitter | Percentage of random jitter added to the renewal time. | double | 0.05 | | low |
| sasl.kerberos.ticket.renew.window.factor | Login thread will sleep until the specified window factor of time from last refresh to ticket’s expiry has been reached, at which time it will try to renew the ticket. | double | 0.8 | | low |
| ssl.cipher.suites | A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. | list | null | | low |
| ssl.endpoint.identification.algorithm | The endpoint identification algorithm to validate server hostname using server certificate. | string | null | | low |
| ssl.keymanager.algorithm | The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. | string | SunX509 | | low |
| ssl.trustmanager.algorithm | The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. | string | PKIX | | low |
| task.shutdown.graceful.timeout.ms | Amount of time to wait for tasks to shutdown gracefully. This is the total amount of time, not per task. All tasks have shutdown triggered; then they are waited on sequentially. | long | 5000 | | low |
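Tying the high-importance settings together, a minimal sketch of a distributed worker configuration file (topic names and converter choices are illustrative):

```
# Minimal distributed-worker configuration sketch; values are illustrative
bootstrap.servers=localhost:9092
group.id=connect-cluster
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
```

Such a file would then be passed to bin/connect-distributed.sh to start a worker.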