BookKeeper is a replicated log storage system that Pulsar uses to persistently store all messages.

Broker

The Pulsar broker handles messages from producers, dispatches messages to consumers, replicates data between clusters, and more.

| Name | Description | Default |
|---|---|---|
| enablePersistentTopics | Whether persistent topics are enabled on the broker | true |
| enableNonPersistentTopics | Whether non-persistent topics are enabled on the broker | true |
| functionsWorkerEnabled | Whether the Pulsar Functions worker service is enabled in the broker | false |
| zookeeperServers | ZooKeeper quorum connection string | |
| zooKeeperCacheExpirySeconds | ZooKeeper cache expiry time in seconds | 300 |
| configurationStoreServers | Configuration store connection string (as a comma-separated list) | |
| brokerServicePort | Broker data port | 6650 |
| brokerServicePortTls | Broker data port for TLS | 6651 |
| webServicePort | Port to use to serve HTTP requests | 8080 |
| webServicePortTls | Port to use to serve HTTPS requests | 8443 |
| webSocketServiceEnabled | Enable the WebSocket API service in the broker | false |
| bindAddress | Hostname or IP address the service binds on | 0.0.0.0 |
| advertisedAddress | Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. | |
| clusterName | Name of the cluster to which this broker belongs | |
| brokerDeduplicationEnabled | Sets the default behavior for message deduplication in the broker. If enabled, the broker rejects messages that were already stored in the topic. This setting can be overridden on a per-namespace basis. | false |
| brokerDeduplicationMaxNumberOfProducers | The maximum number of producers for which information is stored for deduplication purposes. | 10000 |
| brokerDeduplicationEntriesInterval | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this also lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | 1000 |
| brokerDeduplicationProducerInactivityTimeoutMinutes | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | 360 |
| dispatchThrottlingRatePerReplicatorInMsg | The default messages-per-second dispatch throttling limit for every replicator in replication. A value of `0` disables replication message dispatch throttling. | 0 |
| dispatchThrottlingRatePerReplicatorInByte | The default bytes-per-second dispatch throttling limit for every replicator in replication. A value of `0` disables replication message-byte dispatch throttling. | 0 |
| zooKeeperSessionTimeoutMillis | ZooKeeper session timeout in milliseconds | |
| brokerShutdownTimeoutMs | Time to wait for broker graceful shutdown. After this time elapses, the process is killed. | 60000 |
| skipBrokerShutdownOnOOM | Flag to skip broker shutdown when the broker handles an out-of-memory error. | false |
| backlogQuotaCheckEnabled | Enable the backlog quota check, which enforces the configured action on topics that exceed the quota | true |
| backlogQuotaCheckIntervalInSeconds | How often to check for topics that have reached the quota | 60 |
| backlogQuotaDefaultLimitGB | The default per-topic backlog quota limit | -1 |
| allowAutoTopicCreation | Enable automatic topic creation when a new producer or consumer connects | true |
| allowAutoTopicCreationType | The topic type (partitioned or non-partitioned) that is allowed to be automatically created. | Partitioned |
| allowAutoSubscriptionCreation | Enable automatic subscription creation when a new consumer connects | true |
| defaultNumPartitions | The default number of partitions for a topic that is automatically created, if `allowAutoTopicCreationType` is partitioned | 1 |
| brokerDeleteInactiveTopicsEnabled | Enable the deletion of inactive topics | true |
| brokerDeleteInactiveTopicsFrequencySeconds | How often to check for inactive topics | 60 |
| brokerDeleteInactiveTopicsMode | Set the mode for deleting inactive topics. `delete_when_no_subscriptions`: delete topics that have no subscriptions and no active producers. `delete_when_subscriptions_caught_up`: delete topics whose subscriptions have no backlogs and which have no active producers or consumers. | `delete_when_no_subscriptions` |
| brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum inactive duration for topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A |
| messageExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired messages | 5 |
| brokerServiceCompactionMonitorIntervalInSeconds | Interval between checks to see if topics with compaction policies need to be compacted | 60 |
| activeConsumerFailoverDelayTimeMillis | How long to delay rewinding the cursor and dispatching messages when the active consumer changes. | 1000 |
| clientLibraryVersionCheckEnabled | Enable the check for the minimum allowed client library version | false |
| clientLibraryVersionCheckAllowUnversioned | Allow client libraries with no version information | true |
| statusFilePath | Path for the file used to determine the rotation status for the broker when responding to service discovery health checks | |
| preferLaterVersions | If true (and ModularLoadManagerImpl is being used), the load manager attempts to use only brokers running the latest software version (to minimize the impact on bundles) | false |
| maxNumPartitionsPerPartitionedTopic | Max number of partitions per partitioned topic. Use 0 or a negative number to disable the check | 0 |
| tlsEnabled | Enable TLS | false |
| tlsCertificateFilePath | Path for the TLS certificate file | |
| tlsKeyFilePath | Path for the TLS private key file | |
| tlsTrustCertsFilePath | Path for the trusted TLS certificate file | |
| tlsAllowInsecureConnection | Accept untrusted TLS certificates from clients | false |
| tlsProtocols | Specify the TLS protocols the broker uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLSv1.2`, `TLSv1.1`, `TLSv1` | |
| tlsCiphers | Specify the TLS ciphers the broker uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256` | |
| tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in the broker | false |
| tlsProvider | TLS provider for the KeyStore type | |
| tlsKeyStoreType | TLS KeyStore type configuration in the broker: JKS, PKCS12 | JKS |
| tlsKeyStore | TLS KeyStore path in the broker | |
| tlsKeyStorePassword | TLS KeyStore password for the broker | |
| brokerClientTlsEnabledWithKeyStore | Whether the internal client uses the KeyStore type to authenticate with Pulsar brokers | false |
| brokerClientSslProvider | The TLS provider used by the internal client to authenticate with other Pulsar brokers | |
| brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client: JKS, PKCS12; used by the internal client to authenticate with Pulsar brokers | JKS |
| brokerClientTlsTrustStore | TLS TrustStore path for the internal client, used by the internal client to authenticate with Pulsar brokers | |
| brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client, used by the internal client to authenticate with Pulsar brokers | |
| brokerClientTlsCiphers | Specify the TLS ciphers the internal client uses to negotiate during the TLS handshake (a comma-separated list of ciphers), e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256] | |
| brokerClientTlsProtocols | Specify the TLS protocols the broker uses to negotiate during the TLS handshake (a comma-separated list of protocol names), e.g. [TLSv1.2, TLSv1.1, TLSv1] | |
| ttlDurationDefaultInSeconds | The default TTL for namespaces if a TTL is not configured in the namespace policies. | 0 |
| tokenSecretKey | Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key` | |
| tokenPublicKey | Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key` | |
| tokenPublicAlg | Configure the algorithm to be used to validate auth tokens. This can be any of the asymmetric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) | RS256 |
| tokenAuthClaim | Specify which of the token's claims is used as the authentication "principal" or "role". The default "sub" claim is used if this is left blank | |
| tokenAudienceClaim | The token audience "claim" name, e.g. "aud", that is used to get the audience from the token. If not set, the audience is not verified. | |
| tokenAudience | The token audience that stands for this broker. The `tokenAudienceClaim` field of a valid token must contain this value. | |
| maxUnackedMessagesPerConsumer | Max number of unacknowledged messages allowed for a consumer on a shared subscription. The broker stops sending messages to the consumer once this limit is reached, until the consumer starts acknowledging messages back. A value of 0 disables the unacked-message limit check, and the consumer can receive messages without any restriction. | 50000 |
| maxUnackedMessagesPerSubscription | Max number of unacknowledged messages allowed per shared subscription. The broker stops dispatching messages to all consumers of the subscription once this limit is reached, until consumers start acknowledging messages back and the unacked count drops to limit/2. A value of 0 disables the unacked-message limit check, and the dispatcher can dispatch messages without any restriction. | 200000 |
| subscriptionRedeliveryTrackerEnabled | Enable the subscription message redelivery tracker | true |
| subscriptionExpirationTimeMinutes | How long (in minutes) after the last consumption before an inactive subscription is deleted. A value greater than 0 deletes inactive subscriptions automatically; a value of 0 does not delete inactive subscriptions automatically. Since this configuration takes effect on all topics, set it to 0 if there is even one topic whose subscriptions should not be deleted automatically; instead, you can set a subscription expiration time for each namespace. | 0 |
| maxConcurrentLookupRequest | Max number of concurrent lookup requests the broker allows, to throttle heavy incoming lookup traffic | 50000 |
| maxConcurrentTopicLoadRequest | Max number of concurrent topic loading requests the broker allows, to control the number of ZooKeeper operations | 5000 |
| authenticationEnabled | Enable authentication | false |
| authenticationProviders | Authentication provider name list (a comma-separated list of class names) | |
| authorizationEnabled | Enforce authorization | false |
| superUserRoles | Role names that are treated as "super-users", meaning they are able to perform all admin operations and publish/consume from all topics | |
| brokerClientAuthenticationPlugin | Authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same cluster or in other clusters | |
| brokerClientAuthenticationParameters | | |
| athenzDomainNames | Supported Athenz provider domain names (comma-separated) for authentication | |
| exposePreciseBacklogInPrometheus | Expose precise backlog statistics. Set this to false to estimate the backlog from the published counter and the consumed counter, which is more efficient but can be inaccurate. | false |
| bookkeeperMetadataServiceUri | Metadata service URI that BookKeeper uses to load the corresponding metadata driver and resolve its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in the BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service URI list can also be semicolon-separated, for example: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers` | |
| bookkeeperClientAuthenticationPlugin | Authentication plugin to use when connecting to bookies | |
| bookkeeperClientAuthenticationParametersName | BookKeeper authentication plugin implementation-specific parameter name and values | |
| bookkeeperClientAuthenticationParameters | | |
| bookkeeperClientTimeoutInSeconds | Timeout for BookKeeper add and read operations | 30 |
| bookkeeperClientSpeculativeReadTimeoutInMillis | Speculative reads are triggered when a read request does not complete within a certain time. A value of 0 disables speculative reads. | 0 |
| bookkeeperClientHealthCheckEnabled | Enable bookie health checks. Bookies that have more than the configured number of failures within the interval are quarantined for some time; during that period, no new ledgers are created on those bookies. | true |
| bookkeeperClientHealthCheckIntervalSeconds | | 60 |
| bookkeeperClientHealthCheckErrorThresholdPerInterval | | 5 |
| bookkeeperClientHealthCheckQuarantineTimeInSeconds | | 1800 |
| bookkeeperClientRackawarePolicyEnabled | Enable the rack-aware bookie selection policy. BookKeeper chooses bookies from different racks when forming a new bookie ensemble. | true |
| bookkeeperClientRegionawarePolicyEnabled | Enable the region-aware bookie selection policy. BookKeeper chooses bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of the `bookkeeperClientRackawarePolicyEnabled` property is ignored. | false |
| bookkeeperClientReorderReadSequenceEnabled | Enable/disable reordering of the read sequence when reading entries. | false |
| bookkeeperClientIsolationGroups | Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups is not used by the broker. | |
| bookkeeperClientSecondaryIsolationGroups | Enable a secondary isolation group for bookies when `bookkeeperClientIsolationGroups` does not have enough available bookies. | |
| bookkeeperClientMinAvailableBookiesInIsolationGroups | Minimum number of bookies that should be available in `bookkeeperClientIsolationGroups`; otherwise the broker includes bookies from `bookkeeperClientSecondaryIsolationGroups` in the isolated list. | |
| bookkeeperClientGetBookieInfoIntervalSeconds | Set the interval for periodically polling bookie info | 86400 |
| bookkeeperClientGetBookieInfoRetryIntervalSeconds | Set the retry interval after a bookie info poll fails | 60 |
| bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger stick to a single bookie. If this flag is enabled, the client uses one single bookie (by preference) to read all entries for a ledger. | true |
| managedLedgerDefaultEnsembleSize | Number of bookies to use when creating a ledger | 2 |
| managedLedgerDefaultWriteQuorum | Number of copies to store for each message | 2 |
| managedLedgerDefaultAckQuorum | Number of guaranteed copies (acks to wait for before a write is completed) | 2 |
| managedLedgerCacheSizeMB | Amount of memory used to cache data payloads in managed ledgers. This memory is allocated from JVM direct memory and is shared across all the topics running in the same broker. By default, 1/5 of the available direct memory is used. | |
| managedLedgerCacheCopyEntries | Whether a copy of the entry payload should be created when inserting into the cache | false |
| managedLedgerCacheEvictionWatermark | Threshold to which the cache level is brought down when eviction is triggered | 0.9 |
| managedLedgerCacheEvictionFrequency | Configure the cache eviction frequency for the managed ledger cache (evictions/second) | 100.0 |
| managedLedgerCacheEvictionTimeThresholdMillis | All entries that stay in the cache longer than the configured time are evicted | 1000 |
| managedLedgerCursorBackloggedThreshold | Set the threshold (in number of entries) at which a cursor is considered "backlogged" and set as inactive. | 1000 |
| managedLedgerDefaultMarkDeleteRateLimit | Rate limit on the mark-delete writes per second generated by consumers acknowledging messages | 1.0 |
| managedLedgerMaxEntriesPerLedger | Maximum number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered when either the max rollover time has been reached, or the max entries have been written to the ledger and at least the minimum rollover time has passed. | 50000 |
| managedLedgerMinLedgerRolloverTimeMinutes | Minimum time between ledger rollovers for a topic | 10 |
| managedLedgerMaxLedgerRolloverTimeMinutes | Maximum time before forcing a ledger rollover for a topic | 240 |
| managedLedgerCursorMaxEntriesPerLedger | Max number of entries to append to a cursor ledger | 50000 |
| managedLedgerCursorRolloverTimeInSeconds | Max time before triggering a rollover on a cursor ledger | 14400 |
| managedLedgerMaxUnackedRangesToPersist | Max number of "acknowledgment holes" that are persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acking all the messages. The information about which messages are acknowledged is persisted by compressing it into "ranges" of acknowledged messages. After the max number of ranges is reached, the information is only tracked in memory and messages are redelivered in case of crashes. | 1000 |
| autoSkipNonRecoverableData | Skip reading non-recoverable/unreadable data ledgers in the managed ledger's list. This helps when data ledgers get corrupted in BookKeeper and the managed cursor is stuck at that ledger. | false |
| loadBalancerEnabled | Enable the load balancer | true |
| loadBalancerPlacementStrategy | Strategy to assign a new bundle | weightedRandomSelection |
| loadBalancerReportUpdateThresholdPercentage | Percentage of change required to trigger a load report update | 10 |
| loadBalancerReportUpdateMaxIntervalMinutes | Maximum interval between load report updates | 15 |
| loadBalancerHostUsageCheckIntervalMinutes | Frequency at which host usage is collected for the load report | 1 |
| loadBalancerSheddingIntervalMinutes | Load shedding interval. The broker periodically checks whether some traffic should be offloaded from over-loaded brokers to under-loaded brokers | 30 |
| loadBalancerSheddingGracePeriodMinutes | Prevent the same topics from being shed and moved to another broker more than once within this timeframe | 30 |
| loadBalancerBrokerMaxTopics | Usage threshold to allocate the max number of topics to a broker | 50000 |
| loadBalancerBrokerUnderloadedThresholdPercentage | Usage threshold to determine a broker as under-loaded | 1 |
| loadBalancerBrokerOverloadedThresholdPercentage | Usage threshold to determine a broker as over-loaded | 85 |
| loadBalancerResourceQuotaUpdateIntervalMinutes | Interval at which to update the namespace bundle resource quota | 15 |
| loadBalancerBrokerComfortLoadLevelPercentage | Usage threshold to determine that a broker has just the right level of load | 65 |
| loadBalancerAutoBundleSplitEnabled | Enable/disable automatic namespace bundle splitting | false |
| loadBalancerNamespaceBundleMaxTopics | Maximum number of topics in a bundle; otherwise a bundle split is triggered | 1000 |
| loadBalancerNamespaceBundleMaxSessions | Maximum number of sessions (producers + consumers) in a bundle; otherwise a bundle split is triggered | 1000 |
| loadBalancerNamespaceBundleMaxMsgRate | Maximum msgRate (in + out) in a bundle; otherwise a bundle split is triggered | 1000 |
| loadBalancerNamespaceBundleMaxBandwidthMbytes | Maximum bandwidth (in + out) in a bundle; otherwise a bundle split is triggered | 100 |
| loadBalancerNamespaceMaximumBundles | Maximum number of bundles in a namespace | 128 |
| replicationMetricsEnabled | Enable replication metrics | true |
| replicationConnectionsPerBroker | Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links. | 16 |
| replicationProducerQueueSize | Replicator producer queue size | 1000 |
| replicatorPrefix | Replicator prefix used for the replicator producer name and cursor name | pulsar.repl |
| replicationTlsEnabled | Enable TLS when talking with other clusters to replicate messages | false |
| defaultRetentionTimeInMinutes | Default message retention time | |
| defaultRetentionSizeInMB | Default retention size | 0 |
| keepAliveIntervalSeconds | How often to check whether connections are still alive | 30 |
| loadManagerClassName | Name of the load manager to use | org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl |
| supportedNamespaceBundleSplitAlgorithms | Supported algorithm names for namespace bundle splits | [range_equally_divide,topic_count_equally_divide] |
| defaultNamespaceBundleSplitAlgorithm | Default algorithm name for namespace bundle splits | range_equally_divide |
| managedLedgerOffloadDriver | Driver to use to offload old data to long-term storage (possible values: S3) | |
| managedLedgerOffloadMaxThreads | Maximum number of thread pool threads for ledger offloading | 2 |
| managedLedgerUnackedRangesOpenCacheSetEnabled | Use an open range set to cache unacknowledged messages | true |
| managedLedgerOffloadDeletionLagMs | Delay between a ledger being successfully offloaded to long-term storage and the ledger being deleted from BookKeeper | 14400000 |
| managedLedgerOffloadAutoTriggerSizeThresholdBytes | The number of bytes before triggering automatic offload to long-term storage | -1 (disabled) |
| s3ManagedLedgerOffloadRegion | For Amazon S3 ledger offload, the AWS region | |
| s3ManagedLedgerOffloadBucket | For Amazon S3 ledger offload, the bucket to place offloaded ledgers into | |
| s3ManagedLedgerOffloadServiceEndpoint | For Amazon S3 ledger offload, an alternative endpoint to connect to (useful for testing) | |
| s3ManagedLedgerOffloadMaxBlockSizeInBytes | For Amazon S3 ledger offload, the max block size in bytes (64 MB by default, 5 MB minimum) | 67108864 |
| s3ManagedLedgerOffloadReadBufferSizeInBytes | For Amazon S3 ledger offload, the read buffer size in bytes (1 MB by default) | 1048576 |
| s3ManagedLedgerOffloadRole | For Amazon S3 ledger offload, provide a role to assume before writing to S3 | |
| s3ManagedLedgerOffloadRoleSessionName | For Amazon S3 ledger offload, provide a role session name when using a role | pulsar-s3-offload |
| acknowledgmentAtBatchIndexLevelEnabled | Enable or disable batch-index-level acknowledgment. | false |
| maxMessageSize | Set the maximum message size. | 5 MB |
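
For orientation, here is a minimal sketch of how a few of the settings above might be combined in the broker's properties file (assuming the standard conf/broker.conf; the ZooKeeper hosts and cluster name are placeholders, and `brokerDeduplicationEnabled` is deliberately flipped from its default):

```properties
# Placeholder ZooKeeper quorum and configuration store (comma-separated lists)
zookeeperServers=zk1:2181,zk2:2181,zk3:2181
configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
# Placeholder cluster name
clusterName=my-cluster
# Default ports from the table above
brokerServicePort=6650
webServicePort=8080
# Enable broker-side deduplication (default is false; can be overridden per namespace)
brokerDeduplicationEnabled=true
```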

Client

| Name | Description | Default |
|---|---|---|
| webServiceUrl | The web URL for the cluster. | |
| brokerServiceUrl | The Pulsar protocol URL for the cluster. | pulsar://localhost:6650/ |
| authPlugin | The authentication plugin. | |
| authParams | The authentication parameters for the cluster, as a comma-separated string. | |
| useTls | Whether to enforce TLS authentication in the cluster. | false |
| tlsAllowInsecureConnection | | |
| tlsTrustCertsFilePath | | |
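
As a sketch, the same parameters can be set in the client configuration file (assuming the standard conf/client.conf; the URLs are placeholders for a local cluster):

```properties
# Placeholder HTTP and binary-protocol service URLs
webServiceUrl=http://localhost:8080/
brokerServiceUrl=pulsar://localhost:6650/
# TLS left disabled, matching the default above
useTls=false
# Authentication plugin and parameters are unset by default
#authPlugin=
#authParams=
```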

Log4j

| Name | Default |
|---|---|
| pulsar.root.logger | WARN,CONSOLE |
| pulsar.log.dir | logs |
| pulsar.log.file | pulsar.log |
| log4j.rootLogger | ${pulsar.root.logger} |
| log4j.appender.CONSOLE | org.apache.log4j.ConsoleAppender |
| log4j.appender.CONSOLE.Threshold | DEBUG |
| log4j.appender.CONSOLE.layout | org.apache.log4j.PatternLayout |
| log4j.appender.CONSOLE.layout.ConversionPattern | %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n |
| log4j.appender.ROLLINGFILE | org.apache.log4j.DailyRollingFileAppender |
| log4j.appender.ROLLINGFILE.Threshold | DEBUG |
| log4j.appender.ROLLINGFILE.File | ${pulsar.log.dir}/${pulsar.log.file} |
| log4j.appender.ROLLINGFILE.layout | org.apache.log4j.PatternLayout |
| log4j.appender.ROLLINGFILE.layout.ConversionPattern | %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n |
| log4j.appender.TRACEFILE | org.apache.log4j.FileAppender |
| log4j.appender.TRACEFILE.Threshold | TRACE |
| log4j.appender.TRACEFILE.File | pulsar-trace.log |
| log4j.appender.TRACEFILE.layout | org.apache.log4j.PatternLayout |
| log4j.appender.TRACEFILE.layout.ConversionPattern | %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n |
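
For example, routing broker logs at INFO level to the daily rolling file appender defined above could look like the following sketch (log4j 1.x properties syntax, using only keys from the table):

```properties
# Send INFO and above to the daily rolling file appender
pulsar.root.logger=INFO,ROLLINGFILE
pulsar.log.dir=logs
pulsar.log.file=pulsar.log
log4j.rootLogger=${pulsar.root.logger}
log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=INFO
log4j.appender.ROLLINGFILE.File=${pulsar.log.dir}/${pulsar.log.file}
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
```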

Log4j shell

| Name | Default |
|---|---|
| bookkeeper.root.logger | ERROR,CONSOLE |
| log4j.rootLogger | ${bookkeeper.root.logger} |
| log4j.appender.CONSOLE | org.apache.log4j.ConsoleAppender |
| log4j.appender.CONSOLE.Threshold | DEBUG |
| log4j.appender.CONSOLE.layout | org.apache.log4j.PatternLayout |
| log4j.appender.CONSOLE.layout.ConversionPattern | %d{ABSOLUTE} %-5p %m%n |
| log4j.logger.org.apache.zookeeper | ERROR |
| log4j.logger.org.apache.bookkeeper | ERROR |
| log4j.logger.org.apache.bookkeeper.bookie.BookieShell | INFO |
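
A small sketch of overriding these shell defaults, for instance to make the root logger more verbose while keeping the ZooKeeper and BookKeeper client loggers quiet:

```properties
# More verbose shell output on the console
bookkeeper.root.logger=INFO,CONSOLE
log4j.rootLogger=${bookkeeper.root.logger}
# Keep third-party internals at ERROR, as in the defaults above
log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.org.apache.bookkeeper=ERROR
log4j.logger.org.apache.bookkeeper.bookie.BookieShell=INFO
```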

WebSocket

| Name | Description | Default |
|---|---|---|
| configurationStoreServers | | |
| zooKeeperSessionTimeoutMillis | | 30000 |
| zooKeeperCacheExpirySeconds | ZooKeeper cache expiry time in seconds | 300 |
| serviceUrl | | |
| serviceUrlTls | | |
| brokerServiceUrl | | |
| brokerServiceUrlTls | | |
| webServicePort | | 8080 |
| webServicePortTls | | 8443 |
| bindAddress | | 0.0.0.0 |
| clusterName | | |
| authenticationEnabled | | false |
| authenticationProviders | | |
| authorizationEnabled | | false |
| superUserRoles | | |
| brokerClientAuthenticationPlugin | | |
| brokerClientAuthenticationParameters | | |
| tlsEnabled | | false |
| tlsAllowInsecureConnection | | false |
| tlsCertificateFilePath | | |
| tlsKeyFilePath | | |
| tlsTrustCertsFilePath | | |
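
A minimal sketch of a standalone WebSocket service configuration (assuming the standard conf/websocket.conf; the connection strings and cluster name are placeholders):

```properties
# Placeholder configuration store and service/broker URLs for a local deployment
configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
serviceUrl=http://localhost:8080
brokerServiceUrl=pulsar://localhost:6650
clusterName=my-cluster
# Defaults from the table above
webServicePort=8080
bindAddress=0.0.0.0
```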

Pulsar proxy

The Pulsar proxy can be configured in the conf/proxy.conf file.

| Name | Description | Default |
|---|---|---|
| zookeeperServers | ZooKeeper quorum connection string (as a comma-separated list) | |
| configurationStoreServers | Configuration store connection string (as a comma-separated list) | |
| zookeeperSessionTimeoutMs | ZooKeeper session timeout (in milliseconds) | 30000 |
| zooKeeperCacheExpirySeconds | ZooKeeper cache expiry time in seconds | 300 |
| servicePort | The port to use for server binary Protobuf requests | 6650 |
| servicePortTls | The port to use for server binary Protobuf TLS requests | 6651 |
| statusFilePath | Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks | |
| advertisedAddress | Hostname or IP address the service advertises to the outside world. | InetAddress.getLocalHost().getHostname() |
| authenticationEnabled | Whether authentication is enabled for the Pulsar proxy | false |
| authenticateMetricsEndpoint | Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. | true |
| authenticationProviders | Authentication provider name list (a comma-separated list of class names) | |
| authorizationEnabled | Whether authorization is enforced by the Pulsar proxy | false |
| authorizationProvider | Authorization provider as a fully qualified class name | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider |
| brokerClientAuthenticationPlugin | The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers | |
| brokerClientAuthenticationParameters | The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers | |
| brokerClientTrustCertsFilePath | The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers | |
| superUserRoles | Role names that are treated as "super-users", meaning they are able to perform all admin operations | |
| forwardAuthorizationCredentials | Whether client authorization credentials are forwarded to the broker for re-authorization. Authentication must be enabled via authenticationEnabled=true for this to take effect. | false |
| maxConcurrentInboundConnections | Max concurrent inbound connections. The proxy rejects requests beyond that. | 10000 |
| maxConcurrentLookupRequests | Max concurrent outbound connections. The proxy errors out requests beyond that. | 50000 |
| tlsEnabledInProxy | Whether TLS is enabled for the proxy | false |
| tlsEnabledWithBroker | Whether TLS is enabled when communicating with Pulsar brokers | false |
| tlsCertificateFilePath | Path for the TLS certificate file | |
| tlsKeyFilePath | Path for the TLS private key file | |
| tlsTrustCertsFilePath | Path for the trusted TLS certificate PEM file | |
| tlsHostnameVerificationEnabled | Whether the hostname is verified when the proxy creates a TLS connection with brokers | false |
| tlsRequireTrustedClientCertOnConnect | Whether client certificates are required for TLS. Connections are rejected if the client certificate isn't trusted. | false |
| tlsProtocols | Specify the TLS protocols the broker uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: TLSv1.2, TLSv1.1, TLSv1 | |
| tlsCiphers | Specify the TLS ciphers the broker uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 | |
| tokenSecretKey | Configure the secret key to be used to validate auth tokens. The key can be specified like: tokenSecretKey=data:;base64,xxxxxxxxx or tokenSecretKey=file:///my/secret.key | |
| tokenPublicKey | Configure the public key to be used to validate auth tokens. The key can be specified like: tokenPublicKey=data:;base64,xxxxxxxxx or tokenPublicKey=file:///my/secret.key | |
| tokenPublicAlg | Configure the algorithm to be used to validate auth tokens. This can be any of the asymmetric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) | RS256 |
| tokenAuthClaim | Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field is used if this is left blank | |
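
Putting a few of these together, a sketch of conf/proxy.conf for a proxy in front of a local cluster (the ZooKeeper hosts are placeholders):

```properties
# Placeholder ZooKeeper quorum and configuration store used for broker discovery
zookeeperServers=zk1:2181,zk2:2181,zk3:2181
configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
# Binary-protocol port exposed by the proxy (default from the table above)
servicePort=6650
# Authentication and authorization left disabled, matching the defaults above
authenticationEnabled=false
authorizationEnabled=false
```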

ZooKeeper

ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is conf/zookeeper.conf in your Pulsar installation. The following parameters are available:

In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding a server.N line to the conf/zookeeper.conf file for each node in the ZooKeeper cluster, where N is the number of the ZooKeeper node. Here’s an example for a three-node ZooKeeper cluster:
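
```properties
# Each line is server.N=<hostname>:<peer port>:<leader election port>;
# the hostnames below are placeholders, and 2888/3888 are the conventional ZooKeeper ports.
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```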