All the methods in producer, consumer, and reader of a Java client are thread-safe.
Javadoc for the Pulsar client is divided into two domains by package.
This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see the Pulsar admin interface documentation.
The latest version of the Pulsar Java client library is available via Maven Central. To use the latest version, add the pulsar-client library to your build configuration.
Maven
If you use Maven, add the following information to the pom.xml file.
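The dependency entry below mirrors the coordinates and version used in the Gradle example; the version shown is illustrative, so substitute the release you actually use:

```xml
<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client</artifactId>
  <version>2.6.1</version>
</dependency>
```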
Gradle
If you use Gradle, add the following information to the build.gradle
file.
def pulsarVersion = '2.6.1'
dependencies {
compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
}
Connection URLs
To connect to Pulsar using client libraries, you need to specify a URL.
You can assign Pulsar protocol URLs to specific clusters and use the pulsar
scheme. The default port is 6650. The following is an example for localhost.
pulsar://localhost:6650
If you have multiple brokers, the URL is as follows.
pulsar://localhost:6650,localhost:6651,localhost:6652
A URL for a production Pulsar cluster is as follows.
pulsar://pulsar.us-west.example.com:6650
If you use TLS authentication, the URL is as follows.
pulsar+ssl://pulsar.us-west.example.com:6651
You can instantiate a PulsarClient object using just a URL for the target Pulsar cluster, like this:
PulsarClient client = PulsarClient.builder()
.serviceUrl("pulsar://localhost:6650")
.build();
If you have multiple brokers, you can initiate a PulsarClient like this:
PulsarClient client = PulsarClient.builder()
.serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
.build();
When you create a client, you can configure it with loadConf. The following parameters are available in loadConf.
| Type | Name | Description | Default |
|----|----|----|----|
| String | serviceUrl | Service URL provider for Pulsar service | None |
| String | authPluginClassName | Name of the authentication plugin | None |
| String | authParams | Parameters for the authentication plugin, for example, `key1:val1,key2:val2` | None |
| long | operationTimeoutMs | Operation timeout | 30000 |
| long | statsIntervalSeconds | Interval between each stats info.<br/>Stats are activated with a positive statsInterval.<br/>Set statsIntervalSeconds to at least 1 second. | 60 |
| int | numIoThreads | The number of threads used for handling connections to brokers | 1 |
| int | numListenerThreads | The number of threads used for handling message listeners | 1 |
| boolean | useTcpNoDelay | Whether to use the TCP no-delay flag on the connection to disable the Nagle algorithm | true |
| boolean | useTls | Whether to use TLS encryption on the connection | false |
| string | tlsTrustCertsFilePath | Path to the trusted TLS certificate file | None |
| boolean | tlsAllowInsecureConnection | Whether the Pulsar client accepts an untrusted TLS certificate from a broker | false |
| boolean | tlsHostnameVerificationEnable | Whether to enable TLS hostname verification | false |
| int | concurrentLookupRequest | The number of concurrent lookup requests allowed to be sent on each broker connection to prevent overloading a broker | 5000 |
| int | maxLookupRequest | The maximum number of lookup requests allowed on each broker connection to prevent overloading a broker | 50000 |
| int | maxNumberOfRejectedRequestPerConnection | The maximum number of rejected requests from a broker in a certain time frame (30 seconds) after which the current connection is closed and the client creates a new connection to connect to a different broker | 50 |
| int | keepAliveIntervalSeconds | Keep-alive interval, in seconds, for each client-broker connection | 30 |
| int | connectionTimeoutMs | Duration to wait for a connection to a broker to be established.<br/>If the duration passes without a response from a broker, the connection attempt is dropped. | 10000 |
| int | requestTimeoutMs | Maximum duration for completing a request | 60000 |
| int | defaultBackoffIntervalNanos | Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100) |
| long | maxBackoffIntervalNanos | Maximum duration for a backoff interval | TimeUnit.SECONDS.toNanos(30) |
Check out the Javadoc for a full list of configurable parameters.
In addition to client-level configuration, you can also apply producer- and consumer-specific configuration, as described in the sections below.
Producer
In Pulsar, producers write messages to topics. Once you instantiate a PulsarClient object (as in the section above), you can create a Producer for a specific topic.
Producer<byte[]> producer = client.newProducer()
.topic("my-topic")
.create();
// You can then send messages to the specified broker and topic:
producer.send("My message".getBytes());
By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message schema.
Producer<String> stringProducer = client.newProducer(Schema.STRING)
.topic("my-topic")
.create();
stringProducer.send("My message");
When you no longer need the producer, consumer, or client, close them:
producer.close();
consumer.close();
client.close();
Close operations can also be asynchronous:
```java
producer.closeAsync()
   .thenRun(() -> System.out.println("Producer closed"))
   .exceptionally((ex) -> {
       System.err.println("Failed to close producer: " + ex);
       return null;
   });
```
Configure producer
If you instantiate a Producer object by specifying only a topic name, as in the example above, the producer uses the default configuration.
When you create a producer, you can configure it with loadConf. The following parameters are available in loadConf.
| Type | Name | Description | Default |
|----|----|----|----|
| String | topicName | Topic name | null |
| String | producerName | Producer name | null |
| long | sendTimeoutMs | Message send timeout in ms.<br/>If a message is not acknowledged by a server before the sendTimeout expires, an error occurs. | 30000 |
| boolean | blockIfQueueFull | If set to true, when the outgoing message queue is full, the Send and SendAsync methods of the producer block rather than failing and throwing errors.<br/>If set to false, when the outgoing message queue is full, the Send and SendAsync methods of the producer fail and ProducerQueueIsFullError exceptions occur.<br/>The MaxPendingMessages parameter determines the size of the outgoing message queue. | false |
| int | maxPendingMessages | The maximum size of a queue holding pending messages, that is, messages waiting to receive an acknowledgment from a broker.<br/>By default, when the queue is full, all calls to the Send and SendAsync methods fail unless you set BlockIfQueueFull to true. | 1000 |
| int | maxPendingMessagesAcrossPartitions | The maximum number of pending messages across partitions.<br/>Use this setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value. | 50000 |
| MessageRoutingMode | messageRoutingMode | Message routing logic for producers on partitioned topics.<br/>This logic applies only when no key is set on messages.<br/>Available options are as follows:<br/>`pulsar.RoundRobinDistribution`: round robin<br/>`pulsar.UseSinglePartition`: publish all messages to a single partition<br/>`pulsar.CustomPartition`: a custom partitioning scheme | pulsar.RoundRobinDistribution |
| HashingScheme | hashingScheme | Hashing function determining the partition where you publish a particular message (partitioned topics only).<br/>Available options are as follows:<br/>`pulsar.JavaStringHash`: the equivalent of `String.hashCode()` in Java<br/>`pulsar.Murmur3_32Hash`: applies the Murmur3 hashing function<br/>`pulsar.BoostHash`: applies the hashing function from C++'s Boost library | HashingScheme.JavaStringHash |
| ProducerCryptoFailureAction | cryptoFailureAction | The action the producer takes when encryption fails.<br/>FAIL: if encryption fails, unencrypted messages fail to send.<br/>SEND: if encryption fails, unencrypted messages are sent. | ProducerCryptoFailureAction.FAIL |
| long | batchingMaxPublishDelayMicros | Batching time period of sending messages. | TimeUnit.MILLISECONDS.toMicros(1) |
| int | batchingMaxMessages | The maximum number of messages permitted in a batch. | 1000 |
| boolean | batchingEnabled | Enable batching of messages. | true |
| CompressionType | compressionType | Message data compression type used by a producer.<br/>Available options: LZ4, ZLIB, ZSTD, SNAPPY | No compression |

You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc. The following is an example.
Producer<byte[]> producer = client.newProducer()
.topic("my-topic")
.batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
.sendTimeout(10, TimeUnit.SECONDS)
.blockIfQueueFull(true)
.create();
Message routing
When using partitioned topics, you can specify the routing mode when you publish messages with a producer. For more information on specifying a routing mode using the Java client, see the Partitioned Topics cookbook.
Async send
You can publish messages asynchronously using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. The client library then sends the message to the broker in the background. If the queue is full (its max size is configurable), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer.
The following is an example.
producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
System.out.printf("Message with ID %s successfully sent", msgId);
});
As you can see from the example above, async send operations return a MessageId wrapped in a CompletableFuture.
Configure messages
In addition to a value, you can set additional items on a given message:
producer.newMessage()
.key("my-message-key")
.value("my-async-message".getBytes())
.property("my-key", "my-value")
.property("my-other-key", "my-other-value")
.send();
You can terminate the builder chain with sendAsync() and get a future in return.
Consumer
In Pulsar, consumers subscribe to topics and process messages that producers publish to those topics. You can instantiate a consumer by first instantiating a PulsarClient object and passing it a broker URL.
Once you have instantiated a PulsarClient object, you can create a Consumer by specifying a topic and a subscription.
Consumer consumer = client.newConsumer()
.topic("my-topic")
.subscriptionName("my-subscription")
.subscribe();
The subscribe method automatically subscribes the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to use a while loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then acknowledges that the message has been processed. If the processing logic fails, you can use negative acknowledgement to have the message redelivered later.

while (true) {
// Wait for a message
Message msg = consumer.receive();
try {
// Do something with the message
System.out.printf("Message received: %s", new String(msg.getData()));
// Acknowledge the message so that it can be deleted by the message broker
consumer.acknowledge(msg);
} catch (Exception e) {
// Message failed to process, redeliver later
consumer.negativeAcknowledge(msg);
}
}
Configure consumer
If you instantiate a Consumer object by specifying only a topic and subscription name, as in the example above, the consumer uses the default configuration.

When you create a consumer, you can configure it with loadConf. The following parameters are available in loadConf.

| Type | Name | Description | Default |
|----|----|----|----|
| `Set<String>` | topicNames | Topic name | Sets.newTreeSet() |
| Pattern | topicsPattern | Topic pattern | None |
| String | subscriptionName | Subscription name | None |
| SubscriptionType | subscriptionType | Subscription type.<br/>Three subscription types are available:<br/>- Exclusive<br/>- Failover<br/>- Shared | SubscriptionType.Exclusive |
| int | receiverQueueSize | Size of a consumer's receiver queue, that is, the number of messages accumulated by a consumer before an application calls Receive.<br/>A value higher than the default increases consumer throughput at the expense of more memory utilization. | 1000 |
| long | acknowledgementsGroupTimeMicros | Group consumer acknowledgments for the specified time.<br/>By default, a consumer uses a 100ms grouping time to send out acknowledgments to a broker.<br/>Setting a group time of 0 sends out acknowledgments immediately.<br/>A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure. | TimeUnit.MILLISECONDS.toMicros(100) |
| long | negativeAckRedeliveryDelayMicros | Delay to wait before redelivering messages that failed to be processed.<br/>When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout. | TimeUnit.MINUTES.toMicros(1) |
| int | maxTotalReceiverQueueSizeAcrossPartitions | The max total receiver queue size across partitions.<br/>This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value. | 50000 |
| String | consumerName | Consumer name | null |
| long | ackTimeoutMillis | Timeout of unacked messages | 0 |
| long | tickDurationMillis | Granularity of the ack-timeout redelivery.<br/>A higher tickDurationMillis reduces the memory overhead of tracking messages when the ack-timeout is set to a larger value (for example, 1 hour). | 1000 |
| int | priorityLevel | Priority level for a consumer to which a broker gives more priority while dispatching messages in the shared subscription mode.<br/>The broker follows descending priorities. For example, 0=max-priority, 1, 2, ...<br/>In shared subscription mode, the broker first dispatches messages to the consumers on the highest priority level if they have permits. Otherwise, the broker considers consumers on the next priority level.<br/><br/>Example 1<br/>If a subscription has consumerA with priorityLevel 0 and consumerB with priorityLevel 1, then the broker only dispatches messages to consumerA until it runs out of permits and then starts dispatching messages to consumerB.<br/><br/>Example 2<br/>Consumer, Priority Level, Permits<br/>C1, 0, 2<br/>C2, 0, 1<br/>C3, 0, 1<br/>C4, 1, 2<br/>C5, 1, 1<br/>The order in which the broker dispatches messages to consumers is: C1, C2, C3, C1, C4, C5, C4. | 0 |
| ConsumerCryptoFailureAction | cryptoFailureAction | The action the consumer takes when it receives a message that cannot be decrypted.<br/>FAIL: the default option; fail messages until crypto succeeds.<br/>DISCARD: silently acknowledge and do not deliver the message to the application.<br/>CONSUME: deliver encrypted messages to the application; it is the application's responsibility to decrypt them. Decompression of the message fails, and if messages contain batch messages, the client is not able to retrieve individual messages in the batch. A delivered encrypted message contains an {@link EncryptionContext} with the encryption and compression information, which the application can use to decrypt the consumed message payload. | ConsumerCryptoFailureAction.FAIL |
| `SortedMap<String, String>` | properties | Name/value properties of this consumer.<br/>properties is application-defined metadata attached to a consumer.<br/>When getting topic stats, this metadata is associated with the consumer stats for easier identification. | new TreeMap<>() |
| boolean | readCompacted | If readCompacted is enabled, a consumer reads messages from a compacted topic rather than reading the full message backlog of a topic.<br/>The consumer only sees the latest value for each key in the compacted topic, up until the point in the topic where compaction was applied. Beyond that point, messages are sent as normal.<br/>readCompacted can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions).<br/>Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions causes the subscribe call to throw a PulsarClientException. | false |
| SubscriptionInitialPosition | subscriptionInitialPosition | Initial position at which to set the cursor when subscribing to a topic for the first time. | SubscriptionInitialPosition.Latest |
| int | patternAutoDiscoveryPeriod | Topic auto-discovery period when using a pattern to subscribe to topics.<br/>The default and minimum value is 1 minute. | 1 |
| RegexSubscriptionMode | regexSubscriptionMode | When subscribing to topics using a regular expression, you can pick a certain type of topics.<br/>PersistentOnly: only subscribe to persistent topics.<br/>NonPersistentOnly: only subscribe to non-persistent topics.<br/>AllTopics: subscribe to both persistent and non-persistent topics. | RegexSubscriptionMode.PersistentOnly |
| DeadLetterPolicy | deadLetterPolicy | Dead letter policy for consumers.<br/>By default, some messages may be redelivered many times, possibly without ever stopping.<br/>By using the dead letter mechanism, messages have a max redelivery count. When exceeding the maximum number of redeliveries, messages are sent to the dead letter topic and acknowledged automatically.<br/>You can enable the dead letter mechanism by setting deadLetterPolicy.<br/>Example<br/>`client.newConsumer()`<br/>`.deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10).build())`<br/>`.subscribe();`<br/>The default dead letter topic name is `{TopicName}-{Subscription}-DLQ`.<br/>To set a custom dead letter topic name:<br/>`client.newConsumer()`<br/>`.deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10)`<br/>`.deadLetterTopic("your-topic-name").build())`<br/>`.subscribe();`<br/>When specifying the dead letter policy while not specifying ackTimeoutMillis, the ack timeout is set to 30000 milliseconds. | None |
| boolean | autoUpdatePartitions | If autoUpdatePartitions is enabled, a consumer subscribes to partition increases automatically.<br/>Note: this is only for partitioned consumers. | true |
| boolean | replicateSubscriptionState | If replicateSubscriptionState is enabled, the subscription state is replicated to geo-replicated clusters. | false |
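The dispatch order in Example 2 of the priorityLevel row can be reproduced with a small, self-contained simulation. This is an illustrative sketch only; the class and method names are invented for the example, and the real broker also factors in permit replenishment and timing:

```java
import java.util.ArrayList;
import java.util.List;

public class PriorityDispatchDemo {
    // Simulates shared-mode dispatch: lower priorityLevel is served first,
    // round-robin within a level, one message per available permit.
    static List<String> dispatchOrder(String[] names, int[] priorities, int[] permits) {
        int[] remaining = permits.clone();
        List<String> order = new ArrayList<>();
        int maxLevel = 0;
        for (int p : priorities) maxLevel = Math.max(maxLevel, p);
        for (int level = 0; level <= maxLevel; level++) {
            boolean dispatched = true;
            while (dispatched) {            // repeat round-robin passes
                dispatched = false;         // until this level has no permits left
                for (int i = 0; i < names.length; i++) {
                    if (priorities[i] == level && remaining[i] > 0) {
                        order.add(names[i]);
                        remaining[i]--;
                        dispatched = true;
                    }
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // Consumers from Example 2: C1..C5 with (priorityLevel, permits).
        String[] names = {"C1", "C2", "C3", "C4", "C5"};
        int[] priorities = {0, 0, 0, 1, 1};
        int[] permits = {2, 1, 1, 2, 1};
        System.out.println(dispatchOrder(names, priorities, permits));
        // prints [C1, C2, C3, C1, C4, C5, C4]
    }
}
```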
You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc.
The following is an example.
Consumer consumer = client.newConsumer()
.topic("my-topic")
.subscriptionName("my-subscription")
.ackTimeout(10, TimeUnit.SECONDS)
.subscriptionType(SubscriptionType.Exclusive)
.subscribe();
Async receive
The receive method receives messages synchronously (the consumer process is blocked until a message is available). You can also use async receive, which returns a CompletableFuture object that completes once a new message is available.
The following is an example.
CompletableFuture<Message> asyncMessage = consumer.receiveAsync();
Async receive operations return a message wrapped inside of a CompletableFuture.
Batch receive
Use batchReceive
to receive multiple messages for each call.
The following is an example.
Messages<byte[]> messages = consumer.batchReceive();
for (Message<byte[]> message : messages) {
    // do something
}
consumer.acknowledge(messages);
Note:
Batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages.
The batch receive completes when any of the following conditions is met: enough number of messages, enough bytes of messages, or the wait timeout.
Consumer consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .batchReceivePolicy(BatchReceivePolicy.builder()
                .maxNumMessages(100)
                .maxNumBytes(1024 * 1024)
                .timeout(200, TimeUnit.MILLISECONDS)
                .build())
        .subscribe();
The default batch receive policy is:
```java
BatchReceivePolicy.builder()
    .maxNumMessages(-1)
    .maxNumBytes(10 * 1024 * 1024)
    .timeout(100, TimeUnit.MILLISECONDS)
    .build();
```
Multi-topic subscriptions
In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using multi-topic subscriptions. To use multi-topic subscriptions, you can supply a topic regular expression (regex) or a List of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.
The following are some examples.
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
ConsumerBuilder consumerBuilder = pulsarClient.newConsumer()
.subscriptionName(subscription);
// Subscribe to all topics in a namespace
Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
Consumer allTopicsConsumer = consumerBuilder
.topicsPattern(allTopicsInNamespace)
.subscribe();
// Subscribe to a subset of topics in a namespace, based on regex
Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
Consumer allTopicsConsumer = consumerBuilder
.topicsPattern(someTopicsInNamespace)
.subscribe();
You can also subscribe to an explicit list of topics (across namespaces if you wish):
List<String> topics = Arrays.asList(
"topic-1",
"topic-2",
"topic-3"
);
Consumer multiTopicConsumer = consumerBuilder
.topics(topics)
.subscribe();
// Alternatively:
Consumer multiTopicConsumer = consumerBuilder
.topics(
"topic-1",
"topic-2",
"topic-3"
)
.subscribe();
You can also subscribe to multiple topics asynchronously using the subscribeAsync
method rather than the synchronous subscribe
method. The following is an example.
Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
consumerBuilder
.topics(topics)
.subscribeAsync()
.thenAccept(this::receiveMessageFromConsumer);
private void receiveMessageFromConsumer(Consumer consumer) {
consumer.receiveAsync().thenAccept(message -> {
// Do something with the received message
receiveMessageFromConsumer(consumer);
});
}
Subscription modes
Pulsar has various subscription modes to match different scenarios. A topic can have multiple subscriptions with different subscription modes. However, a subscription can have only one subscription mode at a time.
A subscription is identified by its subscription name, and a subscription name can specify only one subscription mode at a time. You cannot change the subscription mode unless all existing consumers of this subscription are offline.
Different subscription modes have different message distribution modes. This section describes the differences between subscription modes and how to use them.
To better describe their differences, assume you have a topic named "my-topic" and a producer that has published 10 messages.
Producer<String> producer = client.newProducer(Schema.STRING)
.topic("my-topic")
.enableBatching(false)
.create();
// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4"
producer.newMessage().key("key-1").value("message-1-1").send();
producer.newMessage().key("key-1").value("message-1-2").send();
producer.newMessage().key("key-1").value("message-1-3").send();
producer.newMessage().key("key-2").value("message-2-1").send();
producer.newMessage().key("key-2").value("message-2-2").send();
producer.newMessage().key("key-2").value("message-2-3").send();
producer.newMessage().key("key-3").value("message-3-1").send();
producer.newMessage().key("key-3").value("message-3-2").send();
producer.newMessage().key("key-4").value("message-4-1").send();
producer.newMessage().key("key-4").value("message-4-2").send();
Exclusive
Consumer consumer = client.newConsumer()
.topic("my-topic")
.subscriptionName("my-subscription")
.subscriptionType(SubscriptionType.Exclusive)
        .subscribe();
Only the first consumer is allowed to attach to the subscription; other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order.
Failover
Create new consumers and subscribe with the Failover subscription mode.
Consumer consumer1 = client.newConsumer()
.topic("my-topic")
.subscriptionName("my-subscription")
.subscriptionType(SubscriptionType.Failover)
        .subscribe();
Consumer consumer2 = client.newConsumer()
.topic("my-topic")
.subscriptionName("my-subscription")
.subscriptionType(SubscriptionType.Failover)
        .subscribe();
//consumer1 is the active consumer, consumer2 is the standby consumer.
//consumer1 receives 5 messages and then crashes, consumer2 takes over as an active consumer.
Multiple consumers can attach to the same subscription, yet only the first consumer is active while the others are standby. When the active consumer disconnects, messages are dispatched to one of the standby consumers, and that standby consumer becomes the active consumer.
If the first active consumer disconnects after receiving 5 messages, the standby consumer takes over. consumer1 will receive:
("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-2", "message-2-1")
("key-2", "message-2-2")
consumer2 will receive:
("key-2", "message-2-3")
("key-3", "message-3-1")
("key-3", "message-3-2")
("key-4", "message-4-1")
("key-4", "message-4-2")
Note:
If a topic is a partitioned topic, each partition has only one active consumer, messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers.
Shared
Create new consumers and subscribe with Shared
subscription mode:
Consumer consumer1 = client.newConsumer()
.topic("my-topic")
.subscriptionName("my-subscription")
.subscriptionType(SubscriptionType.Shared)
        .subscribe();
Consumer consumer2 = client.newConsumer()
.topic("my-topic")
.subscriptionName("my-subscription")
.subscriptionType(SubscriptionType.Shared)
        .subscribe();
//Both consumer1 and consumer2 are active consumers.
In shared subscription mode, multiple consumers can attach to the same subscription and messages are delivered in a round robin distribution across consumers.
If a broker dispatches only one message at a time, consumer1 receives the following information.
("key-1", "message-1-1")
("key-1", "message-1-3")
("key-2", "message-2-2")
("key-3", "message-3-1")
("key-4", "message-4-1")
consumer2 receives the following information.
("key-1", "message-1-2")
("key-2", "message-2-1")
("key-2", "message-2-3")
("key-3", "message-3-2")
("key-4", "message-4-2")
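The split above can be reproduced with a small simulation. This is an illustrative sketch, not Pulsar code: the real broker's round robin depends on consumer permits and timing, but with one message dispatched at a time to two consumers it reduces to simple alternation:

```java
import java.util.ArrayList;
import java.util.List;

public class RoundRobinDispatchDemo {
    // Alternates messages between numConsumers receivers, one at a time.
    static List<List<String>> dispatch(String[] messages, int numConsumers) {
        List<List<String>> received = new ArrayList<>();
        for (int c = 0; c < numConsumers; c++) received.add(new ArrayList<>());
        for (int i = 0; i < messages.length; i++) {
            received.get(i % numConsumers).add(messages[i]);
        }
        return received;
    }

    public static void main(String[] args) {
        // The 10 messages from the producer example, in publish order.
        String[] messages = {
            "message-1-1", "message-1-2", "message-1-3",
            "message-2-1", "message-2-2", "message-2-3",
            "message-3-1", "message-3-2",
            "message-4-1", "message-4-2"
        };
        List<List<String>> out = dispatch(messages, 2);
        System.out.println("consumer1: " + out.get(0));
        System.out.println("consumer2: " + out.get(1));
    }
}
```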
Shared subscription differs from the Exclusive and Failover subscription modes. Shared subscription provides better flexibility, but cannot provide an ordering guarantee.
Key_Shared
Key_Shared is a new subscription mode since the 2.4.0 release. Create new consumers and subscribe with the Key_Shared subscription mode.
Consumer consumer1 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Key_Shared)
        .subscribe();
Consumer consumer2 = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Key_Shared)
        .subscribe();
//Both consumer1 and consumer2 are active consumers.
Key_Shared subscription is like Shared subscription in that all consumers can attach to the same subscription. But it differs from Shared subscription in that messages with the same key are delivered to only one consumer, in order. By default, you do not know in advance which keys will be assigned to which consumer, but a given key is only assigned to one consumer at a time. The following is a possible distribution of messages between the consumers.
consumer1 receives the following information.
("key-1", "message-1-1")
("key-1", "message-1-2")
("key-1", "message-1-3")
("key-3", "message-3-1")
("key-3", "message-3-2")
consumer2 receives the following information.
("key-2", "message-2-1")
("key-2", "message-2-2")
("key-2", "message-2-3")
("key-4", "message-4-1")
("key-4", "message-4-2")
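The per-key distribution above can be sketched with a small simulation. This is illustrative only: the broker assigns keys to consumers by hashing, so which consumer owns which key may differ in practice; the guarantee is that each key maps to exactly one consumer and its messages arrive in order:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeySharedDispatchDemo {
    // Routes each message to the single consumer that owns its key.
    static Map<String, List<String>> dispatch(String[][] messages, Map<String, String> keyOwner) {
        Map<String, List<String>> received = new HashMap<>();
        for (String[] m : messages) {
            received.computeIfAbsent(keyOwner.get(m[0]), k -> new ArrayList<>()).add(m[1]);
        }
        return received;
    }

    public static void main(String[] args) {
        // The 10 keyed messages from the producer example, in publish order.
        String[][] messages = {
            {"key-1", "message-1-1"}, {"key-1", "message-1-2"}, {"key-1", "message-1-3"},
            {"key-2", "message-2-1"}, {"key-2", "message-2-2"}, {"key-2", "message-2-3"},
            {"key-3", "message-3-1"}, {"key-3", "message-3-2"},
            {"key-4", "message-4-1"}, {"key-4", "message-4-2"}
        };
        // Hypothetical key-to-consumer assignment matching the lists above.
        Map<String, String> keyOwner = new HashMap<>();
        keyOwner.put("key-1", "consumer1");
        keyOwner.put("key-3", "consumer1");
        keyOwner.put("key-2", "consumer2");
        keyOwner.put("key-4", "consumer2");
        Map<String, List<String>> out = dispatch(messages, keyOwner);
        System.out.println("consumer1: " + out.get("consumer1"));
        System.out.println("consumer2: " + out.get("consumer2"));
    }
}
```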
If batching is enabled on the producer side, messages with different keys are added to the same batch by default. The broker then dispatches the whole batch to one consumer, so the default batch mechanism may break the guaranteed message distribution semantics of Key_Shared subscriptions. The producer needs to use the KeyBasedBatcher.
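A sketch of selecting the key-based batcher with the builder API; treat this as an outline rather than a complete program (it assumes an existing client, as in the earlier examples):

```java
Producer<byte[]> producer = client.newProducer()
        .topic("my-topic")
        .batcherBuilder(BatcherBuilder.KEY_BASED) // batch messages per key
        .create();
```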
Or the producer can disable batching.
Producer producer = client.newProducer()
.topic("my-topic")
.enableBatching(false)
.create();
Reader
With the reader interface, Pulsar clients can "manually position" themselves within a topic and read all messages from a specified message onward. The Pulsar API for Java enables you to create Reader objects by specifying a topic and a MessageId.
The following is an example.
byte[] msgIdBytes = // Some message ID byte array
MessageId id = MessageId.fromByteArray(msgIdBytes);
Reader reader = pulsarClient.newReader()
.topic(topic)
.startMessageId(id)
.create();
while (true) {
Message message = reader.readNext();
    // Process the message
}
In the example above, a Reader object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message identified by msgIdBytes (how that value is obtained depends on the application).
The code sample above shows pointing the Reader object to a specific message (by ID), but you can also use MessageId.earliest to point to the earliest available message on the topic and MessageId.latest to point to the most recent message.
When you create a reader, you can configure it with loadConf. The following parameters are available in loadConf.
| Type | Name | Description | Default |
|----|----|----|----|
| String | topicName | Topic name. | None |
| int | receiverQueueSize | Size of a consumer's receiver queue, that is, the number of messages that can be accumulated by a consumer before an application calls Receive.<br/>A value higher than the default increases consumer throughput at the expense of more memory utilization. | 1000 |
| ReaderListener | readerListener | A listener that is called for each message received. | None |
| String | readerName | Reader name. | null |
| String | subscriptionRolePrefix | Prefix of subscription role. | null |
| CryptoKeyReader | cryptoKeyReader | Interface that abstracts access to a key store. | null |
| ConsumerCryptoFailureAction | cryptoFailureAction | The action the reader takes when it receives a message that cannot be decrypted.<br/>FAIL: the default option; fail messages until crypto succeeds.<br/>DISCARD: silently acknowledge and do not deliver the message to the application.<br/>CONSUME: deliver encrypted messages to the application; it is the application's responsibility to decrypt them. Decompression of the message fails, and if messages contain batch messages, the client is not able to retrieve individual messages in the batch. A delivered encrypted message contains an {@link EncryptionContext} with the encryption and compression information, which the application can use to decrypt the consumed message payload. | ConsumerCryptoFailureAction.FAIL |
| boolean | readCompacted | If readCompacted is enabled, a consumer reads messages from a compacted topic rather than the full message backlog of a topic.<br/>The consumer only sees the latest value for each key in the compacted topic, up until the point in the topic where compaction was applied. Beyond that point, messages are sent as normal.<br/>readCompacted can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions).<br/>Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions causes the subscribe call to throw a PulsarClientException. | false |
| boolean | resetIncludeHead | If set to true, the first message to be returned is the one specified by messageId.<br/>If set to false, the first message to be returned is the one next after the message specified by messageId. | false |
Sticky key range reader
With a sticky key range reader, the broker only dispatches messages whose key hash falls within the specified key hash ranges. You can specify multiple key hash ranges on a reader.
The following is an example to create a sticky key range reader.
pulsarClient.newReader()
.topic(topic)
.startMessageId(MessageId.earliest)
.keyHashRange(Range.of(0, 10000), Range.of(20001, 30000))
.create();
Total hash range size is 65536, so the max end of the range should be less than or equal to 65535.
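To make the range arithmetic concrete, here is a small, self-contained sketch of how a key could map into the 0-65535 hash space and be matched against the reader's ranges. The slot function below is a stand-in for illustration (Pulsar uses its own internal hash, so real slot values differ):

```java
public class KeyHashRangeDemo {
    static final int TOTAL_HASH_RANGE = 65536; // Pulsar's fixed key hash space

    // Stand-in hash: maps a key into [0, 65535]. Pulsar's actual hash differs.
    static int slot(String key) {
        return (key.hashCode() & Integer.MAX_VALUE) % TOTAL_HASH_RANGE;
    }

    // True if the slot falls inside any of the inclusive [start, end] ranges.
    static boolean inRanges(int slot, int[][] ranges) {
        for (int[] r : ranges) {
            if (slot >= r[0] && slot <= r[1]) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Same ranges as the reader example: [0, 10000] and [20001, 30000].
        int[][] ranges = {{0, 10000}, {20001, 30000}};
        for (String key : new String[]{"key-1", "key-2", "key-3"}) {
            int s = slot(key);
            System.out.println(key + " -> slot " + s + ", selected=" + inRanges(s, ranges));
        }
    }
}
```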
Schema
In Pulsar, all message data consists of byte arrays "under the hood." Message schemas enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a producer without specifying a schema, then the producer can only produce messages of type byte[]. The following is an example.
Producer<byte[]> producer = client.newProducer()
.topic(topic)
.create();
The producer above is equivalent to a Producer<byte[]>
(in fact, you should always explicitly specify the type). If you’d like to use a producer for a different type of data, you’ll need to specify a schema that informs Pulsar which data type will be transmitted over the topic.
Schema example
Suppose you have a SensorReading class that you want to transmit over a Pulsar topic:
public class SensorReading {
public float temperature;
public SensorReading(float temperature) {
this.temperature = temperature;
}
// A no-arg constructor is required
public SensorReading() {
}
public float getTemperature() {
return temperature;
}
public void setTemperature(float temperature) {
this.temperature = temperature;
}
}
You could then create a Producer<SensorReading>
(or Consumer<SensorReading>
) like this:
Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
.topic("sensor-readings")
.create();
The following schema formats are currently available for Java:
No schema or the byte array schema (which can be applied using Schema.BYTES):
Producer<byte[]> bytesProducer = client.newProducer(Schema.BYTES)
.topic("some-raw-bytes-topic")
.create();
Or equivalently:
Producer<byte[]> bytesProducer = client.newProducer()
.topic("some-raw-bytes-topic")
.create();
String for normal UTF-8-encoded string data. Apply the schema using Schema.STRING:
Producer<String> stringProducer = client.newProducer(Schema.STRING)
.topic("some-string-topic")
.create();
Create JSON schemas for POJOs using Schema.JSON. The following is an example.
Producer<MyPojo> pojoProducer = client.newProducer(Schema.JSON(MyPojo.class))
.topic("some-pojo-topic")
.create();
Generate Protobuf schemas using Schema.PROTOBUF. The following example shows how to create the Protobuf schema and use it to instantiate a new producer:
Producer<MyProtobuf> protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class))
.topic("some-protobuf-topic")
.create();
Define Avro schemas with Schema.AVRO. The following code snippet demonstrates how to create and use an Avro schema.
Producer<MyAvro> avroProducer = client.newProducer(Schema.AVRO(MyAvro.class))
.topic("some-avro-topic")
.create();
Authentication
Pulsar currently supports three authentication schemes: TLS, Athenz, and OAuth2. You can use the Pulsar Java client with all of them.
TLS Authentication
To use TLS, you need to set TLS to true using the setUseTls method, point your Pulsar client to the TLS certificate path, and provide paths to the certificate and key files.
The following is an example.
Map<String, String> authParams = new HashMap<>();
authParams.put("tlsCertFile", "/path/to/client-cert.pem");
authParams.put("tlsKeyFile", "/path/to/client-key.pem");
Authentication tlsAuth = AuthenticationFactory
.create(AuthenticationTls.class.getName(), authParams);
PulsarClient client = PulsarClient.builder()
.serviceUrl("pulsar+ssl://my-broker.com:6651")
.enableTls(true)
.tlsTrustCertsFilePath("/path/to/cacert.pem")
.authentication(tlsAuth)
.build();
Athenz
To use Athenz as an authentication provider, you need to use TLS and provide values for the following four parameters in a hash:
tenantDomain
tenantService
providerDomain
privateKey
You can also set an optional keyId
. The following is an example.
Map<String, String> authParams = new HashMap<>();
authParams.put("tenantDomain", "shopping"); // Tenant domain name
authParams.put("tenantService", "some_app"); // Tenant service name
authParams.put("providerDomain", "pulsar"); // Provider domain name
authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path
authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0")
Authentication athenzAuth = AuthenticationFactory
.create(AuthenticationAthenz.class.getName(), authParams);
PulsarClient client = PulsarClient.builder()
.serviceUrl("pulsar+ssl://my-broker.com:6651")
.enableTls(true)
.tlsTrustCertsFilePath("/path/to/cacert.pem")
.authentication(athenzAuth)
.build();
Supported formats
The privateKey parameter supports the following three patterns:
- file:///path/to/file
- file:/path/to/file
- data:application/x-pem-file;base64,<base64-encoded value>
Oauth2
The following example shows how to use Oauth2 as an authentication provider for the Pulsar Java client.
PulsarClient client = PulsarClient.builder()
.serviceUrl("pulsar://broker.example.com:6650/")
.authentication(
AuthenticationFactoryOAuth2.clientCredentials(this.issuerUrl, this.credentialsUrl, this.audience))
.build();
In addition, you can use encoded parameters to configure authentication for the Pulsar Java client.
Authentication auth = AuthenticationFactory
    .create(AuthenticationOAuth2.class.getName(), "{\"type\":\"client_credentials\",\"privateKey\":\"...\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
PulsarClient client = PulsarClient.builder()
    .serviceUrl("pulsar://broker.example.com:6650/")
    .authentication(auth)
    .build();