Tiered storage uses Apache jclouds to support Amazon S3 and Google Cloud Storage (GCS) for long term storage. With jclouds, it is easy to add support for more cloud storage providers in the future.
Tiered storage uses Apache Hadoop to support filesystems for long term storage. With Hadoop, it is easy to add support for more filesystems in the future.
Tiered storage should be used when you have a topic for which you want to keep a very long backlog. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history.
A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed. The data within a sealed segment is immutable. This is known as a segment oriented architecture.
The Tiered Storage offloading mechanism takes advantage of this segment oriented architecture. When offloading is requested, the segments of the log are copied one-by-one to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.
On the broker, the administrator must configure the bucket and credentials for the cloud storage service. The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail.
Pulsar uses multipart objects to upload the segment data. It is possible that a broker could crash while uploading the data. We recommend you add a lifecycle rule to your bucket to expire incomplete multipart uploads after a day or two, to avoid getting charged for incomplete uploads.
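As a sketch, a lifecycle rule like the following (in the S3 lifecycle configuration JSON format; apply it to your bucket via the AWS console or CLI) would abort incomplete multipart uploads after two days:

```json
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 2
      }
    }
  ]
}
```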
When ledgers are offloaded to long term storage, you can still query the data in the offloaded ledgers with Pulsar SQL.
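For example, data in a hypothetical topic topic1 under the my-tenant/my-namespace namespace could be queried through Pulsar SQL like this (topic and namespace names are illustrative):

```sql
select * from pulsar."my-tenant/my-namespace"."topic1";
```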
Offloading is configured in broker.conf.
At a minimum, the administrator must configure the driver, the bucket and the authenticating credentials. There are also other knobs to configure, such as the bucket region and the max block size in the backing storage.
Currently we support the following driver types:
- aws-s3: Simple Cloud Storage Service
- google-cloud-storage: Google Cloud Storage
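For example, to select the AWS S3 driver, the broker configuration would contain a line like the following (a minimal sketch; the remaining required settings are covered in the rest of this page):

```conf
managedLedgerOffloadDriver=aws-s3
```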
Bucket and Region
Buckets are the basic containers that hold your data. Everything that you store in Cloud Storage must be contained in a bucket. You can use buckets to organize your data and control access to your data, but unlike directories and folders, you cannot nest buckets.
s3ManagedLedgerOffloadBucket=pulsar-topic-offload
With AWS S3, the default region is US East (N. Virginia). The AWS Regions and Endpoints page contains more information.
s3ManagedLedgerOffloadRegion=eu-west-3
Authenticating with AWS
To be able to access AWS S3, you need to authenticate with AWS S3. Pulsar does not provide any direct means of configuring authentication for AWS S3, but relies on the mechanisms supported by the DefaultAWSCredentialsProviderChain.
Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.
- Use EC2 instance metadata credentials
If you are on an AWS instance with an instance profile that provides credentials, Pulsar will use these credentials if no other mechanism is provided.
- Set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in conf/pulsar_env.sh.
export AWS_ACCESS_KEY_ID=ABC123456789
export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
- Add the Java system properties aws.accessKeyId and aws.secretKey to PULSAR_EXTRA_OPTS in conf/pulsar_env.sh.
PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024"
- Set the access credentials in ~/.aws/credentials.
aws_access_key_id=ABC123456789
aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
- Assume an IAM role
If you want to assume an IAM role, this can be done by specifying the following:
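A sketch of the relevant broker.conf settings (parameter names assumed from the broker configuration; the role ARN is a placeholder you must replace):

```conf
s3ManagedLedgerOffloadRole=<aws role arn>
s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload
```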
This will use the DefaultAWSCredentialsProviderChain for assuming this role.
Configuring block read/write sizes
Pulsar also provides some knobs to configure the size of requests sent to AWS S3.
- s3ManagedLedgerOffloadMaxBlockSizeInBytes configures the maximum size of a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- s3ManagedLedgerOffloadReadBufferSizeInBytes configures the block size for each individual read when reading back data from AWS S3. Default is 1MB.
In both cases, these should not be touched unless you know what you are doing.
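If you do need to change them, the defaults described above correspond to the following broker.conf entries (a sketch; values are in bytes):

```conf
s3ManagedLedgerOffloadMaxBlockSizeInBytes=67108864
s3ManagedLedgerOffloadReadBufferSizeInBytes=1048576
```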
Buckets are the basic containers that hold your data. Everything that you store in Cloud Storage must be contained in a bucket. You can use buckets to organize your data and control access to your data, but unlike directories and folders, you cannot nest buckets.
gcsManagedLedgerOffloadBucket=pulsar-topic-offload
Bucket Region is the region where the bucket is located. Bucket Region is not a required but a recommended configuration. If it is not configured, the default region will be used.
Regarding GCS, buckets are created in the us multi-regional location by default. The Bucket Locations page contains more information.
gcsManagedLedgerOffloadRegion=europe-west3
Authenticating with GCS
The administrator needs to configure gcsManagedLedgerOffloadServiceAccountKeyFile in broker.conf for the broker to be able to access the GCS service. gcsManagedLedgerOffloadServiceAccountKeyFile is a JSON file, containing the GCS credentials of a service account. The Google Cloud documentation on service accounts contains more information on how to create this key file for authentication. More information about Google Cloud IAM is available in the Google Cloud IAM documentation.
- Open the Service accounts page.
- Click Create service account.
- In the Create service account window, type a name for the service account, and select Furnish a new private key. If you want to grant G Suite domain-wide authority to the service account, also select Enable G Suite Domain-wide Delegation.
- Click Create.
gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"
Configuring block read/write sizes
Pulsar also provides some knobs to configure the size of requests sent to GCS.
- gcsManagedLedgerOffloadMaxBlockSizeInBytes configures the maximum size of a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
- gcsManagedLedgerOffloadReadBufferSizeInBytes configures the block size for each individual read when reading back data from GCS. Default is 1MB.
In both cases, these should not be touched unless you know what you are doing.
Configuring the connection address
You can configure the connection address in the broker.conf file.
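For example, with a hypothetical HDFS NameNode listening at 127.0.0.1:9000, the connection address could look like this (the fileSystemURI parameter name is assumed from the filesystem offloader configuration; adjust host and port for your deployment):

```conf
fileSystemURI=hdfs://127.0.0.1:9000
```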
Configuring the Hadoop profile path
The configuration file is stored in the Hadoop profile path. It contains various settings, such as base path, authentication, and so on.
fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"
The model for storing topic data uses org.apache.hadoop.io.MapFile. You can use all of the configurations in org.apache.hadoop.io.MapFile for Hadoop.
Example
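A minimal sketch of a filesystem_offload_core_site.xml (the property names are standard Hadoop settings; the values are illustrative and should be adjusted for your deployment):

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://127.0.0.1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>pulsar</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
</configuration>
```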
For more information about the configurations in org.apache.hadoop.io.MapFile, see Filesystem Storage.
Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation will be triggered. Setting a negative value for the threshold will disable automatic offloading. Setting the threshold to 0 will cause the broker to offload data as soon as it possibly can.
$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
Offloading can be manually triggered through a REST endpoint on the Pulsar broker. We provide a CLI which will call this REST endpoint for you.
When triggering offload, you must specify the maximum size, in bytes, of backlog which will be retained locally in BookKeeper. The offload mechanism will offload segments from the start of the topic backlog until this condition is met.
$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
The command to trigger an offload will not wait until the offload operation has completed. To check the status of the offload, use offload-status.
$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
Offload is currently running
To wait for offload to complete, add the -w flag.
$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
Offload was a success
If there is an error offloading, the error will be propagated to the offload-status command.
$ bin/pulsar-admin topics offload-status persistent://public/default/topic1
Error in offload
null