SeaweedFS is fast. However, it is limited by the available number of volume servers.

One good way to scale beyond that is to combine SeaweedFS' fast local access speed with the elastic capacity of cloud storage.

With a fixed number of servers, this transparent cloud integration gives SeaweedFS virtually unlimited capacity in addition to its fast speed. To increase throughput, just add more local SeaweedFS volume servers.
If one volume is tiered to the cloud:
- The volume is marked as read-only.
- The `.dat` file is moved to the cloud.
- The index file stays local.
- The same O(1) disk read is applied to the remote file: when requesting a file entry, a single range request retrieves the entry's content.
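For illustration, suppose the local index resolves a file entry to offset 123456 with size 2048 (hypothetical numbers). The volume server then issues the equivalent of one HTTP range request against the remote `.dat` object; the bucket and URL below are placeholders:

```
# one range read fetches exactly the 2048 bytes at offset 123456
curl -H "Range: bytes=123456-125503" https://s3.amazonaws.com/your_bucket/37.dat
```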
- Use `weed scaffold -conf=master` to generate `master.toml`, tweak it, and start the master server with the `master.toml`.
- Use `volume.tier.upload` in `weed shell` to move volumes to the cloud.
- Use `volume.tier.download` in `weed shell` to move volumes back to the local cluster.
Multiple S3 buckets are supported; usually you only need to configure one backend.
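For reference, the generated `master.toml` contains a `[storage.backend]` section along these lines; the region and bucket values here are placeholders, so check your generated file for the exact keys:

```
[storage.backend]
  [storage.backend.s3.default]
  enabled = true
  aws_access_key_id = ""       # if empty, loads from the shared credentials file (~/.aws/credentials)
  aws_secret_access_key = ""   # if empty, loads from the shared credentials file (~/.aws/credentials)
  region = "us-east-2"
  bucket = "your_bucket_name"  # an existing bucket

  # a second backend, addressable as s3.name2
  [storage.backend.s3.name2]
  enabled = true
  region = "us-east-2"
  bucket = "your_second_bucket_name"
```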
After this is configured, you can use the following commands in `weed shell` to upload a volume's `.dat` file content to the cloud.
```
// move the volume 37.dat to the s3 cloud
volume.tier.upload -dest=s3.default -collection=benchmark -volumeId=37

// if for any reason you want to move the volume to a different bucket
volume.tier.upload -dest=s3.name2 -collection=benchmark -volumeId=37
```
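Later, to move the volume back from the cloud to local volume servers, use `volume.tier.download` with the same collection and volume id. A sketch, assuming the flags mirror `volume.tier.upload`:

```
// download volume 37 back to the local cluster
volume.tier.download -collection=benchmark -volumeId=37
```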