After reading this section, you’ll have a better understanding of the concepts related to volume size.

  • The volume size is what you set during volume creation; we will call it the nominal size in this doc to avoid ambiguity.
  • Since the volume itself is just a CRD object in Kubernetes and the data is stored in each replica, this is actually the nominal size of each replica.
  • We call this field the “nominal size” because Longhorn replicas use sparse files to store data, and this value is the apparent size of those sparse files (the maximum size to which they may expand). The actual space used by each replica is not necessarily equal to this nominal size.
  • Based on this nominal size, the replicas are scheduled during volume creation to nodes that have enough allocatable space. (See this doc for more info about node allocation size.)
  • The nominal size determines the maximum available space when the volume is in use. In other words, the active data size held by a volume cannot be greater than its nominal size.

  • The actual size indicates the actual space used by each replica on the corresponding node.
  • Since all historical data stored in the snapshots, as well as the active data, counts toward the actual size, the final value can be greater than the nominal size.
  • The actual size is shown only while the volume is running.
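The distinction can be seen directly with sparse files on any local filesystem. The sketch below is plain Python and not Longhorn-specific: the apparent size of a sparse file plays the role of the nominal size, while the blocks actually allocated on disk play the role of the actual size.

```python
# Sketch: apparent size of a sparse file ("nominal size") vs. the blocks
# really allocated on disk ("actual size"). Illustrative only; real Longhorn
# replicas also store metadata, so their actual size is never exactly 0.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.truncate(path, 1 << 30)           # apparent size: 1 GiB, nothing allocated

st = os.stat(path)
apparent, allocated = st.st_size, st.st_blocks * 512
print(f"apparent={apparent}, allocated={allocated}")   # allocated is ~0

with open(path, "r+b") as f:         # writing data allocates real blocks
    f.write(os.urandom(4 * 1024 * 1024))

st = os.stat(path)
print(f"after write: allocated={st.st_blocks * 512}")  # grows by ~4 MiB
os.unlink(path)
```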

In the following example, we explain how the nominal size and the actual size change after a series of I/O and snapshot-related operations.

  1. Create a 12 Gi volume with a single replica, then attach and mount it on a node. See Figure 1 of the illustration.
    • For the empty volume, the nominal size is 12 Gi and the actual size is almost 0.
    • There is some metadata in the volume, hence the actual size is 260 Mi rather than exactly 0.

  2. Write 4 Gi of data (data#0) at the volume mount point. The actual size increases by 4 Gi because of the blocks allocated in the replica for the 4 Gi of data. Meanwhile, the df command in the filesystem also shows 4 Gi of used space. See Figure 2 of the illustration.

  3. Delete the 4 Gi of data. The df command then shows that the used space of the filesystem is nearly 0, but the actual size is unchanged.

  4. Rewrite 4 Gi of data (data#1). The df command in the filesystem shows 4 Gi of used space again. However, the actual size increases by another 4 Gi and becomes 8.25 Gi, since the filesystem does not necessarily reuse the blocks freed in the previous step. See Figure 3(a) of the illustration.

  5. Take a snapshot (snapshot#1). See Figure 4 of the illustration.
    • Now data#1 is stored in snapshot#1.
    • The new volume head size is almost 0.
    • With the volume head and the snapshot included, the actual size remains 8.25 Gi.

  6. Write 8 Gi of data (data#2) at the volume mount point, then take one more snapshot (snapshot#2). See Figure 5 of the illustration.

    • Now the actual size is 16.2 Gi, which is greater than the volume nominal size.
    • From the filesystem’s perspective, the overlapping part between the two snapshots consists of blocks that are simply reused or overwritten. In Longhorn’s terms, however, these blocks are fresh ones held in another snapshot or the volume head. See the 2 snapshots in Figure 6.

  7. Delete snapshot#1 and wait for the snapshot purge to complete. See Figure 7 of the illustration.
    • Here Longhorn actually coalesces snapshot#1 with snapshot#2.
    • For the overlapping part during coalescing, the newer data (data#2) is retained in the blocks. Some historical data is thus removed and the volume shrinks (from 16.2 Gi to 11.4 Gi in the example).
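The accounting in steps 2–7 can be sketched with a toy block model. This is illustrative only and the names are hypothetical: one “block” stands for 1 Gi, whereas real replicas use small blocks in sparse files and carry metadata, hence the 8.25/16.2/11.4 Gi figures above rather than the round numbers below.

```python
# Toy model of replica size accounting for steps 2-7 (1 block = 1 Gi).
class ToyVolume:
    def __init__(self, nominal):
        self.nominal = nominal
        self.snapshots = []   # older layers: dict of block index -> data tag
        self.head = {}        # live layer that receives new writes

    def write(self, blocks, tag):
        for b in blocks:
            self.head[b] = tag          # allocating a block grows actual size

    def take_snapshot(self):
        self.snapshots.append(self.head)
        self.head = {}                  # new, almost empty volume head

    def delete_snapshot(self, i):
        # Coalesce layer i into the next newer layer; newer data wins on overlap.
        newer = self.snapshots[i + 1] if i + 1 < len(self.snapshots) else self.head
        merged = {**self.snapshots[i], **newer}
        if i + 1 < len(self.snapshots):
            self.snapshots[i + 1] = merged
        else:
            self.head = merged
        del self.snapshots[i]

    def actual_size(self):
        return sum(len(layer) for layer in self.snapshots + [self.head])

v = ToyVolume(nominal=12)
v.write(range(0, 4), "data#0")              # step 2: 4 Gi written
# step 3: deleting data#0 frees space in the filesystem, not in the replica
v.write(range(4, 8), "data#1")              # step 4: rewrite lands in fresh blocks
print(v.actual_size())                      # 8 -> old blocks are not reclaimed
v.take_snapshot()                           # step 5: snapshot#1 holds blocks 0-7
v.write([*range(0, 4), *range(8, 12)], "data#2")  # step 6: fs reuses freed addresses
v.take_snapshot()                           # snapshot#2
print(v.actual_size())                      # 16 -> exceeds the 12 Gi nominal size
v.delete_snapshot(0)                        # step 7: coalesce into snapshot#2
print(v.actual_size())                      # 12 -> data#2 wins on the overlap
```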

  8. Delete all existing data (data#2) and write 11.5 Gi of data (data#3) at the volume mount point. See Figure 8 of the illustration.
    • This makes the volume head’s actual size become 11.5 Gi and the volume’s total actual size 22.9 Gi.

  9. Try to delete the only snapshot (snapshot#2) of the volume. See Figure 9 of the illustration.
    • The snapshot directly behind the volume head cannot be cleaned up. If users try to delete this kind of snapshot, Longhorn marks it as Removing, hides it, then tries to free the part of the snapshot file that overlaps with the volume head. This last operation is called snapshot prune in Longhorn and is available since v1.3.0.
    • Since in the example both the snapshot and the volume head use up most of the nominal space, the overlapping part almost equals the snapshot’s actual size. After the pruning, the snapshot’s actual size is down to 259 Mi and the volume shrinks from 22.9 Gi to 11.8 Gi.
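The prune in step 9 amounts to set subtraction on the overlapping block addresses. The same toy simplification as before (1 Gi per block; the real snapshot retains some metadata, hence 259 Mi rather than 0):

```python
# Toy model of the snapshot prune in step 9 (1 block = 1 Gi; illustrative only).
snapshot2 = set(range(12))     # ~11-12 Gi of data#2 block addresses
head = set(range(12))          # data#3 rewrote essentially every address
overlap = snapshot2 & head     # blocks also held by the newer volume head
snapshot2 -= overlap           # prune: free the overlap from the snapshot file
print(len(snapshot2), len(head) + len(snapshot2))   # 0 12
```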

Here we summarize the important points about disk space usage from the example:

  • Longhorn does not support TRIM/UNMAP operations. Hence, deleting files from a filesystem will not decrease or shrink the volume’s actual size.

  • Allocated but unused blocks are not reused

    Deleting files and then writing new ones can keep increasing the actual size, since the filesystem may not reuse the blocks recently freed by deleted files. Therefore, choosing an appropriate nominal size according to the I/O pattern for a volume that handles heavy write workloads makes disk space usage more efficient.

  • By deleting snapshots, the overlapping part of the used blocks may be eliminated, regardless of whether those blocks were recently released by the filesystem or still contain historical data.

  1. Reserve enough free space on disks as a buffer in case the actual size of existing volumes keeps growing.

    • A general estimate of the maximum space consumption of a volume is

        maximum space consumption = average snapshot actual size × (N + 1)

      • where N is the total number of snapshots the volume contains (including the volume head), and the extra 1 is for the temporary space that may be required by snapshot deletion.
      • The average actual size of the snapshots varies and depends on the use case. If snapshots are created periodically for a volume (e.g. by relying on snapshot recurring jobs), the average value would be the average size of the data modified in the volume during the snapshot creation interval. If there are heavy write workloads on the volumes, the average actual size of a head/snapshot would approach the volume nominal size. In this case, it’s better to set Storage Over Provisioning Percentage to be smaller than 100% to avoid disk space exhaustion.
      • Some extended cases:

        • Users don’t want snapshots at all: neither manually created snapshots nor recurring jobs will be launched. Assuming the automatic cleanup of system-generated snapshots is enabled, the formula would become:

            maximum space consumption = nominal size × (1 + 1 + 1 + 1) = 4 × nominal size

          • The worst case that leads to this much space usage:
            1. Somehow the 1st rebuilding/expansion is triggered, which leads to the creation of the 1st system snapshot.
            2. The 1st purge following the 1st rebuilding does nothing.
            3. Before another purge/prune happens, data is written to the new volume head and the 2nd rebuilding/expansion is triggered. Then the 2nd system snapshot is created.
            4. The 2nd purge following the 2nd rebuilding leads to the coalescing of the 2 system snapshots. This coalescing requires temporary space.
          • The explanation of the formula:
            • The 1st 1 means the volume head.
            • The 2nd 1 is the first system snapshot mentioned in the worst case.
            • The 3rd 1 is the second system snapshot mentioned in the worst case.
            • The 4th 1 is for the temporary space that may be required by the 2 system snapshot purge/coalescing.
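The arithmetic behind the two estimates above can be sketched as follows; the helper function is hypothetical, not a Longhorn API, and assumes the reconstructed formulas.

```python
# Arithmetic for the worst-case space estimates (hypothetical helper).
def max_space(avg_snapshot_actual_size, n_layers):
    """Worst-case space for one replica: n_layers counts the volume head plus
    all snapshots; the extra 1 is temporary space for snapshot deletion."""
    return avg_snapshot_actual_size * (n_layers + 1)

nominal = 12  # Gi, as in the example volume
# Heavy-write volume where every layer can grow to the nominal size:
print(max_space(nominal, 3))   # head + 2 snapshots -> 48 Gi worst case
# No user snapshots (head + 2 system snapshots + temporary space):
print(4 * nominal)             # the 4 x nominal case, also 48 Gi here
```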
  2. Do not retain too many snapshots for the volumes.

  3. Cleaning up snapshots will help reclaim disk space. There are two ways to clean up snapshots:

    • Delete the snapshots manually via Longhorn UI.

    Also, notice that extra space, up to the volume nominal size, is required during snapshot cleanup and merge.