This operation is very cheap: it just increments a counter in the master server's memory.

    Lookup Volume

    curl "http://localhost:9333/dir/lookup?volumeId=3&pretty=y"
    {
      "locations": [
        {
          "publicUrl": "localhost:8080",
          "url": "localhost:8080"
        }
      ]
    }
    # Other usages:
    # You can pass the whole file id if you do not want to parse out the volume id.
    curl "http://localhost:9333/dir/lookup?volumeId=3,01637037d6"
    # If you know the collection, specify it; the lookup will be a little faster.
    curl "http://localhost:9333/dir/lookup?volumeId=3&collection=turbo"
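    For example, here is a small shell sketch (assuming jq is installed; the file id and master address are illustrative) that extracts the volume id from a file id, looks it up, and fetches the file from the returned location:

    FID="3,01637037d6"        # a file id: <volume id>,<file key and cookie>
    VOLUME_ID="${FID%%,*}"    # the volume id is the part before the comma
    PUBLIC_URL=$(curl -s "http://localhost:9333/dir/lookup?volumeId=${VOLUME_ID}" | jq -r '.locations[0].publicUrl')
    curl -s "http://${PUBLIC_URL}/${FID}" -o output.dat

    Any location in the returned list can serve the file; picking one at random spreads read load across replicas.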

    If your system has many deletions, the disk space of deleted files is not reclaimed synchronously. A background job checks each volume's disk usage; when the reclaimable space exceeds the threshold (0.3 by default), the vacuum job makes the volume read-only, creates a new volume containing only the live files, and switches over to it. If you are impatient or just testing, you can vacuum the unused space manually, as sketched below.
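    The manual trigger is a single call to the master (a sketch assuming the /vol/vacuum endpoint; the optional garbageThreshold parameter overrides the 0.3 default for this run):

    # ask the master to vacuum volumes now
    curl "http://localhost:9333/vol/vacuum"
    # only vacuum volumes with more than 40% reclaimable space
    curl "http://localhost:9333/vol/vacuum?garbageThreshold=0.4"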

    This operation is not trivial: it copies the .dat and .idx files while skipping deleted entries, switches to the new files, and then removes the old ones.

    Pre-Allocate Volumes

    # specify a replication strategy
    curl "http://localhost:9333/vol/grow?replication=000&count=4"
    {"count":4}
    # specify a collection
    curl "http://localhost:9333/vol/grow?collection=turbo&count=4"
    # specify a data center
    curl "http://localhost:9333/vol/grow?dataCenter=dc1&count=4"
    # specify a ttl
    curl "http://localhost:9333/vol/grow?ttl=5d&count=4"

    Each of these calls generates 4 empty volumes.
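    If you script the pre-allocation, the {"count":4} response shown above can be checked to confirm the volumes were actually created (a sketch; jq is an assumed dependency):

    CREATED=$(curl -s "http://localhost:9333/vol/grow?count=4" | jq '.count // 0')
    [ "$CREATED" -eq 4 ] || echo "volume grow created only $CREATED volumes" >&2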

    Check System Status

    1. curl "http://10.0.2.15:9333/cluster/status?pretty=y"
    2. {
    3. "IsLeader": true,
    4. "Leader": "10.0.2.15:9333",
    5. "Peers": [
    6. "10.0.2.15:9334",
    7. "10.0.2.15:9335"
    8. ]
    9. }
    curl "http://localhost:9333/dir/status?pretty=y"
    {
      "Topology": {
        "DataCenters": [
          {
            "Free": 3,
            "Id": "dc1",
            "Max": 7,
            "Racks": [
              {
                "DataNodes": [
                  {
                    "Free": 3,
                    "Max": 7,
                    "PublicUrl": "localhost:8080",
                    "Url": "localhost:8080",
                    "Volumes": 4
                  }
                ],
                "Free": 3,
                "Id": "DefaultRack",
                "Max": 7
              }
            ]
          },
          {
            "Free": 21,
            "Id": "dc3",
            "Max": 21,
            "Racks": [
              {
                "DataNodes": [
                  {
                    "Free": 7,
                    "Max": 7,
                    "PublicUrl": "localhost:8081",
                    "Url": "localhost:8081",
                    "Volumes": 0
                  }
                ],
                "Free": 7,
                "Id": "rack1",
                "Max": 7
              },
              {
                "DataNodes": [
                  {
                    "Free": 7,
                    "Max": 7,
                    "PublicUrl": "localhost:8082",
                    "Url": "localhost:8082",
                    "Volumes": 0
                  },
                  {
                    "Free": 7,
                    "Max": 7,
                    "PublicUrl": "localhost:8083",
                    "Url": "localhost:8083",
                    "Volumes": 0
                  }
                ],
                "Free": 14,
                "Id": "DefaultRack",
                "Max": 14
              }
            ]
          }
        ],
        "Free": 24,
        "Max": 28,
        "layouts": [
          {
            "collection": "",
            "replication": "000",
            "writables": [
              1,
              2,
              3,
              4
            ]
          }
        ]
      },
      "Version": "0.47"
    }