- Deploy YDB On-Premises
- Prepare and format disks on each server
- Create TLS certificates using OpenSSL
- Initialize a cluster
- Start the DB dynamic node
Before you start
Make sure you have SSH access to all servers. This is necessary to install artifacts and run the YDB binary file.
Your network configuration must allow TCP connections on the following ports (by default):
- 2135, 2136: gRPC for client-cluster interaction.
- 19001, 19002: Interconnect for intra-cluster node interaction.
- 8765, 8766: The HTTP interface for cluster monitoring.
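You can quickly confirm that the required ports are reachable from a client machine; the hostname below is illustrative:
# Check TCP connectivity to the default YDB ports (node1.ydb.tech is illustrative)
for port in 2135 2136 8765 8766 19001 19002; do
  nc -vz node1.ydb.tech "$port"
done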
Select the servers and disks to be used for data storage:
- Use the block-4-2 fault tolerance model for cluster deployment in one availability zone (AZ). To survive the loss of 2 nodes, use at least 8 nodes.
- Use the mirror-3-dc fault tolerance model for cluster deployment in three availability zones (AZ). To survive the loss of 1 AZ and 1 node in another AZ, use at least 9 nodes. The number of nodes in each AZ should be the same.
Run each static node on a separate server.
Create a system user and a group to run YDB under
On each server where YDB will be running, execute:
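# Create the group and user that the YDB processes will run under
sudo groupadd ydb
sudo useradd ydb -g ydb -m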
To make sure the YDB server has access to the block-store disks, add the user that the process will run under to the disk group:
sudo usermod -aG disk ydb
Prepare and format disks on each server
Warning
We don’t recommend using disks that are used by other processes (including the OS) for data storage.
- Create a partition on the selected disk
Alert
Be careful! The following step will delete all partitions on the specified disks. Make sure the disks you specify contain no other data!
sudo parted /dev/nvme0n1 mklabel gpt -s
sudo parted -a optimal /dev/nvme0n1 mkpart primary 0% 100%
sudo parted /dev/nvme0n1 name 1 ydb_disk_01
sudo partx --update /dev/nvme0n1
As a result, a disk labeled /dev/disk/by-partlabel/ydb_disk_01 will appear in the system.
If you plan to use more than one disk on each server, specify a unique label for each of them instead of ydb_disk_01. You'll need these labels later in the configuration files.
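You can confirm that the label is visible before moving on:
# The ydb_disk_01 label should appear in this listing
ls -l /dev/disk/by-partlabel/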
Download an archive with the ydbd executable file and the libraries necessary for working with YDB:
curl https://binaries.ydb.tech/ydbd-main-linux-amd64.tar.gz | tar -xz
Create the directories YDB will run from:
sudo mkdir -p /opt/ydb
sudo chown ydb:ydb /opt/ydb
sudo mkdir /opt/ydb/bin
sudo mkdir /opt/ydb/cfg
sudo mkdir /opt/ydb/lib
- Copy the binary file and libraries to the appropriate directories:
sudo cp -i ydbd-main-linux-amd64/bin/ydbd /opt/ydb/bin/
sudo cp -i ydbd-main-linux-amd64/lib/libaio.so /opt/ydb/lib/
sudo cp -i ydbd-main-linux-amd64/lib/libiconv.so /opt/ydb/lib/
sudo cp -i ydbd-main-linux-amd64/lib/libidn.so /opt/ydb/lib/
- Format the disk with the built-in command:
sudo LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib /opt/ydb/bin/ydbd admin bs disk obliterate /dev/disk/by-partlabel/ydb_disk_01
Perform this operation for each disk that will be used for data storage.
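With several data disks, the same command can be run in a loop; the second label below is illustrative:
# Obliterate every disk that will be used for data storage (labels are illustrative)
for label in ydb_disk_01 ydb_disk_02; do
  sudo LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib /opt/ydb/bin/ydbd admin bs disk obliterate "/dev/disk/by-partlabel/$label"
done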
Prepare the configuration files:
Unprotected mode
Protected mode
Download a sample config for the appropriate failure model of your cluster:
- block-4-2: For a single-datacenter cluster.
- mirror-3dc: For a cross-datacenter cluster consisting of 9 nodes.
- mirror-3dc-3nodes: For a cross-datacenter cluster consisting of 3 nodes.
- In the host_configs section, specify all disks and their types on each cluster node. Possible disk types:
- ROT (rotational): HDD.
- SSD: SSD or NVMe.
host_configs:
- drive:
  - path: /dev/disk/by-partlabel/ydb_disk_01
    type: SSD
  host_config_id: 1
- In the hosts section, specify the FQDN of each node, their configuration, and their location in a data_center or rack.
hosts:
- host: node1.ydb.tech
  host_config_id: 1
  walle_location:
    body: 1
    data_center: 'zone-a'
    rack: '1'
- host: node2.ydb.tech
  host_config_id: 1
  walle_location:
    body: 2
    data_center: 'zone-b'
    rack: '1'
- host: node3.ydb.tech
  host_config_id: 1
  walle_location:
    body: 3
    data_center: 'zone-c'
    rack: '1'
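Interconnect relies on these FQDNs, so it is worth confirming that each one resolves from every node; a minimal check using the hostnames from the sample above:
# Each cluster FQDN must resolve on every node
for h in node1.ydb.tech node2.ydb.tech node3.ydb.tech; do
  getent hosts "$h" || echo "cannot resolve $h"
done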
Save the YDB configuration file as /opt/ydb/cfg/config.yaml
In this mode, traffic between cluster nodes and between the client and cluster is encrypted using the TLS protocol.
Create TLS certificates using OpenSSL
Note
You can use existing TLS certificates. It's important that the certificates support both server and client authentication (extendedKeyUsage = serverAuth,clientAuth).
Create a CA key
Create a directory named secure to store the CA key, and one named certs for certificates and node keys:
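# Directories for the CA key (secure) and for certificates and node keys (certs)
mkdir secure certs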
Create a ca.cnf configuration file with the following content:
[ ca ]
default_ca = CA_default
[ CA_default ]
default_days = 365
database = index.txt
serial = serial.txt
default_md = sha256
copy_extensions = copy
unique_subject = no
[ req ]
prompt=no
distinguished_name = distinguished_name
x509_extensions = extensions
[ distinguished_name ]
organizationName = YDB
commonName = YDB CA
[ extensions ]
keyUsage = critical,digitalSignature,nonRepudiation,keyEncipherment,keyCertSign
basicConstraints = critical,CA:true,pathlen:1
[ signing_policy ]
organizationName = supplied
commonName = optional
[ signing_node_req ]
keyUsage = critical,digitalSignature,keyEncipherment
extendedKeyUsage = serverAuth,clientAuth
# Used to sign client certificates.
[ signing_client_req ]
keyUsage = critical,digitalSignature,keyEncipherment
extendedKeyUsage = clientAuth
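The openssl ca command used below expects the database and serial files named in ca.cnf to exist, so create them alongside it:
# Referenced by the database and serial settings in ca.cnf
touch index.txt
echo 01 > serial.txt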
Create a CA key by running the command:
openssl genrsa -out secure/ca.key 2048
Save this key separately; you'll need it to issue certificates. If it's lost, you'll have to reissue all certificates.
Create a private Certificate Authority (CA) certificate by running the command:
openssl req -new -x509 -config ca.cnf -key secure/ca.key -out certs/ca.crt -days 365 -batch
Create keys and certificates for cluster nodes
Create a node.cnf configuration file with the following content:
# OpenSSL node configuration file
[ req ]
prompt=no
distinguished_name = distinguished_name
req_extensions = extensions
[ distinguished_name ]
organizationName = YDB
[ extensions ]
subjectAltName = DNS:<node>.<domain>
Create a certificate key by running the command:
openssl genrsa -out certs/node.key 2048
Create a Certificate Signing Request (CSR) by running the command:
openssl req -new -sha256 -config node.cnf -key certs/node.key -out node.csr -batch
Create a node certificate with the following command:
openssl ca -config ca.cnf -keyfile secure/ca.key -cert certs/ca.crt -policy signing_policy \
-extensions signing_node_req -out certs/node.crt -outdir certs/ -in node.csr -batch
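You can check that the issued certificate chains back to the CA:
# Verify the node certificate against the CA certificate
openssl verify -CAfile certs/ca.crt certs/node.crt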
Create directories for certificates on each node:
sudo mkdir /opt/ydb/certs
sudo chown ydb:ydb /opt/ydb/certs
sudo chmod 0750 /opt/ydb/certs
Copy the node certificates and keys
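For example, if the certificates were issued on an administrative host, they can be delivered with scp; the hostname is illustrative, and the key must end up readable only by the ydb user:
# Illustrative: copy the CA certificate plus this node's certificate and key
scp certs/ca.crt certs/node.crt certs/node.key root@node1.ydb.tech:/opt/ydb/certs/
ssh root@node1.ydb.tech 'chown ydb:ydb /opt/ydb/certs/* && chmod 0640 /opt/ydb/certs/node.key'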
- In the interconnect_config and grpc_config sections, specify the paths to the node certificate, key, and CA certificate:
interconnect_config:
  start_tcp: true
  encryption_mode: OPTIONAL
  path_to_certificate_file: "/opt/ydb/certs/node.crt"
  path_to_private_key_file: "/opt/ydb/certs/node.key"
  path_to_ca_file: "/opt/ydb/certs/ca.crt"
grpc_config:
  cert: "/opt/ydb/certs/node.crt"
  key: "/opt/ydb/certs/node.key"
  ca: "/opt/ydb/certs/ca.crt"
Save the configuration file as /opt/ydb/cfg/config.yaml
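Once the nodes are up, you can confirm that the gRPC endpoint presents the expected certificate; hostname and port follow the examples above:
# Inspect the TLS certificate served on the gRPC port (run after the nodes are started)
openssl s_client -connect node1.ydb.tech:2135 -CAfile certs/ca.crt </dev/null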
Start static nodes
Manual
Using systemd
sudo su - ydb
cd /opt/ydb
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib
/opt/ydb/bin/ydbd server --log-level 3 --syslog --tcp --yaml-config /opt/ydb/cfg/config.yaml \
--grpc-port 2135 --ic-port 19001 --mon-port 8765 --node static
- On each node, create a configuration file named /etc/systemd/system/ydbd-storage.service with the following content:
[Unit]
Description=YDB storage node
After=network-online.target rc-local.service
Wants=network-online.target
StartLimitInterval=10
StartLimitBurst=15
[Service]
Restart=always
RestartSec=1
User=ydb
PermissionsStartOnly=true
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=ydbd
SyslogFacility=daemon
SyslogLevel=err
Environment=LD_LIBRARY_PATH=/opt/ydb/lib
ExecStart=/opt/ydb/bin/ydbd server --log-level 3 --syslog --tcp --yaml-config /opt/ydb/cfg/config.yaml --grpc-port 2135 --ic-port 19001 --mon-port 8765 --node static
LimitNOFILE=65536
LimitCORE=0
LimitMEMLOCK=3221225472
[Install]
WantedBy=multi-user.target
- Run YDB storage on each node:
sudo systemctl start ydbd-storage
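To confirm the service came up:
# Check the storage node service state and recent log output
sudo systemctl status ydbd-storage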
Initialize a cluster
On one of the cluster nodes, run the command:
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib /opt/ydb/bin/ydbd admin blobstorage config init --yaml-file /opt/ydb/cfg/config.yaml ; echo $?
The command should finish with exit code 0.
Create the first database
To work with tables, you need to create at least one database and run a process serving this database (a dynamic node).
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib /opt/ydb/bin/ydbd admin database /Root/testdb create ssd:1
Start the DB dynamic node
Manual
Using systemd
- Start the YDB dynamic node for the /Root/testdb database:
sudo su - ydb
cd /opt/ydb
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib
/opt/ydb/bin/ydbd server --grpc-port 2136 --ic-port 19002 --mon-port 8766 --yaml-config /opt/ydb/cfg/config.yaml \
--tenant /Root/testdb \
--node-broker node1.ydb.tech:2135 \
--node-broker node2.ydb.tech:2135 \
--node-broker node3.ydb.tech:2135
Each --node-broker argument is the gRPC endpoint of a static node (the hosts from the hosts section).
Run additional dynamic nodes on other servers to ensure database availability.
- Create a configuration file named /etc/systemd/system/ydbd-testdb.service with the following content:
[Unit]
Description=YDB testdb dynamic node
After=network-online.target rc-local.service
Wants=network-online.target
StartLimitInterval=10
StartLimitBurst=15
[Service]
Restart=always
RestartSec=1
User=ydb
PermissionsStartOnly=true
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=ydbd
SyslogFacility=daemon
SyslogLevel=err
Environment=LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib
ExecStart=/opt/ydb/bin/ydbd server --grpc-port 2136 --ic-port 19002 --mon-port 8766 --yaml-config /opt/ydb/cfg/config.yaml --tenant /Root/testdb --node-broker node1.ydb.tech:2135 --node-broker node2.ydb.tech:2135 --node-broker node3.ydb.tech:2135
LimitNOFILE=65536
LimitCORE=0
LimitMEMLOCK=32212254720
[Install]
WantedBy=multi-user.target
- Start the YDB dynamic node for the /Root/testdb database:
sudo systemctl start ydbd-testdb
- Run additional dynamic nodes on other servers to ensure database availability.
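To have both node types start automatically on boot, the units can also be enabled:
# Enable the storage and database units at boot
sudo systemctl enable ydbd-storage ydbd-testdb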
Test the created database
- Install the YDB CLI as described in Installing the YDB CLI
- Create a test_table:
ydb -e grpc://<node1.domain>:2136 -d /Root/testdb scripting yql \
--script 'CREATE TABLE test_table (id Uint64, title Utf8, PRIMARY KEY (id));'
Where <node1.domain> is the FQDN of a server running a dynamic node that serves the /Root/testdb database.
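To confirm the table was created, you can list the database schema with the CLI:
# test_table should appear in the listing
ydb -e grpc://<node1.domain>:2136 -d /Root/testdb scheme ls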