Prerequisite
To use IoTDB, you need to have:
Java >= 1.8 (Please make sure the environment path has been set)
Set the maximum number of open files to 65535 to avoid the "too many open files" problem.
IoTDB provides three installation methods; refer to the following suggestions and choose one of them:
- Installation from source code. If you need to modify the code yourself, you can use this method.
- Installation from binary files. Download the binary files from the official website.
- Using Docker: the path to the dockerfile is https://github.com/apache/iotdb/blob/master/docker/src/main
You can download the source code from:
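For example, using git and the Apache GitHub repository:

```
> git clone https://github.com/apache/iotdb.git
```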
The default dev branch is the master branch. If you want to use a released version, check out the corresponding release tag, for example:
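(The tag name below is illustrative; substitute the release you want.)

```
> git checkout v0.12.4
```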
Under the root path of iotdb:
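A sketch of the build command, assuming you only want to build the cluster module (the exact flags may vary between versions):

```
> mvn clean package -pl cluster -am -DskipTests
```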
Then the cluster binary version can be found at cluster/target/{iotdb-project.version}
Download
You can download the binary files from the Download Page.
Directory
After installation, the following directories will be generated by default under the root directory of the iotdb cluster:
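A sketch of the typical layout, based on the standard IoTDB binary distribution (the exact contents may differ between versions):

```
iotdb-cluster/
├── conf/   # configuration files (iotdb-env.sh/.bat, iotdb-engine.properties, iotdb-cluster.properties)
├── lib/    # jar dependencies
├── sbin/   # scripts such as start-node.sh, add-node.sh, remove-node.sh
├── data/   # data files (created after the node starts)
└── logs/   # log files (created after the node starts)
```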
Before starting to use IoTDB, you need to edit the configuration files. For your convenience, default values have already been set in these files.
In total, we provide three kinds of configuration modules:
- environment configuration files (`iotdb-env.bat`, `iotdb-env.sh`): the default configuration files for the environment configuration items. Users can configure the JAVA-JVM related system settings in these files.
- system configuration file (`iotdb-engine.properties`): the default configuration file for the IoTDB engine layer configuration items. Users can configure IoTDB engine related parameters in this file, such as the precision of timestamps (`timestamp_precision`). In addition, users can configure settings of TsFile (the data files), such as the data size written to disk at a time (`group_size_in_byte`).
- cluster configuration file (`iotdb-cluster.properties`): configurations required by the IoTDB cluster, such as the replica number (`default_replica_num`).
For detailed descriptions of the configuration files `iotdb-engine.properties` and `iotdb-env.sh`/`iotdb-env.bat`, please refer to the Configuration Manual. The configuration items of the IoTDB cluster are in the `iotdb-cluster.properties` file; you can review the comments in that file directly, or refer to [Cluster Configuration](#cluster-configuration).
You need to modify the following configuration items on each node to start your IoTDB cluster:
iotdb-engine.properties:

- rpc_address
- rpc_port
- base_dir
- data_dirs
- wal_dir

iotdb-cluster.properties:

- internal_ip
- internal_meta_port
- internal_data_port
- cluster_info_public_port
- seed_nodes
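A minimal sketch of these items for one node (all values below are illustrative; adapt the addresses, ports, and paths to your environment):

```
# iotdb-engine.properties
rpc_address=0.0.0.0
rpc_port=6667
base_dir=data
data_dirs=data/data
wal_dir=data/wal

# iotdb-cluster.properties
internal_ip=192.168.130.4
internal_meta_port=9003
internal_data_port=40010
cluster_info_public_port=6567
seed_nodes=192.168.130.4:9003,192.168.130.5:9003,192.168.130.6:9003
```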
Some configurations in `iotdb-engine.properties` will be ignored:

- `enable_auto_create_schema` is always considered `false`. Use `enable_auto_create_schema` in `iotdb-cluster.properties` to enable it instead.
- `is_sync_enable` is always considered `false`.
Start Service
Start Cluster
You can deploy a distributed cluster on multiple nodes or on a single machine; the main difference is that the latter needs to handle conflicts between ports and file directories. For detailed descriptions, please refer to Configurations.
To start the service of one of the nodes, you need to execute the following commands:
```
# Unix/OS X
> nohup sbin/start-node.sh [printgc] [<conf_path>] >/dev/null 2>&1 &

# Windows
> sbin\start-node.bat [printgc] [<conf_path>]
```
`printgc` means printing GC logs when the node starts; `<conf_path>` means using the configuration files in the `conf_path` folder to override the default configuration files.
If you start all the seed nodes, and all the seed nodes can contact each other without ip/port and file directory conflicts, the cluster has successfully started.
While the cluster is running, users can add new nodes to the cluster or remove existing ones. At present, only one node can be added or removed at a time; scaling by multiple nodes can be performed as a series of single-node operations. The cluster will handle a new scaling operation only after the previous one has completed.
To add a new node, run the following script on the new node so that it joins the cluster:
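A sketch of the command, assuming the join script `add-node.sh`/`add-node.bat` (referenced in the seed_nodes description below) accepts the same options as `start-node.sh`:

```
# Unix/OS X
> nohup sbin/add-node.sh [printgc] [<conf_path>] >/dev/null 2>&1 &

# Windows
> sbin\add-node.bat [printgc] [<conf_path>]
```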
`printgc` means printing GC logs when the node starts; `<conf_path>` means using the configuration files in the `conf_path` folder to override the default configuration files. GC logging is off by default. For performance tuning, you may want to collect GC information; the GC log is stored at `IOTDB_HOME/logs/gc.log`.
To remove a node, run the following script on any node in the cluster:
```
# Unix/OS X
> sbin/remove-node.sh <internal_ip> <internal_meta_port>

# Windows
> sbin\remove-node.bat <internal_ip> <internal_meta_port>
```
`<internal_ip>` is the IP address of the node to be removed; `<internal_meta_port>` is the meta port of the node to be removed.
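For example (the address below is illustrative):

```
> sbin/remove-node.sh 192.168.130.5 9003
```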
Use Cli
To use the Cli, please refer to the QuickStart. You can establish a connection with any node in the cluster according to its `rpc_address` and `rpc_port`.
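A sketch of connecting with the Cli, assuming the standard `start-cli.sh` script and default credentials (replace the host and port with the `rpc_address` and `rpc_port` of any node):

```
> sbin/start-cli.sh -h 127.0.0.1 -p 6667 -u root -pw root
```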
Cluster Configuration
- internal_ip
Name | internal_ip |
---|---|
Description | IP address for internal communication between nodes in the IoTDB cluster, such as heartbeat, snapshot, and raft log traffic. internal_ip is a private IP. |
Type | String |
Default | 127.0.0.1 |
Effective | After restart system, shall NOT change after cluster is up |
- internal_meta_port
Name | internal_meta_port |
---|---|
Description | IoTDB meta service port, for the meta group's communication, which involves all nodes and manages the cluster configuration and storage groups. IoTDB will automatically create a heartbeat port for each meta service. The default meta service heartbeat port is internal_meta_port+1. Please confirm that these two ports are not reserved by the system and are not occupied |
Type | Int32 |
Default | 9003 |
Effective | After restart system, shall NOT change after cluster is up |
- internal_data_port
Name | internal_data_port |
---|---|
Description | IoTDB data service port, for data groups' communication; each data group consists of one node and its replicas, managing timeseries schemas and data. IoTDB will automatically create a heartbeat port for each data service. The default data service heartbeat port is internal_data_port+1. Please confirm that these two ports are not reserved by the system and are not occupied |
Type | Int32 |
Default | 40010 |
Effective | After restart system, shall NOT change after cluster is up |
- cluster_info_public_port
Name | cluster_info_public_port |
---|---|
Description | The port of the RPC service for getting the cluster info (e.g., data partitions) |
Type | Int32 |
Default | 6567 |
Effective | After restart system |
- open_server_rpc_port
Name | open_server_rpc_port |
---|---|
Description | Whether to open a port for the server module (for debug purposes); if true, the single-server rpc_port will be changed to rpc_port (in iotdb-engine.properties) + 1 |
Type | Boolean |
Default | False |
Effective | After restart system |
- seed_nodes
Name | seed_nodes |
---|---|
Description | The addresses (internal IPs) of the nodes in the cluster, in {IP/DOMAIN}:internal_meta_port format, separated by commas. For the pseudo-distributed mode, you can fill in localhost, 127.0.0.1, or a mixture of the two, but real IP addresses must not appear; for the distributed mode, real IPs or hostnames are supported, but localhost and 127.0.0.1 must not appear. When used by start-node.sh(.bat), this configuration means the nodes that will form the initial cluster, so every node that uses start-node.sh(.bat) should have the same seed_nodes, or the building of the initial cluster will fail. WARNING: once the initial cluster is built, this should not be changed before the environment is cleaned. When used by add-node.sh(.bat), this means the nodes to which the request for joining the cluster will be sent; as all nodes can respond to such a request, this configuration can be any nodes already in the cluster, not necessarily the nodes that were used to build the initial cluster by start-node.sh(.bat). Several nodes will be picked randomly to send the request, and the number of nodes picked depends on the number of retries. |
Type | String |
Default | 127.0.0.1:9003,127.0.0.1:9005,127.0.0.1:9007 |
Effective | After restart system |
- rpc_thrift_compression_enable
- default_replica_num
Name | default_replica_num |
---|---|
Description | Number of cluster replicas of timeseries schema and data. Storage group info is always fully replicated in all nodes. |
Type | Int32 |
Default | 3 |
Effective | After restart system, shall NOT change after cluster is up |
- multi_raft_factor
Name | multi_raft_factor |
---|---|
Description | Number of raft group instances started by each data group. By default, each data group starts one raft group |
Type | Int32 |
Default | 1 |
Effective | After restart system, shall NOT change after cluster is up |
- cluster_name
Name | cluster_name |
---|---|
Description | Cluster name is used to identify different clusters; the cluster_name of all nodes in a cluster must be the same |
Type | String |
Default | default |
Effective | After restart system |
- connection_timeout_ms
Name | connection_timeout_ms |
---|---|
Description | Thrift socket and connection timeout between raft nodes, in milliseconds. Note that the timeout of the connection used for sending heartbeats and requesting votes will be adjusted to min(heartbeat_interval_ms, connection_timeout_ms). |
Type | Int32 |
Default | 20000 |
Effective | After restart system |
- heartbeat_interval_ms
Name | heartbeat_interval_ms |
---|---|
Description | The time period between heartbeat broadcasts sent by the leader, in milliseconds |
Type | Int64 |
Default | 1000 |
Effective | After restart system |
- election_timeout_ms
Name | election_timeout_ms |
---|---|
Description | The election timeout of a follower, or the time an elector waits for votes after requesting them, in milliseconds |
Type | Int64 |
Default | 20000 |
Effective | After restart system |
- read_operation_timeout_ms
- write_operation_timeout_ms
Name | write_operation_timeout_ms |
---|---|
Description | The write operation timeout period, for internal communication only, not for the entire operation, in milliseconds |
Type | Int32 |
Default | 30000 |
Effective | After restart system |
- min_num_of_logs_in_mem
Name | min_num_of_logs_in_mem |
---|---|
Description | The minimum number of committed logs kept in memory; after each log deletion, at most this number of committed logs will remain in memory. Increasing the number reduces the chance of using snapshots in catch-ups, but also increases the memory footprint |
Type | Int32 |
Default | 100 |
Effective | After restart system |
- max_num_of_logs_in_mem
Name | max_num_of_logs_in_mem |
---|---|
Description | Maximum number of committed logs in memory, when reached, a log deletion will be triggered. Increasing the number will reduce the chance to use snapshot in catch-ups, but will also increase memory footprint |
Type | Int32 |
Default | 1000 |
Effective | After restart system |
- log_deletion_check_interval_second
Name | log_deletion_check_interval_second |
---|---|
Description | The interval of the check-and-delete task for committed logs, which removes the oldest in-memory committed logs when their number exceeds max_num_of_logs_in_mem, in seconds |
Type | Int32 |
Default | 60 |
Effective | After restart system |
- enable_auto_create_schema
Name | enable_auto_create_schema |
---|---|
Description | Whether automatic schema creation is enabled; this replaces the corresponding setting in iotdb-engine.properties |
Type | BOOLEAN |
Default | true |
Effective | After restart system |
- consistency_level
Name | consistency_level |
---|---|
Description | Consistency level; three consistency levels are supported: strong, mid, and weak. With strong consistency, the server first tries to synchronize with the leader to get the newest data and, if that fails (times out), directly reports an error to the user. With mid consistency, the server first tries to synchronize with the leader, but if that fails (times out), it gives up and uses the data it has cached locally. With weak consistency, the server does not synchronize with the leader and simply uses its local data |
Type | strong, mid, weak |
Default | mid |
Effective | After restart system |
- is_enable_raft_log_persistence