Cluster Deployment

    Note:

    Supported only in the Enterprise edition and above.
    This document uses three servers, 172.16.4.13, 172.16.4.12, and 172.16.4.11, to build the cluster.

    Redis cluster installation and configuration

    Here a three-master, three-slave Redis cluster is set up on a single server.

    1. Install Redis first; this document uses Redis 5.0.5 as an example.
      After downloading, extract, compile, and install it (a typical command sequence is sketched after the note below).

    Note:

    Redis is written in C and needs a C environment to run, so install gcc before compiling.
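
    A typical download-and-build sequence looks like the following (a sketch; the package-manager command is a Debian/Ubuntu example and the download URL assumes the official Redis release archive):

    # install the C toolchain needed to compile Redis
    sudo apt-get install -y gcc make
    # download, extract, compile, and install Redis 5.0.5
    wget http://download.redis.io/releases/redis-5.0.5.tar.gz
    tar -zxvf redis-5.0.5.tar.gz
    cd redis-5.0.5
    make
    sudo make install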

    2. Create a cluster folder to hold the cluster node directories.
      On 172.16.4.11, create six folders named 7000, 7001, 7002, 7003, 7004, and 7005; these nodes use ports 7000-7005 respectively. Taking node 7000 as an example, the configuration is as follows:
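
    A minimal sketch of a redis.conf for node 7000 (everything except the port is an assumption about a typical cluster-node setup, not taken from this document):

    # cluster/7000/redis.conf
    port 7000
    daemonize yes
    cluster-enabled yes
    cluster-config-file nodes-7000.conf
    cluster-node-timeout 5000
    appendonly yes
    pidfile /var/run/redis_7000.pid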

    For the other nodes, only the port and file names need to be changed; configure them one by one in the same way, and start the nodes once the configuration is complete.

    3. Allocate the master and slave nodes

    Note:

    -replicas 1 means one slave node per master node.
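
    The cluster itself can be created with redis-cli; a sketch using the six nodes above (note that redis-cli 5.x spells the flag --cluster-replicas):

    redis-cli --cluster create 172.16.4.11:7000 172.16.4.11:7001 172.16.4.11:7002 \
      172.16.4.11:7003 172.16.4.11:7004 172.16.4.11:7005 \
      --cluster-replicas 1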

    4. View the cluster node information
    ops@jetlinks-server-3:~/data/redis-5.0.5$ redis-cli -h 172.16.4.11 -p 7000 cluster nodes
    562558e5afa0575d1059c47b9531a37cd75a9190 172.16.4.11:7003@17003 slave 9174a3ebcdbda174ec9189fbae0e38d9bbeeff5f 0 1594368796612 4 connected
    bd974dbdd2f8447b27375c832c4c9c99328f4487 172.16.4.11:7005@17005 slave 3ab893f4cdcfaa52254a8cece2b54b561de29990 0 1594368797514 6 connected
    9174a3ebcdbda174ec9189fbae0e38d9bbeeff5f 172.16.4.11:7002@17002 master - 0 1594368797615 3 connected 10923-16383
    3ab893f4cdcfaa52254a8cece2b54b561de29990 172.16.4.11:7001@17001 master - 0 1594368797000 2 connected 5461-10922
    238585a079196b0ab15ebb47ac681d42c083cdaf 172.16.4.11:7000@17000 myself,master - 0 1594368796000 1 connected 0-5460
    d1ae6c623694c5f1fafc64d7abbeac4a9926ff49 172.16.4.11:7004@17004 slave 238585a079196b0ab15ebb47ac681d42c083cdaf 0 1594368796000 5 connected

    For details on building a Redis cluster, please refer to the official Redis documentation.

    1. Install Elasticsearch on each of the three servers; go to the official website to download it.
      After downloading, install Elasticsearch.
    2. Configure the elasticsearch.yml file
    ops@jetlinks-server-3:~/elasticsearch/config$ sudo vi elasticsearch.yml
    # ======================== Elasticsearch Configuration =========================
    #
    # NOTE: Elasticsearch comes with reasonable defaults for most settings.
    # Before you set out to tweak and tune the configuration, make sure you
    # understand what are you trying to accomplish and the consequences.
    #
    # The primary way of configuring a node is via this file. This template lists
    # the most important settings you may want to configure for a production cluster.
    #
    # Please consult the documentation for further information on configuration options:
    # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
    #
    # ---------------------------------- Cluster -----------------------------------
    #
    # Use a descriptive name for your cluster:
    #
    cluster.name: es-cluster
    #
    # ------------------------------------ Node ------------------------------------
    #
    # Use a descriptive name for the node:
    #
    node.name: node-3
    #
    # Add custom attributes to the node:
    #
    #node.attr.rack: r1
    #
    # ----------------------------------- Paths ------------------------------------
    #
    # Path to directory where to store the data (separate multiple locations by comma):
    #
    #path.data: /path/to/data
    #
    # Path to log files:
    #
    #path.logs: /path/to/logs
    #
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    #bootstrap.memory_lock: true
    #
    # Make sure that the heap size is set to about half the memory available
    # on the system and that the owner of the process is allowed to use this
    # limit.
    #
    # Elasticsearch performs poorly when the system is swapping the memory.
    # ---------------------------------- Network -----------------------------------
    #
    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    network.host: 172.16.4.11
    transport.tcp.port: 9300
    # Set a custom port for HTTP:
    #
    http.port: 9200
    #
    # For more information, consult the network module documentation.
    #
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when new node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    discovery.zen.ping.unicast.hosts: ["172.16.4.13:9300", "172.16.4.12:9300", "172.16.4.11:9300"]
    #
    # Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
    #
    discovery.zen.minimum_master_nodes: 2
    #
    # For more information, consult the zen discovery module documentation.
    #
    # ---------------------------------- Gateway -----------------------------------
    #
    # Block initial recovery after a full cluster restart until N nodes are started:
    #
    #gateway.recover_after_nodes: 3
    #
    # For more information, consult the gateway module documentation.
    #
    # ---------------------------------- Various -----------------------------------
    #
    # Require explicit names when deleting indices:
    #
    #action.destructive_requires_name: true

    Configuration that differs between the three servers:

    #172.16.4.12
    node.name: node-2
    network.host: 172.16.4.12

    #172.16.4.13
    node.name: node-1
    network.host: 172.16.4.13

    3. Start Elasticsearch on each of the three servers:

    ./bin/elasticsearch
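
    Once all three nodes are running, the cluster state can be checked through the standard APIs, for example:

    curl 'http://172.16.4.11:9200/_cat/nodes?v'
    curl 'http://172.16.4.11:9200/_cluster/health?pretty'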

    Database startup

    The example in this document uses Docker to start PostgreSQL; you can refer to the PostgreSQL configuration in the corresponding file.
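
    A minimal sketch of starting PostgreSQL with Docker (the image tag, container name, database name, and password below are assumptions, not taken from this document):

    docker run -d --name jetlinks-postgres \
      -p 5432:5432 \
      -e POSTGRES_PASSWORD=jetlinks \
      -e POSTGRES_DB=jetlinks \
      postgres:11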

    1. Modify application.yml and package JetLinks into a jar

    In the application.yml file, note that when JetLinks is started on multiple servers, the jetlinks.server-id in the configuration must be different on each server.
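
    A minimal sketch of the cluster-related part of application.yml (only jetlinks.server-id referencing spring.application.name comes from this document; the Redis cluster property is the standard Spring Boot setting and is included here as an assumption):

    jetlinks:
      # must be different on every server; here it reuses spring.application.name
      server-id: ${spring.application.name}
    spring:
      application:
        name: jetlinks-cluster-test-3   # overridden per server when starting the jar
      redis:
        cluster:
          nodes:
            - 172.16.4.11:7000
            - 172.16.4.11:7001
            - 172.16.4.11:7002
            - 172.16.4.11:7003
            - 172.16.4.11:7004
            - 172.16.4.11:7005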

    Execute the following in the project root directory:

    mvn clean package -DskipTests
    2. Start JetLinks on each of the three servers. This document starts it with a script, as follows:
    #!/bin/bash
    nohup java -jar -Dspring.application.name=jetlinks-cluster-test-3 jetlinks-standalone.jar >jetlinks-pro.log 2>&1 &

    Note:

    The jetlinks.server-id of the three running services must be different from one another. In application.yml, jetlinks.server-id references spring.application.name, so the name passed here must differ on each server.
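
    For example, the scripts on the other two servers would differ only in the name passed to -Dspring.application.name (the names below are hypothetical):

    # 172.16.4.12
    nohup java -jar -Dspring.application.name=jetlinks-cluster-test-2 jetlinks-standalone.jar >jetlinks-pro.log 2>&1 &
    # 172.16.4.13
    nohup java -jar -Dspring.application.name=jetlinks-cluster-test-1 jetlinks-standalone.jar >jetlinks-pro.log 2>&1 &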

    Nginx configuration

    Nginx is used to proxy the front end and the back end.

    1. Install Nginx
      Go to the official website to download it; after downloading, extract, compile, and install it.
    2. Modify the configuration file

    The configuration file is as follows:

    upstream iotserver {
        server 172.16.4.11:8844;
        server 172.16.4.12:8844;
        server 172.16.4.13:8844;
    }
    upstream webserver {
        server 172.16.4.11:9000;
    }
    upstream fileserver {
        server 172.16.4.11:8844; # files are uploaded to this server
    }
    server {
        listen 8080;
        server_name demo2.jetlinks.cn;
        location ^~/upload/ {
            proxy_pass http://fileserver;
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
        }
        location ^~/jetlinks/file/static {
            proxy_pass http://fileserver/file/static;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_send_timeout 30m;
            proxy_read_timeout 30m;
            client_max_body_size 100m;
        }
        location ^~/jetlinks/ {
            proxy_pass http://iotserver/;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_connect_timeout 1;
            proxy_buffering off;
            chunked_transfer_encoding off;
            proxy_cache off;
            proxy_send_timeout 30m;
            proxy_read_timeout 30m;
            client_max_body_size 100m;
        }
        location / {
            proxy_pass http://webserver/;
        }
    }
    3. Start Nginx
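
    A sketch of starting Nginx and reloading it after configuration changes (the path assumes a default source install under /usr/local/nginx):

    /usr/local/nginx/sbin/nginx            # start
    /usr/local/nginx/sbin/nginx -s reload  # reload after editing the configuration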

    The following configuration is used to proxy device connections. Nginx is used here as a demonstration; LVS, HAProxy, and similar solutions can also be used.

    Configuration file:

    load_module /usr/lib/nginx/modules/ngx_stream_module.so;
    user root;
    worker_processes 1;
    error_log /etc/nginx/log/error.log warn;
    pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
    }
    stream {
        upstream tcp-test {
            hash $remote_addr consistent;
            server 172.16.4.13:1889 max_fails=3 fail_timeout=10s;
            #server 172.16.4.12:1889 max_fails=3 fail_timeout=10s;
            server 172.16.4.11:1889 max_fails=3 fail_timeout=10s;
        }
        server {
            listen 1884;
            proxy_pass tcp-test;
            proxy_connect_timeout 30s;
            proxy_timeout 30s;
        }
    }

    This configuration listens on port 1884 for MQTT and proxies it to port 1889 on the servers. To proxy a custom port, the proxied port must match the port opened by the network component in the JetLinks platform; in this example, the MQTT server network component in the platform should therefore be configured with port 1889.