After searching around online, I found that most write-ups either deploy several nodes on a single machine, or just rehash the official example without real hands-on experience; very few cover a cluster spread across multiple machines. Below is a tutorial from an actual deployment: multiple machines, with two Elasticsearch nodes on each machine. First, the .env and docker-compose.yml files. They are the core of the setup and hide quite a few pitfalls, which have already been ironed out for you.

.env

# Password for the elastic user (at least 6 characters)
ELASTIC_PASSWORD=7ns8TptDWjCKaZ7d

# Password for the kibana_system user (at least 6 characters)
KIBANA_PASSWORD=uCXX4AXUzZgUnabK

# Version of Elastic products
STACK_VERSION=8.16.0

# Set the cluster name
CLUSTER_NAME=tsc

# Set to basic or trial to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200

# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80

# Increase or decrease based on the available host memory (in bytes)
# 1073741824  1G
# 17179869184 16G
# 21474836480 20G
# 32212254720 30G
#MEM_LIMIT=1073741824
MEM_LIMIT=32212254720

NODE1=es101
NODE2=es102
IP=

# Project namespace (defaults to the current folder name if not set)
COMPOSE_PROJECT_NAME=tsc
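Once both .env and the docker-compose.yml below are in place on a machine, a quick sanity check (not part of the original steps, just a convenient habit) is to let Compose print the fully interpolated configuration; it validates the YAML and warns about any variable that is still unset:

cd /data/elasticsearch
docker-compose config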
docker-compose.yml

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: ${NODE1}\n"\
          "    dns:\n"\
          "      - ${NODE1}\n"\
          "    ip:\n"\
          "      - ${IP}\n"\
          "  - name: ${NODE2}\n"\
          "    dns:\n"\
          "      - ${NODE2}\n"\
          "    ip:\n"\
          "      - ${IP}\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://${IP}:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://${IP}:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/${NODE1}/${NODE1}.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: ${NODE1}
    hostname: ${NODE1}
    restart: always # restart policy for the service
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - plugins:/usr/share/elasticsearch/plugins
      - esdata01:/usr/share/elasticsearch/data
      - eslog01:/usr/share/elasticsearch/logs
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - TZ=Asia/Shanghai
      - node.name=${NODE1}
      - cluster.name=${CLUSTER_NAME}
      - network.host=0.0.0.0
      - network.publish_host=${IP}
      - cluster.initial_master_nodes=es0011,es0012
      - discovery.seed_hosts=192.168.0.15:9300,192.168.0.15:9301
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/${NODE1}/${NODE1}.key
      - xpack.security.http.ssl.certificate=certs/${NODE1}/${NODE1}.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/${NODE1}/${NODE1}.key
      - xpack.security.transport.ssl.certificate=certs/${NODE1}/${NODE1}.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
      - xpack.ml.use_auto_machine_memory_percent=true
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://${NODE1}:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: ${NODE2}
    hostname: ${NODE2}
    restart: always # restart policy for the service
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - plugins:/usr/share/elasticsearch/plugins
      - esdata02:/usr/share/elasticsearch/data
      - eslog02:/usr/share/elasticsearch/logs
    ports:
      - 9201:9201
      - 9301:9301
    environment:
      - TZ=Asia/Shanghai
      - node.name=${NODE2}
      - http.port=9201
      - transport.port=9301
      - cluster.name=${CLUSTER_NAME}
      - network.host=0.0.0.0
      - network.publish_host=${IP}
      - cluster.initial_master_nodes=es0011,es0012
      - discovery.seed_hosts=192.168.0.15:9300,192.168.0.15:9301
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/${NODE2}/${NODE2}.key
      - xpack.security.http.ssl.certificate=certs/${NODE2}/${NODE2}.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/${NODE2}/${NODE2}.key
      - xpack.security.transport.ssl.certificate=certs/${NODE2}/${NODE2}.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
      - xpack.ml.use_auto_machine_memory_percent=true
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://${NODE2}:9201 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    deploy:
      replicas: 0 # not started by default; start it explicitly on the first node: docker-compose up -d kibana
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    restart: always # restart policy for the service
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://${NODE1}:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://${NODE1}:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

volumes:
  certs:
    driver: local
    driver_opts:
      type: none
      device: /data/elasticsearch/certs
      o: bind
  plugins:
    driver: local
    driver_opts:
      type: none
      device: /data/elasticsearch/plugins
      o: bind
  esdata01:
    driver: local
    driver_opts:
      type: none
      device: /data/elasticsearch/${NODE1}/data
      o: bind
  esdata02:
    driver: local
    driver_opts:
      type: none
      device: /data/elasticsearch/${NODE2}/data
      o: bind
  kibanadata:
    driver: local
    driver_opts:
      type: none
      device: /data/elasticsearch/kibanadata
      o: bind
  eslog01:
    driver: local
    driver_opts:
      type: none
      device: /data/elasticsearch/${NODE1}/logs
      o: bind
  eslog02:
    driver: local
    driver_opts:
      type: none
      device: /data/elasticsearch/${NODE2}/logs
      o: bind

Deployment steps

On each machine, create the base directory:

mkdir -p /data/elasticsearch

Then copy .env into /data/elasticsearch on every machine.
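How you copy .env around is up to you; as one possible sketch, assuming the machines are named es001 through es009 (as in the cluster deployment below) and are reachable over SSH:

for h in es00{1..9}; do
  scp .env "$h":/data/elasticsearch/
done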
Replace the IP placeholder with each host's own address:

cd /data/elasticsearch
ip=$(hostname -i)
sed -i "s/^IP=.*$/IP=$ip/" .env

# Get the current hostname
hostname=$(hostname)
# Replace NODE1 and NODE2 in .env with names derived from the hostname
sed -i "s/NODE1=.*/NODE1=${hostname}1/" .env
sed -i "s/NODE2=.*/NODE2=${hostname}2/" .env
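As a concrete example: on host es001 (assuming its address is 192.168.0.15, the address used in discovery.seed_hosts above), the substitutions leave .env with IP=192.168.0.15, NODE1=es0011 and NODE2=es0012; that is exactly why cluster.initial_master_nodes is set to es0011,es0012 in docker-compose.yml. You can confirm the result with:

grep -E "^(IP|NODE1|NODE2)=" .env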
Create the data, log, and plugin directories:

hostname=$(hostname)
NODE1=${hostname}1
NODE2=${hostname}2
mkdir -p /data/elasticsearch/{certs,kibanadata,plugins}
mkdir -p /data/elasticsearch/{$NODE1,$NODE2}/{data,logs}
chmod g+rwx /data/elasticsearch/{certs,kibanadata,plugins}
chmod g+rwx /data/elasticsearch/{$NODE1,$NODE2}/{data,logs}

Change the group ownership of these files and directories to group ID 0, which on most Linux systems is the root group:

chgrp 0 /data/elasticsearch/{certs,kibanadata,plugins}
chgrp 0 /data/elasticsearch/{$NODE1,$NODE2}/{data,logs}

Cluster deployment

Upload docker-compose.yml to /data/elasticsearch on es001 through es009. A convenient approach is to upload it to a shared directory, e.g. es001:/backup/es, and then run the following on each machine:

\cp /backup/es/docker-compose.yml /data/elasticsearch

Plugins: copy the IK analyzer (available from https://release.infinilabs.com/analysis-ik/stable/) into the shared plugins directory:

cp -r /backup/es/ik /data/elasticsearch/plugins/
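If the shared ik directory has not been prepared yet, it can be built from the release page above; a rough sketch, assuming the archive for this stack version is named elasticsearch-analysis-ik-8.16.0.zip (check the actual file name on the page):

cd /backup/es
curl -LO https://release.infinilabs.com/analysis-ik/stable/elasticsearch-analysis-ik-8.16.0.zip
mkdir -p ik
unzip elasticsearch-analysis-ik-8.16.0.zip -d ik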
Start the stack:

docker-compose up -d

Note: start the first machine first, then copy the generated ca.zip and the unzipped CA files to the corresponding directory on the other machines, so that every node uses the same certificates:
\cp -r /data/elasticsearch/certs/ca* /backup/es/
Then copy them from the shared directory onto each of the other machines:

\cp -r /backup/es/ca* /data/elasticsearch/certs
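After the shared CA files are in place, bring up the remaining machines with the same docker-compose up -d. On any host, a quick way to see whether its containers started and passed their health checks (plain docker-compose commands, nothing specific to this setup):

docker-compose ps
docker-compose logs -f es01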
Check that the deployment succeeded:

curl -k -u elastic:<the password set above> https://127.0.0.1:9200
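To confirm that the nodes from all machines actually joined one cluster, the standard cluster APIs can be used as well (same elastic password as above); number_of_nodes in the health output should match the total number of nodes you deployed:

curl -k -u elastic:<the password set above> "https://127.0.0.1:9200/_cluster/health?pretty"
curl -k -u elastic:<the password set above> "https://127.0.0.1:9200/_cat/nodes?v"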