1 ES Cluster Encryption and Basic Auth
- Configure transport-layer encryption (TLS/SSL) for the Elasticsearch cluster.
1. Generate the certificate file
[root@elk91 ~]# /usr/share/elasticsearch/bin/elasticsearch-certutil cert -out /etc/elasticsearch/elastic-certificates.p12 -pass "" --days 36500
2. Copy the certificate file to the other nodes
[root@elk91 ~]# chmod 640 /etc/elasticsearch/elastic-certificates.p12
[root@elk91 ~]#
[root@elk91 ~]# ll /etc/elasticsearch/elastic-certificates.p12
-rw-r----- 1 root elasticsearch 3596 May 7 09:04 /etc/elasticsearch/elastic-certificates.p12
[root@elk91 ~]#
[root@elk91 ~]# scp -p /etc/elasticsearch/elastic-certificates.p12 10.168.10.92:/etc/elasticsearch
[root@elk91 ~]# scp -p /etc/elasticsearch/elastic-certificates.p12 10.168.10.93:/etc/elasticsearch
3. Modify the ES cluster configuration file
[root@elk91 ~]# vim /etc/elasticsearch/elasticsearch.yml
...
# Append the following at the end of the file
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
4. Sync the ES configuration file to the other nodes
[root@elk91 ~]# scp /etc/elasticsearch/elasticsearch.yml 10.168.10.92:/etc/elasticsearch/
[root@elk91 ~]# scp /etc/elasticsearch/elasticsearch.yml 10.168.10.93:/etc/elasticsearch/
5. Restart Elasticsearch on all nodes
[root@elk91 ~]# systemctl restart elasticsearch.service
[root@elk92 ~]# systemctl restart elasticsearch.service
[root@elk93 ~]# systemctl restart elasticsearch.service
6. Test ES cluster access (a request without credentials now returns 401)
[root@elk91 ~]# curl 10.168.10.91:9200/_cat/nodes?v
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/_cat/nodes?v]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/_cat/nodes?v]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
[root@elk91 ~]#
7. Generate random passwords for the built-in users
[root@elk91 ~]# /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
warning: usage of JAVA_HOME is deprecated, use ES_JAVA_HOME
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y # type y here
Changed password for user apm_system
PASSWORD apm_system = Gqh7ioG773BsCw5tZtth
Changed password for user kibana_system
PASSWORD kibana_system = yGFYWAUDKrfJ882LwX3j
Changed password for user kibana
PASSWORD kibana = yGFYWAUDKrfJ882LwX3j
Changed password for user logstash_system
PASSWORD logstash_system = fYAFI5ARJvpy2jglGFqt
Changed password for user beats_system
PASSWORD beats_system = YYBMkiKKk9vdIGlrTkmX
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = H6bLbJ7FdF1kL4takLCq
Changed password for user elastic
PASSWORD elastic = TDk3CKVKRWxRqVGBGUR8
8. Verify the cluster is healthy (do not copy this password; use the one generated above)
[root@elk-91 ~]# curl -u elastic:TDk3CKVKRWxRqVGBGUR8 10.168.10.91:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.168.10.93 18 84 26 0.69 0.49 0.42 cdfhilmrstw - elk-93
10.168.10.92 44 91 27 0.72 0.45 0.29 cdfhilmrstw * elk-92
10.168.10.91 7 96 28 0.58 0.44 0.37 cdfhilmrstw - elk-91
1.1 Connecting Kibana to the secured ES cluster
Configure Kibana to connect securely to the encrypted ES cluster.
1. Modify the Kibana configuration file
[root@elk91 ~]# vim /etc/kibana/kibana.yml
...
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://10.168.10.91:9200","http://10.168.10.92:9200","http://10.168.10.93:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "yGFYWAUDKrfJ882LwX3j" # Note: do not copy this password; use the kibana_system password generated above
2. Restart Kibana
[root@elk91 ~]# systemctl restart kibana.service
[root@elk91 ~]#
3. Open the Kibana WebUI and reset the admin password
Log in with the elastic user.
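Optionally, before logging in, confirm that the kibana_system credentials Kibana will use are accepted by ES (substitute the password generated on your own cluster):
[root@elk91 ~]# curl -u kibana_system:yGFYWAUDKrfJ882LwX3j "10.168.10.91:9200/_security/_authenticate?pretty"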
1.2 Connecting Filebeat to the secured ES cluster
Configure Filebeat to ship log data securely to the encrypted ES cluster.
1. Write the Filebeat configuration file
[root@elk93 ~]# cat /etc/filebeat/config/18-tcp-to-es_tls.yaml
filebeat.inputs:
- type: tcp
  host: "0.0.0.0:9000"

output.elasticsearch:
  hosts:
  - "http://10.168.10.91:9200"
  - "http://10.168.10.92:9200"
  - "http://10.168.10.93:9200"
  index: "cmy-es-tls-filebeat-%{+yyyy-MM-dd}"
  username: "elastic"
  password: "123456"

setup.ilm.enabled: false
setup.template.name: "cmy"
setup.template.pattern: "cmy-*"
setup.template.overwrite: false
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 1
[root@elk93 ~]#
2. Start the Filebeat instance
[root@elk93 ~]# filebeat -e -c /etc/filebeat/config/18-tcp-to-es_tls.yaml
3. Send test data
[root@elk91 ~]# echo www.cmy.com | nc 10.168.10.93 9000
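To confirm the data arrived without opening Kibana, list the index over the REST API (this assumes the elastic password was reset to 123456, matching the Filebeat config above):
[root@elk91 ~]# curl -u elastic:123456 "10.168.10.91:9200/_cat/indices/cmy-es-tls-filebeat-*?v"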
1.3 Connecting Logstash to the secured ES cluster
Configure Logstash to communicate securely with the encrypted ES cluster.
1. Write the Logstash configuration file
[root@elk93 ~]# cat /etc/logstash/conf.d/13-tcp-to-es_tls.conf
input {
  tcp {
    port => 8888
  }
}

output {
  elasticsearch {
    hosts => ["http://10.168.10.91:9200","http://10.168.10.92:9200","http://10.168.10.93:9200"]
    index => "cmy-tls-logstash-%{+yyyy-MM-dd}"
    user => "elastic"
    password => "123456"
  }
}
[root@elk93 ~]#
2. Start Logstash
[root@elk93 ~]# logstash -f /etc/logstash/conf.d/13-tcp-to-es_tls.conf
3. Send test data
[root@elk91 ~]# echo www.cmy.com | nc 10.168.10.93 8888
4. Verify in Kibana
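Alternatively, verify from the command line by listing the index Logstash just wrote to (same password assumption as above):
[root@elk91 ~]# curl -u elastic:123456 "10.168.10.91:9200/_cat/indices/cmy-tls-logstash-*?v"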
2 API Key Authentication in ES
Learn how to authenticate with API keys.
- Enable the api-key feature in ES and verify with Filebeat
1. Why enable api-keys?
Authenticating with a username and password exposes the user's credentials.
Elasticsearch also supports api-key authentication, which improves security: an api-key cannot be used to log in to Kibana.
Permissions can also be scoped per api-key.
2. Enable api-keys in ES
[root@elk91 ~]# vim /etc/elasticsearch/elasticsearch.yml
...
# Add the following configuration
# Enable the api_key feature
xpack.security.authc.api_key.enabled: true
# Hashing algorithm for stored API keys
xpack.security.authc.api_key.hashing.algorithm: pbkdf2
# How long API keys are cached
xpack.security.authc.api_key.cache.ttl: 1d
# Maximum number of API keys kept in the cache
xpack.security.authc.api_key.cache.max_keys: 10000
# Hashing algorithm for API key credentials cached in memory
xpack.security.authc.api_key.cache.hash_algo: ssha256
[root@elk91 ~]#
3. Copy the configuration file to the other nodes
[root@elk91 ~]# scp /etc/elasticsearch/elasticsearch.yml 10.168.10.92:/etc/elasticsearch
[root@elk91 ~]# scp /etc/elasticsearch/elasticsearch.yml 10.168.10.93:/etc/elasticsearch
4. Restart the ES cluster
[root@elk93 ~]# systemctl restart elasticsearch.service
[root@elk92 ~]# systemctl restart elasticsearch.service
[root@elk91 ~]# systemctl restart elasticsearch.service
5. Open the Kibana WebUI
http://10.168.10.91:5601/app/management/security/api_keys
6. Create an api-key
Omitted; see the video.
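If you prefer not to use the Kibana UI, an api-key can also be created with the _security/api_key API. A sketch (the key name and privileges are illustrative; elastic:123456 assumes the password reset done earlier):
[root@elk91 ~]# curl -u elastic:123456 -H 'Content-Type: application/json' \
  -X POST 10.168.10.91:9200/_security/api_key -d '{
  "name": "filebeat-demo",
  "role_descriptors": {
    "filebeat_writer": {
      "cluster": ["monitor"],
      "index": [
        { "names": ["cmy-es-*"], "privileges": ["create_index", "create"] }
      ]
    }
  }
}'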
7. Decode the api-key (the encoded value is base64 of id:api_key)
[root@elk91 ~]# echo S2tURnFKWUJzZHF5c09mWk50OGY6b2pIZ2wtN2xTZDZLcUlSUEo1eHpDQQ== | base64 -d ;echo
KkTFqJYBsdqysOfZNt8f:ojHgl-7lSd6KqIRPJ5xzCA
[root@elk91 ~]#
8. Write the Filebeat configuration file
[root@elk93 ~]# cat >/etc/filebeat/config/19-tcp-to-es_api-key.yaml <<EOF
filebeat.inputs:
- type: tcp
  host: "0.0.0.0:9000"

output.elasticsearch:
  hosts:
  - 10.168.10.91:9200
  - 10.168.10.92:9200
  - 10.168.10.93:9200
  #username: "elastic"
  #password: "123456"
  # Authenticate with an api_key, which is more secure than the basic auth above. (Recommended for production!)
  api_key: "bcGiqJYBe-hSFCTMBzXv:m90g-qwvSBaPAYChU5Fn0Q"
  index: cmy-es-tls-filebeat-api-key

setup.ilm.enabled: false
setup.template.name: "cmy-es"
setup.template.pattern: "cmy-es-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 0
EOF
[root@elk93 ~]#
9. Start the Filebeat instance
[root@elk93 ~]# filebeat -e -c /etc/filebeat/config/19-tcp-to-es_api-key.yaml
10. Send test data
[root@elk91 ~]# echo 1111111111111111111 | nc 10.168.10.93 9000
11. Verify the data in Kibana
Omitted; see the video.
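As a command-line alternative, search the index Filebeat just wrote with the api-key (the cluster is still on HTTP at this point; HTTPS is enabled in the next section):
[root@elk91 ~]# curl -u elastic:123456 "10.168.10.91:9200/cmy-es-tls-filebeat-api-key/_search?pretty"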
3 Configuring HTTPS Certificates for the ES Cluster
- Configure HTTPS certificates for the ES cluster
1. Create a self-signed CA certificate
[root@elk91 ~]# /usr/share/elasticsearch/bin/elasticsearch-certutil ca --out /etc/elasticsearch/elastic-stack-ca.p12 --pass "" --days 36500
[root@elk91 ~]#
[root@elk91 ~]# ll /etc/elasticsearch/elastic-stack-ca.p12
-rw------- 1 root elasticsearch 2672 May 7 11:38 /etc/elasticsearch/elastic-stack-ca.p12
[root@elk91 ~]#
2. Generate the ES certificate from the self-signed CA
[root@elk91 ~]# /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca /etc/elasticsearch/elastic-stack-ca.p12 --out /etc/elasticsearch/elastic-certificates-https.p12 --pass "" --days 3650 --ca-pass ""
[root@elk91 ~]# ll /etc/elasticsearch/elastic-stack-ca.p12
-rw------- 1 root elasticsearch 2672 May 7 11:38 /etc/elasticsearch/elastic-stack-ca.p12
[root@elk91 ~]#
[root@elk91 ~]# ll /etc/elasticsearch/elastic-certificates-https.p12
-rw------- 1 root elasticsearch 3596 May 7 11:39 /etc/elasticsearch/elastic-certificates-https.p12
[root@elk91 ~]#
3. Modify the configuration file
[root@elk91 ~]# vim /etc/elasticsearch/elasticsearch.yml
...
# Enable HTTPS
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: elastic-certificates-https.p12
[root@elk91 ~]#
4. Sync the certificate and configuration file to the other cluster nodes
[root@elk91 ~]# chmod 640 /etc/elasticsearch/elastic-certificates-https.p12
[root@elk91 ~]#
[root@elk91 ~]# ll /etc/elasticsearch/elastic-certificates-https.p12
-rw-r----- 1 root elasticsearch 3596 May 7 11:39 /etc/elasticsearch/elastic-certificates-https.p12
[root@elk91 ~]#
[root@elk91 ~]# scp -p /etc/elasticsearch/elastic{-certificates-https.p12,search.yml} 10.168.10.92:/etc/elasticsearch/
[root@elk91 ~]# scp -p /etc/elasticsearch/elastic{-certificates-https.p12,search.yml} 10.168.10.93:/etc/elasticsearch/
5. Restart the ES cluster
[root@elk91 ~]# systemctl restart elasticsearch.service
[root@elk92 ~]# systemctl restart elasticsearch.service
[root@elk93 ~]# systemctl restart elasticsearch.service
6. Test access over HTTPS
[root@elk91 ~]# curl https://10.168.10.91:9200/_cat/nodes?v -u elastic:123456 -k
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.168.10.91 17 94 2 0.81 0.35 0.12 cdfhilmrstw - elk91
10.168.10.93 14 87 2 0.58 0.24 0.08 cdfhilmrstw - elk93
10.168.10.92 33 96 1 0.40 0.20 0.07 cdfhilmrstw * elk92
[root@elk91 ~]#
[root@elk91 ~]# curl https://10.168.10.91:9200/_cat/nodes?v -u elastic:123456 --insecure
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.168.10.91 20 95 0 0.07 0.18 0.09 cdfhilmrstw - elk91
10.168.10.93 17 87 0 0.02 0.11 0.06 cdfhilmrstw - elk93
10.168.10.92 36 97 0 0.01 0.09 0.05 cdfhilmrstw * elk92
[root@elk91 ~]#
3.1 Connecting Filebeat to the HTTPS-secured ES cluster
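Filebeat only needs the hosts switched to https plus relaxed certificate verification, following the same pattern as sections 1.2 and 2. A minimal sketch, reusing the api-key from section 2 (the index name and key value are placeholders to adapt to your environment):
filebeat.inputs:
- type: tcp
  host: "0.0.0.0:9000"

output.elasticsearch:
  hosts:
  - "https://10.168.10.91:9200"
  - "https://10.168.10.92:9200"
  - "https://10.168.10.93:9200"
  api_key: "bcGiqJYBe-hSFCTMBzXv:m90g-qwvSBaPAYChU5Fn0Q"
  # Skip verification of the self-signed certificate
  ssl.verification_mode: none
  index: "cmy-es-https-filebeat"

setup.ilm.enabled: false
setup.template.name: "cmy-es"
setup.template.pattern: "cmy-es-*"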
3.2 Connecting Kibana to the HTTPS-secured ES cluster
- Connect Kibana to the HTTPS-secured ES cluster
1. Modify the Kibana configuration to skip verification of the self-signed certificate
[root@elk91 ~]# vim /etc/kibana/kibana.yml
...
# Point to the ES cluster over https
elasticsearch.hosts: ["https://10.168.10.91:9200","https://10.168.10.92:9200","https://10.168.10.93:9200"]
# Skip certificate verification
elasticsearch.ssl.verificationMode: none
2. Restart Kibana
[root@elk91 ~]# systemctl restart kibana.service
3. Access and test again
http://10.168.10.91:5601/
3.3 Logstash with api-key authentication
1. Make sure the ES cluster is using HTTPS
2. Create an api-key
curl -X POST "https://10.168.10.91:9200/_security/api_key" \
-H "Content-Type: application/json" \
-u elastic:123456 \
-k \
-d '{
  "name": "cmy",
  "role_descriptors": {
    "filebeat_monitoring": {
      "cluster": ["all"],
      "index": [
        {
          "names": ["index-kafka-to-es*"],
          "privileges": ["all"]
        }
      ]
    }
  }
}'
Response:
{"id":"7vWGqZYBLWMAsOH0HZPp","name":"cmy","api_key":"A0mcwjXuS9-ltHU8QY0YOA","encoded":"N3ZXR3FaWUJMV01Bc09IMEhaUHA6QTBtY3dqWHVTOS1sdEhVOFFZMFlPQQ=="}
Decode the encoded value:
[root@elk91 ~]# echo bi1jVHM1WUJ3UnJNeWpmNGh1NEw6ZFlCZmV3Z0tUVENWbzhJa1lXN01LZw== |base64 -d ;echo
n-cTs5YBwRrMyjf4hu4L:dYBfewgKTTCVo8IkYW7MKg
[root@elk91 ~]#
3. Modify the Logstash configuration file
[root@elk93 ~]# cat >/etc/logstash/conf.d/14-tcp-to-es_api-key.conf <<EOF
input {
  tcp {
    port => 8888
  }
}

output {
  elasticsearch {
    hosts => ["https://10.168.10.91:9200","https://10.168.10.92:9200","https://10.168.10.93:9200"]
    index => "cmy-logstash-api-key-xixi"
    #user => elastic
    #password => "123456"
    # Authenticate with an api_key
    api_key => "7PV9qZYBLWMAsOH0XZPj:_lHIlQqITNyicqMWrxB-6A"
    # SSL must be enabled when using an api_key
    ssl => true
    # Skip SSL certificate verification
    ssl_certificate_verification => false
  }
}
EOF
[root@elk93 ~]#
4. Start Logstash
[root@elk93 ~]# logstash -f /etc/logstash/conf.d/14-tcp-to-es_api-key.conf
5. Send test data
[root@elk91 ~]# echo 88888888888888888888888 | nc 10.168.10.92 8888
[root@elk91 ~]# echo 99999999999999999999999 | nc 10.168.10.93 8888
4 Resetting the ES Admin Password
How to reset the password when the admin password has been forgotten.
- Example: resetting the elastic admin password on ES7
1. Create a user with the superuser role
[root@elk93 ~]# /usr/share/elasticsearch/bin/elasticsearch-users useradd cmy -p 123456 -r superuser
[root@elk93 ~]#
2. List the users
[root@elk91 ~]# /usr/share/elasticsearch/bin/elasticsearch-users list
cmy : superuser
[root@elk91 ~]#
3. Change the elastic password using the new superuser
[root@elk93 ~]# curl -s --user cmy:123456 -XPUT "http://localhost:9200/_xpack/security/user/elastic/_password?pretty" -H 'Content-Type: application/json' -d'
{
"password" : "654321"
}'
4. Test logging in with the new password
[root@elk93 ~]# curl 10.168.10.91:9200/_cat/nodes -u elastic:654321
10.168.10.91 82 89 3 0.03 0.09 0.17 cdfhilmrstw * elk91
10.168.10.92 70 86 1 0.07 0.07 0.12 cdfhilmrstw - elk92
10.168.10.93 64 74 3 0.15 0.21 0.21 cdfhilmrstw - elk93
[root@elk93 ~]#
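An optional cleanup step: once the elastic password has been reset, the temporary file-realm user can be removed:
[root@elk93 ~]# /usr/share/elasticsearch/bin/elasticsearch-users userdel cmy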
5 RBAC with Kibana
Manage permissions in Kibana with Role-Based Access Control (RBAC).
RBAC (Role-Based Access Control) in Kibana is the core mechanism for managing user permissions in the Elastic Stack. It defines roles and assigns them to users or API keys, giving fine-grained control over the Kibana UI, Elasticsearch data, and features. A complete guide follows:
1. Core Kibana RBAC concepts
| Component | Description |
|---|---|
| Users | The individuals that log in to Kibana (e.g. elastic, kibana_system, or custom users). |
| Roles | A named set of privileges (e.g. readable indices, dashboard management) assigned to users or API keys. |
| Privileges | Split into Elasticsearch cluster/index privileges and Kibana feature privileges. |
| Spaces | Kibana's multi-tenancy feature; spaces isolate data, visualizations, and other resources (space privileges must be granted). |
2. Configuration steps
2.1 Create roles
Define roles through the Kibana UI or the Elasticsearch API:
• Method 1: Kibana UI
Path: Stack Management > Security > Roles > Create role
• Elasticsearch privileges: control index/cluster operations (e.g. read, write, delete).
• Kibana privileges: control which Kibana features can be used (e.g. Read or All on Dashboard and Visualize).
• Method 2: Elasticsearch API
curl -X POST "https://localhost:9200/_security/role/logs_viewer" \
-H "Content-Type: application/json" \
-u elastic:your_password \
-k \
-d '{
  "cluster": ["monitor"],
  "indices": [
    {
      "names": ["logs-*"],
      "privileges": ["read"]
    }
  ],
  "applications": [
    {
      "application": "kibana-.kibana",
      "privileges": ["feature/dashboard/read"],
      "resources": ["space:default"]
    }
  ]
}'
2.2 Assign roles to users
• Kibana: Stack Management > Security > Users > Edit user, then select the role (e.g. logs_viewer).
• API:
curl -X POST "https://localhost:9200/_security/user/john" \
-H "Content-Type: application/json" \
-u elastic:your_password \
-k \
-d '{
  "password": "user123",
  "roles": ["logs_viewer"],
  "full_name": "John Doe"
}'
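To confirm the role assignment took effect, authenticate as the new user:
curl -k -u john:user123 "https://localhost:9200/_security/_authenticate?pretty"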
2.3 Space-level access control (Spaces)
• Create a space: Stack Management > Spaces > Create a space (e.g. sales, engineering).
• Assign space privileges: in the role configuration, specify which spaces the role can access and with which privileges (e.g. space:sales:read).
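Spaces can also be created through Kibana's Spaces API instead of the UI; a sketch (the space id/name and credentials are illustrative):
curl -X POST "http://localhost:5601/api/spaces/space" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -u elastic:your_password \
  -d '{ "id": "sales", "name": "Sales" }'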
3. Privilege types in detail
3.1 Elasticsearch privileges
| Privilege type | Example privileges | Description |
|---|---|---|
| Cluster privileges | monitor, manage | Control cluster-level operations (e.g. monitoring node status). |
| Index privileges | read, index, delete | Control read/write/delete operations on specific indices. |
3.2 Kibana feature privileges
| Feature | Example privilege | Description |
|---|---|---|
| Dashboard | feature/dashboard/all | Allow creating, editing, or only viewing dashboards. |
| Discover | feature/discover/read | Control whether Discover can be used to query data. |
| ML | feature/ml/all | Manage machine learning jobs. |
4. Advanced scenarios
4.1 Attribute-Based Access Control (ABAC)
Assign permissions dynamically via role templates (e.g. filter indices by the user's department):
{
  "role": {
    "name": "department_access",
    "templates": [
      {
        "template": {
          "source": "{{access}}",
          "params": {
            "access": "user.metadata.department"
          }
        }
      }
    ]
  }
}
4.2 Temporary authorization with API keys
Generate an API key with limited privileges for an external service:
curl -X POST "https://localhost:9200/_security/api_key" \
-H "Content-Type: application/json" \
-u elastic:your_password \
-k \
-d '{
  "name": "temp_logs_key",
  "role_descriptors": {
    "logs_api": {
      "indices": [
        {
          "names": ["logs-*"],
          "privileges": ["read"]
        }
      ]
    }
  }
}'
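When the external service no longer needs access, keys can be invalidated by name:
curl -X DELETE "https://localhost:9200/_security/api_key" \
  -H "Content-Type: application/json" \
  -u elastic:your_password \
  -k \
  -d '{ "name": "temp_logs_key" }'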
6 Comprehensive Case
- 1. Create 3 roles: dba, k8s, sre
- 2. Use Logstash to listen on ports "6666", "7777", and "8888", and write to 3 different indices using 3 different api-keys:
cmy-dba
cmy-k8s
cmy-sre
- 3. Create 3 users with access to the corresponding indices, with the following requirements:
the xixi user can access data in the cmy-dba index
the haha user can access data in the cmy-k8s index
the hehe user can access data in the cmy-sre index
Solution outline (steps 1 and 2 below; a sketch for step 3 follows at the end of this case):
1. Create the api-keys
POST /_security/api_key
{
  "name": "dba",
  "role_descriptors": {
    "filebeat_monitoring": {
      "cluster": ["all"],
      "index": [
        {
          "names": ["cmy-dba"],
          "privileges": ["create_index", "create"]
        }
      ]
    }
  }
}
Response:
{
"id" : "fx3WqZYB-RorAU42FA3p",
"name" : "dba",
"api_key" : "hUaaKe1PROadd7b8jJJEcQ",
"encoded" : "ZngzV3FaWUItUm9yQVU0MkZBM3A6aFVhYUtlMVBST2FkZDdiOGpKSkVjUQ=="
}
Decode:
[root@elk92 ~]# echo ZngzV3FaWUItUm9yQVU0MkZBM3A6aFVhYUtlMVBST2FkZDdiOGpKSkVjUQ== | base64 -d ;echo
fx3WqZYB-RorAU42FA3p:hUaaKe1PROadd7b8jJJEcQ
[root@elk92 ~]#
POST /_security/api_key
{
  "name": "k8s",
  "role_descriptors": {
    "filebeat_monitoring": {
      "cluster": ["all"],
      "index": [
        {
          "names": ["cmy-k8s"],
          "privileges": ["create_index", "create"]
        }
      ]
    }
  }
}
Response:
{
"id" : "gB3WqZYB-RorAU42jQ0S",
"name" : "k8s",
"api_key" : "urLRSQkUQ52nfQGpMZj5LA",
"encoded" : "Z0IzV3FaWUItUm9yQVU0MmpRMFM6dXJMUlNRa1VRNTJuZlFHcE1aajVMQQ=="
}
Decode:
[root@elk93 ~]# echo Z0IzV3FaWUItUm9yQVU0MmpRMFM6dXJMUlNRa1VRNTJuZlFHcE1aajVMQQ== | base64 -d ;echo
gB3WqZYB-RorAU42jQ0S:urLRSQkUQ52nfQGpMZj5LA
[root@elk93 ~]#
POST /_security/api_key
{
  "name": "sre",
  "role_descriptors": {
    "filebeat_monitoring": {
      "cluster": ["all"],
      "index": [
        {
          "names": ["cmy-sre"],
          "privileges": ["create_index", "create"]
        }
      ]
    }
  }
}
Response:
{
"id" : "gR3XqZYB-RorAU42Xw3R",
"name" : "sre",
"api_key" : "sakMKOnZSSy9Sf51N8s1wA",
"encoded" : "Z1IzWHFaWUItUm9yQVU0Mlh3M1I6c2FrTUtPblpTU3k5U2Y1MU44czF3QQ=="
}
Decode:
[root@elk93 ~]# echo Z1IzWHFaWUItUm9yQVU0Mlh3M1I6c2FrTUtPblpTU3k5U2Y1MU44czF3QQ== | base64 -d ;echo
gR3XqZYB-RorAU42Xw3R:sakMKOnZSSy9Sf51N8s1wA
[root@elk93 ~]#
2. Write the Logstash configuration file
[root@elk93 ~]# cat /etc/logstash/conf.d/15-ketanglianxi-tcp-to-es.conf
input {
  tcp {
    port => 6666
    type => dba
  }
  tcp {
    port => 7777
    type => k8s
  }
  tcp {
    port => 8888
    type => sre
  }
}

output {
  if [type] == "dba" {
    elasticsearch {
      hosts => ["https://10.168.10.91:9200","https://10.168.10.92:9200","https://10.168.10.93:9200"]
      index => "cmy-dba"
      api_key => "fx3WqZYB-RorAU42FA3p:hUaaKe1PROadd7b8jJJEcQ"
      ssl => true
      ssl_certificate_verification => false
    }
  } else if [type] == "k8s" {
    elasticsearch {
      hosts => ["https://10.168.10.91:9200","https://10.168.10.92:9200","https://10.168.10.93:9200"]
      index => "cmy-k8s"
      api_key => "gB3WqZYB-RorAU42jQ0S:urLRSQkUQ52nfQGpMZj5LA"
      ssl => true
      ssl_certificate_verification => false
    }
  } else {
    elasticsearch {
      hosts => ["https://10.168.10.91:9200","https://10.168.10.92:9200","https://10.168.10.93:9200"]
      index => "cmy-sre"
      api_key => "gR3XqZYB-RorAU42Xw3R:sakMKOnZSSy9Sf51N8s1wA"
      ssl => true
      ssl_certificate_verification => false
    }
  }
}
[root@elk93 ~]#
3. Start the Logstash instance
[root@elk93 ~]# logstash -f /etc/logstash/conf.d/15-ketanglianxi-tcp-to-es.conf
4. Send test data
[root@elk91 ~]# echo 11111111111111111 | nc 10.168.10.93 6666
^C
[root@elk91 ~]#
[root@elk91 ~]#
[root@elk91 ~]# echo 2222222222222222 | nc 10.168.10.93 7777
^C
[root@elk91 ~]#
[root@elk91 ~]#
[root@elk91 ~]# echo 33333333333333333 | nc 10.168.10.93 8888
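Step 3 of the exercise (the xixi/haha/hehe users) is not covered above. A minimal sketch of one role/user pair in the same Dev Tools style, with a placeholder role name and password (repeat for haha/cmy-k8s and hehe/cmy-sre, and add Kibana feature privileges to the role if the users should also browse the data in Discover):
POST /_security/role/cmy-dba-reader
{
  "indices": [
    {
      "names": ["cmy-dba"],
      "privileges": ["read", "view_index_metadata"]
    }
  ]
}

POST /_security/user/xixi
{
  "password": "xixi123456",
  "roles": ["cmy-dba-reader"]
}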
7 ES Optimization
7.1 JVM tuning for the ES cluster
1.1 JVM tuning approach
By default, ES uses up to half of the host's memory for its heap.
In production, set the heap to half of the host memory, capped at 32GB; the official recommendation is about 26GB.
1.2 Check the current JVM heap size
[root@elk91 ~]# ps -ef | grep elasticsearch | egrep "Xmx|Xms"
elastic+ 19076 1 1 11:42 ? 00:04:09 /usr/share/elasticsearch/jdk/bin/java ... -Xms1937m -Xmx1937m ...
[root@elk91 ~]#
1.3 Change the JVM heap size
[root@elk91 ~]# vim /etc/elasticsearch/jvm.options
...
-Xms256m
-Xmx256m
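On recent 7.x packaged installs, heap overrides can also be placed in a drop-in file under jvm.options.d instead of editing jvm.options directly (the file name below is arbitrary); the remaining steps here assume jvm.options was edited:
[root@elk91 ~]# cat > /etc/elasticsearch/jvm.options.d/heap.options <<EOF
-Xms256m
-Xmx256m
EOF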
1.4 Copy the configuration file to the other nodes
[root@elk91 ~]# scp /etc/elasticsearch/jvm.options 10.168.10.92:/etc/elasticsearch
[root@elk91 ~]# scp /etc/elasticsearch/jvm.options 10.168.10.93:/etc/elasticsearch
1.5 Restart Elasticsearch on all nodes
[root@elk91 ~]# systemctl restart elasticsearch.service
[root@elk92 ~]# systemctl restart elasticsearch.service
[root@elk93 ~]# systemctl restart elasticsearch.service
1.6 Verify the JVM heap size
[root@elk91 ~]# ps -ef | grep elasticsearch | egrep "Xmx|Xms"
elastic+ 20219 1 58 17:16 ? 00:01:05 /usr/share/elasticsearch/jdk/bin/java ... -Xms256m -Xmx256m ...
7.2 Disable wildcard index deletion
2.1 By default, indices can be deleted with a wildcard
[root@elk91 ~]# curl -u elastic:123456 -X DELETE https://10.168.10.91:9200/cmy-elk-mul* -k ;echo
{"acknowledged":true}
[root@elk91 ~]#
2.2 Modify the ES cluster configuration file
[root@elk91 ~]# vim /etc/elasticsearch/elasticsearch.yml
...
# Forbid deleting indices with wildcards or _all
action.destructive_requires_name: true
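In recent versions this setting is also dynamic, so it can be applied through the cluster settings API without a restart (credentials and -k follow the HTTPS/auth setup from the earlier sections):
[root@elk91 ~]# curl -u elastic:123456 -k -X PUT "https://10.168.10.91:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{ "persistent": { "action.destructive_requires_name": true } }'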
2.3 Copy the configuration file to the other nodes
[root@elk91 ~]# scp /etc/elasticsearch/elasticsearch.yml 10.168.10.92:/etc/elasticsearch/
[root@elk91 ~]# scp /etc/elasticsearch/elasticsearch.yml 10.168.10.93:/etc/elasticsearch/
2.4 Restart the ES cluster
[root@elk91 ~]# systemctl restart elasticsearch.service
[root@elk92 ~]# systemctl restart elasticsearch.service
[root@elk93 ~]# systemctl restart elasticsearch.service
2.5 Verify
[root@elk91 ~]# curl -u elastic:123456 -X DELETE https://10.168.10.91:9200/cmy-elk* -k ;echo
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Wildcard expressions or all indices are not allowed"}],"type":"illegal_argument_exception","reason":"Wildcard expressions or all indices are not allowed"},"status":400}
[root@elk91 ~]#