1 lb-nginx
- Deploy 3 Pods with the following requirements:
  - One Pod acts as a load balancer, and the other two Pods must serve different homepage content;
  - Clients access the load balancer Pod, which forwards requests to the other two Pods.
[root@master231 pods]# cat 05-casedemo-lb-web.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
apps: lb
name: nginx-lb
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
name: c1
---
apiVersion: v1
kind: Pod
metadata:
labels:
apps: web01
name: nginx-web01
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
name: c1
---
apiVersion: v1
kind: Pod
metadata:
labels:
apps: web02
name: nginx-web02
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
name: c1
[root@master231 pods]#
[root@master231 pods]#
[root@master231 pods]# kubectl apply -f 05-casedemo-lb-web.yaml
pod/nginx-lb created
pod/nginx-web01 created
pod/nginx-web02 created
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide -l 'apps in (web01,web02,lb)' --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
nginx-lb 1/1 Running 0 28s 10.100.203.144 worker232 <none> <none> apps=lb
nginx-web01 1/1 Running 0 28s 10.100.203.145 worker232 <none> <none> apps=web01
nginx-web02 1/1 Running 0 28s 10.100.203.146 worker232 <none> <none> apps=web02
[root@master231 pods]#
- 2. Modify the homepages of the web Pods
[root@master231 pods]# kubectl exec -it nginx-web01 -- sh
/ # echo web01 > /usr/share/nginx/html/index.html
/ #
[root@master231 pods]#
[root@master231 pods]# kubectl exec -it nginx-web02 -- sh
/ # echo web02 > /usr/share/nginx/html/index.html
/ #
[root@master231 pods]#
[root@worker232 ~]# curl 10.100.203.145
web01
[root@worker232 ~]#
[root@worker232 ~]# curl 10.100.203.146
web02
[root@worker232 ~]#
- 3. Modify the load balancer's configuration file
[root@master231 pods]# kubectl exec -it nginx-lb -- sh
/ #
/ # vi /etc/nginx/nginx.conf
...
http {
...
# include /etc/nginx/conf.d/*.conf;
upstream web {
server 10.100.203.145;
server 10.100.203.146;
}
server {
listen 80;
location / {
proxy_pass http://web;
}
}
...
}
/ # nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
/ #
/ #
/ # nginx -s reload
2025/05/26 01:49:33 [notice] 54#54: signal process started
/ #
- 4. Access the load balancer to verify
[root@worker232 ~]# for i in `seq 10`; do curl 10.100.203.144;done
web02
web01
web02
web01
web02
web01
web02
web01
web02
web01
[root@worker232 ~]#
2 lb-nginx+tomcat
- Deploy 3 Pods with the following requirements:
  - 1. Upload the tomcat and nginx images to the "cmy-casedemo" repository on the harbor registry
    - registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
    - http://192.168.14.253/Resources/Prometheus/images/cmy-tomcat-v9.0.87.tar.gz
  - 2. Two of the Pods run tomcat, using the harbor repository image;
  - 3. The remaining Pod runs nginx as the load balancer, also using the harbor repository image;
  - 4. Accessing the nginx load balancer must distribute requests to the two different tomcat instances; a sketch of the parts that differ from section 1 follows below.
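The implementation mirrors section 1; below is a minimal sketch of the parts that differ. The tomcat image tag and the Pod IPs are assumptions (use the tag actually pushed in step 1 and the IPs reported by kubectl get pods -o wide), and tomcat listens on 8080 instead of 80.

apiVersion: v1
kind: Pod
metadata:
  labels:
    apps: tomcat01
  name: tomcat-web01
spec:
  containers:
  - image: harbor.cmy.cn/cmy-casedemo/tomcat:v9.0.87   # hypothetical tag for the uploaded tomcat image
    name: c1

Define tomcat-web02 the same way and reuse the nginx-lb Pod from section 1; in its nginx.conf the upstream must point at the tomcat port:

upstream web {
    server <tomcat01-pod-ip>:8080;
    server <tomcat02-pod-ip>:8080;
}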
3 env-wp-mysql
1. Upload the MySQL image to the harbor registry
2. Write the resource manifest
[root@master231 pods]# cat 10-pods-env-mysql.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
apps: mysql
name: db-mysql80
spec:
containers:
- image: harbor.cmy.cn/mysql/mysql@sha256:c57363379dee26561c2e554f82e70704be4c8129bd0d10e29252cc0a34774004
    # Pass environment variables to the container
env:
      # Name of the variable
- name: MYSQL_ALLOW_EMPTY_PASSWORD
      # Value of the variable
value: "yes"
- name: MYSQL_DATABASE
value: wordpress
- name: MYSQL_USER
value: linux97
- name: MYSQL_PASSWORD
value: cmy
name: c1
args:
- --character-set-server=utf8
- --collation-server=utf8_bin
- --default-authentication-plugin=mysql_native_password
---
apiVersion: v1
kind: Pod
metadata:
labels:
apps: wp
name: wp
spec:
hostNetwork: true
nodeName: worker233
containers:
- image: harbor.cmy.cn/mysql/wp@sha256:07c5a73891236eed540e68c8cdc819a24fe617fa81259ee22be3105daefa3ee1
    # Environment variables could be passed to the container here (none are set for this Pod)
name: c1
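A quick way to confirm that the environment variables above took effect (a sketch; the pod name comes from the manifest):
kubectl exec -it db-mysql80 -- mysql -e "SHOW DATABASES;"          # the wordpress database should be listed
kubectl exec -it db-mysql80 -- mysql -e "SELECT user,host FROM mysql.user WHERE user='linux97';"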
4 rc
cat 01.rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: rc-xiuxian
spec:
replicas: 3
selector:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
class: linux91
spec:
containers:
- name: c1
image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v3
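A usage sketch for the manifest above (assuming it is saved as 01.rc.yaml as shown):
kubectl apply -f 01.rc.yaml
kubectl get rc rc-xiuxian
kubectl get pods -l apps=xiuxian -o wide
# deleting any one of the Pods should make the ReplicationController recreate it immediately to keep 3 replicas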
5 mongodb
[root@master231 deployments]# cat 04-deploy-mongo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-mongodb
spec:
replicas: 1
selector:
matchLabels:
apps: mongodb
template:
metadata:
labels:
apps: mongodb
spec:
nodeName: worker232
containers:
- image: harbor250.cmy.com/cmy-db/mongo:8.0.6-noble
name: c1
[root@master231 deployments]# kubectl apply -f 04-deploy-mongo.yaml
deployment.apps/deploy-mongodb created
[root@master231 deployments]#
[root@master231 deployments]# kubectl get pods -o wide -l apps=mongodb
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-mongodb-58574559cd-zhwv8 1/1 Running 0 9s 10.100.203.191 worker232 <none> <none>
[root@master231 deployments]#
3. Test and verify
[root@master231 deployments]# kubectl exec -it deploy-mongodb-58574559cd-zhwv8 -- mongosh
Current Mongosh Log ID: 683537993b2d8146146b140a
Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.4.2
Using MongoDB: 8.0.6
Using Mongosh: 2.4.2
...
test> show dbs;
admin 8.00 KiB
config 12.00 KiB
local 8.00 KiB
test>
test> use cmy
switched to db cmy
cmy>
cmy> db.student.insertMany([{"name":"于文智","hobby":["睡觉","游戏","小姐姐"]},{"name":"李涵","hobby":["CS 1.6"]}])
{
acknowledged: true,
insertedIds: {
'0': ObjectId('6835381c3b2d8146146b140b'),
'1': ObjectId('6835381c3b2d8146146b140c')
}
}
cmy> db.student.find()
[
{
_id: ObjectId('6835381c3b2d8146146b140b'),
name: '于文智',
hobby: [ '睡觉', '游戏', '小姐姐' ]
},
{
_id: ObjectId('6835381c3b2d8146146b140c'),
name: '李涵',
hobby: [ 'CS 1.6' ]
}
]
6 gitlab
Case:
1. Import the image
[root@worker233 ~]# wget http://192.168.14.253/Resources/Kubernetes/Project/DevOps/cmy-gitlab-ce-v17.5.2.tar.gz
[root@worker233 ~]# docker load -i cmy-gitlab-ce-v17.5.2.tar.gz
[root@worker233 ~]# docker tag gitlab/gitlab-ce:17.5.2-ce.0 harbor250.cmy.com/cmy-devops/gitlab-ce:17.5.2-ce.0
[root@worker233 ~]#
[root@worker233 ~]# docker push harbor250.cmy.com/cmy-devops/gitlab-ce:17.5.2-ce.0
Tip:
It is recommended to increase this VM's memory to 8GB before starting GitLab.
2. Write the resource manifest
[root@master231 deployments]# cat 03-deploy-gitlab.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-gitlab
spec:
replicas: 1
selector:
matchLabels:
apps: gitlab
template:
metadata:
labels:
apps: gitlab
spec:
nodeName: worker233
hostNetwork: true
containers:
- image: harbor250.cmy.com/cmy-devops/gitlab-ce:17.5.2-ce.0
name: c1
[root@master231 deployments]#
[root@master231 deployments]# kubectl apply -f 03-deploy-gitlab.yaml
deployment.apps/deploy-gitlab created
[root@master231 deployments]#
[root@master231 deployments]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-gitlab-55b9d7f9c-fh5tq 1/1 Running 0 37s 10.0.0.233 worker233 <none> <none>
[root@master231 deployments]#
3. View the initial root password
[root@master231 deployments]# kubectl logs deploy-gitlab-55b9d7f9c-fh5tq | grep initial_root_password
Password stored to /etc/gitlab/initial_root_password. This file will be cleaned up in first reconfigure run after 24 hours.
[root@master231 deployments]#
[root@master231 deployments]# kubectl exec deploy-gitlab-55b9d7f9c-fh5tq -- cat /etc/gitlab/initial_root_password
# WARNING: This value is valid only in the following conditions
# 1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).
# 2. Password hasn't been changed manually, either via UI or via command line.
#
# If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
Password: MKulu3J7rcTV3Ynb1oAH9TW44+g0Otq+F11GHq6R1qk=
# NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.
[root@master231 deployments]#
7 MySQL backup and restore with a Job
7.1 Deploy WordPress and add an article
cat wp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: db-mysql80
labels:
apps: mysql
spec:
  replicas: 1 # number of replicas
selector:
matchLabels:
apps: mysql
template:
metadata:
labels:
apps: mysql
spec:
containers:
- image: harbor.cmy.cn/mysql/mysql@sha256:c57363379dee26561c2e554f82e70704be4c8129bd0d10e29252cc0a34774004
name: c1
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "yes"
- name: MYSQL_DATABASE
value: wordpress
- name: MYSQL_USER
value: linux97
- name: MYSQL_PASSWORD
value: cmy
args:
- --character-set-server=utf8
- --collation-server=utf8_bin
- --default-authentication-plugin=mysql_native_password
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wp
labels:
apps: wp
spec:
  replicas: 1 # number of replicas
selector:
matchLabels:
apps: wp
template:
metadata:
labels:
apps: wp
spec:
hostNetwork: true
nodeName: worker233
containers:
- image: harbor.cmy.cn/mysql/wp@sha256:07c5a73891236eed540e68c8cdc819a24fe617fa81259ee22be3105daefa3ee1
name: c1
7.2 Back up the MySQL database with a Job controller
cat bak.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: mysql-backup
spec:
template:
spec:
containers:
- name: mysql-backup
image: harbor.cmy.cn/mysql/mysql@sha256:c57363379dee26561c2e554f82e70704be4c8129bd0d10e29252cc0a34774004
command: ["/bin/bash"]
args: ["-c", "mysqldump -h 10.100.203.173 -u linux97 -pcmy wordpress > /tmp/mysql_backup_$(date +'%Y%m%d%H%M%S').sql;sleep 1000 "]
restartPolicy: OnFailure
Then use kubectl cp to copy the backup file out of the pod, as sketched below.
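A hedged example of copying the dump out (the pod name suffix and the timestamp in the file name are placeholders; check them with the first two commands):
kubectl get pods -l job-name=mysql-backup
kubectl exec <backup-pod> -- ls /tmp
kubectl cp <backup-pod>:/tmp/mysql_backup_<timestamp>.sql /tmp/wordpress-backup.sql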
7.3 Drop the database
[root@master231 01-mysql-backup-jobs]# kubectl exec -it deploy-mysql-565cdb9df7-fnl57 -- mysql
mysql> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| wordpress |
+--------------------+
5 rows in set (0.00 sec)
mysql>
mysql> DROP DATABASE wordpress;
Query OK, 12 rows affected (0.03 sec)
mysql>
mysql> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)
7.4 Restore the data
[root@master231 01-mysql-backup-jobs]# cat 03-jobs-restore.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: mysqldump-restore
spec:
template:
spec:
containers:
- name: pi
image: harbor250.cmy.com/cmy-db/mysql:8.0.36-oracle
command:
- /bin/bash
- -c
- tail -f /etc/hosts
restartPolicy: Never
[root@master231 01-mysql-backup-jobs]#
[root@master231 01-mysql-backup-jobs]# kubectl apply -f 03-jobs-restore.yaml
job.batch/mysqldump-restore created
[root@master231 01-mysql-backup-jobs]#
[root@master231 01-mysql-backup-jobs]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-mysql-565cdb9df7-fnl57 1/1 Running 0 12m 10.0.0.232 worker232 <none> <none>
deploy-wordpress-765cd78dcc-mqk79 1/1 Running 0 12m 10.0.0.233 worker233 <none> <none>
mysqldump-backup-rp52j 1/1 Running 0 3m21s 10.100.203.133 worker232 <none> <none>
mysqldump-restore-l7fgp 1/1 Running 0 5s 10.100.203.136 worker232 <none> <none>
[root@master231 01-mysql-backup-jobs]#
[root@master231 01-mysql-backup-jobs]# kubectl cp mysqldump-backup-rp52j:/tmp/xixi.sql /tmp/wp.sql
tar: Removing leading `/' from member names
[root@master231 01-mysql-backup-jobs]#
[root@master231 01-mysql-backup-jobs]# ll /tmp/wp.sql
-rw-r--r-- 1 root root 1354323 May 27 15:41 /tmp/wp.sql
[root@master231 01-mysql-backup-jobs]#
[root@master231 01-mysql-backup-jobs]# kubectl cp /tmp/wp.sql mysqldump-restore-l7fgp:/tmp/wp.sql
[root@master231 01-mysql-backup-jobs]#
[root@master231 01-mysql-backup-jobs]# kubectl exec -it mysqldump-restore-l7fgp -- bash
bash-4.4#
bash-4.4# mysql -h 10.0.0.232 < /tmp/wp.sql
bash-4.4#
bash-4.4#
7.5 Verify again
[root@master231 01-mysql-backup-jobs]# kubectl exec -it deploy-mysql-565cdb9df7-fnl57 -- mysql -e "SHOW DATABASES;"
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| wordpress |
+--------------------+
[root@master231 01-mysql-backup-jobs]#
7.6 Access the WebUI
Omitted; see the video. A hedged command-line check is sketched below.
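A hedged check from the command line, assuming the WordPress Pod runs with hostNetwork on worker233 (10.0.0.233) as in the pod listing above:
curl -I http://10.0.0.233/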
8 Scheduled etcd backups with a CronJob
Verify that etcd stores the K8s cluster data:
1. Deploy the etcdctl tool
[root@master231 ~]# wget http://192.168.14.253/Resources/Prometheus/softwares/Etcd/etcd-v3.5.21-linux-amd64.tar.gz
[root@master231 ~]# tar -xf etcd-v3.5.21-linux-amd64.tar.gz -C /usr/local/bin etcd-v3.5.21-linux-amd64/etcdctl --strip-components=1
[root@master231 ~]#
[root@master231 ~]# etcdctl --endpoints="https://10.168.10.231:2379" --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key endpoint status --write-out=table
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.168.10.231:2379 | e670fb8b0b7fd7c6 | 3.5.6 | 9.3 MB | true | false | 8 | 177427 | 177427 | |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@master-231 /cmy/manifests/cj]#
[root@master231 ~]# alias etcdctl='etcdctl --endpoints="https://10.168.10.231:2379" --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key'
[root@master231 ~]#
etcdctl endpoint status --write-out=table
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.168.10.231:2379 | e670fb8b0b7fd7c6 | 3.5.6 | 9.3 MB | true | false | 8 | 177529 | 177529 | |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@master-231 /cmy/manifests/cj]#
2. Verify that all Pods are stored in etcd
[root@master231 ~]# etcdctl get "" --prefix --keys-only | grep "/pods/" | grep default
/registry/pods/default/deploy-mysql-565cdb9df7-fnl57
/registry/pods/default/deploy-wordpress-765cd78dcc-mqk79
/registry/pods/default/mysqldump-backup-rp52j
/registry/pods/default/mysqldump-restore-l7fgp
[root@master231 ~]#
3. Pod records can be deleted directly in etcd [handle with extreme caution in production!!!]
[root@master231 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-mysql-565cdb9df7-fnl57 1/1 Running 0 36m
deploy-wordpress-765cd78dcc-mqk79 1/1 Running 0 36m
mysqldump-backup-rp52j 1/1 Running 0 26m
mysqldump-restore-l7fgp 1/1 Running 0 23m
[root@master231 ~]#
[root@master231 ~]# etcdctl del "/registry/pods/default/deploy-mysql-565cdb9df7-fnl57"
1
[root@master231 ~]#
[root@master231 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-mysql-565cdb9df7-9r4rd 1/1 Running 0 3s
deploy-wordpress-765cd78dcc-mqk79 1/1 Running 0 36m
mysqldump-backup-rp52j 1/1 Running 0 26m
mysqldump-restore-l7fgp 1/1 Running 0 23m
[root@master231 ~]#
Use a CronJob (cj) controller to back up the etcd data periodically.
[root@master231 ketanglianxi]# kubectl get pods -o wide -n kube-system -l component=etcd
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
etcd-master231 1/1 Running 2 (31h ago) 5d4h 10.0.0.231 master231 <none> <none>
[root@master231 ketanglianxi]#
- Hands-on case
1. Write the Dockerfile
[root@master231 ~]# cp /etc/kubernetes/pki/etcd/{ca.crt,server.crt,server.key} ./
[root@master231 ~]# cp /usr/local/bin/etcdctl ./
[root@master231 ~]# cat Dockerfile
FROM harbor.cmy.cn/nginx/apps:v1
MAINTAINER cmy
LABEL school=cmy \
class=linux97
COPY etcdctl /usr/local/bin/
COPY ca.crt server.crt server.key /certs/
#CMD ["tail","-f","/etc/hosts"]
ENTRYPOINT ["/bin/sh","-c","etcdctl --endpoints=\"https://10.168.10.231:2379\" --cacert=/certs/ca.crt --cert=/certs/server.crt --key=/certs/server.key snapshot save /opt/cmy-etcd-`date +%F-%T`.backup && tail -f /etc/hosts"]
[root@master-231 ~]#
2. Build the image and push it to the harbor registry
[root@master-231 ~]# docker build -t harbor.cmy.cn/etcd/etcd-bak:v2 .
[root@master-231 ~]# docker push harbor.cmy.cn/etcd/etcd-bak:v2
3. Write the resource manifest
cat bak-etcd.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: etcd-backup
spec:
  schedule: "* * * * *" # runs every minute for testing; use "0 2 * * *" for a daily 2 AM backup
jobTemplate:
spec:
template:
spec:
containers:
- name: etcd-backup
image: harbor.cmy.cn/etcd/etcd-bak:v2
restartPolicy: OnFailure
[root@master231 01-etcd-bakcup-cj]# kubectl apply -f 01-cj-backup-etcd.yaml
cronjob.batch/backup-etcd created
4. Check the backup data
[root@master231 01-etcd-bakcup-cj]# kubectl get cj,jobs,po -o wide
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE CONTAINERS IMAGES SELECTOR
cronjob.batch/backup-etcd * * * * * False 2 12s 115s c1 harbor250.cmy.com/cmy-casedemo/etcd-backup:v0.8 <none>
NAME COMPLETIONS DURATION AGE CONTAINERS IMAGES SELECTOR
job.batch/backup-etcd-29138956 0/1 72s 72s c1 harbor250.cmy.com/cmy-casedemo/etcd-backup:v0.8 controller-uid=788a5ee4-bc82-41d0-bcd2-c3795564679f
job.batch/backup-etcd-29138957 0/1 12s 12s c1 harbor250.cmy.com/cmy-casedemo/etcd-backup:v0.8 controller-uid=54873c2b-7c04-4fdf-a72e-7b6eebf11ffc
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/backup-etcd-29138956-4mnxn 1/1 Running 0 72s 10.100.203.147 worker232 <none> <none>
pod/backup-etcd-29138957-pgmtn 1/1 Running 0 12s 10.100.203.149 worker232 <none> <none>
[root@master231 01-etcd-bakcup-cj]#
[root@master231 01-etcd-bakcup-cj]# kubectl exec -it backup-etcd-29138956-4mnxn -- sh
/ # ls -l /opt/
total 8128
-rw------- 1 root root 8319008 May 27 09:16 cmy-etcd-2025-05-27-09:16:00.backup
/ #
/ #
[root@master231 01-etcd-bakcup-cj]#
[root@master231 01-etcd-bakcup-cj]# kubectl exec -it backup-etcd-29138957-pgmtn -- sh
/ # ls -l /opt/
total 8128
-rw------- 1 root root 8319008 May 27 09:17 cmy-etcd-2025-05-27-09:17:00.backup
9 etcd data backup and restore!!!!
9.1 Backup
- A CronJob controller periodically backs up the K8s etcd data onto a volume
1. Prepare the certificate files
[root@master231 01-etcd-backup-volumes]# mkdir -pv /cmy/data/nfs-server/homework/etcd-backup/certs
mkdir: created directory '/cmy/data/nfs-server/homework'
mkdir: created directory '/cmy/data/nfs-server/homework/etcd-backup'
mkdir: created directory '/cmy/data/nfs-server/homework/etcd-backup/certs'
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# cp /etc/kubernetes/pki/etcd/{ca.crt,server.crt,server.key} /cmy/data/nfs-server/homework/etcd-backup/certs
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# tree /cmy/data/nfs-server/homework/etcd-backup/certs
/cmy/data/nfs-server/homework/etcd-backup/certs
├── ca.crt
├── server.crt
└── server.key
0 directories, 3 files
[root@master231 01-etcd-backup-volumes]#
2. Create the directory for the backup data
[root@master231 01-etcd-backup-volumes]# mkdir -pv /cmy/data/nfs-server/homework/etcd-backup/data
mkdir: created directory '/cmy/data/nfs-server/homework/etcd-backup/data'
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# tree /cmy/data/nfs-server/homework/etcd-backup/data
/cmy/data/nfs-server/homework/etcd-backup/data
0 directories, 0 files
[root@master231 01-etcd-backup-volumes]#
3. Write the Dockerfile and push the image to the harbor registry
[root@master231 01-etcd-backup-volumes]# which etcdctl
/usr/local/bin/etcdctl
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# cp /usr/local/bin/etcdctl ./
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# cat > Dockerfile <<EOF
FROM harbor.cmy.cn/nginx/apps:v1
COPY etcdctl /usr/local/bin/
CMD ["tail","-f","/etc/hosts"]
EOF
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# docker build -t harbor.cmy.cn/etcd/etcdctl:v3.5.21 .
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# docker push harbor.cmy.cn/etcd/etcdctl:v3.5.21
4. Write the CronJob resource manifest
[root@master231 01-etcd-backup-volumes]# cat 01-cj-backup-etcd.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: backup-etcd
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
volumes:
- name: tz
hostPath:
path: /etc/localtime
- name: certs
nfs:
server: 10.168.10.231
path: /cmy/data/nfs-server/homework/etcd-backup/certs
- name: backup
nfs:
server: 10.168.10.231
path: /cmy/data/nfs-server/homework/etcd-backup/data
containers:
- name: c1
image: harbor.cmy.cn/etcd/etcdctl:v3.5.21
volumeMounts:
- name: certs
mountPath: /certs
- name: backup
mountPath: /opt
- name: tz
mountPath: /etc/localtime
command:
              - /bin/sh
              - -c
              - etcdctl --endpoints=https://10.168.10.231:2379 --cacert=/certs/ca.crt --cert=/certs/server.crt --key=/certs/server.key snapshot save /opt/cmy-etcd-`date +%F-%T`.backup
restartPolicy: OnFailure
5. Create the resources
[root@master231 01-etcd-backup-volumes]# kubectl apply -f 01-cj-backup-etcd.yaml
cronjob.batch/backup-etcd created
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# kubectl get cj,jobs,pods -o wide
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE CONTAINERS IMAGES SELECTOR
cronjob.batch/backup-etcd * * * * * False 1 1s 36s c1 harbor250.cmy.com/cmy-casedemo/etcdctl:v3.5.21 <none>
NAME COMPLETIONS DURATION AGE CONTAINERS IMAGES SELECTOR
job.batch/backup-etcd-29151431 0/1 1s 1s c1 harbor250.cmy.com/cmy-casedemo/etcdctl:v3.5.21 controller-uid=1d60525c-7357-42d2-b9bc-3af9cd41cf61
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/backup-etcd-29151431-497n8 1/1 Running 0 1s 10.100.140.100 worker233 <none> <none>
6. Check whether the backup succeeded
[root@master231 ~]# ll -h /cmy/data/nfs-server/homework/etcd-backup/data
total 47M
drwxr-xr-x 2 root root 4.0K Jun 5 09:12 ./
drwxr-xr-x 4 root root 4.0K Jun 5 08:55 ../
-rw------- 1 root root 24M Jun 5 09:11 cmy-etcd-2025-06-05-09:11:01.backup
-rw------- 1 root root 24M Jun 5 09:12 cmy-etcd-2025-06-05-09:12:01.backup
[root@master231 ~]#
[root@master231 ~]#
9.2 Restore
- etcd data restore case
1. Prepare the etcd server binary
[root@master231 ~]# wget http://192.168.14.253/Resources/Prometheus/softwares/Etcd/etcd-v3.5.21-linux-amd64.tar.gz
[root@master231 ~]# tar xf etcd-v3.5.21-linux-amd64.tar.gz -C /cmy/manifests/homework/01-etcd-backup-volumes etcd-v3.5.21-linux-amd64/etcd --strip-components=1
[root@master231 ~]#
2. Write the Dockerfile and push the image to the harbor registry
[root@master231 01-etcd-backup-volumes]# cat etcd-server.dockerfile
FROM harbor.cmy.cn/nginx/apps:v1
COPY etcdctl /usr/local/bin/
COPY etcd /usr/local/bin/
CMD ["tail","-f","/etc/hosts"]
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# docker build -t harbor.cmy.cn/etcd/etcd:v3.5.21 -f etcd-server.dockerfile .
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# docker push harbor.cmy.cn/etcd/etcd:v3.5.21
3. Prepare the directory for the restored data
[root@master231 01-etcd-backup-volumes]# mkdir -pv /cmy/data/nfs-server/homework/etcd-backup/restore
mkdir: created directory '/cmy/data/nfs-server/homework/etcd-backup/restore'
[root@master231 01-etcd-backup-volumes]#
[root@master231 ~]# tree /cmy/data/nfs-server/homework/etcd-backup/restore/
/cmy/data/nfs-server/homework/etcd-backup/restore/
0 directories, 0 files
[root@master231 ~]#
4. Restore the data with a Job controller
[root@master231 01-etcd-backup-volumes]# cat 02-jobs-restore-etcd.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: restore-etcd
spec:
template:
spec:
volumes:
- name: data
nfs:
server: 10.168.10.231
path: /cmy/data/nfs-server/homework/etcd-backup/data
- name: restore
nfs:
server: 10.168.10.231
path: /cmy/data/nfs-server/homework/etcd-backup/restore
containers:
- name: restore
image: harbor.cmy.cn/etcd/etcdctl:v3.5.21
volumeMounts:
- name: data
mountPath: /data
- name: restore
mountPath: /var/lib/etcd
command:
- /bin/sh
- -c
- etcdctl snapshot restore /data/cmy-etcd-2025-06-05-10:29:02.backup --data-dir=/var/lib/etcd
restartPolicy: Never
[root@master231 01-etcd-backup-volumes]# kubectl apply -f 02-jobs-restore-etcd.yaml
job.batch/restore-etcd created
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# kubectl get jobs,po
NAME COMPLETIONS DURATION AGE
job.batch/backup-etcd-29151458 1/1 4s 2m47s
job.batch/backup-etcd-29151459 1/1 4s 107s
job.batch/backup-etcd-29151460 1/1 3s 47s
job.batch/restore-etcd 1/1 4s 15s
NAME READY STATUS RESTARTS AGE
pod/backup-etcd-29151458-jf77j 0/1 Completed 0 2m47s
pod/backup-etcd-29151459-8kntm 0/1 Completed 0 107s
pod/backup-etcd-29151460-6phf6 0/1 Completed 0 47s
pod/restore-etcd-grt64 0/1 Completed 0 15s
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# tree /cmy/data/nfs-server/homework/etcd-backup/restore/
/cmy/data/nfs-server/homework/etcd-backup/restore/
└── member
├── snap
│ ├── 0000000000000001-0000000000000001.snap
│ └── db
└── wal
└── 0000000000000000-0000000000000000.wal
3 directories, 3 files
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# du -sh /cmy/data/nfs-server/homework/etcd-backup/restore/
85M /cmy/data/nfs-server/homework/etcd-backup/restore/
[root@master231 01-etcd-backup-volumes]#
5. Run etcd on the restored data with a Deployment
[root@master231 01-etcd-backup-volumes]# cat 03-deploy-etcd.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-etcd
spec:
replicas: 1
selector:
matchLabels:
apps: etcd
template:
metadata:
labels:
apps: etcd
spec:
volumes:
- name: data
nfs:
server: 10.168.10.231
path: /cmy/data/nfs-server/homework/etcd-backup/restore
containers:
- image: harbor.cmy.cn/etcd/etcd:v3.5.21
name: c1
#command: ["tail","-f","/etc/hosts" ]
command:
- /bin/sh
- -c
- etcd --data-dir /var/lib/etcd --listen-client-urls 'http://0.0.0.0:2379' --advertise-client-urls 'http://0.0.0.0:2379'
ports:
- containerPort: 2379
name: http
- containerPort: 2380
name: tcp
volumeMounts:
- name: data
mountPath: /var/lib/etcd
---
apiVersion: v1
kind: Service
metadata:
name: etcd-single
spec:
type: LoadBalancer
selector:
apps: etcd
ports:
- port: 2379
targetPort: 2379
nodePort: 30088
[root@master231 01-etcd-backup-volumes]# kubectl apply -f 03-deploy-etcd.yaml
deployment.apps/deploy-etcd created
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# kubectl get pods -o wide -l apps=etcd
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-etcd-85c5848665-rg6k7 1/1 Running 0 13s 10.100.140.77 worker233 <none> <none>
[root@master231 01-etcd-backup-volumes]#
[root@master231 01-etcd-backup-volumes]# kubectl exec -it deploy-etcd-85c5848665-rg6k7 -- sh
/ # etcdctl get "" --prefix --keys-only | wc -l # the data was restored successfully!
1248
/ #
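The Service above also exposes the restored etcd on NodePort 30088, so it can be checked from outside the Pod as well (a sketch; use a plain etcdctl here rather than the TLS alias defined in section 8, and substitute any node IP):
etcdctl --endpoints=http://10.0.0.233:30088 endpoint status --write-out=table
etcdctl --endpoints=http://10.0.0.233:30088 get "" --prefix --keys-only | wc -l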
- etcd-workbench: a web UI for managing etcd
Reference links:
https://tzfun.github.io/etcd-workbench/
https://github.com/tzfun/etcd-workbench/blob/master/README_ZH.md
https://github.com/tzfun/etcd-workbench-web/blob/master/server/src/main/resources/etcd-workbench.conf
1. Pull the image
docker pull tzfun/etcd-workbench:1.1.4
SVIP:
[root@node-exporter43 ~]# wget http://192.168.14.253/Resources/Prometheus/images/etcd-workbench/cmy-etcd-workbench-v1.1.4.tar.gz
[root@node-exporter43 ~]# docker load -i cmy-etcd-workbench-v1.1.4.tar.gz
2. Run etcd-workbench
[root@node-exporter43 ~]# cat etcd-workbench.conf
[server]
# Port the service listens on
port = 8002
# etcd request timeout
etcdExecuteTimeoutMillis = 3000
# Data directory
dataDir = ./data
[auth]
# Enable authentication
enable = true
# Username and password
user = admin:cmy
[log]
# Log level
level = INFO
# Log directory
file = ./logs
# Log file name
fileName = etcd-workbench
# Log rotation size
fileLimitSize = 100
# Where logs are printed
printers = std,file
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# docker run -d -v /root/etcd-workbench.conf:/usr/tzfun/etcd-workbench/etcd-workbench.conf --name etcd-workbench --network host tzfun/etcd-workbench:1.1.4
88e4dc60963e92f988a617727e7cf76db3e0d565096859ca63549bed7883fc46
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# ss -ntl | grep 8002
LISTEN 0 4096 *:8002 *:*
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
88e4dc60963e tzfun/etcd-workbench:1.1.4 "/bin/sh -c 'java …" 9 seconds ago Up 8 seconds etcd-workbench
[root@node-exporter43 ~]#
3. Access the etcd-workbench web UI
http://10.0.0.43:8002/
Log in with the username and password you defined above.
10 Scheduled MongoDB backup and restore
10.1 Deploy MongoDB with a Deployment
cat ../deployment/mangodb.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-mongodb
spec:
replicas: 1
selector:
matchLabels:
apps: mongodb
template:
metadata:
labels:
apps: mongodb
spec:
      hostNetwork: true # use the host network
containers:
- name: mongodb
image: harbor.cmy.cn/mangodb/mangodb@sha256:353d42c6da48e9390fed6b5fd5bb3a44eff16f9c8081963a8ea077482d953606
Insert data
kubectl exec -it pod/deploy-mongodb-bb6f7b5cd-5w2ws -- mongosh
use mydatabase
// create a collection
db.createCollection("mycollection")
// insert documents
db.mycollection.insertMany([
{ name: "Alice", age: 25, email: "alice@example.com" },
{ name: "Bob", age: 30, email: "bob@example.com" },
{ name: "Charlie", age: 35, email: "charlie@example.com" }
])
10.2 Deploy the backup pod with a CronJob
cat bak-mongo.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: mongo-backup
spec:
  schedule: "* * * * *" # runs every minute for testing; use "0 2 * * *" for a daily 2 AM backup
jobTemplate:
spec:
template:
spec:
containers:
- name: mongo-backup
image: harbor.cmy.cn/mangodb/mangodb@sha256:353d42c6da48e9390fed6b5fd5bb3a44eff16f9c8081963a8ea077482d953606
command: ["/bin/bash"]
args: ["-c", " mongodump -h 10.168.10.233 -d mydatabase -o /mnt/; tail -f /etc/hosts"]
restartPolicy: Never
10.3 Verify deletion and restore
kubectl exec -it pod/deploy-mongodb-bb6f7b5cd-5w2ws -- mongosh
use mydatabase
db.dropDatabase()
test> show dbs;
admin 40.00 KiB
config 108.00 KiB
local 40.00 KiB
Restore the data
Enter the backup pod
kubectl exec -it pod/mongo-backup-29139140-h8p8w -- bash
root@mongo-backup-29139140-h8p8w:/# mongorestore --host 10.168.10.233 --db mydatabase /mnt/mydatabase
2025-05-27T12:28:52.002+0000 The --db and --collection flags are deprecated for this use-case; please use --nsInclude instead, i.e. with --nsInclude=${DATABASE}.${COLLECTION}
2025-05-27T12:28:52.004+0000 building a list of collections to restore from /mnt/mydatabase dir
2025-05-27T12:28:52.004+0000 don't know what to do with file "/mnt/mydatabase/prelude.json", skipping...
2025-05-27T12:28:52.004+0000 reading metadata for mydatabase.mycollection from /mnt/mydatabase/mycollection.metadata.json
2025-05-27T12:28:52.018+0000 restoring mydatabase.mycollection from /mnt/mydatabase/mycollection.bson
2025-05-27T12:28:52.029+0000 finished restoring mydatabase.mycollection (3 documents, 0 failures)
2025-05-27T12:28:52.029+0000 no indexes to restore for collection mydatabase.mycollection
2025-05-27T12:28:52.029+0000 3 document(s) restored successfully. 0 document(s) failed to restore.
root@mongo-backup-29139140-h8p8w:/#
Verify that the data was restored
kubectl exec -it pod/deploy-mongodb-bb6f7b5cd-5w2ws -- mongosh --eval 'show dbs;'
admin 40.00 KiB
config 108.00 KiB
local 40.00 KiB
mydatabase 40.00 KiB
11 svc-wordpress
mysql
cat mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
    app: mysql # a more meaningful label such as app=mysql is usually recommended
name: db-mysql
spec:
  replicas: 1 # number of replicas (defaults to 1)
selector:
matchLabels:
      apps: mysql # must match the Pod's metadata.labels
template:
metadata:
labels:
        apps: mysql # must match selector.matchLabels
spec:
containers:
- image: harbor.cmy.cn/mysql/mysql@sha256:c57363379dee26561c2e554f82e70704be4c8129bd0d10e29252cc0a34774004
name: c1
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "yes"
- name: MYSQL_DATABASE
value: wordpress
- name: MYSQL_USER
value: linux97
- name: MYSQL_PASSWORD
value: cmy
args:
- --character-set-server=utf8
- --collation-server=utf8_bin
- --default-authentication-plugin=mysql_native_password
svc-mysql
cat svc-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
name: svc-mysql
spec:
  # Port mapping
  ports:
  # The Service port
  - port: 3306
    # The Pod (target) port
    targetPort: 3306
  # Select Pods by their labels
  selector:
    apps: mysql
  # The Service type
  type: ClusterIP
wp
cat wp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
    apps: wp # a more meaningful label (e.g. app: wp) is recommended
name: wp-deploy
spec:
  replicas: 2 # number of replicas
selector:
matchLabels:
      apps: wp # must match the labels in the Pod template
template:
metadata:
labels:
        apps: wp # must match selector.matchLabels
spec:
nodeName: worker233
containers:
- image: harbor.cmy.cn/mysql/wp@sha256:07c5a73891236eed540e68c8cdc819a24fe617fa81259ee22be3105daefa3ee1
name: c1
env:
- name: WORDPRESS_DB_HOST
        # point WordPress at the database via the MySQL Service name
value: svc-mysql
- name: WORDPRESS_DB_NAME
value: wordpress
- name: WORDPRESS_DB_USER
value: linux97
- name: WORDPRESS_DB_PASSWORD
value: cmy
svc-wp
cat svc-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
name: svc-wp
spec:
  # Port mapping
  ports:
  # The Service port
  - port: 80
    nodePort: 30080
    # The Pod (target) port
    targetPort: 80
  # Select Pods by their labels
  selector:
    apps: wp
  # The Service type
  type: NodePort
kubectl get svc,pods -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.200.0.1 <none> 443/TCP 7d1h <none>
service/svc-wp NodePort 10.200.119.170 <none> 80:30080/TCP 6m48s apps=wp
service/svc-xiuxian ClusterIP 10.200.35.23 <none> 3306/TCP 11m apps=mysql
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/db-mysql-8dcf44d64-n5m6b 1/1 Running 0 13m 10.100.203.136 worker232 <none> <none>
pod/wp-deploy-5c9c84fb59-2qhr6 1/1 Running 0 6s 10.100.140.81 worker233 <none> <none>
pod/wp-deploy-5c9c84fb59-zkwcr 1/1 Running 0 6s 10.100.140.82 worker233 <none> <none>
12 Deploying EFK on K8s
Deploy EFK for log collection, with the following requirements:
Run a single-node ElasticSearch with a Deployment in the kube-public namespace;
Expose ES with a ClusterIP Service;
Run Kibana with a Deployment in the default namespace;
Kibana must be able to connect to ES and be exposed on port 5601 through a LoadBalancer Service;
Run the filebeat component with a DaemonSet, collect the system logs, write them to ES, and visualize them in Kibana;
12.1 Deploy a single-node ElasticSearch
cat es.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch
namespace: kube-public
labels:
app: elasticsearch
spec:
  replicas: 1 # single-node mode
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
hostNetwork: true
nodeName: worker232
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.25
ports:
- containerPort: 9200
- containerPort: 9300
env:
- name: discovery.type
value: "single-node"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
resources:
requests:
memory: "512Mi"
cpu: "500m"
12.2 Expose ES with a ClusterIP Service
cat svc-es.yaml
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: kube-public
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
ports:
- protocol: TCP
port: 9200
targetPort: 9200
name: http
- protocol: TCP
port: 9300
targetPort: 9300
name: transport
type: ClusterIP
12.3 Deploy Kibana with a Deployment
cat kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
hostNetwork: true
nodeName: worker232
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.17.25
ports:
- containerPort: 5601
env:
- name: ELASTICSEARCH_HOSTS
value: "http://10.200.8.143:9200"
- name: I18N_LOCALE
value: "zh-CN"
resources:
requests:
memory: "512Mi"
cpu: "500m"
12.4 Expose Kibana with a LoadBalancer Service
cat svc-nodebalance.yaml
apiVersion: v1
kind: Service
metadata:
name: svc-kibana-loadbalancer
spec:
ports:
  - port: 5601
    nodePort: 30561 # nodePort must be within the cluster's NodePort range (default 30000-32767); 30561 is an example value
selector:
app: kibana
type: LoadBalancer
12.5 Deploy the filebeat component with a DaemonSet
cat fibeat.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
tolerations:
- operator: Exists
containers:
- name: filebeat
image: harbor.cmy.cn/efk/fibeat-cmy:v7.17.25
Note: this image is customized from the official image, with the collection config file baked in; a sketch of that config follows after the Dockerfile.
cat ~/Dockerfile
FROM docker.elastic.co/beats/filebeat:7.17.25
LABEL auth=cmy
COPY fibeat.yaml /usr/share/filebeat/fibeat.yaml
CMD ["filebeat", "-e", "-c", "/usr/share/filebeat/fibeat.yaml"]
13 WordPress with NFS sharing static data and database data across nodes
Database part
apiVersion: apps/v1
kind: Deployment
metadata:
name: db-nfs
spec:
replicas: 1
selector:
matchLabels:
apps: db
template:
metadata:
labels:
apps: db
spec:
restartPolicy: Always
volumes:
- name: db
nfs:
server: 10.168.10.231
path: /cmy/data/nfs-server/mysql
containers:
- name: db-cmy
image: harbor.cmy.cn/mysql/mysql@sha256:c57363379dee26561c2e554f82e70704be4c8129bd0d10e29252cc0a34774004
imagePullPolicy: Always
volumeMounts:
- name: db
mountPath: /var/lib/mysql/
env:
- name: MYSQL_DATABASE
value: wordpress
- name: MYSQL_USER
value: cmy
- name: MYSQL_PASSWORD
value: "1"
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "yes"
---
apiVersion: v1
kind: Service
metadata:
name: db-svc
spec:
type: ClusterIP
clusterIP: 10.200.0.100
selector:
apps: db
ports:
- port: 3306
targetPort: 3306
WordPress part
apiVersion: apps/v1
kind: Deployment
metadata:
name: wp-nfs
spec:
replicas: 3
selector:
matchLabels:
apps: wp
template:
metadata:
labels:
apps: wp
spec:
restartPolicy: Always
volumes:
- name: wp
nfs:
server: 10.168.10.231
path: /cmy/data/nfs-server/wp
containers:
- name: wordpress
image: harbor.cmy.cn/mysql/wp@sha256:07c5a73891236eed540e68c8cdc819a24fe617fa81259ee22be3105daefa3ee1
imagePullPolicy: Always
volumeMounts:
- name: wp
mountPath: /var/www/html/wp-content/
env:
- name: WORDPRESS_DB_HOST
value: 10.200.0.100
- name: WORDPRESS_DB_USER
value: cmy
- name: WORDPRESS_DB_PASSWORD
value: "1"
- name: WORDPRESS_DB_NAME
value: wordpress
---
apiVersion: v1
kind: Service
metadata:
name: wp-svc
spec:
type: LoadBalancer
selector:
apps: wp
ports:
- port: 80
targetPort: 80
nodePort: 30090
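Before applying the manifests above, the NFS export directories referenced by the volumes must exist on the NFS server (a sketch; the paths come from the volumes above, and it is assumed /cmy/data/nfs-server is already exported by the existing NFS server configuration):
mkdir -pv /cmy/data/nfs-server/mysql /cmy/data/nfs-server/wp
exportfs -rv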
14 MySQL master-slave replication with cm-svc-deploy-pvc-sc
14.1 ConfigMap (cm)
cat mysql-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-config
data:
master-my.cnf: |
[mysqld]
server-id=10
log-bin=mysql-bin
binlog-format=ROW
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
      default_authentication_plugin=mysql_native_password # default authentication plugin
slave-my.cnf: |
[mysqld]
server-id=20
relay-log=mysql-relay-bin
log-bin=mysql-bin
binlog-format=ROW
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
read-only=1
      default_authentication_plugin=mysql_native_password # default authentication plugin
init-master.sh: |
#!/bin/bash
MYSQL_ROOT_PASSWORD=1qaz
echo "[主库] 等待 MySQL 启动..."
until mysql -uroot -h mysql-master -p$MYSQL_ROOT_PASSWORD -e "SELECT 1"; do sleep 2; done
echo "[主库] 创建复制用户 repl..."
mysql -uroot -h mysql-master -p$MYSQL_ROOT_PASSWORD -e "CREATE USER IF NOT EXISTS 'repl'@'%' IDENTIFIED BY 'replpass';"
mysql -uroot -h mysql-master -p$MYSQL_ROOT_PASSWORD -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';"
mysql -uroot -h mysql-master -p$MYSQL_ROOT_PASSWORD -e "FLUSH PRIVILEGES;"
echo "[主库] 记录当前 binlog 状态..."
# 使用 -N(跳过列名)和 -B(批处理模式)获取干净输出
mysql -uroot -h mysql-master -p$MYSQL_ROOT_PASSWORD -NB -e "SHOW MASTER STATUS" > /initdata/master_status.txt
echo "[主库] 导出全库数据..."
mysqldump -uroot -h mysql-master -p$MYSQL_ROOT_PASSWORD --all-databases --single-transaction > /initdata/alldb.sql
echo "[主库] 最终 binlog 状态:"
cat /initdata/master_status.txt
init-slave.sh: |
#!/bin/bash
set -e
MYSQL_ROOT_PASSWORD=1qaz
echo "[从库] 等待 MySQL 完全启动..."
for i in {1..30}; do
if mysql -uroot -h mysql-slave -p$MYSQL_ROOT_PASSWORD -e "SELECT 1" &>/dev/null; then
break
fi
echo "等待 MySQL 启动 ($i/30)..."
sleep 2
done
if ! mysql -uroot -h mysql-slave -p$MYSQL_ROOT_PASSWORD -e "SELECT 1" &>/dev/null; then
echo "[从库] 错误:MySQL 启动失败!"
exit 1
fi
echo "[从库] 正在导入主库全量数据..."
mysql -uroot -h mysql-slave -p$MYSQL_ROOT_PASSWORD < /initdata/alldb.sql
echo "[从库] 提取主库最新 binlog 位置..."
# 检查状态文件是否存在
if [ -f /initdata/master_status.txt ]; then
        # read the file directly (two columns: file name and position)
read -r LOG_FILE LOG_POS <<< $(cat /initdata/master_status.txt)
echo "从状态文件获取: File=$LOG_FILE, Position=$LOG_POS"
else
echo "[从库] 警告:未找到主库状态文件,使用 SQL 文件位置"
LOG_FILE=$(grep -m1 'MASTER_LOG_FILE' /initdata/alldb.sql | sed -E "s/.*MASTER_LOG_FILE='([^']+)'.*/\1/")
LOG_POS=$(grep -m1 'MASTER_LOG_POS' /initdata/alldb.sql | sed -E "s/.*MASTER_LOG_POS=([0-9]+).*/\1/")
fi
      # validate the binlog position values
if [[ ! "$LOG_FILE" =~ ^mysql-bin\.[0-9]+$ ]] || [[ ! "$LOG_POS" =~ ^[0-9]+$ ]]; then
echo "[从库] 错误:无效的 binlog 位置!"
echo "LOG_FILE: '$LOG_FILE'"
echo "LOG_POS: '$LOG_POS'"
echo "master_status.txt 内容:"
cat /initdata/master_status.txt
exit 1
fi
echo "[从库] 配置主从复制连接..."
# 清除任何现有复制配置
mysql -uroot -h mysql-slave -p$MYSQL_ROOT_PASSWORD -e "STOP SLAVE; RESET SLAVE ALL;" 2>/dev/null || true
      # use a here-document to avoid escaping problems
mysql -uroot -h mysql-slave -p$MYSQL_ROOT_PASSWORD <<EOF
CHANGE MASTER TO
MASTER_HOST='mysql-master',
MASTER_USER='repl',
MASTER_PASSWORD='replpass',
MASTER_LOG_FILE='$LOG_FILE',
MASTER_LOG_POS=$LOG_POS,
GET_MASTER_PUBLIC_KEY=1;
EOF
echo "[从库] 启动复制线程..."
mysql -uroot -h mysql-slave -p$MYSQL_ROOT_PASSWORD -e "START SLAVE;"
echo "[从库] 等待复制线程初始化..."
sleep 5
echo "[从库] 当前 Slave 状态:"
mysql -uroot -h mysql-slave -p$MYSQL_ROOT_PASSWORD -e "SHOW SLAVE STATUS\G"
echo "[从库] 关键复制状态:"
mysql -uroot -h mysql-slave -p$MYSQL_ROOT_PASSWORD -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Last_IO_Error|Last_SQL_Error'
14.2 Master
cat mysql-master-deployment.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-mysql-master
spec:
accessModes:
- ReadWriteMany
resources:
limits:
storage: 2000Mi
requests:
storage: 1000Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-master
spec:
replicas: 1
selector:
matchLabels:
app: mysql
role: master
template:
metadata:
labels:
app: mysql
role: master
spec:
containers:
- name: mysql
image: harbor.cmy.cn/mysql/mysql@sha256:c57363379dee26561c2e554f82e70704be4c8129bd0d10e29252cc0a34774004
env:
- name: MYSQL_ROOT_PASSWORD
value: "1qaz" # 替换为你的root密码
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: db
mountPath: /var/lib/mysql
- name: config-volume
mountPath: /etc/my.cnf
subPath: my.cnf
volumes:
- name: db
persistentVolumeClaim:
claimName: pvc-mysql-master
- name: config-volume
configMap:
name: mysql-config
items:
- key: master-my.cnf
path: my.cnf
---
# mysql-master-service.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql-master
spec:
selector:
app: mysql
role: master
ports:
- protocol: TCP
port: 3306
targetPort: 3306
type: ClusterIP
Create test data
create database cmy;
14.3 Slave
cat mysql-slave-deployment.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-mysql-slave
spec:
accessModes:
- ReadWriteMany
resources:
limits:
storage: 2000Mi
requests:
storage: 1000Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-slave
spec:
replicas: 1
selector:
matchLabels:
app: mysql
role: slave
template:
metadata:
labels:
app: mysql
role: slave
spec:
volumes:
- name: db1
persistentVolumeClaim:
claimName: pvc-mysql-slave
- name: initdata
nfs:
server: 10.168.10.231
path: /cmy/data/nfs-server/mysql/init_data
- name: backup
configMap:
name: mysql-config
items:
- key: init-master.sh
path: init-master.sh
- name: restore
configMap:
name: mysql-config
items:
- key: init-slave.sh
path: init-slave.sh
- name: config-volume
configMap:
name: mysql-config
items:
- key: slave-my.cnf
path: my.cnf
containers:
- name: mysql
image: harbor.cmy.cn/mysql/mysql@sha256:c57363379dee26561c2e554f82e70704be4c8129bd0d10e29252cc0a34774004
command: ["/bin/bash", "-c"]
args:
- |
/entrypoint.sh mysqld &
for i in {1..30}; do
if mysqladmin ping -uroot -p1qaz --silent; then break; fi
sleep 1
done
bash /tmp/init-master.sh
sleep 5
bash /tmp/init-slave.sh
tail -f /dev/null
env:
- name: MYSQL_ROOT_PASSWORD
value: "1qaz" # 与主库保持一致
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: db1
mountPath: /var/lib/mysql
- name: config-volume
mountPath: /etc/my.cnf
subPath: my.cnf
- name: initdata
mountPath: /initdata
- name: restore
mountPath: /tmp/init-slave.sh
subPath: init-slave.sh
- name: backup
mountPath: /tmp/init-master.sh
subPath: init-master.sh
---
# mysql-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql-slave
spec:
selector:
app: mysql
role: slave
ports:
- protocol: TCP
port: 3306
targetPort: 3306
type: ClusterIP
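A hedged way to verify replication once both Deployments are running (the cmy database is the one created on the master in 14.2):
kubectl exec deploy/mysql-slave -- mysql -uroot -p1qaz -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running'
kubectl exec deploy/mysql-slave -- mysql -uroot -p1qaz -e "SHOW DATABASES;"   # the cmy database created on the master should appear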
15 Backing up and restoring a K8s cluster with Velero
Velero (formerly Heptio Ark) is an open-source tool for backing up, restoring and migrating Kubernetes cluster resources, persistent volumes (PVs) and application data. It supports cross-cluster data migration and is commonly used for disaster recovery and for protecting data before cluster upgrades.
15.1 Core features of Velero
- Back up Kubernetes resources
  Velero can back up an entire namespace, specific resources (such as Deployments and Services) or the resource state of the whole cluster.
- Back up persistent volume (PV) data
  PV data can be backed up to cloud object storage (such as AWS S3, Azure Blob Storage or Alibaba Cloud OSS) or to NFS.
- Restore resources and data
  Previously backed-up resource state and PV data can be restored into the same or a different Kubernetes cluster.
- Cluster migration
  Using the backup and restore mechanism, applications can be migrated from one Kubernetes cluster to another, including across cloud platforms.
- Scheduled backups
  Scheduled tasks can back up the cluster state periodically and automatically, which suits data protection in production environments.
15.2 Velero architecture
Velero consists of two parts:
- Velero client (CLI)
  Users interact with Velero through the velero command-line tool to run backups and restores, view logs, and so on.
- Velero server (running inside the Kubernetes cluster)
  The server runs as one or more Pods in the target cluster; it performs the actual backup and restore operations and talks to the storage backend.
15.3 How Velero works
- Backup process
  - The user starts a backup with velero backup create.
  - The Velero server queries the Kubernetes API server for the current state of the selected resources.
  - For persistent volumes, Velero uses volume snapshots (where supported) or a CSI (Container Storage Interface) driver to write the data to the configured storage backend (such as S3).
  - The backup metadata (resource manifests and PV snapshot information) is also uploaded to the storage backend.
- Restore process
  - The user starts a restore with velero restore create.
  - Velero downloads the backup metadata and resource manifests from the storage backend and re-applies them to the Kubernetes cluster.
  - For persistent volumes, Velero recreates the PVs from the snapshot information in the backup and mounts the data.
15.4 Supported storage backends
Velero supports several storage backends for saving backup data, including:
- Cloud object storage:
  - AWS S3 and compatible services (such as MinIO)
  - Azure Blob Storage
  - Google Cloud Storage
  - Alibaba Cloud OSS
- NFS (Network File System)
- Other object stores that expose an S3-compatible API
Note: whether PV data can be backed up via direct snapshots depends on whether the storage driver used by the cluster supports volume snapshots (e.g. a CSI driver).
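A hedged sketch of the basic CLI flow described above (the bucket name, credentials file, plugin version and MinIO address are assumptions, not taken from this environment):

# install the server-side components into the cluster, using an S3-compatible backend such as MinIO
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.10.0 \
  --bucket k8s-backup \
  --secret-file ./credentials-velero \
  --use-volume-snapshots=false \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://10.0.0.240:9000

# back up a namespace, restore from the backup, and create a daily schedule
velero backup create default-backup --include-namespaces default
velero restore create --from-backup default-backup
velero schedule create daily-default --schedule="0 2 * * *" --include-namespaces default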
[[Velero备份与恢复K8s集群及应用]]