k8s Storage

1. NFS (Network File System)

Advantages

  • Data sharing: multiple Pods can mount the same NFS directory, so data can be shared across Pods. This suits collaborative workloads, for example several web servers serving one set of static assets.
  • Simple to use: compared with more complex distributed storage solutions, mounting NFS is straightforward to configure and operate, which makes it a good fit for small and medium-sized clusters.
  • Mature and stable: NFS has been around for many years, is highly stable and compatible, and has clients for virtually every operating system.

Disadvantages

  • Single point of failure: if the NFS server goes down, every Pod that depends on it may be unable to read or write data, so extra redundancy is usually required.
  • Performance bottleneck: under heavy concurrent I/O, NFS may not keep up; it is a network file system, so network bandwidth and the server's processing capacity easily become the bottleneck.
  • Complicated permission management: when NFS is used across platforms, differences between operating systems' permission models make permissions tedious to configure and manage.

[[k8s部署案例集合#12 WordPress 使用 NFS 实现多节点共享静态数据和数据库数据]]
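As a quick reference, a minimal sketch of mounting an NFS export directly as a Pod volume; the server address 10.168.10.231 and export path /cmy/data/nfs-server are assumptions matching the environment used later in this note, and the NFS client utilities must be installed on every worker node:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-demo
spec:
  volumes:
  - name: data
    # volume type nfs: mount an export from an external NFS server
    nfs:
      server: 10.168.10.231          # assumed NFS server address
      path: /cmy/data/nfs-server     # assumed export path on that server
  containers:
  - name: c1
    image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html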

2. emptyDir

Advantages

  • Sharing data between containers: when a Pod has multiple containers that need to exchange temporary data, an emptyDir volume serves as that shared space.
  • Caching: applications that need fast access to temporary cache data can keep it in an emptyDir volume and benefit from the high read/write speed of local storage.
  • Temporary files: files produced while an application runs, such as extracted archives or temporary copies of log files, can live in an emptyDir volume where the containers can use them without cluttering the rest of the node's filesystem.

Its defining characteristic is that it shares the Pod's lifecycle: when the Pod is deleted, the data stored in the volume is deleted with it.

Application scenarios:
 – 1. sharing data between containers in the same Pod;
 – 2. temporary storage of data.
Path on the node where emptyDir data is stored:
/var/lib/kubelet/pods/<POD_ID>/volumes/kubernetes.io~empty-dir/<VOLUME_NAME>/
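
For the caching scenario, an emptyDir can optionally be backed by memory (tmpfs) and capped in size; a minimal sketch of such a volume definition (the 64Mi limit is an arbitrary example value):

      volumes:
      - name: cache
        emptyDir:
          medium: Memory     # back the volume with tmpfs (node RAM) instead of node disk
          sizeLimit: 64Mi    # the Pod is evicted if usage exceeds this limit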

Example: create a Pod whose containers share data

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-emptydir-multiple
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      volumes:
      - emptyDir: {}
        name: data
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      - name: c2
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        command:
        - /bin/sh
        - -c
        - echo ${HOSTNAME} >> /cmy/index.html; tail -f /etc/hosts
        volumeMounts:
        - name: data
          mountPath: /cmy

3. hostPath

hostPath lets a Pod's containers access an arbitrary path on the worker node that hosts the Pod.
Application scenarios:
– 1. sharing data from a specific worker node's filesystem with a container at a given path;
– 2. synchronizing the time zone.

Example: syncing the time zone and sharing host data

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-hostpath
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      volumes:
      - emptyDir: {}
        name: data01
        # the next volume's type is hostPath
      - hostPath:
          # expose a worker-node host path to the container; whether a missing directory
          # is created automatically depends on the hostPath type (see the sketch after this example)
          path: /linux96
        name: data02
      - name: data03
        hostPath:
          path: /etc/localtime
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data01
          mountPath: /usr/share/nginx/html
        - name: data02
          mountPath: /cmy
        - name: data03
          mountPath: /etc/localtime

[root@master-231 /cmy/manifests/volume]# kubectl exec -it pod/deploy-xiuxian-hostpath-7d7c8c588d-cbbb7 -- sh
/ # date
Wed Jun  4 15:34:07 CST 2025
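
Note that whether a missing hostPath directory is created automatically depends on the type field (and, when type is left empty, on the container runtime). To make the behaviour explicit, type can be declared; a minimal sketch for the two volumes above:

      volumes:
      - name: data02
        hostPath:
          path: /linux96
          type: DirectoryOrCreate    # create the directory on the node if it does not already exist
      - name: data03
        hostPath:
          path: /etc/localtime
          type: File                 # fail the mount if the file does not already exist on the node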

1 Summary

| Property | emptyDir | hostPath | NFS |
| --- | --- | --- | --- |
| Lifecycle | tied to the Pod (destroyed when the Pod is deleted) | tied to the node (kept when the Pod is deleted) | independent of Pods and nodes (persistent) |
| Storage location | node-local disk (or memory) | node-local filesystem | remote NFS server |
| Sharing | shared within a single Pod | shared by Pods on the same node | shared by Pods across nodes |
| Typical use | temporary data (cache, logs) | node-local persistent data | persistent storage shared across nodes |
| Recommended in production | fine for temporary data | not recommended (unless there is a special need) | suitable for shared storage |
| Performance | fast (memory) or moderate (disk) | fast (local disk) | depends on network bandwidth |
| High availability | none (destroyed with the Pod) | none (tied to the node) | can be made highly available (NFS cluster) |

2 configmap

A configMap is essentially a configuration dictionary; its main purpose is to store configuration data, such as configuration files.

Most real-world use cases are application configuration files.

2.1 Creating a cm imperatively

kubectl create configmap xixi --from-file=myhosts=/etc/hosts --from-file=/etc/os-release --from-literal=school=cmy --from-literal=class=linux97
configmap/xixi created
[root@master231 configmaps]# 
[root@master231 configmaps]# kubectl get cm 
NAME               DATA   AGE
kube-root-ca.crt   1      14d
xixi               4      4s
[root@master231 configmaps]# 
[root@master231 configmaps]# kubectl get cm  xixi 
NAME   DATA   AGE
xixi   4      8s
[root@master231 configmaps]# 
[root@master231 configmaps]# kubectl get cm xixi -o yaml
apiVersion: v1
data:
  class: linux97
  myhosts: |
    127.0.0.1 localhost
    127.0.1.1 cmy

    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    10.168.10.250 harbor250.cmy.com
  os-release: |
    PRETTY_NAME="Ubuntu 22.04.4 LTS"
    NAME="Ubuntu"
    VERSION_ID="22.04"
    VERSION="22.04.4 LTS (Jammy Jellyfish)"
    VERSION_CODENAME=jammy
    ID=ubuntu
    ID_LIKE=debian
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    UBUNTU_CODENAME=jammy
    BLOG=https://www.cnblogs.com/cmy
  school: cmy
kind: ConfigMap
metadata:
  creationTimestamp: "2025-06-05T03:01:23Z"
  name: xixi
  namespace: default
  resourceVersion: "1166717"
  uid: ea61963c-0966-4a2b-9b9c-618f2e3515e4
[root@master231 configmaps]# 

2.2 Creating a cm declaratively

[root@master231 configmaps]# cat /etc/my.cnf
[mysqld]
basedir=/cmy/softwares/mysql80
datadir=/cmy/data/mysql80
port=3306
socket=/tmp/mysql80.sock

[client]
socket=/tmp/mysql80.sock
[root@master231 configmaps]# 
[root@master231 configmaps]# kubectl create configmap haha --from-file=myshadow=/etc/my.cnf --from-literal=school=laonanhai  -o yaml --dry-run=client > 01-cm.yaml
[root@master231 configmaps]# 
[root@master231 configmaps]# vim 01-cm.yaml 
[root@master231 configmaps]# 
[root@master231 configmaps]# cat 01-cm.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: haha
data:
  myshadow: |
    [mysqld]
    basedir=/cmy/softwares/mysql80
    datadir=/cmy/data/mysql80
    port=3306
    socket=/tmp/mysql80.sock

    [client]
    socket=/tmp/mysql80.sock
  school: laonanhai
[root@master231 configmaps]# 
[root@master231 configmaps]# kubectl apply -f  01-cm.yaml 
configmap/haha created
[root@master231 configmaps]# 
[root@master231 configmaps]# kubectl get cm
NAME               DATA   AGE
haha               2      12s
kube-root-ca.crt   1      14d
xixi               4      7m47s
[root@master231 configmaps]# 

2.3 Referencing a cm from a Pod via environment variables

 cat ./02-deploy-cm-env.yaml
apiVersion:  apps/v1
kind: Deployment
metadata:
  name: deploy-cm-env
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        version: v1
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        name: c1
        env:
        - name: cmy-xuexiao
          # take the value from another resource
          valueFrom:
            # the value comes from a configMap
            configMapKeyRef:
              # name of the configMap
              name: haha
              # key within the configMap
              key: school
        - name: cmy-shadow
          valueFrom:
            configMapKeyRef:
              name: haha
              key: myshadow
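
If all keys of the configMap are needed, envFrom can import them in one go instead of listing each key; a minimal sketch of the container section (the cmy_ prefix is an arbitrary example):

      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        name: c1
        envFrom:
        - configMapRef:
            name: haha    # import every key of the configMap as environment variables
          prefix: cmy_    # optional prefix prepended to each variable name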

2.4 Referencing a cm from a Pod via a volume !!!!

 cat 03-deploy-cm-volumes.yaml
apiVersion:  apps/v1
kind: Deployment
metadata:
  name: deploy-cm-env
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        version: v1
    spec:
      volumes:
      - name: data
        # the volume type is configMap
        configMap:
          # name of the configMap
          name: xixi
          # if items is omitted, every key of the cm is projected.
          # if only some keys are needed, list them under items.
          items:
            # the key to project
          - key: os-release
            # roughly, the file name the key will be mounted as inside the container
            path: os.txt
          - key: school
            path: xuexiao.log
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        name: c1
        volumeMounts:
        - name: data
          mountPath: /data
        env:
        - name: cmy-xuexiao
          valueFrom:
            configMapKeyRef:
              name: haha
              key: school
        - name: cmy-shadow
          valueFrom:
            configMapKeyRef:
              name: haha
              key: myshadow
[root@master231 configmaps]# 
[root@master231 configmaps]# kubectl apply -f  03-deploy-cm-volumes.yaml
deployment.apps/deploy-cm-env created
[root@master231 configmaps]# 
[root@master231 configmaps]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
deploy-cm-env-56945dfb87-4g9w9   1/1     Running   0          2s    10.100.140.123   worker233   <none>           <none>
deploy-cm-env-56945dfb87-qzqg8   1/1     Running   0          2s    10.100.140.121   worker233   <none>           <none>
deploy-cm-env-56945dfb87-xz49q   1/1     Running   0          2s    10.100.203.168   worker232   <none>           <none>
[root@master231 configmaps]# 
[root@master231 configmaps]# kubectl exec -it deploy-cm-env-56945dfb87-4g9w9 -- sh
/ # ls -l /data/
total 0
lrwxrwxrwx    1 root     root            13 Jun  5 03:24 os.txt -> ..data/os.txt
lrwxrwxrwx    1 root     root            18 Jun  5 03:24 xuexiao.log -> ..data/xuexiao.log
/ # 

2.5 configMap subPath in practice

Its main use case is making the mount point a single file rather than a directory.


2. Hands-on example
[root@master231 06-games-configmaps]# cat 02-deploy-cm-games-subPath.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-nginx
data:
  nginx.conf: |
    user  nginx;
    worker_processes  auto;
    
    error_log  /var/log/nginx/error.log notice;
    pid        /var/run/nginx.pid;
    
    
    events {
        worker_connections  1024;
    }
    
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    
        include /etc/nginx/conf.d/*.conf;
    }

  games.conf: |
    server {
        listen        0.0.0.0:81;
        root          /usr/local/nginx/html/bird/;
        server_name   game01.cmy.com;
    }
    server {
        listen        0.0.0.0:81;
        root          /usr/local/nginx/html/pinshu/;
        server_name   game02.cmy.com;
    }
    
    server {
        listen        0.0.0.0:81;
        root          /usr/local/nginx/html/tanke/;
        server_name   game03.cmy.com;
    }
    
    server {
        listen        0.0.0.0:81;
        root          /usr/local/nginx/html/chengbao/;
        server_name   game04.cmy.com;
    }
    
    server {
        listen        0.0.0.0:81;
        root          /usr/local/nginx/html/motuo/;
        server_name   game05.cmy.com;
    }
    




---

apiVersion:  apps/v1
kind: Deployment
metadata:
  name: deploy-games
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: game
  template:
    metadata:
      labels:
        apps: game
    spec:
      volumes:
      - name: games
        configMap:
          name: cm-nginx
          items:
          - key: games.conf
            path: games.conf
      - name: main
        configMap:
          name: cm-nginx
          items:
          - key: nginx.conf
            path: nginx.conf
      - name: tz
        hostPath:
          path: /etc/localtime
      containers:
      - image: harbor250.cmy.com/cmy-games/games:v0.6
        name: c1
        volumeMounts:
        - name: games
          mountPath: /etc/nginx/conf.d/games.conf
          # when subPath equals the items.path value, the mountPath becomes a single file rather than a directory!
          subPath: games.conf
        - name: main
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: tz
          mountPath: /etc/localtime
[root@master231 06-games-configmaps]# 
[root@master231 06-games-configmaps]# kubectl apply -f 02-deploy-cm-games-subPath.yaml 
configmap/cm-nginx created
deployment.apps/deploy-games created
[root@master231 06-games-configmaps]# 
[root@master231 06-games-configmaps]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
deploy-games-7cc8985ddc-l7mlx   1/1     Running   0          4s    10.100.140.73    worker233   <none>           <none>
deploy-games-7cc8985ddc-n758q   1/1     Running   0          4s    10.100.203.161   worker232   <none>           <none>
deploy-games-7cc8985ddc-vq8cg   1/1     Running   0          4s    10.100.140.72    worker233   <none>           <none>
[root@master231 06-games-configmaps]# 
[root@master231 06-games-configmaps]# kubectl exec -it deploy-games-7cc8985ddc-l7mlx -- sh
/ # ls -l /etc/nginx/
total 36
drwxr-xr-x    1 root     root          4096 Jun  5 14:44 conf.d
-rw-r--r--    1 root     root          1077 May 25  2021 fastcgi.conf
-rw-r--r--    1 root     root          1007 May 25  2021 fastcgi_params
-rw-r--r--    1 root     root          5231 May 25  2021 mime.types
lrwxrwxrwx    1 root     root            22 Nov 13  2021 modules -> /usr/lib/nginx/modules
-rw-r--r--    1 root     root           647 Jun  5 14:44 nginx.conf
-rw-r--r--    1 root     root           636 May 25  2021 scgi_params
-rw-r--r--    1 root     root           664 May 25  2021 uwsgi_params
/ # 
/ # ls -l /etc/nginx/conf.d/
total 8
-rw-r--r--    1 root     root          1093 Jun  5 14:44 default.conf
-rw-r--r--    1 root     root          3630 Jun  5 14:44 games.conf
/ # 

[[k8s部署案例集合#14 mysql主从复制搭建cm-svc=deploy]]

3 secret

Secrets are the K8S resource for storing sensitive data.

Compared with a cm, a secret base64-encodes the values stored in its data field.

2. Creating secrets imperatively
[root@master231 secrets]# kubectl create secret generic xixi --from-file=myhosts=/etc/hosts --from-file=/etc/passwd --from-literal=school=cmy --from-literal=class=linux97
secret/xixi created
[root@master231 secrets]# 
[root@master231 secrets]# kubectl get secrets 
NAME                  TYPE                                  DATA   AGE
default-token-xw24w   kubernetes.io/service-account-token   3      14d
linux97-token-pffxx   kubernetes.io/service-account-token   3      29h
oldboy-token-rz9v5    kubernetes.io/service-account-token   3      2d1h
xixi                  Opaque                                4      6s
[root@master231 secrets]# 
[root@master231 secrets]# kubectl get secrets  xixi 
NAME   TYPE     DATA   AGE
xixi   Opaque   4      11s
[root@master231 secrets]# 
[root@master231 secrets]# kubectl get secrets  xixi  -o yaml
apiVersion: v1
data:
  class: bGludXg5Nw==
  myhosts: MTI3LjAuMC4xIGxvY2FsaG9zdAoxMjcuMC4xLjEgeWluemhlbmdqaWUKCiMgVGhlIGZvbGxvd2luZyBsaW5lcyBhcmUgZGVzaXJhYmxlIGZvciBJUHY2IGNhcGFibGUgaG9zdHMKOjoxICAgICBpcDYtbG9jYWxob3N0IGlwNi1sb29wYmFjawpmZTAwOjowIGlwNi1sb2NhbG5ldApmZjAwOjowIGlwNi1tY2FzdHByZWZpeApmZjAyOjoxIGlwNi1hbGxub2RlcwpmZjAyOjoyIGlwNi1hbGxyb3V0ZXJzCjEwLjAuMC4yNTAgaGFyYm9yMjUwLm9sZGJveWVkdS5jb20K
  passwd: cm9vdDp4OjA6MDpyb290Oi9yb290Oi9iaW4vYmFzaApkYWVtb246eDoxOjE6ZGFlbW9uOi91c3Ivc2JpbjovdXNyL3NiaW4vbm9sb2dpbgpiaW46eDoyOjI6YmluOi9iaW46L3Vzci9zYmluL25vbG9naW4Kc3lzOng6MzozOnN5czovZGV2Oi91c3Ivc2Jpbi9ub2xvZ2luCnN5bmM6eDo0OjY1NTM0OnN5bmM6L2JpbjovYmluL3N5bmMKZ2FtZXM6eDo1OjYwOmdhbWVzOi91c3IvZ2FtZXM6L3Vzci9zYmluL25vbG9naW4KbWFuOng6NjoxMjptYW46L3Zhci9jYWNoZS9tYW46L3Vzci9zYmluL25vbG9naW4KbHA6eDo3Ojc6bHA6L3Zhci9zcG9vbC9scGQ6L3Vzci9zYmluL25vbG9naW4KbWFpbDp4Ojg6ODptYWlsOi92YXIvbWFpbDovdXNyL3NiaW4vbm9sb2dpbgpu
  school: b2xkYm95ZWR1
kind: Secret
metadata:
  creationTimestamp: "2025-06-05T08:26:36Z"
  name: xixi
  namespace: default
  resourceVersion: "1206032"
  uid: 460d410d-ee0b-48dd-85b6-ca441cedddad
type: Opaque
[root@master231 secrets]# 
[root@master231 secrets]# echo bGludXg5Nw== | base64 -d ;echo 
linux97
[root@master231 secrets]# 
[root@master231 secrets]# 
[root@master231 secrets]# echo b2xkYm95ZWR1 | base64 -d ;echo 
cmy
[root@master231 secrets]# 
[root@master231 secrets]# 
[root@master231 secrets]# echo cm9vdDp4OjA6MDpyb290Oi9yb290Oi9iaW4vYmFzaApkYWVtb246eDoxOjE6ZGFlbW9uOi91c3Ivc2JpbjovdXNyL3NiaW4vbm9sb2dpbgpiaW46eDoyOjI6YmluOi9iaW46L3Vzci9zYmluL25vbG9naW4Kc3lzOng6MzozOnN5czovZGV2Oi91c3Ivc2Jpbi9ub2xvZ2luCnN5bmM6eDo0OjY1NTM0OnN5bmM6L2JpbjovYmluL3N5bmMKZ2FtZXM6eDo1OjYwOmdhbWVzOi91c3IvZ2FtZXM6L3Vzci9zYmluL25vbG9naW4KbWFuOng6NjoxMjptYW46L3Zhci9jYWNoZS9tYW46L3Vzci9zYmluL25vbG9naW4KbHA6eDo3Ojc6bHA6L3Zhci9zcG9vbC9scGQ6L3Vzci9zYmluL25vbG9naW4KbWFpbDp4Ojg6ODptYWlsOi92YXIvbWFpbDovdXNyL3NiaW4vbm9sb2dpbgpuZXdzOng6OTo5Om5ld3M6L3Zhci9zcG9vbC9uZXdzOi91c3Ivc2Jpbi9ub2xvZ2luCnV1Y3A6eDoxMDoxMDp1dWNwOi92YXIvc3Bv| base64 -d
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin

[root@master231 secrets]# 





	3. Creating secrets declaratively
		3.1 Method one (note: echo appends a trailing newline, so the base64 values below embed a newline; use echo -n to avoid that)
[root@master231 secrets]# echo admin | base64 
YWRtaW4K
[root@master231 secrets]# 
[root@master231 secrets]# echo cmy | base64 
eWluemhlbmdqaWUK
[root@master231 secrets]# 
[root@master231 secrets]# echo cmy | base64 
b2xkYm95ZWR1Cg==
[root@master231 secrets]# 
[root@master231 secrets]# echo linux97 | base64 
bGludXg5Nwo=
[root@master231 secrets]# 
[root@master231 secrets]# cat 01-secrets-data.yaml
apiVersion: v1
kind: Secret
metadata:
  name: haha
data:
  class: bGludXg5Nwo=
  school: b2xkYm95ZWR1Cg==
  username: YWRtaW4K
  password: eWluemhlbmdqaWUK
[root@master231 secrets]# 
[root@master231 secrets]# kubectl apply -f  01-secrets-data.yaml
secret/haha created
[root@master231 secrets]# 
[root@master231 secrets]# kubectl get secrets 
NAME                  TYPE                                  DATA   AGE
default-token-xw24w   kubernetes.io/service-account-token   3      14d
haha                  Opaque                                4      3s
linux97-token-pffxx   kubernetes.io/service-account-token   3      29h
oldboy-token-rz9v5    kubernetes.io/service-account-token   3      2d1h
[root@master231 secrets]# 
[root@master231 secrets]# kubectl get secrets haha -o yaml
apiVersion: v1
data:
  class: bGludXg5Nwo=
  password: eWluemhlbmdqaWUK
  school: b2xkYm95ZWR1Cg==
  username: YWRtaW4K
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"class":"bGludXg5Nwo=","password":"eWluemhlbmdqaWUK","school":"b2xkYm95ZWR1Cg==","username":"YWRtaW4K"},"kind":"Secret","metadata":{"annotations":{},"name":"haha","namespace":"default"}}
  creationTimestamp: "2025-06-05T08:31:53Z"
  name: haha
  namespace: default
  resourceVersion: "1206644"
  uid: ee11904e-1806-4a62-bdb6-39ccc6852c2f
type: Opaque
[root@master231 secrets]# 


		3.2 Method two [recommended] !!!!
[root@master231 secrets]# cat 02-secrets-stringData.yaml
apiVersion: v1
kind: Secret
metadata:
  name: hehe
stringData:
  class: linux97
  school: cmy
  username: admin
  password: cmy
  host: 10.168.10.250
  port: "3306"
  database: wordpress
[root@master231 secrets]# 
[root@master231 secrets]# kubectl apply -f  02-secrets-stringData.yaml
secret/hehe created
[root@master231 secrets]# 
[root@master231 secrets]# kubectl get secrets 
NAME                  TYPE                                  DATA   AGE
default-token-xw24w   kubernetes.io/service-account-token   3      14d
haha                  Opaque                                4      4m3s
hehe                  Opaque                                7      4s
linux97-token-pffxx   kubernetes.io/service-account-token   3      29h
oldboy-token-rz9v5    kubernetes.io/service-account-token   3      2d1h
[root@master231 secrets]# 
[root@master231 secrets]# 
[root@master231 secrets]# kubectl get secrets hehe -o yaml
apiVersion: v1
data:
  class: bGludXg5Nw==
  database: d29yZHByZXNz
  host: MTAuMC4wLjI1MA==
  password: eWluemhlbmdqaWU=
  port: MzMwNg==
  school: b2xkYm95ZWR1
  username: YWRtaW4=
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{},"name":"hehe","namespace":"default"},"stringData":{"class":"linux97","database":"wordpress","host":"10.168.10.250","password":"cmy","port":"3306","school":"cmy","username":"admin"}}
  creationTimestamp: "2025-06-05T08:35:52Z"
  name: hehe
  namespace: default
  resourceVersion: "1207100"
  uid: 67645193-d60a-43ee-ab66-7b1d6c5c9fd2
type: Opaque
[root@master231 secrets]# 

	4. Referencing a secret from a Pod via env
[root@master231 secrets]# cat ./03-deploy-secrets-env.yaml
apiVersion:  apps/v1
kind: Deployment
metadata:
  name: deploy-secret-env
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        version: v1
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        name: c1
        env:
        - name: cmy-username
          # take the value from another resource
          valueFrom:
            # the value comes from a Secret
            secretKeyRef:
              # name of the Secret
              name: haha
              # key within the Secret
              key: username
        - name: cmy-pwd
          valueFrom:
            secretKeyRef:
              name: haha
              key: password
[root@master231 secrets]# 
[root@master231 secrets]# kubectl apply -f  ./03-deploy-secrets-env.yaml
deployment.apps/deploy-secret-env created
[root@master231 secrets]# 
[root@master231 secrets]# kubectl get pods -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
deploy-secret-env-5454bffcbc-6nv9g   1/1     Running   0          34s   10.100.140.104   worker233   <none>           <none>
deploy-secret-env-5454bffcbc-fb986   1/1     Running   0          34s   10.100.140.103   worker233   <none>           <none>
deploy-secret-env-5454bffcbc-q9fjk   1/1     Running   0          34s   10.100.203.148   worker232   <none>           <none>
[root@master231 secrets]# 
[root@master231 secrets]# kubectl exec -it deploy-secret-env-5454bffcbc-6nv9g -- env
...
cmy-username=admin

cmy-pwd=cmy

...
[root@master231 secrets]# 
[root@master231 secrets]# kubectl delete -f 03-deploy-secrets-env.yaml 
deployment.apps "deploy-secret-env" deleted
[root@master231 secrets]#
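
Just like with a configMap, envFrom can import every key of a secret at once instead of listing each key; a minimal sketch of the container section (reusing the hehe secret created above):

      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        name: c1
        envFrom:
        - secretRef:
            name: hehe    # import every key of the Secret as environment variables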
	
	
	
	5. Referencing a secret from a Pod via a volume
[root@master231 secrets]# cat 04-deploy-secrets-volumes.yaml 
apiVersion:  apps/v1
kind: Deployment
metadata:
  name: deploy-secrets-volumes
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        version: v1
    spec:
      volumes:
      - name: data
        # the volume type is secret
        secret:
          # name of the Secret
          secretName: hehe
          # if items is omitted, every key of the secret is projected.
          # if only some keys are needed, list them under items.
          items:
            # the key to project
          - key: host
            # roughly, the file name the key will be mounted as inside the container
            path: host.txt
          - key: database
            path: db.log
          - key: port
            path: port.log
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        name: c1
        volumeMounts:
        - name: data
          mountPath: /data
        env:
        - name: cmy-username
          valueFrom:
            secretKeyRef:
              name: haha
              key: username
        - name: cmy-pwd
          valueFrom:
            secretKeyRef:
              name: haha
              key: password

[root@master231 secrets]# 
[root@master231 secrets]# kubectl apply -f  04-deploy-secrets-volumes.yaml 
deployment.apps/deploy-secrets-volumes created
[root@master231 secrets]# 
[root@master231 secrets]# kubectl get pods -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
deploy-secrets-volumes-9bfd5b8c-fftlc   1/1     Running   0          4s    10.100.140.108   worker233   <none>           <none>
deploy-secrets-volumes-9bfd5b8c-qg586   1/1     Running   0          4s    10.100.140.106   worker233   <none>           <none>
deploy-secrets-volumes-9bfd5b8c-v98vk   1/1     Running   0          4s    10.100.203.179   worker232   <none>           <none>
[root@master231 secrets]# 
[root@master231 secrets]# kubectl exec -it deploy-secrets-volumes-9bfd5b8c-fftlc -- sh
/ # ls -l /data/
total 0
lrwxrwxrwx    1 root     root            13 Jun  5 08:46 db.log -> ..data/db.log
lrwxrwxrwx    1 root     root            15 Jun  5 08:46 host.txt -> ..data/host.txt
lrwxrwxrwx    1 root     root            15 Jun  5 08:46 port.log -> ..data/port.log
/ # 
/ #
/ # cat /data/port.log ; echo
3306
/ # 
/ # cat /data/host.txt ; echo 
10.168.10.250
/ # 
/ # cat /data/db.log ; echo 
wordpress
/ # 
/ # 




3.1 Using a secret for Harbor private registry authentication

- Using a secret for Harbor private registry authentication
	1. Create an account in Harbor and associate it with the relevant project
Username: linux97
Password:  Linux97@2025

	2. Generate the manifest that wraps the Harbor credentials in a secret resource and inspect it
[root@master231 secrets]# kubectl create secret docker-registry harbor-linux97 --docker-username=cmy --docker-password=cmy1QAZ! --docker-email=dev@qq.com --docker-server=harbor.cmy.cn --dry-run=client -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJoYXJib3IyNTAub2xkYm95ZWR1LmNvbSI6eyJ1c2VybmFtZSI6ImxpbnV4OTciLCJwYXNzd29yZCI6IkxpbnV4OTdAMjAyNSIsImVtYWlsIjoibGludXg5N0BvbGRib3llZHUuY29tIiwiYXV0aCI6ImJHbHVkWGc1TnpwTWFXNTFlRGszUURJd01qVT0ifX19
kind: Secret
metadata:
  creationTimestamp: null
  name: harbor-linux97
type: kubernetes.io/dockerconfigjson
[root@master231 secrets]# 
[root@master231 secrets]# echo  eyJhdXRocyI6eyJoYXJib3IyNTAub2xkYm95ZWR1LmNvbSI6eyJ1c2VybmFtZSI6ImxpbnV4OTciLCJwYXNzd29yZCI6IkxpbnV4OTdAMjAyNSIsImVtYWlsIjoibGludXg5N0BvbGRib3llZHUuY29tIiwiYXV0aCI6ImJHbHVkWGc1TnpwTWFXNTFlRGszUURJd01qVT0ifX19 | base64 -d ; echo
{"auths":{"harbor250.cmy.com":{"username":"linux97","password":"Linux97@2025","email":"linux97@cmy.com","auth":"bGludXg5NzpMaW51eDk3QDIwMjU="}}}
[root@master231 secrets]# 
[root@master231 secrets]# 


	2. Write the manifest
[root@master231 secrets]# cat 05-deploy-secrets-harbor.yaml
apiVersion:  apps/v1
kind: Deployment
metadata:
  name: deploy-harbor-secrets
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        version: v1
    spec:
      # credentials used to pull images from the private registry
      imagePullSecrets:
      - name: harbor-linux97
      containers:
      - image: harbor250.cmy.com/cmy-casedemo/apps:v1
        imagePullPolicy: Always
        name: c1

---

apiVersion: v1
kind: Secret
metadata:
  name: harbor-linux97
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6eyJoYXJib3IyNTAub2xkYm95ZWR1LmNvbSI6eyJ1c2VybmFtZSI6ImxpbnV4OTciLCJwYXNzd29yZCI6IkxpbnV4OTdAMjAyNSIsImVtYWlsIjoibGludXg5N0BvbGRib3llZHUuY29tIiwiYXV0aCI6ImJHbHVkWGc1TnpwTWFXNTFlRGszUURJd01qVT0ifX19
[root@master231 secrets]# 
[root@master231 secrets]# kubectl apply -f 05-deploy-secrets-harbor.yaml 
deployment.apps/deploy-harbor-secrets created
secret/harbor-linux97 created
[root@master231 secrets]# 
[root@master231 secrets]# kubectl get pods -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
deploy-harbor-secrets-85845dff8b-5spmm   1/1     Running   0          35s   10.100.203.135   worker232   <none>           <none>
deploy-harbor-secrets-85845dff8b-kcwj4   1/1     Running   0          35s   10.100.140.115   worker233   <none>           <none>
deploy-harbor-secrets-85845dff8b-rs8nb   1/1     Running   0          35s   10.100.140.113   worker233   <none>           <none>
[root@master231 secrets]#  
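
As an alternative to listing imagePullSecrets in every Pod template, the same secret can be attached to the namespace's default ServiceAccount, so that Pods using that ServiceAccount inherit it automatically; a minimal sketch:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
imagePullSecrets:
# every Pod created with this ServiceAccount pulls images through this secret
- name: harbor-linux97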

4 downwardAPI

Unlike ConfigMap and Secret, the Downward API is not a standalone API resource type.

The Downward API is simply a way to inject field values from a Pod's metadata, spec, or status into its containers.

It provides two ways to inject Pod information into a container:
– environment variables:
for individual values; Pod and container information is injected directly into the container's environment.
– volume mounts:
Pod information is rendered into files that are mounted inside the container.

4.1 Using the Downward API via environment variables

		2.1 Field reference
			Valid fieldRef values:
				- metadata.name
				- metadata.namespace,
				- `metadata.labels['<KEY>']`
				- `metadata.annotations['<KEY>']`
				- spec.nodeName
				- spec.serviceAccountName
				- status.hostIP
				- status.podIP
				- status.podIPs
				 
			Valid resourceFieldRef values:
				- limits.cpu
				- limits.memory
				- limits.ephemeral-storage
				- requests.cpu
				- requests.memory
				- requests.ephemeral-storage

cat 03-downwardapi.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-downwardapi-env
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      containers:
      - name: c1
        image: harbor.cmy.cn/nginx/apps:v1
        resources:
          requests:
            cpu: 0.2
            memory: 200Mi
          limits:
            cpu: 0.5
            memory: 500Mi
        imagePullPolicy: Always
        env:
        - name: cmy-PODNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cmy-IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: cmy-REQUESTS
          valueFrom:
            resourceFieldRef:
              resource: requests.cpu
        - name: cmy-LIMITS
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
[root@master231 volumes]# kubectl exec -it deploy-downwardapi-env-77974796c5-bdslc -- env | grep cmy
cmy-PODNAME=deploy-downwardapi-env-77974796c5-bdslc
cmy-IP=10.100.203.156
cmy-REQUESTS=1  # not 0.2: without a divisor, requests.cpu is rounded up to a whole core
cmy-LIMITS=524288000
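
If the exact value is needed, resourceFieldRef accepts a divisor field that scales the reported quantity; a minimal sketch of extra env entries (the variable names are arbitrary):

        env:
        - name: cmy-REQUESTS-M
          valueFrom:
            resourceFieldRef:
              resource: requests.cpu
              divisor: 1m     # report the value in millicores: 0.2 CPU -> 200
        - name: cmy-LIMITS-MI
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
              divisor: 1Mi    # report the value in MiB: 500Mi -> 500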

4.2 Using the Downward API via a volume

apiVersion: apps/v1
kind: Deployment
metadata:
  name: downwardapi-volume
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      volumes:
      - name: data01
        # the volume type is downwardAPI
        downwardAPI:
          # the items (files) to project
          items:
          - path: pod-name
            # fieldRef here only supports: annotations, labels, name and namespace
            fieldRef:
              fieldPath: "metadata.name"
      - name: data02
        downwardAPI:
          items:
          - path: pod-ns
            fieldRef:
              fieldPath: "metadata.namespace"
      - name: data03
        downwardAPI:
          items:
          - path: containers-limits-memory
            # resourceFieldRef here only supports: limits.cpu, limits.memory, requests.cpu and requests.memory
            resourceFieldRef:
              containerName: c2
              resource: "limits.memory"
      containers:
      - name: c1
        image: harbor.cmy.cn/nginx/apps:v1
        resources:
          requests:
            cpu: 0.2
            memory: 300Mi
          limits:
            cpu: 0.5
            memory: 500Mi
        volumeMounts:
        - name: data01
          mountPath: /cmy-xixi
        - name: data02
          mountPath: /cmy-haha
        - name: data03
          mountPath: /cmy-hehe
      - name: c2
        image: harbor.cmy.cn/nginx/apps:v1
        volumeMounts:
        - name: data03
          mountPath: /cmy-hehe
        command:
        - tail
        args:
        - -f
        - /etc/hosts
        resources:
          limits:
            cpu: 1.5
            memory: 1.5Gi
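
Pod labels and annotations can be projected the same way (they are volume-only fields, not available via env); a minimal sketch of an additional volume, with arbitrary file names:

      volumes:
      - name: data04
        downwardAPI:
          items:
          - path: pod-labels
            # each label is written as key="value", one per line; the file is refreshed when labels change
            fieldRef:
              fieldPath: metadata.labels
          - path: pod-annotations
            fieldRef:
              fieldPath: metadata.annotations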

5 PV and PVC

  • pv
    A pv is the resource that interfaces with the backend storage; it is bound to the actual storage.

  • sc
    An sc (StorageClass) is a resource that can create pvs dynamically; it, too, is bound to the backend storage.

  • pvc
    A pvc requests resources from a pv or an sc in order to obtain a specific piece of storage.

    A Pod only needs to declare in its volumes which pvc it uses.

5.1 Manually creating a pv and a pvc and referencing them from a Pod

1. Manually create the pvs
		1.1 Create the working directories
[root@master231 ~]# mkdir -pv /cmy/data/nfs-server/pv/linux/pv00{1,2,3}
mkdir: created directory '/cmy/data/nfs-server/pv'
mkdir: created directory '/cmy/data/nfs-server/pv/linux'
mkdir: created directory '/cmy/data/nfs-server/pv/linux/pv001'
mkdir: created directory '/cmy/data/nfs-server/pv/linux/pv002'
mkdir: created directory '/cmy/data/nfs-server/pv/linux/pv003'
[root@master231 ~]# 
[root@master231 ~]# tree /cmy/data/nfs-server/pv/linux/
/cmy/data/nfs-server/pv/linux/
├── pv001
├── pv002
└── pv003

3 directories, 0 files
[root@master231 ~]# 



		1.2 Write the manifest
[root@master231 persistentvolumes]# cat  manual-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cmy-linux-pv01
  labels:
    school: cmy
spec:
# Access modes of the PV; the common ones are "ReadWriteOnce", "ReadOnlyMany" and "ReadWriteMany":
# ReadWriteOnce (short: "RWO")
#   the volume can be mounted read-write by a single worker node, although multiple Pods on that node may access it at the same time.
# ReadOnlyMany (short: "ROX")
#   the volume can be mounted read-only by many worker nodes.
# ReadWriteMany (short: "RWX")
#   the volume can be mounted read-write by many worker nodes.
# ReadWriteOncePod (short: "RWOP")
#   the volume can be mounted read-write by a single Pod.
#   Use ReadWriteOncePod to guarantee that only one Pod in the whole cluster can read or write the PVC.
#   Only supported for CSI volumes on Kubernetes 1.22+.
   accessModes:
   - ReadWriteMany
# the volume type is nfs
   nfs:
     path: /cmy/data/nfs-server/pv/linux/pv001
     server: 10.168.10.231
# Reclaim policy of the PV; the common values are "Retain" and "Delete"
# Retain:
#   the "retain" policy allows manual reclamation of the resource.
#   When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released".
#   Until an administrator reclaims it manually, other Pods cannot use it directly.
# Delete:
#   for volume plugins that support it, k8s deletes the pv together with the data on the backing volume.
# Recycle:
#   the "recycle" policy is officially deprecated; dynamic provisioning is recommended instead.
#   Where the underlying volume plugin supports it, this policy performs a basic scrub (rm -rf /thevolume/*) and makes the volume available again for a new claim.
   persistentVolumeReclaimPolicy: Retain
# declared capacity of the volume
   capacity:
     storage: 2Gi

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cmy-linux-pv02
  labels:
    school: cmy
spec:
   accessModes:
   - ReadWriteMany
   nfs:
     path: /cmy/data/nfs-server/pv/linux/pv002
     server: 10.168.10.231
   persistentVolumeReclaimPolicy: Retain
   capacity:
     storage: 5Gi

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cmy-linux-pv03
  labels:
    school: cmy
spec:
   accessModes:
   - ReadWriteMany
   nfs:
     path: /cmy/data/nfs-server/pv/linux/pv003
     server: 10.168.10.231
   persistentVolumeReclaimPolicy: Retain
   capacity:
     storage: 10Gi
[root@master231 persistentvolumes]# 

			1.3 Create the pvs
[root@master231 persistentvolumes]# kubectl apply -f manual-pv.yaml 
persistentvolume/cmy-linux-pv01 created
persistentvolume/cmy-linux-pv02 created
persistentvolume/cmy-linux-pv03 created
[root@master231 persistentvolumes]# 
[root@master231 persistentvolumes]# kubectl get pv
NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
cmy-linux-pv01   2Gi        RWX            Retain           Available                                   8s
cmy-linux-pv02   5Gi        RWX            Retain           Available                                   8s
cmy-linux-pv03   10Gi       RWX            Retain           Available                                   8s
[root@master231 persistentvolumes]# 
[root@master231 persistentvolumes]# 


Column reference:
		NAME:
			name of the pv.
		CAPACITY:
			capacity of the pv.
		ACCESS MODES:
			access modes of the pv.
		RECLAIM POLICY:
			reclaim policy of the pv.
		STATUS:
			state of the pv.
		CLAIM:
			the pvc that is using the pv.
		STORAGECLASS:
			name of the sc.
		REASON:
			reason shown when the pv is in an error state.
		AGE:
			time since creation.


	2. Manually create the pvc (the claim below requests 3Gi, so the controller binds it to the smallest Available PV that satisfies the request: cmy-linux-pv02 at 5Gi)
[root@master231 persistentvolumeclaims]# cat manual-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cmy-linux-pvc
spec:
# optionally pin the claim to a specific pv:
# volumeName: cmy-linux-pv03
# access modes requested by the claim
  accessModes:
  - ReadWriteMany
# amount of storage requested by the claim
  resources:
    limits:
       storage: 4Gi
    requests:
       storage: 3Gi
[root@master231 persistentvolumeclaims]# 
[root@master231 persistentvolumeclaims]# kubectl apply -f  manual-pvc.yaml 
persistentvolumeclaim/cmy-linux-pvc created
[root@master231 persistentvolumeclaims]# 
[root@master231 persistentvolumeclaims]# kubectl get pv,pvc
NAME                                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                         STORAGECLASS   REASON   AGE
persistentvolume/cmy-linux-pv01   2Gi        RWX            Retain           Available                                                         3m39s
persistentvolume/cmy-linux-pv02   5Gi        RWX            Retain           Bound       default/cmy-linux-pvc                           3m39s
persistentvolume/cmy-linux-pv03   10Gi       RWX            Retain           Available                                                         3m39s

NAME                                        STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/cmy-linux-pvc   Bound    cmy-linux-pv02   5Gi        RWX                           6s
[root@master231 persistentvolumeclaims]# 


	3. Reference the pvc from a Pod
[root@master231 volumes]# cat 17-deploy-pvc.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-pvc-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      volumes:
      - name: data
        # the volume type is pvc
        persistentVolumeClaim:
          # name of the pvc
          claimName: cmy-linux-pvc
      - name: dt
        hostPath:
         path: /etc/localtime
      initContainers:
      - name: init01
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        volumeMounts:
        - name: data
          mountPath: /cmy
        - name: dt
          mountPath: /etc/localtime
        command:
        - /bin/sh
        - -c
        - date -R > /cmy/index.html ; echo www.cmy.com >> /cmy/index.html
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
        - name: dt
          mountPath: /etc/localtime
[root@master231 volumes]# 
[root@master231 volumes]# 
[root@master231 volumes]# kubectl apply -f  17-deploy-pvc.yaml 
deployment.apps/deploy-pvc-demo created
[root@master231 volumes]# 
[root@master231 volumes]# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-pvc-demo-688b57bdd-dlkzd   1/1     Running   0          3s    10.100.1.142   worker232   <none>           <none>
[root@master231 volumes]# 
[root@master231 volumes]# curl  10.100.203.163 
Fri, 18 Apr 2025 09:38:12 +0800
www.cmy.com
[root@master231 volumes]# 



	4. Tracing from the Pod to the backend pv
		4.1 Find the pvc name
[root@master231 volumes]# kubectl describe pod deploy-pvc-demo-688b57bdd-dlkzd 
Name:         deploy-pvc-demo-688b57bdd-dlkzd
Namespace:    default
...
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  cmy-linux-pvc
...
	
	
		4.2 Find the pv bound to that pvc
[root@master231 volumes]# kubectl get pvc cmy-linux-pvc 
NAME                  STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cmy-linux-pvc   Bound    cmy-linux-pv02   5Gi        RWX                           12m
[root@master231 volumes]# 
[root@master231 volumes]# kubectl get pv cmy-linux-pv02
NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS   REASON   AGE
cmy-linux-pv02   5Gi        RWX            Retain           Bound    default/cmy-linux-pvc                           15m
[root@master231 volumes]# 


		4.3 Inspect the pv details
[root@master231 volumes]# kubectl describe pv cmy-linux-pv02
Name:            cmy-linux-pv02
Labels:          school=cmy
...
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.168.10.231
    Path:      /cmy/data/nfs-server/pv/linux/pv002
    ReadOnly:  false
...


		4.4 Verify the data contents
[root@master231 volumes]# ll /cmy/data/nfs-server/pv/linux/pv002
total 12
drwxr-xr-x 2 root root 4096 Feb 14 16:46 ./
drwxr-xr-x 5 root root 4096 Feb 14 16:36 ../
-rw-r--r-- 1 root root   68 Feb 14 16:49 index.html
[root@master231 volumes]# 
[root@master231 volumes]# cat /cmy/data/nfs-server/pv/linux/pv002/index.html 
Fri, 18 Apr 2025 09:38:12 +0800
www.cmy.com
[root@master231 volumes]# 

5.2 Dynamic provisioning based on csi-driver-nfs v4.9.0

Dynamic provisioning lets the cluster create PersistentVolumes (PVs) automatically from a StorageClass, so an administrator no longer has to create PVs by hand.

Deploying the NFS dynamic storage class

1. Clone the code
[root@master231 nfs]# git clone https://github.com/kubernetes-csi/csi-driver-nfs.git

If you cannot download it, an alternative (SVIP) mirror:
wget http://192.168.14.253/Resources/Kubernetes/sc/nfs/code/csi-driver-nfs-4.9.0.tar.gz
tar xf csi-driver-nfs-4.9.0.tar.gz


	2. Install the NFS CSI driver
[root@master231 ~]# cd csi-driver-nfs-4.9.0/
[root@master231 csi-driver-nfs-4.9.0]# 
[root@master231 csi-driver-nfs-4.9.0]# ./deploy/install-driver.sh v4.9.0 local
use local deploy
Installing NFS CSI driver, version: v4.9.0 ...
serviceaccount/csi-nfs-controller-sa created
serviceaccount/csi-nfs-node-sa created
clusterrole.rbac.authorization.k8s.io/nfs-external-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/nfs-csi-provisioner-binding created
csidriver.storage.k8s.io/nfs.csi.k8s.io created
deployment.apps/csi-nfs-controller created
daemonset.apps/csi-nfs-node created
NFS CSI driver installed successfully.
[root@master231 csi-driver-nfs-4.9.0]# 

	3. Verify the installation
[root@master231 csi-driver-nfs]# kubectl -n kube-system get pod -o wide -l app
NAME                                 READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
csi-nfs-controller-5c5c695fb-6psv8   4/4     Running   0          4s    10.168.10.232   worker232   <none>           <none>
csi-nfs-node-bsmr7                   3/3     Running   0          3s    10.168.10.232   worker232   <none>           <none>
csi-nfs-node-ghtvt                   3/3     Running   0          3s    10.168.10.231   master231   <none>           <none>
csi-nfs-node-s4dm5                   3/3     Running   0          3s    10.168.10.233   worker233   <none>           <none>
[root@master231 csi-driver-nfs]# 


Tip:
	if the images cannot be pulled in this step, they can be downloaded from "http://192.168.16.253/Resources/Kubernetes/sc/nfs/images/".

sc-pvc-pv in practice


	4. Create the StorageClass
[root@master231 csi-driver-nfs]# mkdir /cmy/data/nfs-server/sc/
[root@master231 csi-driver-nfs]#
[root@master231 csi-driver-nfs]# cat deploy/v4.9.0/storageclass.yaml 
...
parameters:
  server: 10.168.10.231
  share: /cmy/data/nfs-server/sc
  ...
[root@master231 csi-driver-nfs]# 
[root@master231 csi-driver-nfs]# kubectl apply -f deploy/v4.9.0/storageclass.yaml 
storageclass.storage.k8s.io/nfs-csi created
[root@master231 csi-driver-nfs]# 
[root@master231 csi-driver-nfs]# kubectl get sc
NAME      PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi   nfs.csi.k8s.io   Delete          Immediate           false                  3s
[root@master231 csi-driver-nfs]# 
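
For reference, a sketch of what the edited StorageClass looks like as a whole, assembled from the fields shown above and the multi-class examples in section 5.3 (treat it as an approximation, not the verbatim file contents):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.168.10.231
  share: /cmy/data/nfs-server/sc
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1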


	5. Delete the old pv and pvc so the environment is "clean"
[root@master231 volumes]# kubectl delete -f 21-deploy-pvc.yaml -f 20-pvc.yaml -f 19-pv.yaml 
deployment.apps "deploy-pvc-demo" deleted
persistentvolumeclaim "cmy-linux-pvc" deleted
persistentvolume "cmy-linux-pv01" deleted
persistentvolume "cmy-linux-pv02" deleted
persistentvolume "cmy-linux-pv03" deleted
[root@master231 volumes]# 


	6. Create a pvc to test dynamic provisioning
[root@master231 volumes]# cat 22-pvc-sc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cmy-linux-pvc-sc
spec:
# optionally pin the claim to a specific pv:
# volumeName: cmy-linux-pv03
# the StorageClass to provision from
  storageClassName: nfs-csi
# access modes requested by the claim
  accessModes:
  - ReadWriteMany
# amount of storage requested by the claim
  resources:
    limits:
       storage: 2Mi
    requests:
       storage: 1Mi
[root@master231 volumes]# 
[root@master231 volumes]# kubectl apply -f 22-pvc-sc.yaml
persistentvolumeclaim/cmy-linux-pvc-sc created
[root@master231 volumes]# 
[root@master231 volumes]# kubectl get pvc,pv -o wide
NAME                                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
persistentvolumeclaim/cmy-linux-pvc-sc   Bound    pvc-c23ad1b2-3f0b-4f53-bc51-9dbb20e6c037   1Mi        RWX            nfs-csi        6s    Filesystem

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE   VOLUMEMODE
persistentvolume/pvc-c23ad1b2-3f0b-4f53-bc51-9dbb20e6c037   1Mi        RWX            Delete           Bound    default/cmy-linux-pvc-sc   nfs-csi                 6s    Filesystem
[root@master231 volumes]# 


	7. Reference the pvc from a Pod
[root@master231 volumes]# cat 21-deploy-pvc.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-pvc-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      volumes:
      - name: data
        # the volume type is pvc
        persistentVolumeClaim:
          # name of the pvc
          # claimName: cmy-linux-pvc
          claimName: cmy-linux-pvc-sc
      - name: dt
        hostPath:
         path: /etc/localtime
      initContainers:
      - name: init01
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        volumeMounts:
        - name: data
          mountPath: /cmy
        - name: dt
          mountPath: /etc/localtime
        command:
        - /bin/sh
        - -c
        - date -R > /cmy/index.html ; echo www.cmy.com >> /cmy/index.html
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
        - name: dt
          mountPath: /etc/localtime
[root@master231 volumes]# 
[root@master231 volumes]# kubectl apply -f 21-deploy-pvc.yaml
deployment.apps/deploy-pvc-demo created
[root@master231 volumes]# 
[root@master231 volumes]# kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
deploy-pvc-demo-65d4b9bf97-st2ch   1/1     Running   0          3s    10.100.140.99   worker233   <none>           <none>
[root@master231 volumes]# 
[root@master231 volumes]# curl 10.100.140.99 
Fri, 18 Apr 2025 10:31:32 +0800
www.cmy.com
[root@master231 volumes]# 



	8. Verify the Pod's data on the backend storage
[root@master231 volumes]# kubectl describe pod deploy-pvc-demo-688b57bdd-td2z7   | grep ClaimName
    ClaimName:  cmy-linux-pvc
[root@master231 volumes]# 
[root@master231 volumes]# kubectl get pvc cmy-linux-pvc 
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cmy-linux-pvc   Bound    pvc-b425481e-3b15-4854-bf34-801a29edfcc5   3Gi        RWX            nfs-csi        2m29s
[root@master231 volumes]# 
[root@master231 volumes]# kubectl describe pv  pvc-b425481e-3b15-4854-bf34-801a29edfcc5 | grep Source -A  5
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            nfs.csi.k8s.io
    FSType:            
    VolumeHandle:      10.168.10.231#cmy/data/nfs-server/sc#pvc-c23ad1b2-3f0b-4f53-bc51-9dbb20e6c037##
    ReadOnly:          false
[root@master231 volumes]# 
[root@master231 volumes]# ll /cmy/data/nfs-server/sc/pvc-c23ad1b2-3f0b-4f53-bc51-9dbb20e6c037/
total 12
drwxr-xr-x 2 root root 4096 Apr 18 10:31 ./
drwxr-xr-x 3 root root 4096 Apr 18 10:30 ../
-rw-r--r-- 1 root root   50 Apr 18 10:31 index.html
[root@master231 volumes]# 
[root@master231 volumes]# cat /cmy/data/nfs-server/sc/pvc-c23ad1b2-3f0b-4f53-bc51-9dbb20e6c037/index.html 
Fri, 18 Apr 2025 10:31:32 +0800
www.cmy.com
[root@master231 volumes]# 



5.3 Configuring a default StorageClass and defining multiple StorageClasses

	1. Set the default StorageClass imperatively
[root@master231 nfs]# kubectl get sc
NAME      PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi   nfs.csi.k8s.io   Delete          Immediate           false                  165m
[root@master231 nfs]# 
[root@master231 nfs]# kubectl patch sc nfs-csi -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/nfs-csi patched
[root@master231 nfs]# 
[root@master231 nfs]# kubectl get sc
NAME                PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi (default)   nfs.csi.k8s.io   Delete          Immediate           false                  166m
[root@master231 nfs]# 
[root@master231 nfs]# 




	2. Unset the default StorageClass imperatively
[root@master231 nfs]# kubectl get sc
NAME                PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi (default)   nfs.csi.k8s.io   Delete          Immediate           false                  168m
[root@master231 nfs]# 
[root@master231 nfs]# kubectl patch sc nfs-csi -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
storageclass.storage.k8s.io/nfs-csi patched
[root@master231 nfs]# 
[root@master231 nfs]# kubectl get sc
NAME      PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi   nfs.csi.k8s.io   Delete          Immediate           false                  168m
[root@master231 nfs]# 



	3. Define multiple StorageClasses declaratively
[root@master231 storageclasses]# cat sc-multiple.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cmy-sc-xixi
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.168.10.231
  share: /cmy/data/nfs-server/sc-xixi
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1


---

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cmy-sc-haha
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.168.10.231
  share: /cmy/data/nfs-server/sc-haha
[root@master231 storageclasses]# 
[root@master231 storageclasses]# 
[root@master231 storageclasses]# kubectl apply  -f sc-multiple.yaml 
storageclass.storage.k8s.io/cmy-sc-xixi created
storageclass.storage.k8s.io/cmy-sc-haha created
[root@master231 storageclasses]# 
[root@master231 storageclasses]# kubectl get sc
NAME                          PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi                       nfs.csi.k8s.io   Delete          Immediate           false                  16m
cmy-sc-haha (default)   nfs.csi.k8s.io   Delete          Immediate           false                  19s
cmy-sc-xixi             nfs.csi.k8s.io   Delete          Immediate           false                  19s
[root@master231 storageclasses]# 




	4. Create the export directories
[root@master231 storageclasses]# mkdir -pv /cmy/data/nfs-server/sc-{xixi,haha}


	5. Test and verify (the PVC below omits storageClassName, so it is provisioned by the default class cmy-sc-haha)
[root@master231 volumes]# cat 12-deploy-pvc-sc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-xiuxian
spec:
  accessModes:
  - ReadWriteMany
  resources:
    limits:
       storage: 2Mi
    requests:
       storage: 1Mi

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-xiuxian
      - name: dt
        hostPath:
         path: /etc/localtime
      initContainers:
      - name: init01
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        volumeMounts:
        - name: data
          mountPath: /cmy
        - name: dt
          mountPath: /etc/localtime
        command:
        - /bin/sh
        - -c
        - date -R > /cmy/index.html ; echo www.cmy.com >> /cmy/index.html
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
        - name: dt
          mountPath: /etc/localtime
[root@master231 volumes]# 
[root@master231 volumes]# 
[root@master231 volumes]# kubectl apply -f  12-deploy-pvc-sc.yaml
persistentvolumeclaim/pvc-xiuxian created
deployment.apps/deploy-xiuxian created
[root@master231 volumes]# 
[root@master231 volumes]# kubectl get po,pvc,pv
NAME                                  READY   STATUS    RESTARTS   AGE
pod/deploy-xiuxian-587bcdb966-fk9s5   1/1     Running   0          5s

NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
persistentvolumeclaim/pvc-xiuxian   Bound    pvc-691cf379-8889-4e5d-b346-b362d71ec1f0   1Mi        RWX            cmy-sc-haha   5s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS        REASON   AGE
persistentvolume/pvc-691cf379-8889-4e5d-b346-b362d71ec1f0   1Mi        RWX            Delete           Bound    default/pvc-xiuxian   cmy-sc-haha            4s
[root@master231 volumes]# 
[root@master231 volumes]# kubectl get po,pvc,pv -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
pod/deploy-xiuxian-587bcdb966-fk9s5   1/1     Running   0          10s   10.100.140.83   worker233   <none>           <none>

NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE   VOLUMEMODE
persistentvolumeclaim/pvc-xiuxian   Bound    pvc-691cf379-8889-4e5d-b346-b362d71ec1f0   1Mi        RWX            cmy-sc-haha   10s   Filesystem

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS        REASON   AGE   VOLUMEMODE
persistentvolume/pvc-691cf379-8889-4e5d-b346-b362d71ec1f0   1Mi        RWX            Delete           Bound    default/pvc-xiuxian   cmy-sc-haha            9s    Filesystem
[root@master231 volumes]# 
[root@master231 volumes]# curl 10.100.140.83 
Fri, 06 Jun 2025 15:01:12 +0800
www.cmy.com
[root@master231 volumes]# 

 
 
	6. Verify the backend storage
[root@master231 volumes]# cat /usr/local/bin/get-pv.sh
#!/bin/bash

POD_NAME=$1
PVC_NAME=`kubectl describe pod $POD_NAME | grep ClaimName | awk '{print $2}'`
PV_NAME=`kubectl get pvc ${PVC_NAME} | awk 'NR==2{print $3}'`
kubectl describe pv $PV_NAME  | grep Source -A 5
[root@master231 volumes]# 
[root@master231 volumes]# chmod +x /usr/local/bin/get-pv.sh 
[root@master231 volumes]# 
[root@master231 volumes]# kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
deploy-xiuxian-587bcdb966-fk9s5   1/1     Running   0          2m36s
[root@master231 volumes]# 
[root@master231 volumes]# get-pv.sh deploy-xiuxian-587bcdb966-fk9s5
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            nfs.csi.k8s.io
    FSType:            
    VolumeHandle:      10.168.10.231#cmy/data/nfs-server/sc-haha#pvc-691cf379-8889-4e5d-b346-b362d71ec1f0##
    ReadOnly:          false
[root@master231 volumes]# 
[root@master231 volumes]# ll /cmy/data/nfs-server/sc-haha/pvc-691cf379-8889-4e5d-b346-b362d71ec1f0/
total 12
drwxr-xr-x 2 root root 4096 Jun  6 15:01 ./
drwxr-xr-x 3 root root 4096 Jun  6 15:01 ../
-rw-r--r-- 1 root root   50 Jun  6 15:01 index.html
[root@master231 volumes]# 
[root@master231 volumes]# cat /cmy/data/nfs-server/sc-haha/pvc-691cf379-8889-4e5d-b346-b362d71ec1f0/index.html 
Fri, 06 Jun 2025 15:01:12 +0800
www.cmy.com
[root@master231 volumes]# 
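
To provision from the non-default class cmy-sc-xixi instead, a PVC simply names the class explicitly; a minimal sketch (pvc-xixi-demo is a hypothetical name):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-xixi-demo
spec:
  # name the class explicitly to bypass the default (cmy-sc-haha)
  storageClassName: cmy-sc-xixi
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi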

6 Deploying Ceph with Rook

1. Rook overview
Rook is an open-source cloud-native storage orchestrator that provides the platform, framework, and support for running Ceph storage natively in cloud-native environments.

Ceph is a distributed storage system that provides file, block, and object storage and is deployed in large-scale production clusters.

Rook automates the deployment and management of Ceph to deliver self-managing, self-scaling, and self-healing storage. The Rook operator achieves this by using Kubernetes resources to deploy, configure, scale, upgrade, and monitor Ceph.

The Ceph operator was declared stable in December 2018 with Rook v0.9 and has served as a production storage platform for many years. Rook is hosted by the Cloud Native Computing Foundation (CNCF) and is a graduated project.

Rook is implemented in Golang; Ceph is implemented in C++ with a highly optimized data path.

In short, Rook is a self-managing distributed storage orchestration system that gives Kubernetes a convenient storage solution. Rook does not provide storage itself; it is an adaptation layer between Kubernetes and the storage system that simplifies deployment and maintenance. The storage systems it supports include, but are not limited to, Ceph, Cassandra, and NFS.

Essentially, Rook is an Operator that provides Ceph cluster management: it uses CRDs and a controller to deploy and manage Ceph and similar resources.

Official site:
https://rook.io/

GitHub repository:
https://github.com/rook/rook
2. Rook / K8S version compatibility
According to the compatibility matrix, the newest Rook release that supports K8S 1.23.17 is v1.13.

Reference:
https://rook.io/docs/rook/v1.13/Getting-Started/Prerequisites/prerequisites/
3. Rook architecture
Reference:
https://rook.io/docs/rook/v1.13/Getting-Started/storage-architecture/#architecture

6.1 Environment preparation

4. Environment preparation
Add 2-3 block devices to every K8S node (300GB, 500GB, and 1024GB respectively) and reboot the operating system.

[root@master231 ~]# lsblk 
NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
...
sdb     8:16   0  300G  0 disk 
sdc     8:32   0  500G  0 disk 
sdd     8:48   0    1T  0 disk 
...
[root@master231 ~]# 



[root@worker232 ~]# lsblk 
NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
...
sdb    8:16   0  300G  0 disk 
sdc    8:32   0  500G  0 disk 
sdd    8:48   0    1T  0 disk 
...
[root@worker232 ~]# 


[root@worker233 ~]# lsblk 
NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
...
sdb    8:16   0  300G  0 disk 
sdc    8:32   0  500G  0 disk 
sdd    8:48   0    1T  0 disk 
...
[root@worker233 ~]# 




	5. Import the images on every node
		5.1 Bulk download via script [online students can skip this step and fetch the images from the Baidu Netdisk share instead]
[root@worker233 ~]# mkdir rook && cd rook
[root@worker233 rook]# wget http://192.168.14.253/Resources/Kubernetes/Project/Rook/get-rook-ceph-v1.13.10-images.sh
[root@worker233 rook]# chmod +x get-rook-ceph-v1.13.10-images.sh 
[root@worker233 rook]# ./get-rook-ceph-v1.13.10-images.sh 14


		5.2 Copy the images to the other nodes
[root@worker233 rook]# scp -r /root/rook/ 10.168.10.232:~
[root@worker233 rook]# scp -r /root/rook/ 10.168.10.231:~

		5.3 Bulk-import the images
[root@worker232 ~]# cd rook && for i in `ls -1 *.tar.gz` ; do docker load -i $i ; done

[root@worker233 ~]# cd rook && for i in `ls -1 *.tar.gz` ; do docker load -i $i ; done


	
	6. Download the specified Rook release
[root@master231 ~]# wget https://github.com/rook/rook/archive/refs/tags/v1.13.10.tar.gz

svip:
[root@master231 03-rook]# wget http://192.168.14.253/Resources/Kubernetes/Project/Rook/rook-1.13.10.tar.gz

6.2 Deployment

	7. Extract the package
[root@master231 03-rook]# tar xf rook-1.13.10.tar.gz 

	8. Create the Rook operator
[root@master231 03-rook]# cd rook-1.13.10/deploy/examples/
[root@master231 examples]# 
[root@master231 examples]# kubectl apply -f crds.yaml -f common.yaml -f operator.yaml 
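Before creating the cluster it is worth confirming that the operator Pod has reached Running; the label selector below follows the labels used by the upstream operator.yaml and may need adjusting:

[root@master231 examples]# kubectl -n rook-ceph get pods -l app=rook-ceph-operator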


	9. Deploy the Ceph cluster
[root@master231 examples]# kubectl apply -f cluster.yaml 
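The CephCluster resource created by cluster.yaml takes several minutes to converge. Its progress can be watched via the resource status (the name "rook-ceph" is the default used in the example manifest):

[root@master231 examples]# kubectl -n rook-ceph get cephcluster
[root@master231 examples]# kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.status.phase}{"\n"}'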


	10. Deploy the Rook Ceph toolbox
[root@master231 examples]# kubectl apply -f toolbox.yaml 
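Once the toolbox Pod is Running, cluster health can be checked from inside it; typical invocations (the deployment name matches the rook-ceph-tools Pod seen in step 13):

[root@master231 examples]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
[root@master231 examples]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd tree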


	11. Expose the Ceph dashboard (web UI)
[root@master231 examples]# kubectl apply -f dashboard-external-https.yaml 


	12. Optional: remove the master taint so that Ceph can be scheduled on master231
[root@master231 examples]#  kubectl describe nodes| grep Taints 
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>
[root@master231 examples]# 
[root@master231 examples]# kubectl taint node master231 node-role.kubernetes.io/master-
node/master231 untainted
[root@master231 examples]# 
[root@master231 examples]# kubectl describe nodes| grep Taints 
Taints:             <none>
Taints:             <none>
Taints:             <none>
[root@master231 examples]# 


	13. Check the Pod list
kubectl get pods -n rook-ceph  -o wide
NAME                                                   READY   STATUS      RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
csi-cephfsplugin-2qsls                                 2/2     Running     0          7m35s   10.168.10.233    worker233    <none>           <none>
csi-cephfsplugin-cngkq                                 2/2     Running     0          7m36s   10.168.10.232    worker232    <none>           <none>
csi-cephfsplugin-n5trp                                 2/2     Running     0          6m7s    10.168.10.231    master-231   <none>           <none>
csi-cephfsplugin-provisioner-675f8d446d-2wxp5          5/5     Running     0          7m35s   10.100.203.144   worker232    <none>           <none>
csi-cephfsplugin-provisioner-675f8d446d-wc62r          5/5     Running     0          7m35s   10.100.140.111   worker233    <none>           <none>
csi-rbdplugin-7rks2                                    2/2     Running     0          7m36s   10.168.10.232    worker232    <none>           <none>
csi-rbdplugin-h7tfb                                    2/2     Running     0          6m7s    10.168.10.231    master-231   <none>           <none>
csi-rbdplugin-provisioner-dfc566599-c5sfl              5/5     Running     0          7m36s   10.100.203.142   worker232    <none>           <none>
csi-rbdplugin-provisioner-dfc566599-v6gr2              5/5     Running     0          7m36s   10.100.140.120   worker233    <none>           <none>
csi-rbdplugin-vsfzq                                    2/2     Running     0          7m36s   10.168.10.233    worker233    <none>           <none>
rook-ceph-crashcollector-master-231-5dfbb785b8-rbmpp   1/1     Running     0          5m7s    10.100.209.27    master-231   <none>           <none>
rook-ceph-crashcollector-worker232-57cd6f84d4-vzf9n    1/1     Running     0          4m9s    10.100.203.130   worker232    <none>           <none>
rook-ceph-crashcollector-worker233-5bf9645587-99crs    1/1     Running     0          4m11s   10.100.140.69    worker233    <none>           <none>
rook-ceph-exporter-master-231-7f5669d5f9-k5jzh         1/1     Running     0          5m7s    10.100.209.26    master-231   <none>           <none>
rook-ceph-exporter-worker232-7b7478fd9c-mt2ff          1/1     Running     0          4m      10.100.203.143   worker232    <none>           <none>
rook-ceph-exporter-worker233-8654b9d9c4-z8j6p          1/1     Running     0          4m2s    10.100.140.74    worker233    <none>           <none>
rook-ceph-mgr-a-54f5dd7575-f4dlr                       3/3     Running     0          5m9s    10.100.140.70    worker233    <none>           <none>
rook-ceph-mgr-b-564ff5796d-4cktk                       3/3     Running     0          5m8s    10.100.203.146   worker232    <none>           <none>
rook-ceph-mon-a-7ff568c9d8-cvnp7                       2/2     Running     0          6m      10.100.203.152   worker232    <none>           <none>
rook-ceph-mon-b-784b87456-hvw9z                        2/2     Running     0          5m35s   10.100.140.126   worker233    <none>           <none>
rook-ceph-mon-c-7fbdcf8ddd-d98rc                       2/2     Running     0          5m24s   10.100.209.29    master-231   <none>           <none>
rook-ceph-operator-5f54cbd997-9g56h                    1/1     Running     0          9m5s    10.100.203.165   worker232    <none>           <none>
rook-ceph-osd-0-bb6fbb57f-7jt9x                        2/2     Running     0          4m11s   10.100.140.75    worker233    <none>           <none>
rook-ceph-osd-1-66cc65978-48hmn                        2/2     Running     0          4m9s    10.100.203.145   worker232    <none>           <none>
rook-ceph-osd-2-775d7bb6df-wj6wz                       2/2     Running     0          4m7s    10.100.209.39    master-231   <none>           <none>
rook-ceph-osd-3-84f898c44f-dhwzp                       2/2     Running     0          4m10s   10.100.140.122   worker233    <none>           <none>
rook-ceph-osd-4-5c9589f5c-22ds5                        2/2     Running     0          4m9s    10.100.203.137   worker232    <none>           <none>
rook-ceph-osd-5-7df755fbfc-lwkxn                       2/2     Running     0          4m8s    10.100.209.44    master-231   <none>           <none>
rook-ceph-osd-6-d5c4fcf67-78s7l                        2/2     Running     0          4m11s   10.100.140.124   worker233    <none>           <none>
rook-ceph-osd-7-6845df65c7-4d56c                       2/2     Running     0          4m10s   10.100.203.134   worker232    <none>           <none>
rook-ceph-osd-8-696c5746d9-zblm2                       2/2     Running     0          4m8s    10.100.209.43    master-231   <none>           <none>
rook-ceph-osd-prepare-master-231-8pkqc                 0/1     Completed   0          90s     10.100.209.37    master-231   <none>           <none>
rook-ceph-osd-prepare-worker232-cwndp                  0/1     Completed   0          87s     10.100.203.191   worker232    <none>           <none>
rook-ceph-osd-prepare-worker233-f2c5t                  0/1     Completed   0          83s     10.100.140.78    worker233    <none>           <none>
rook-ceph-tools-5846d4dc6c-p22j5                       1/1     Running     0          8m47s   10.100.203.185   worker232    <none>           <none>



    14. Access the Ceph web UI
kubectl get svc -n rook-ceph
NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
rook-ceph-exporter                       ClusterIP   10.200.145.218   <none>        9926/TCP            8m3s
rook-ceph-mgr                            ClusterIP   10.200.35.163    <none>        9283/TCP            7m42s
rook-ceph-mgr-dashboard                  ClusterIP   10.200.139.1     <none>        8443/TCP            7m42s
rook-ceph-mgr-dashboard-external-https   NodePort    10.200.24.195    <none>        8443:19395/TCP      11m
rook-ceph-mon-a                          ClusterIP   10.200.218.30    <none>        6789/TCP,3300/TCP   8m54s
rook-ceph-mon-b                          ClusterIP   10.200.251.81    <none>        6789/TCP,3300/TCP   8m29s
rook-ceph-mon-c                          ClusterIP   10.200.95.84     <none>        6789/TCP,3300/TCP   8m19s

[root@master231 examples]# 
[root@master231 examples]# 

https://10.168.10.231:19395



	15. Retrieve the Ceph dashboard login password
[root@master231 examples]# kubectl -n rook-ceph  get secrets 
NAME                                         TYPE                                  DATA   AGE
cluster-peer-token-rook-ceph                 kubernetes.io/rook                    2      3m25s
default-token-tnkgp                          kubernetes.io/service-account-token   3      26m
objectstorage-provisioner-token-l8r47        kubernetes.io/service-account-token   3      26m
rook-ceph-admin-keyring                      kubernetes.io/rook                    1      19m
rook-ceph-cmd-reporter-token-pjd6m           kubernetes.io/service-account-token   3      26m
rook-ceph-config                             kubernetes.io/rook                    2      19m
rook-ceph-crash-collector-keyring            kubernetes.io/rook                    1      3m27s
rook-ceph-dashboard-password                 kubernetes.io/rook                    1      2m58s
rook-ceph-exporter-keyring                   kubernetes.io/rook                    1      3m27s
rook-ceph-mgr-a-keyring                      kubernetes.io/rook                    1      3m25s
rook-ceph-mgr-b-keyring                      kubernetes.io/rook                    1      3m25s
rook-ceph-mgr-token-t6lwt                    kubernetes.io/service-account-token   3      26m
rook-ceph-mon                                kubernetes.io/rook                    4      19m
rook-ceph-mons-keyring                       kubernetes.io/rook                    1      19m
rook-ceph-osd-token-57xrh                    kubernetes.io/service-account-token   3      26m
rook-ceph-purge-osd-token-s8mkd              kubernetes.io/service-account-token   3      26m
rook-ceph-rgw-token-6qckz                    kubernetes.io/service-account-token   3      26m
rook-ceph-system-token-qkkmq                 kubernetes.io/service-account-token   3      26m
rook-csi-cephfs-node                         kubernetes.io/rook                    2      3m27s
rook-csi-cephfs-plugin-sa-token-t4t5b        kubernetes.io/service-account-token   3      26m
rook-csi-cephfs-provisioner                  kubernetes.io/rook                    2      3m27s
rook-csi-cephfs-provisioner-sa-token-m556n   kubernetes.io/service-account-token   3      26m
rook-csi-rbd-node                            kubernetes.io/rook                    2      3m27s
rook-csi-rbd-plugin-sa-token-86kxk           kubernetes.io/service-account-token   3      26m
rook-csi-rbd-provisioner                     kubernetes.io/rook                    2      3m27s
rook-csi-rbd-provisioner-sa-token-pn9r4      kubernetes.io/service-account-token   3      26m
[root@master231 examples]# 
[root@master231 examples]# kubectl -n rook-ceph get secrets rook-ceph-dashboard-password -o jsonpath='{.data.password}' | base64 -d ;echo
:]uH\>m%_}B;Am@"#EV`
[root@master231 examples]# 

	
Username: admin
Password: :]uH\>m%_}B;Am@"#EV`


6.3 rbd

6.4 cephfs

6.5 Dynamic StorageClass: RBD
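This section was left empty in the original notes. As a starting point, Rook ships an example pool + StorageClass manifest at deploy/examples/csi/rbd/storageclass.yaml; a trimmed sketch of what it contains, assuming the operator namespace is rook-ceph (names such as replicapool and rook-ceph-block are the upstream defaults and can be changed):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# provisioner format: <operator-namespace>.rbd.csi.ceph.com
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true

A PVC then simply sets storageClassName: rook-ceph-block and the RBD CSI driver provisions the block image dynamically.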

6.6 Dynamic StorageClass: CephFS
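Also left empty in the original notes. The matching upstream examples are deploy/examples/filesystem.yaml plus deploy/examples/csi/cephfs/storageclass.yaml; a trimmed sketch under the same assumptions (operator namespace rook-ceph, filesystem name myfs):

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: replicated
      replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# provisioner format: <operator-namespace>.cephfs.csi.ceph.com
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  # data pool name is <fsName>-<dataPool name>
  pool: myfs-replicated
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true

Because CephFS is a shared filesystem, PVCs created from this class can use accessMode ReadWriteMany and be mounted by Pods on different nodes at the same time.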

7 OpenEBS

[[OpenEBS]]
