StatefulSets

Take Nginx as an example: when any one Nginx Pod dies, the handling logic is always the same, namely simply recreate a Pod replica. Services like this are called stateless services.

Take MySQL master-slave replication as an example: if either the master or the slave dies, the handling logic differs between the two. Services like this are called stateful services.

Challenges faced by stateful services:
(1) start/stop ordering;
(2) each Pod instance needs its own independent storage;
(3) a fixed IP address or hostname is required;

<font color="#9bbb59">StatefulSet一般用于有状态服务,StatefulSets对于需要满足以下一个或多个需求的应用程序很有价值。</font>
<font color="#9bbb59"> (1)稳定唯一的网络标识符。</font>
<font color="#9bbb59"> (2)稳定独立持久的存储。</font>
<font color="#9bbb59"> (3)有序优雅的部署和缩放。</font>
<font color="#9bbb59"> (4)有序自动的滚动更新。 </font>
<font color="#9bbb59"> </font>
<font color="#9bbb59"> </font>
<font color="#9bbb59">稳定的网络标识:</font>
<font color="#9bbb59"> 其本质对应的是一个service资源,只不过这个service没有定义VIP,我们称之为headless service,即"无头服务"。</font>
<font color="#9bbb59"> 通过"headless service"来维护Pod的网络身份,会为每个Pod分配一个数字编号并且按照编号顺序部署。</font>
<font color="#9bbb59"> 综上所述,无头服务("headless service")要求满足以下两点:</font>
<font color="#9bbb59"> (1)将svc资源的clusterIP字段设置None,即"clusterIP: None";</font>
<font color="#9bbb59"> (2)将sts资源的serviceName字段声明为无头服务的名称;</font>
<font color="#9bbb59"> </font>
<font color="#9bbb59"> </font>
<font color="#9bbb59">独享存储:</font>
<font color="#9bbb59"> Statefulset的存储卷使用VolumeClaimTemplate创建,称为"存储卷申请模板"。</font>
<font color="#9bbb59"> 当sts资源使用VolumeClaimTemplate创建一个PVC时,同样也会为每个Pod分配并创建唯一的pvc编号,每个pvc绑定对应pv,从而保证每个Pod都有独立的存储。</font>

1 StatefulSet controller: unique network identity via a headless service

		1.1 Write the resource manifest

[root@master231 statefulsets]# cat 01-statefulset-headless-network.yaml 
apiVersion: v1
kind: Service
metadata:
  name: svc-headless
spec:
  ports:
  - port: 80
    name: web
  # Setting the clusterIP field to None makes this a headless service: the svc gets no VIP.
  clusterIP: None
  selector:
    app: nginx


---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-xiuxian
spec:
  selector:
    matchLabels:
      app: nginx
  # Reference the headless service
  serviceName: svc-headless
  replicas: 3 
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        imagePullPolicy: Always
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl apply -f  01-statefulset-headless-network.yaml 
service/svc-headless created
statefulset.apps/sts-xiuxian created
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl get sts,svc,po -o wide
NAME                           READY   AGE   CONTAINERS   IMAGES
statefulset.apps/sts-xiuxian   3/3     8s    nginx        registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1

NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes     ClusterIP   10.200.0.1   <none>        443/TCP   10d   <none>
service/svc-headless   ClusterIP   None         <none>        80/TCP    8s    app=nginx

NAME                READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod/sts-xiuxian-0   1/1     Running   0          8s    10.100.203.186   worker232   <none>           <none>
pod/sts-xiuxian-1   1/1     Running   0          6s    10.100.140.87    worker233   <none>           <none>
pod/sts-xiuxian-2   1/1     Running   0          4s    10.100.160.134   master231   <none>           <none>
[root@master231 statefulsets]# 
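
The identity is sticky (a sketch; output omitted): delete a replica and the controller recreates a Pod with exactly the same name and ordinal, unlike a Deployment's random suffixes.
# Delete an arbitrary replica...
kubectl delete pod sts-xiuxian-1
# ...and a Pod with the very same name comes back:
kubectl get pod sts-xiuxian-1 -o wide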





		1.2 Verification
[root@master231 statefulsets]# kubectl exec -it sts-xiuxian-0 -- sh
/ # ping sts-xiuxian-1.svc-headless -c 3
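
A DNS sanity check from inside the Pod (a sketch, assuming the default namespace and that the image's busybox tools include nslookup): each replica resolves at <pod-name>.<service-name>.<namespace>.svc.cluster.local.
/ # nslookup svc-headless
/ # ping sts-xiuxian-1.svc-headless.default.svc.cluster.local -c 3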

2 StatefulSet controller: dedicated storage per Pod

		2.1 Write the resource manifest
[root@master231 statefulsets]# cat 02-statefulset-headless-volumeClaimTemplates.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-headless
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-xiuxian
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: svc-headless
  replicas: 3 
  # Volume claim template: creates a unique PVC for every Pod and binds it to that Pod.
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      # Reference our custom dynamic storage class, i.e. the sc resource.
      storageClassName: "cmy-sc-xixi"
      resources:
        requests:
          storage: 2Gi
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        ports:
        - containerPort: 80
          name: xiuxian
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
---
apiVersion: v1
kind: Service
metadata:
  name: svc-sts-xiuxian
spec:
  type: ClusterIP
  clusterIP: 10.200.0.200
  selector:
     app: nginx
  ports:
  - port: 80
    targetPort: xiuxian
[root@master231 statefulsets]# 



	
		2.2 Verification
[root@master231 statefulsets]# kubectl apply -f  02-statefulset-headless-volumeClaimTemplates.yaml
service/svc-headless created
statefulset.apps/sts-xiuxian created
service/svc-sts-xiuxian created
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl get sts,svc,po -o wide
NAME                           READY   AGE   CONTAINERS   IMAGES
statefulset.apps/sts-xiuxian   3/3     6s    nginx        registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes        ClusterIP   10.200.0.1     <none>        443/TCP   10d   <none>
service/svc-headless      ClusterIP   None           <none>        80/TCP    6s    app=nginx
service/svc-sts-xiuxian   ClusterIP   10.200.0.200   <none>        80/TCP    6s    app=nginx

NAME                READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod/sts-xiuxian-0   1/1     Running   0          6s    10.100.203.163   worker232   <none>           <none>
pod/sts-xiuxian-1   1/1     Running   0          5s    10.100.140.92    worker233   <none>           <none>
pod/sts-xiuxian-2   1/1     Running   0          3s    10.100.160.132   master231   <none>           <none>
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl exec -it sts-xiuxian-0  -- sh
/ # echo AAA > /usr/share/nginx/html/index.html
/ # 
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl exec -it sts-xiuxian-1  -- sh
/ # echo BBB > /usr/share/nginx/html/index.html 
/ # 
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl exec -it sts-xiuxian-2  -- sh
/ # echo CCC > /usr/share/nginx/html/index.html 
/ # 
[root@master231 statefulsets]# 
[root@master231 statefulsets]# for i in `seq 10`; do curl 10.200.0.200; done
CCC
BBB
AAA
CCC
BBB
AAA
CCC
BBB
AAA
CCC
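
Each Pod also got its own claim: volumeClaimTemplates names PVCs <template-name>-<pod-name>, so data-sts-xiuxian-0 through data-sts-xiuxian-2 here. A quick check (a sketch; output omitted), assuming the cmy-sc-xixi StorageClass exists and provisions PVs dynamically:
# List the per-Pod claims created from the template:
kubectl get pvc
# The data outlives the Pod: delete one replica, wait for it to be recreated,
# and the same index.html content is still served.
kubectl delete pod sts-xiuxian-0
kubectl exec -it sts-xiuxian-0 -- cat /usr/share/nginx/html/index.html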

3 Staged (partitioned) updates for sts

		3.1 Write the resource manifest
[root@master231 statefulsets]# cat > 03-statefuleset-updateStrategy-partition.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: sts-headless
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: web

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cmy-sts-web
spec:
  # Specify the update strategy for the sts resource
  updateStrategy:
    # Configure rolling updates
    rollingUpdate:
      # Pods whose ordinal is lower than 3 are not updated; in plain terms, only Pods with ordinal >= 3 get updated!
      partition: 3
  selector:
    matchLabels:
      app: web
  serviceName: sts-headless
  replicas: 5
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: c1
        ports:
        - containerPort: 80
          name: xiuxian
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
---
apiVersion: v1
kind: Service
metadata:
  name: cmy-sts-svc
spec:
  selector:
     app: web
  ports:
  - port: 80
    targetPort: xiuxian
EOF

	
		3.2 Verification
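Create the resources first (the created lines below are the standard kubectl apply output for these three resources):
[root@master231 statefulsets]# kubectl apply -f 03-statefuleset-updateStrategy-partition.yaml
service/sts-headless created
statefulset.apps/cmy-sts-web created
service/cmy-sts-svc created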
[root@master231 statefulsets]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
cmy-sts-web-0   1/1     Running   0          30s   10.100.203.183   worker232   <none>           <none>
cmy-sts-web-1   1/1     Running   0          28s   10.100.140.102   worker233   <none>           <none>
cmy-sts-web-2   1/1     Running   0          28s   10.100.160.180   master231   <none>           <none>
cmy-sts-web-3   1/1     Running   0          26s   10.100.203.185   worker232   <none>           <none>
cmy-sts-web-4   1/1     Running   0          25s   10.100.140.93    worker233   <none>           <none>
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl get pods -l app=web -o yaml | grep "\- image:"
    - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
[root@master231 statefulsets]# 
[root@master231 statefulsets]# grep hangzhou 03-statefuleset-updateStrategy-partition.yaml 
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
[root@master231 statefulsets]# 
[root@master231 statefulsets]# sed -i '/hangzhou/s#v1#v2#' 03-statefuleset-updateStrategy-partition.yaml 
[root@master231 statefulsets]# 
[root@master231 statefulsets]# grep hangzhou 03-statefuleset-updateStrategy-partition.yaml 
        image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v2
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl apply -f 03-statefuleset-updateStrategy-partition.yaml
service/sts-headless unchanged
statefulset.apps/cmy-sts-web configured
service/cmy-sts-svc unchanged
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
cmy-sts-web-0   1/1     Running   0          2m23s   10.100.203.183   worker232   <none>           <none>
cmy-sts-web-1   1/1     Running   0          2m21s   10.100.140.102   worker233   <none>           <none>
cmy-sts-web-2   1/1     Running   0          2m21s   10.100.160.180   master231   <none>           <none>
cmy-sts-web-3   1/1     Running   0          12s     10.100.203.174   worker232   <none>           <none>
cmy-sts-web-4   1/1     Running   0          14s     10.100.140.101   worker233   <none>           <none>
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl get pods -l app=web -o yaml | grep "\- image:"
    - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v2
    - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v2
[root@master231 statefulsets]# 
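
To roll the new image out to the remaining Pods, lower the partition (0 updates everything); a sketch using kubectl patch:
# Drop the partition so ordinals 2, 1, 0 are updated in descending order:
kubectl patch sts cmy-sts-web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
# Then re-check the images:
kubectl get pods -l app=web -o yaml | grep "\- image:"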

4 Deploying a zookeeper cluster with sts

		4.1 Import the image on all K8S nodes
wget http://192.168.14.253/Resources/Kubernetes/Case-Demo/cmy-kubernetes-zookeeper-v1.0-3.4.10.tar.gz
docker load  -i cmy-kubernetes-zookeeper-v1.0-3.4.10.tar.gz 
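
Optionally confirm the image is present on each node (a quick check; the exact tag is assumed from the tarball name):
docker images | grep zookeeper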


		4.2 Write the resource manifest
cat  04-sts-zookeeper.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1
# This kind caps the disruption a group of Pods may suffer; in plain terms, the maximum number of Pods allowed to be unavailable.
# As a rule of thumb, if a distributed cluster must tolerate N failures, it needs at least 2N+1 Pods.
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  # Select the Pods the budget applies to
  selector:
    matchLabels:
      app: zk
  # Maximum number of unavailable Pods. This means the future zookeeper cluster needs at least 2*1 + 1 = 3 Pods.
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: IfNotPresent
        image: "registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi


		4.3 Watch Pod status in real time
[root@master231 statefulsets]# kubectl apply -f 04-sts-zookeeper.yaml 
service/zk-hs created
service/zk-cs created
poddisruptionbudget.policy/zk-pdb created
statefulset.apps/zk created
[root@master231 statefulsets]# kubectl get pods -o wide -w -l app=zk
NAME   READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
zk-0   1/1     Running   0          101s   10.100.140.108   worker233    <none>           <none>
zk-1   1/1     Running   0          88s    10.100.203.190   worker232    <none>           <none>
zk-2   1/1     Running   0          65s    10.100.209.62    master-231   <none>           <none>
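
The disruption budget is active as well; with 3 replicas and maxUnavailable: 1, at most one Pod may be voluntarily evicted at a time (a quick check):
kubectl get pdb zk-pdb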


 

		4.4 Check the backend storage
[root@master231 ~]# kubectl get pods -o wide 
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
zk-0   1/1     Running   0          85s   10.100.140.125   worker233   <none>           <none>
zk-1   1/1     Running   0          63s   10.100.160.189   master231   <none>           <none>
zk-2   1/1     Running   0          42s   10.100.203.188   worker232   <none>           <none>
[root@master231 ~]# 
[root@master231 ~]# kubectl get pvc -l app=zk
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-zk-0   Bound    pvc-b6072f27-637a-4c5d-9604-7095c8143f15   10Gi       RWO            nfs-csi        43m
datadir-zk-1   Bound    pvc-10fdeb29-70b9-41a6-ae8c-f3b540ffcbdc   10Gi       RWO            nfs-csi        42m
datadir-zk-2   Bound    pvc-db936b79-be79-4155-b2d0-ccc05a7e4531   10Gi       RWO            nfs-csi        37m
[root@master231 ~]# 



		4.5 Verify the cluster is healthy
[root@master231 sts]# for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
zk-0
zk-1
zk-2
[root@master231 sts]# 
[root@master231 sts]# for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
myid zk-0
1
myid zk-1
2
myid zk-2
3
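
The start-zookeeper script derives myid from the Pod hostname's ordinal (myid = ordinal + 1), which is why the IDs above stay stable across restarts. To see which replica was elected leader (an assumption: the stock zkServer.sh from the bundled ZooKeeper distribution is on the image's PATH):
for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done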


		
		
		4.6 Create and verify test data
			4.6.1 Write data in one Pod
[root@master231 statefulsets]# kubectl exec -it zk-1 -- zkCli.sh
...
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1] 
[zk: localhost:2181(CONNECTED) 1] 
[zk: localhost:2181(CONNECTED) 1] create /school cmy
Created /school
[zk: localhost:2181(CONNECTED) 2] 
[zk: localhost:2181(CONNECTED) 2] create /school/linux97 XIXI
Created /school/linux97
[zk: localhost:2181(CONNECTED) 3] 
[zk: localhost:2181(CONNECTED) 3] ls /  
[zookeeper, school]
[zk: localhost:2181(CONNECTED) 4] 
[zk: localhost:2181(CONNECTED) 4] ls /school
[linux97]
[zk: localhost:2181(CONNECTED) 5] 

			4.6.2 Read the data from another Pod
[root@master231 statefulsets]# kubectl exec -it zk-2 -- zkCli.sh
...
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, school]
[zk: localhost:2181(CONNECTED) 1] get /school/linux97
XIXI


Tip:
	The industry is somewhat wary of the sts controller: we know it is designed for deploying stateful services, yet many teams simply avoid it.
	Hence CoreOS went on to develop the Operator framework (sts + CRD), on which you can deploy all kinds of services.