k8s Services

1 Kubernetes Service

A Service is, simply put, a proxy in front of workloads: it selects Pods by label and forwards traffic to them.

The controllers covered earlier can keep the desired number of Pod replicas running, but they cannot solve the problem that a Pod's IP address changes every time it is recreated.

If Pods are related to or depend on each other, a change of IP address can easily break the application, especially in microservice scenarios.

For these reasons, Kubernetes provides the Service resource to act as a stable proxy for all kinds of services.

1.1 Service features

1. Service discovery
   – Provides a stable DNS name and ClusterIP for a group of Pods, so clients can reach the service by name without knowing the individual Pod IPs.
2. Load balancing
   – Distributes traffic across the backend Pods, improving availability and scalability (a minimal skeleton follows below).
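As a quick sketch of how these two features map onto the resource (all names and ports here are placeholders; section 1.4 shows a complete, working example):

apiVersion: v1
kind: Service
metadata:
  name: my-svc              # stable DNS name: my-svc.<namespace>.svc.<cluster-domain>
spec:
  selector:
    apps: my-app            # traffic is balanced across all Pods carrying this label
  ports:
  - port: 80                # port the Service exposes
    targetPort: 8080        # port the Pods actually listen on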

1.2 The four Service types !!!!!

ClusterIP:
    A cluster-internal IP, used for service proxying inside the K8S cluster, i.e. for cluster-internal services to reach each other.
NodePort:
    Builds on ClusterIP and additionally installs NAT rules on every worker node, so that clients outside the K8S cluster can reach Pods inside it.
LoadBalancer:
    Normally used in cloud environments together with the provider's SLB products. On a self-hosted k8s cluster you need a third-party add-on (such as MetalLB) to provide this functionality.
ExternalName:
    Maps a service outside the K8S cluster to a Service inside the cluster.

Scenario                               Recommended type   Key considerations
Internal microservice communication    ClusterIP          Stable in-cluster access
Development / test environments        NodePort           Simple external access
Public access in production            LoadBalancer       Requires a cloud provider LB
Connecting to an external service      ExternalName       No changes to the existing service

1.3 The three K8S network types !!!!!

K8S cluster (node) network: the physical machine subnet
    10.0.0.0/24
        10.0.0.231
        10.0.0.232
        10.0.0.233
K8S Pod network:
    10.100.0.0/16: the Pod subnet.
K8S Service network:
    10.200.0.0/16: the Service subnet.
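On a kubeadm-deployed cluster the Pod and Service subnets can be double-checked from the kubeadm ClusterConfiguration (a quick sketch; the ConfigMap name assumes a kubeadm setup):

kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -i subnet
#   podSubnet: 10.100.0.0/16
#   serviceSubnet: 10.200.0.0/16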

1.4 ClusterIP !!!!!

cat 01-cluster-ip.yaml
apiVersion:  apps/v1
kind: Deployment
metadata:
  name: deploy-test
  labels:
    apps: test
  namespace: default
spec:
  replicas: 3
  selector:
    # match Pods by label
    matchLabels:
      apps: test
  template:
    metadata:
      labels:
        apps: test
        version: v1
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        name: c1
---
apiVersion: v1
kind: Service
metadata:
  name: svc-test
spec:
  # port mapping configuration
  ports:
    # the Service's own port
  - port: 90
    # the port on the backend Pods
    targetPort: 80
  # select Pods by these labels
  selector:
    apps: test
  # the Service type
  type: ClusterIP

kubectl apply -f 01-cluster-ip.yaml
deployment.apps/deploy-test created
service/svc-test created
[root@master-231 /cmy/manifests/svc]# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
deploy-test-568cf47956-7lrnj   1/1     Running   0          4s    10.100.203.141   worker232   <none>           <none>
deploy-test-568cf47956-mv94b   1/1     Running   0          4s    10.100.140.77    worker233   <none>           <none>
deploy-test-568cf47956-zhskp   1/1     Running   0          4s    10.100.140.76    worker233   <none>           <none>
[root@master-231 /cmy/manifests/svc]# kubectl exec -it deploy-test-568cf47956-7lrnj --sh
error: unknown flag: --sh
See 'kubectl exec --help' for usage.
[root@master-231 /cmy/manifests/svc]# kubectl exec -it deploy-test-568cf47956-7lrnj -- sh
/ # echo 1111 > /usr/share/nginx/html/index.html
/ #
command terminated with exit code 130
[root@master-231 /cmy/manifests/svc]# kubectl exec -it deploy-test-568cf47956-mv94b -- sh
/ # echo 2222 > /usr/share/nginx/html/index.html
/ #
[root@master-231 /cmy/manifests/svc]# kubectl exec -it deploy-test-568cf47956-zhskp -- sh
/ # echo 3333 > /usr/share/nginx/html/index.html
/ #





kubectl describe  service/svc-test
Name:              svc-test
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          apps=test
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.200.178.3
IPs:               10.200.178.3
Port:              <unset>  90/TCP
TargetPort:        80/TCP
Endpoints:         10.100.140.76:80,10.100.140.77:80,10.100.203.141:80
Session Affinity:  None
Events:            <none>
[root@master-231 /cmy/manifests/svc]# kubectl get svc,pod -o wide
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/kubernetes    ClusterIP   10.200.0.1       <none>        443/TCP        6d23h   <none>
service/new-nginx     NodePort    10.200.245.213   <none>        80:31376/TCP   37h     app=new-nginx
service/svc-test   ClusterIP   10.200.178.3     <none>        90/TCP         3m5s    apps=test

NAME                                  READY   STATUS    RESTARTS   AGE    IP               NODE        NOMINATED NODE   READINESS GATES
pod/deploy-test-568cf47956-7lrnj   1/1     Running   0          3m5s   10.100.203.141   worker232   <none>           <none>
pod/deploy-test-568cf47956-mv94b   1/1     Running   0          3m5s   10.100.140.77    worker233   <none>           <none>
pod/deploy-test-568cf47956-zhskp   1/1     Running   0          3m5s   10.100.140.76    worker233   <none>           <none>
[root@master-231 /cmy/manifests/svc]# for i in `seq 10`;do curl 10.200.178.3:90;done
3333
1111
1111
1111
2222
1111
1111
1111
3333
1111
[root@master-231 /cmy/manifests/svc]#
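Inside the cluster the Service can also be reached by its DNS name rather than the ClusterIP. A quick check from one of the Pods (assuming the image ships busybox wget or curl):

kubectl exec -it deploy-test-568cf47956-7lrnj -- sh
/ # wget -qO- http://svc-test.default.svc:90     # "curl svc-test:90" also works if curl is installed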

1.5 NodePort !!!!!

[root@master231 services]# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
deploy-test-568cf47956-fc9tv   1/1     Running   0          60m   10.100.140.90    worker233   <none>           <none>
deploy-test-568cf47956-jxm2t   1/1     Running   0          60m   10.100.140.89    worker233   <none>           <none>
deploy-test-568cf47956-n64m4   1/1     Running   0          60m   10.100.203.176   worker232   <none>           <none>
[root@master231 services]# 

		3.2 Write the resource manifest
[root@master231 services]# cat 02-svc-NodePort-test.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-test-nodeport
spec:
  ports:
  - port: 90
    targetPort: 80
    # the nodePort must fall within the allowed range, default: 30000-32767
    nodePort: 30080
  selector:
    apps: test
  type: NodePort
[root@master231 services]# 
[root@master231 services]# kubectl apply -f 02-svc-NodePort-test.yaml 
service/svc-test-nodeport created
[root@master231 services]# 
[root@master231 services]# kubectl get svc 
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
deploy-test         ClusterIP   10.200.64.99     <none>        80/TCP         60m
kubernetes             ClusterIP   10.200.0.1       <none>        443/TCP        7d
svc-test            ClusterIP   10.200.40.135    <none>        90/TCP         4m40s
svc-test-nodeport   NodePort    10.200.236.205   <none>        90:30080/TCP   4s
[root@master231 services]# 
[root@master231 services]# for i in `seq 10`;do curl 10.200.236.205:90;done
AAA
BBB
BBB
BBB
BBB
AAA
AAA
BBB
BBB
CCC
[root@master231 services]# 
[root@master231 services]# for i in `seq 10`;do curl 10.0.0.233:30080;done
AAA
AAA
AAA
AAA
CCC
BBB
CCC
AAA
CCC
BBB
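
Under the hood, kube-proxy (here in iptables mode) installs the corresponding NAT rules on every node, which is why any node IP answers on port 30080. A rough way to peek at them (chain names vary between kube-proxy versions):

iptables-save -t nat | grep 30080
# expect a KUBE-NODEPORTS rule matching --dport 30080 that jumps to the chain
# which DNATs traffic to the svc-test-nodeport endpoints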

Modifying the nodePort port range

2. Edit the kube-apiserver static Pod manifest
[root@master231 03-wordpress-svc]# vim /etc/kubernetes/manifests/kube-apiserver.yaml 
...
 12 spec:
 13   containers:
 14   - command:
 15     - kube-apiserver
 16     - --service-node-port-range=3000-50000


	3. Move the manifest file out and back in (forces kube-apiserver to restart with the new flag)
[root@master231 03-wordpress-svc]# mv /etc/kubernetes/manifests/kube-apiserver.yaml /opt/
[root@master231 03-wordpress-svc]# 
[root@master231 03-wordpress-svc]# mv /opt/kube-apiserver.yaml /etc/kubernetes/manifests/
[root@master231 03-wordpress-svc]# 


	4. Check that the component is healthy again
[root@master231 ~]# kubectl get pods -n kube-system -o wide -l component=kube-apiserver
NAME                       READY   STATUS    RESTARTS      AGE   IP           NODE        NOMINATED NODE   READINESS GATES
kube-apiserver-master231   1/1     Running   1 (13s ago)   8s    10.0.0.231   master231   <none>           <none>
[root@master231 ~]# 

	5. Create the svc again to test
[root@master231 03-wordpress-svc]# kubectl apply -f 04-svc-wordpress.yaml 
service/svc-wp created
[root@master231 03-wordpress-svc]# 
[root@master231 03-wordpress-svc]# kubectl get svc svc-wp 
NAME     TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)       AGE
svc-wp   NodePort   10.200.212.242   <none>        80:8080/TCP   5s
[root@master231 03-wordpress-svc]# 

1.6 LoadBalancer !!!!!

Deploy the MetalLB add-on to implement LoadBalancer !!!


	1. MetalLB overview
If you need to expose LoadBalancer-type applications on your own (self-hosted) Kubernetes cluster, MetalLB is a good solution.


MetalLB website:
	https://metallb.universe.tf/installation/


	2. Hands-on example
		2.1 By default a k8s cluster has no native LoadBalancer implementation
[root@master231 services]# cat 03-svc-LoadBalancer-test.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-test-loadbalancer
spec:
  ports:
  - port: 80
    nodePort: 9090
  selector:
    apps: test
  type: LoadBalancer
[root@master231 services]# 
[root@master231 services]# kubectl apply -f  03-svc-LoadBalancer-test.yaml
service/svc-test-loadbalancer created
[root@master231 services]# 
[root@master231 services]# kubectl get svc
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)       AGE
kubernetes                 ClusterIP      10.200.0.1       <none>        443/TCP       3h16m
svc-test-loadbalancer   LoadBalancer   10.200.117.161   <pending>     80:9090/TCP   2s
[root@master231 services]# 


		2.2 Import the images on all nodes
wget http://192.168.14.253/Resources/Kubernetes/Add-ons/metallb/v0.14.9/cmy-metallb-speaker-v0.14.9.tar.gz
docker load  -i cmy-metallb-speaker-v0.14.9.tar.gz 

wget http://192.168.14.253/Resources/Kubernetes/Add-ons/metallb/v0.14.9/cmy-metallb-controller-v0.14.9.tar.gz
docker load -i  cmy-metallb-controller-v0.14.9.tar.gz


		2.3 Install MetalLB
[root@master231 metallb]# wget http://192.168.14.253/Resources/Kubernetes/Add-ons/metallb/v0.14.9/metallb-native.yaml
[root@master231 metallb]# kubectl apply -f metallb-native.yaml 
[root@master231 metallb]# kubectl get pods -n  metallb-system -o wide
NAME                          READY   STATUS    RESTARTS   AGE    IP              NODE        NOMINATED NODE   READINESS GATES
controller-686c7db689-gpf7r   1/1     Running   0          6m5s   10.100.140.95   worker233   <none>           <none>
speaker-bdvk5                 1/1     Running   0          30s    10.0.0.233      worker233   <none>           <none>
speaker-xfnch                 1/1     Running   0          30s    10.0.0.232      worker232   <none>           <none>
speaker-zshn9                 1/1     Running   0          30s    10.0.0.231      master231   <none>           <none>
[root@master231 metallb]# 


		2.4 Create the address pool
[root@master231 metallb]# cat metallb-ip-pool.yaml 
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: linux97
  namespace: metallb-system
spec:
  addresses:
  - 10.0.0.150-10.0.0.180

---

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: cmy
  namespace: metallb-system
spec:
  ipAddressPools:
  - linux97
[root@master231 metallb]# 
[root@master231 metallb]# kubectl apply -f  metallb-ip-pool.yaml 
ipaddresspool.metallb.io/linux97 created
l2advertisement.metallb.io/cmy created
[root@master231 metallb]# 


		2.5 Check whether an address is assigned automatically
[root@master231 metallb]# kubectl get svc
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)       AGE
kubernetes                 ClusterIP      10.200.0.1       <none>        443/TCP       3h25m
svc-test-loadbalancer   LoadBalancer   10.200.117.161   10.0.0.150    80:9090/TCP   9m28s
[root@master231 metallb]# 


		2.6 Access the service through the LoadBalancer address in a browser
http://10.0.0.150/
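
The same address can also be checked from the command line (the responses depend on what the backend Pods are currently serving):

for i in `seq 5`;do curl http://10.0.0.150/;done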

1.7 ExternalName (for awareness)

The main use case of ExternalName is mapping a service outside the K8S cluster into the cluster.

That external service can live on the public Internet or on a private network; for private domain names you must make sure the cluster's DNS can resolve them.
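
A minimal sketch of such a mapping (it matches the svc-externalname entry that appears in the kubectl get svc -A output in section 3; www.baidu.com is only an example target):

apiVersion: v1
kind: Service
metadata:
  name: svc-externalname
spec:
  type: ExternalName
  # lookups of svc-externalname.default.svc.<cluster-domain> return a CNAME to this host
  externalName: www.baidu.com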

2 kube-proxy proxy modes

kube-proxy is the low-level component that implements service proxying in K8S; it provides the data plane behind every svc.

kube-proxy has two mainstream proxy modes: iptables and ipvs. For better performance in production, the ipvs mode is recommended.

	2. Check kube-proxy's current proxy mode
[root@master231 ~]# kubectl get pods -o wide -n kube-system  -l k8s-app=kube-proxy
NAME               READY   STATUS    RESTARTS       AGE    IP           NODE        NOMINATED NODE   READINESS GATES
kube-proxy-66dzn   1/1     Running   2 (3d6h ago)   7d3h   10.0.0.231   master231   <none>           <none>
kube-proxy-9tjh8   1/1     Running   2 (3d6h ago)   7d3h   10.0.0.232   worker232   <none>           <none>
kube-proxy-zg282   1/1     Running   0              22h    10.0.0.233   worker233   <none>           <none>
[root@master231 ~]# 
[root@master231 ~]# 
[root@master231 ~]# kubectl -n kube-system logs kube-proxy-66dzn 
I0526 00:33:07.223143       1 node.go:163] Successfully retrieved node IP: 10.0.0.231
I0526 00:33:07.223246       1 server_others.go:138] "Detected node IP" address="10.0.0.231"
I0526 00:33:07.224055       1 server_others.go:572] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0526 00:33:07.268504       1 server_others.go:206] "Using iptables Proxier"
...


Tip:
	As the Pod log shows, the default proxy mode is iptables.
	
[root@master231 ~]# kubectl get svc 
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.200.0.1   <none>        443/TCP   162m
[root@master231 ~]# 
[root@master231 ~]# iptables-save | grep 10.200.0.1
-A KUBE-SERVICES -d 10.200.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.200.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.200.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.200.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.100.0.0/16 -d 10.200.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.100.0.0/16 -d 10.200.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.100.0.0/16 -d 10.200.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.100.0.0/16 -d 10.200.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
[root@master231 ~]# 


	3. Change kube-proxy's proxy mode
[root@master231 ~]# kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/"  | \
sed -e 's#mode: ""#mode: "ipvs"#' | \
kubectl apply -f - -n kube-system
	

	4. Delete the kube-proxy Pods so the new configuration takes effect
[root@master231 ~]# kubectl get pods -o wide -n kube-system  -l k8s-app=kube-proxy
NAME               READY   STATUS    RESTARTS       AGE    IP           NODE        NOMINATED NODE   READINESS GATES
kube-proxy-66dzn   1/1     Running   2 (3d6h ago)   7d3h   10.0.0.231   master231   <none>           <none>
kube-proxy-9tjh8   1/1     Running   2 (3d6h ago)   7d3h   10.0.0.232   worker232   <none>           <none>
kube-proxy-zg282   1/1     Running   0              22h    10.0.0.233   worker233   <none>           <none>
[root@master231 ~]# 
[root@master231 ~]# kubectl delete pods -n kube-system  -l k8s-app=kube-proxy
pod "kube-proxy-66dzn" deleted
pod "kube-proxy-9tjh8" deleted
pod "kube-proxy-zg282" deleted
[root@master231 ~]# 
[root@master231 ~]# kubectl get pods -o wide -n kube-system  -l k8s-app=kube-proxy
NAME               READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
kube-proxy-9q28v   1/1     Running   0          3s    10.0.0.232   worker232   <none>           <none>
kube-proxy-fcp25   1/1     Running   0          3s    10.0.0.231   master231   <none>           <none>
kube-proxy-n8njm   1/1     Running   0          3s    10.0.0.233   worker233   <none>           <none>
[root@master231 ~]# 

	
	5. Check the Pod logs
[root@master231 ~]# kubectl -n kube-system logs kube-proxy-9q28v 
I0529 06:47:25.390999       1 node.go:163] Successfully retrieved node IP: 10.0.0.232
I0529 06:47:25.391081       1 server_others.go:138] "Detected node IP" address="10.0.0.232"
I0529 06:47:25.423462       1 server_others.go:269] "Using ipvs Proxier"
...
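
Once ipvs mode is active, the rules can also be inspected directly on a node with ipvsadm (install it via your package manager if it is missing); every ClusterIP:port appears as a virtual server whose real servers are the endpoints, for example:

ipvsadm -ln | grep -A 2 10.200.0.1:443
# TCP  10.200.0.1:443 rr
#   -> 10.0.0.231:6443   Masq ...    (the apiserver endpoint; actual output varies per cluster)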

3 The CoreDNS add-on !!!!!

CoreDNS is the DNS server built into a K8S cluster. With a kubeadm deployment it is installed automatically; with a binary deployment you have to deploy it yourself.

CoreDNS resolves Service names to their ClusterIP (and, via the Service, load-balances across the Pods behind it); it also provides name resolution for ExternalName Services.

	2. Verify the DNS server
[root@master231 ~]# kubectl get svc,po -n kube-system -l k8s-app=kube-dns -o wide
NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE    SELECTOR
service/kube-dns   ClusterIP   10.200.0.10   <none>        53/UDP,53/TCP,9153/TCP   7d5h   k8s-app=kube-dns

NAME                          READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod/coredns-6d8c4cb4d-bzggh   1/1     Running   0          28h   10.100.160.135   master231   <none>           <none>
pod/coredns-6d8c4cb4d-l6vfm   1/1     Running   0          28h   10.100.160.133   master231   <none>           <none>
[root@master231 ~]# 
[root@master231 ~]# kubectl -n kube-system describe svc kube-dns 
Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=CoreDNS
Annotations:       prometheus.io/port: 9153
                   prometheus.io/scrape: true
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.200.0.10
IPs:               10.200.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.100.160.133:53,10.100.160.135:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.100.160.133:53,10.100.160.135:53
Port:              metrics  9153/TCP
TargetPort:        9153/TCP
Endpoints:         10.100.160.133:9153,10.100.160.135:9153
Session Affinity:  None
Events:            <none>
[root@master231 ~]# 


	3. Verify that the DNS component works correctly
[root@master231 ~]# kubectl get svc -A
NAMESPACE          NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                  AGE
calico-apiserver   calico-api                        ClusterIP      10.200.113.242   <none>          443/TCP                  7d4h
calico-system      calico-kube-controllers-metrics   ClusterIP      None             <none>          9094/TCP                 7d4h
calico-system      calico-typha                      ClusterIP      10.200.77.175    <none>          5473/TCP                 7d4h
default            kubernetes                        ClusterIP      10.200.0.1       <none>          443/TCP                  4h13m
default            svc-externalname                  ExternalName   <none>           www.baidu.com   80/TCP                   8m1s
default            svc-test-loadbalancer          LoadBalancer   10.200.117.161   10.0.0.150      80:9090/TCP              57m
default            svc-test-nodeport              NodePort       10.200.143.90    <none>          90:30080/TCP             8m10s
kube-system        kube-dns                          ClusterIP      10.200.0.10      <none>          53/UDP,53/TCP,9153/TCP   7d5h
metallb-system     metallb-webhook-service           ClusterIP      10.200.25.11     <none>          443/TCP                  55m
[root@master231 ~]# 
[root@master231 ~]# 
[root@master231 ~]# dig @10.200.0.10 svc-externalname.default.svc.cmy.com +short
www.baidu.com.
www.a.shifen.com.
110.242.70.57
110.242.69.21
[root@master231 ~]# 
[root@master231 ~]# dig @10.200.0.10 svc-test-nodeport.default.svc.cmy.com +short
10.200.143.90
[root@master231 ~]# 
[root@master231 ~]# 
[root@master231 ~]# dig @10.200.0.10 calico-api.calico-apiserver.svc.cmy.com +short
10.200.113.242
[root@master231 ~]# 
[root@master231 ~]# dig @10.200.0.10 metallb-webhook-service.metallb-system.svc.cmy.com +short
10.200.25.11
[root@master231 ~]# 
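
Short names also work from inside Pods, because kubelet injects the cluster DNS server and search domains into each Pod's /etc/resolv.conf. A quick look (the Pod name is just one taken from earlier output):

kubectl exec -it deploy-test-568cf47956-7lrnj -- cat /etc/resolv.conf
# nameserver 10.200.0.10
# search default.svc.cmy.com svc.cmy.com cmy.com
# options ndots:5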


	4. Hands-on example: WordPress + MySQL
[root@master231 04-wordpress-svc-ns]# cat 01-deploy-mysql.yaml 
apiVersion:  apps/v1
kind: Deployment
metadata:
  name: deploy-mysql
  namespace: kube-public
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: mysql
  template:
    metadata:
      labels:
        apps: mysql
    spec:
      containers:
      - image: harbor.cmy.com/cmy-db/mysql:8.0.36-oracle
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "yes"
        - name: MYSQL_DATABASE
          value: wordpress
        - name: MYSQL_USER
          value: linux97
        - name: MYSQL_PASSWORD
          value: cmy
        name: c1
        args:
        - --character-set-server=utf8 
        - --collation-server=utf8_bin 
        - --default-authentication-plugin=mysql_native_password
[root@master231 04-wordpress-svc-ns]# 
[root@master231 04-wordpress-svc-ns]# 
[root@master231 04-wordpress-svc-ns]# cat 02-svc-mysql.yaml 
apiVersion: v1
kind: Service
metadata:
  name: svc-mysql
  namespace: kube-public
spec:
  ports:
  - port: 3306
  selector:
    apps: mysql
  type: ClusterIP
[root@master231 04-wordpress-svc-ns]# 
[root@master231 04-wordpress-svc-ns]# cat 03-deploy-wordpress.yaml 
apiVersion:  apps/v1
kind: Deployment
metadata:
  name: deploy-wp
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: wp
  template:
    metadata:
      labels:
        apps: wp
    spec:
      containers:
      - image: harbor.cmy.com/cmy-wp/wordpress:6.7.1-php8.1-apache
        env:
        - name: WORDPRESS_DB_HOST
          # when the Pod and the svc are in different namespaces, reference the svc
          # by its namespaced name (or by the full A-record form shown below)
          #value: svc-mysql.kube-public.svc.cmy.com
          value: svc-mysql.kube-public
        - name: WORDPRESS_DB_NAME
          value: wordpress
        - name: WORDPRESS_DB_USER
          value: linux97
        - name: WORDPRESS_DB_PASSWORD
          value: cmy
        name: c1
[root@master231 04-wordpress-svc-ns]# 
[root@master231 04-wordpress-svc-ns]# 
[root@master231 04-wordpress-svc-ns]# cat 04-svc-wordpress.yaml 
apiVersion: v1
kind: Service
metadata:
  name: svc-wp
spec:
  ports:
  - port: 80
    # nodePort: 30090
    nodePort: 8080
  selector:
    apps: wp
  type: NodePort
[root@master231 04-wordpress-svc-ns]# 
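
With the images reachable, the whole stack can be applied and checked roughly like this (file names as listed above; the NodePort 8080 comes from 04-svc-wordpress.yaml):

kubectl apply -f 01-deploy-mysql.yaml -f 02-svc-mysql.yaml -f 03-deploy-wordpress.yaml -f 04-svc-wordpress.yaml
kubectl get pods -n kube-public -l apps=mysql -o wide
kubectl get pods -l apps=wp -o wide
kubectl get svc svc-wp
# then open http://<any-node-IP>:8080/ to run the WordPress installer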

4 Port forwarding with port-forward

	1. Prepare the Pods
[root@master231 probe]# 
[root@master231 probe]# cat /cmy/manifests/deployments/01-deploy-test-matchLabels.yaml 
apiVersion:  apps/v1
kind: Deployment
metadata:
  name: deploy-test
  labels:
    apps: test
  namespace: default
spec:
  replicas: 3
  selector:
    # match Pods by label
    matchLabels:
      apps: test
  template:
    metadata:
      labels:
        apps: test
        version: v1
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/cmy-k8s/apps:v1
        name: c1
[root@master231 probe]# 
[root@master231 probe]# 
[root@master231 probe]# kubectl apply -f  /cmy/manifests/deployments/01-deploy-test-matchLabels.yaml 
deployment.apps/deploy-test created
[root@master231 probe]# 
[root@master231 probe]# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
deploy-test-568cf47956-755jp   1/1     Running   0          3s    10.100.203.129   worker232   <none>           <none>
deploy-test-568cf47956-8dll4   1/1     Running   0          3s    10.100.160.183   master231   <none>           <none>
deploy-test-568cf47956-zs2hq   1/1     Running   0          3s    10.100.140.108   worker233   <none>           <none>
[root@master231 probe]# 


	2. Forward the port
[root@master231 probe]#  kubectl port-forward po/deploy-test-568cf47956-755jp 9999:80 --address=0.0.0.0
Forwarding from 0.0.0.0:9999 -> 80


	3. Test and verify
[root@worker233 ~]# curl  http://10.0.0.231:9999/
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8"/>
    <title>cmy apps v1</title>
    <style>
       div img {
          width: 900px;
          height: 600px;
          margin: 0;
       }
    </style>
  </head>

  <body>
    <h1 style="color: green">test v1 </h1>
    <div>
      <img src="1.jpg">
    <div>
  </body>

</html>
[root@worker233 ~]# 
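
port-forward can also target a Service instead of a single Pod; kubectl then picks one of the Service's backing Pods to forward to (assuming the svc-test Service from section 1.4 still exists):

kubectl port-forward svc/svc-test 9999:90 --address=0.0.0.0
# from another host: curl http://10.0.0.231:9999/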

