Contents
I. Introduction to PureLB
1. Overview
2. Characteristics of PureLB's layer2 mode
II. Layer2 configuration walkthrough
1. Prepare the IPVS and ARP environment
2. Deploying PureLB
(1) Download and apply purelb-complete.yaml
(2) Check that the expected resources are created and running
(3) Configure the address pool
3. Testing PureLB
(1) Create a Deployment and Service and test access from the host
(2) Browser test
4. Uninstalling PureLB
I. Introduction to PureLB
1. Overview
PureLB is a Service load-balancer controller for Kubernetes: it allocates external IP addresses for LoadBalancer Services and announces them to the network, so that incoming requests are distributed effectively across the backend pods.
2. Characteristics of PureLB's layer2 mode
PureLB creates a virtual interface named kube-lb0 on every managed node in the cluster, which makes the cluster's LoadBalancer VIPs visible on the node itself; because the VIPs live on a real interface, any routing protocol can be layered on top to achieve ECMP (equal-cost multi-path, i.e. load balancing and traffic distribution across multiple paths with the same cost).
In layer2 mode PureLB also elects an announcing node per VIP, spreading different VIPs across different nodes. This keeps traffic roughly balanced and avoids overloading, and potentially taking down, any single node.
II. Layer2 configuration walkthrough
1. Prepare the IPVS and ARP environment
(1) Enable IPVS with strict ARP: edit the kube-proxy ConfigMap, changing mode to "ipvs" and strictARP to true
[root@k8s-master service]# kubectl edit configmap kube-proxy -n kube-system
configmap/kube-proxy edited
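After saving, the relevant fragment of the kube-proxy configuration inside the ConfigMap should look like this (only these two settings change):

```yaml
# fragment of config.conf in the kube-proxy ConfigMap after editing
mode: "ipvs"
ipvs:
  strictARP: true
```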
(2) Restart kube-proxy so the change takes effect, then verify
[root@k8s-master service]# kubectl rollout restart ds kube-proxy -n kube-system
daemonset.apps/kube-proxy restarted
[root@k8s-master service]# kubectl get configmap -n kube-system kube-proxy -o yaml | grep strictARP
      strictARP: true
[root@k8s-master service]# kubectl get configmap -n kube-system kube-proxy -o yaml | grep mode
    mode: "ipvs"
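Grepping the ConfigMap only proves the desired state; to confirm the restarted kube-proxy pods actually switched modes, you can also query kube-proxy's metrics endpoint on any node (on recent kube-proxy versions it listens on 127.0.0.1:10249 by default):

```shell
# run on a cluster node; prints the proxier mode kube-proxy is actually using
curl -s 127.0.0.1:10249/proxyMode
# expected: ipvs
```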
(3) Once PureLB's node agent is deployed (next step), each managed (node) host gains a newly created kube-lb0 virtual interface, visible in the output of ip a:
7: kube-lb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 12:00:b5:78:88:25 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::1000:b5ff:fe78:8825/64 scope link
       valid_lft forever preferred_lft forever
2. Deploying PureLB
(1) Download and apply purelb-complete.yaml
Alternative download: Baidu Netdisk, extraction code: epbx
Because the manifest creates an LBNodeAgent custom resource in the same file that defines its CRD, the first apply fails before the CRD is registered; applying the file a second time succeeds.
[root@k8s-master service]# wget https://gitlab.com/api/v4/projects/purelb%2Fpurelb/packages/generic/manifest/0.0.1/purelb-complete.yaml
# no changes are needed inside the file
[root@k8s-master service]# kubectl apply -f purelb-complete.yaml
namespace/purelb created
customresourcedefinition.apiextensions.k8s.io/lbnodeagents.purelb.io created
customresourcedefinition.apiextensions.k8s.io/servicegroups.purelb.io created
serviceaccount/allocator created
serviceaccount/lbnodeagent created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/purelb:allocator created
clusterrole.rbac.authorization.k8s.io/purelb:lbnodeagent created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/purelb:allocator created
clusterrolebinding.rbac.authorization.k8s.io/purelb:lbnodeagent created
deployment.apps/allocator created
daemonset.apps/lbnodeagent created
error: resource mapping not found for name: "default" namespace: "purelb" from "purelb-complete.yaml": no matches for kind "LBNodeAgent" in version "purelb.io/v1"
ensure CRDs are installed first
[root@k8s-master service]# kubectl apply -f purelb-complete.yaml
namespace/purelb unchanged   # the purelb namespace was created on the first apply
customresourcedefinition.apiextensions.k8s.io/lbnodeagents.purelb.io configured
customresourcedefinition.apiextensions.k8s.io/servicegroups.purelb.io configured
serviceaccount/allocator unchanged
serviceaccount/lbnodeagent unchanged
role.rbac.authorization.k8s.io/pod-lister unchanged
clusterrole.rbac.authorization.k8s.io/purelb:allocator unchanged
clusterrole.rbac.authorization.k8s.io/purelb:lbnodeagent unchanged
rolebinding.rbac.authorization.k8s.io/pod-lister unchanged
clusterrolebinding.rbac.authorization.k8s.io/purelb:allocator unchanged
clusterrolebinding.rbac.authorization.k8s.io/purelb:lbnodeagent unchanged
deployment.apps/allocator unchanged
daemonset.apps/lbnodeagent unchanged
lbnodeagent.purelb.io/default created
(2) Check that the expected resources are created and running
[root@k8s-master service]# kubectl get deployments.apps,ds -n purelb
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/allocator   1/1     1            1           2m6s

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/lbnodeagent   2         2         2       2            2           kubernetes.io/os=linux   2m6s

[root@k8s-master service]# kubectl get crd | grep purelb
lbnodeagents.purelb.io                                2023-12-04T08:18:07Z
servicegroups.purelb.io                               2023-12-04T08:18:07Z

[root@k8s-master service]# kubectl api-resources | grep purelb.io   # note the API version here; we need it when creating the address pool
lbnodeagents    lbna,lbnas   purelb.io/v1   true   LBNodeAgent
servicegroups   sg,sgs       purelb.io/v1   true   ServiceGroup
(3) Configure the address pool
Here we use the unused addresses 192.168.2.11 through 192.168.2.19 in the hosts' 192.168.2.0/24 subnet.
[root@k8s-master service]# cat pure-ip-pool.yaml
apiVersion: purelb.io/v1      # the API version we looked up above
kind: ServiceGroup            # resource kind is ServiceGroup
metadata:
  name: my-purelb-ip-pool     # referenced later by the Service's annotation
  namespace: purelb
spec:
  local:                      # local (layer2) configuration
    v4pool:                   # IPv4 address pool
      subnet: "192.168.2.0/24"            # subnet: same network segment as the hosts
      pool: "192.168.2.11-192.168.2.19"   # range of unused addresses to allocate from
      aggregation: /32

[root@k8s-master service]# kubectl apply -f pure-ip-pool.yaml
servicegroup.purelb.io/my-purelb-ip-pool created
[root@k8s-master service]# kubectl get sg -n purelb -o wide
NAME                AGE
my-purelb-ip-pool   22s
[root@k8s-master service]# kubectl describe sg my-purelb-ip-pool -n purelb
Name:         my-purelb-ip-pool
Namespace:    purelb
Labels:       <none>
Annotations:  <none>
API Version:  purelb.io/v1
Kind:         ServiceGroup
Metadata:
  Creation Timestamp:  2023-12-04T08:29:55Z
  Generation:          1
  Resource Version:    2676
  UID:                 6b564a29-2c6d-4a26-b5df-05aa253595f1
Spec:
  Local:
    v4pool:
      Aggregation:  /32
      Pool:         192.168.2.11-192.168.2.19
      Subnet:       192.168.2.0/24
Events:
  Type    Reason  Age  From              Message
  ----    ------  ---  ----              -------
  Normal  Parsed  54s  purelb-allocator  ServiceGroup parsed successfully
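The pool parsed above gives the allocator nine candidate VIPs; as a quick sanity check, the range expands to:

```shell
# expand the last-octet range of pool 192.168.2.11-192.168.2.19
for i in $(seq 11 19); do
  echo "192.168.2.$i"
done
# prints nine addresses, 192.168.2.11 through 192.168.2.19
```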
3. Testing PureLB
(1) Create a Deployment and Service and test access from the host
There are several things to note when creating the Service; see the comments below.
[root@k8s-master service]# vim service2.yaml
[root@k8s-master service]# cat service2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: my-nginx
  name: my-nginx
  namespace: myns
spec:
  replicas: 3
  selector:
    matchLabels:
      name: my-nginx-deploy
  template:
    metadata:
      labels:
        name: my-nginx-deploy
    spec:
      containers:
      - name: my-nginx-pod
        image: nginx
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-service
  namespace: myns
  annotations:    # as with openelb, an annotation must point at the address pool we created
    purelb.io/service-group: my-purelb-ip-pool
spec:
  allocateLoadBalancerNodePorts: false   # whether to also allocate NodePorts for this load-balancer Service; defaults to true (auto-allocate), false means no NodePorts
  externalTrafficPolicy: Cluster         # external traffic policy: Cluster distributes external traffic to all nodes; the other option, Local, keeps it on the receiving node
  internalTrafficPolicy: Cluster         # internal traffic policy: Cluster routes cluster-internal traffic to endpoints on any node; Local restricts it to the local node
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: my-nginx-deploy
  type: LoadBalancer    # the Service type must be LoadBalancer

[root@k8s-master service]# kubectl apply -f service2.yaml
deployment.apps/my-nginx unchanged
service/my-nginx-service created
[root@k8s-master service]# kubectl get service -n myns
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)   AGE
my-nginx-service   LoadBalancer   10.105.214.92   192.168.2.11   80/TCP    12s
[root@k8s-master service]# kubectl get pods -n myns
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-5d67c8f488-9lxdm   1/1     Running   0          73s
my-nginx-5d67c8f488-mxksb   1/1     Running   0          73s
my-nginx-5d67c8f488-nr6pb   1/1     Running   0          73s
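The EXTERNAL-IP 192.168.2.11 comes straight from our my-purelb-ip-pool ServiceGroup. In layer2 mode PureLB adds the VIP to an interface on the node elected to announce it, so you can locate the announcing node with a quick check like this (run on each node; interface names will vary):

```shell
# exactly one node should report the VIP on one of its interfaces
ip addr | grep -w 192.168.2.11
```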
[root@k8s-master service]# kubectl exec -it my-nginx-5d67c8f488-9lxdm -n myns -- /bin/sh -c "echo pod1 > /usr/share/nginx/html/index.html"
[root@k8s-master service]# kubectl exec -it my-nginx-5d67c8f488-mxksb -n myns -- /bin/sh -c "echo pod2 > /usr/share/nginx/html/index.html"
[root@k8s-master service]# kubectl exec -it my-nginx-5d67c8f488-nr6pb -n myns -- /bin/sh -c "echo pod3 > /usr/share/nginx/html/index.html"
[root@k8s-master service]# curl 192.168.2.11   # load balancing in action
pod3
[root@k8s-master service]# curl 192.168.2.11
pod2
[root@k8s-master service]# curl 192.168.2.11
pod1
[root@k8s-master service]# curl 192.168.2.11
pod3
[root@k8s-master service]# curl 192.168.2.11
pod2
[root@k8s-master service]# curl 192.168.2.11
pod1
[root@k8s-master service]# curl 192.168.2.11
pod3
[root@k8s-master service]# curl 192.168.2.11
pod2
[root@k8s-master service]# curl 192.168.2.11
pod1
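The strict pod3 → pod2 → pod1 rotation is the IPVS round-robin scheduler at work. If the ipvsadm tool is installed on a node (an assumption; e.g. via yum install -y ipvsadm), you can inspect the virtual server entry behind the VIP and its real (pod) backends:

```shell
# list the IPVS virtual service for the VIP; each backend pod appears as a real server
ipvsadm -Ln -t 192.168.2.11:80
```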
(2) Browser test
(screenshot omitted: browsing to http://192.168.2.11 returns the pod1/pod2/pod3 pages in rotation)
4. Uninstalling PureLB
Uninstall by deleting the same manifests with kubectl delete -f, in the reverse order of creation:
[root@k8s-master service]# kubectl delete -f service2.yaml
[root@k8s-master service]# kubectl delete -f pure-ip-pool.yaml
[root@k8s-master service]# kubectl delete -f purelb-complete.yaml