
Table of Contents

Project Name

Project Architecture Diagram

Project Environment

Project Overview

Project Preparation

Project Steps

I. Configure each host's IP address, permanently disable the firewall and SELinux, set the hostnames, and enable routing and an SNAT policy on the firewalld server.

1. Configure the IP address on the firewalld server, permanently disable the firewall and SELinux, and set the hostname

2. Enable routing on the firewalld server and configure an SNAT policy so the internal servers can reach the Internet

3. Configure the IP addresses of the remaining servers, permanently disable the firewall and SELinux, and set their hostnames

II. Deploy the docker + k8s environment: a k8s cluster with 1 master and 2 nodes

1. Install docker on the 3 k8s cluster servers, following the official documentation

2. Create the k8s cluster, installed here with kubeadm

2.1 Confirm docker is installed, start it, and enable it at boot

2.2 Configure docker to use systemd as the default cgroup driver

2.3 Disable the swap partition

2.4 Modify the hosts file and the kernel parameter file

2.5 Install kubeadm, kubelet, and kubectl

2.6 Deploy the Kubernetes master

2.7 Join the node servers to the k8s cluster

2.8 Install the flannel network plugin

2.9 Check the cluster status

III. Compile and install nginx, build a custom image, and push it to Docker Hub for the nodes to pull

1. Create a one-click nginx install script on the master

2. Create a Dockerfile

3. Build the image

4. Push the image to Docker Hub for the nodes to download

5. Pull the image from Docker Hub on the nodes

IV. Create an NFS server that provides the same web data to all nodes, combine PV + PVC with volume mounts to keep the data consistent, and use probes to check the state of the containers in the pods

1. Deploy the NFS server environment with ansible

1.1 Set up passwordless SSH from the ansible server to the k8s cluster and the NFS server

1.2 Install the ansible automation tool on the ansible server and write the host inventory

1.3 Write the NFS install scripts

1.4 Write a playbook to install and deploy NFS

1.5 Check the YAML file syntax

1.6 Run the YAML file

1.7 Verify that NFS installed successfully

2. Mount the web data pages into the containers and use probes to check the container state

2.1 Create the web page data files

2.1.1 First create the shared web page data files on the NFS server

2.2 Create the nginx.conf configuration file

2.2.1 First install nginx on the NFS server with the earlier one-click compile-and-install script, to obtain an nginx.conf configuration file

2.2.2 Modify nginx.conf to add the location blocks for the readiness and liveness probes

2.3 Edit the /etc/exports file and make it take effect

2.4 Mount the web page data files

2.4.1 Create a PV on the master server

2.4.2 Create a PVC on the master server to consume the PV

2.5 Mount the nginx.conf configuration file

2.5.1 Create a PV on the master server

2.5.2 Create a PVC on the master server to consume the PV

2.6 Create pods on the master server that use the PVCs

2.7 Create a Service to publish the pods

2.8 Configure a DNAT policy on the firewalld server to publish the web service

2.9 Test access

V. Use HPA: when CPU usage reaches 40%, the pods scale horizontally and automatically, with a minimum of 10 and a maximum of 20 pods

1. Install the metrics service

2. Configure the HPA so pods scale automatically on CPU usage, minimum 10 and maximum 20 pods

2.1 Add resource requests to the original deployment YAML file

2.2 Create the HPA

3. Stress-test the cluster

3.1 Install ab on another machine

3.2 Run an ab stress test against the cluster

4. Check the HPA effect and observe the changes

5. Observe cluster performance

6. Optimize the whole web cluster

VI. Use an Ingress object with an ingress controller to load-balance the web traffic

1. Deploy the ingress environment with ansible

1.1 Copy the configuration files needed for the ingress controller to the ansible server

1.2 Write the script that pulls the ingress images

1.3 Write a playbook to install and deploy the ingress controller

1.4 Check that it succeeded

2. Apply the ingress-controller-deploy.yaml file to start the ingress controller

3. Enable an Ingress linking the ingress controller and the Services

3.1 Write the Ingress YAML file

3.2 Apply the file

3.3 Check the result

3.4 Check whether nginx.conf inside the ingress controller contains the rules from the Ingress

4. Test access

4.1 Get the host ports exposed by the ingress controller's Service

4.2 Access by domain name from another host or a Windows machine

4.2.1 Modify the hosts file

4.2.2 Test access

5. Start the second Service and pods

6. Test again and check whether www.xin.com is reachable

VII. Deploy Prometheus in the k8s cluster to monitor the web service, and display the data with Grafana

1. Set up Prometheus to monitor the k8s cluster

1.1 Deploy node-exporter as a DaemonSet

1.2 Deploy Prometheus

1.3 Test

2. Set up Grafana with Prometheus for dashboards

2.1 Deploy Grafana

2.2 Test

2.2.1 Add the Prometheus data source

2.2.2 Import a dashboard template

2.3 Dashboard results

VIII. Build the CI/CD environment: integrate GitLab with Jenkins and Harbor in a pipeline that automatically pulls code, builds images, and pushes images

1. Deploy the GitLab environment

1.1 Install GitLab

1.1.1 Set up the GitLab yum repo (install GitLab from the Tsinghua mirror)

1.1.2 Install GitLab

1.1.3 Configure the GitLab site URL

1.2 Start and access GitLab

1.2.1 Reconfigure and start

1.2.2 Configure a DNAT policy on the firewalld server so Windows can access it

1.2.3 Access from Windows

1.2.4 Configure the default access password

1.2.5 Log in

1.3 Configure login with a self-created user

2. Deploy the Jenkins environment

2.1 Download the generic Java WAR package from the official site; the LTS long-term-support version is recommended

2.2 Download and install Java (JDK 11 or later), then configure the JDK environment variables

2.2.1 Install with yum

2.2.2 Find the Java install directory

2.2.3 Configure environment variables

2.3 Upload the downloaded jenkins.war to the server

2.4 Start the Jenkins service

2.5 Test access

3. Deploy the Harbor environment

3.1 Install docker and docker-compose

3.1.1 Install docker

3.1.2 Install docker-compose

3.2 Install Harbor

3.2.1 Download the Harbor release and upload it to the Linux server

3.2.2 Unpack it and edit the contents

3.3 Log in to Harbor

4. Integrate GitLab with Jenkins and Harbor to build a pipeline task that pulls code, builds images, and pushes images

4.1 The Jenkins server needs docker installed and configured to log in to the Harbor service to pull images

4.1.1 Install docker on the Jenkins server

4.1.2 Configure the Jenkins server to log in to the Harbor service

4.1.3 Test the login

4.2 Install git on Jenkins

4.3 Install Maven on Jenkins

4.3.1 Download the package

4.3.2 Unpack the downloaded package

4.3.3 Configure environment variables

4.3.4 Verify mvn

4.4 Create a test project in GitLab

4.5 Create a dev project in Harbor

4.6 Configure the JDK and Maven in the Jenkins UI

4.7 Create a pipeline task in the Jenkins view

4.7.1 A pipeline task needs a pipeline script; the first step of the script is pulling the project from GitLab

4.7.2 Write the pipeline

5. Verify

IX. Deploy a jump server to restrict user access to the internal network

1. Configure a DNAT policy on firewalld so users who SSH to the firewalld server are automatically forwarded to the jump server

2. Configure the jump server to allow SSH only from the 192.168.31.0/24 network

3. Set up passwordless SSH from the jump server to all other internal servers

4. Verify

X. Install Zabbix to monitor all the servers: CPU, memory, network bandwidth, and so on

XI. Use ab to stress-test the whole k8s cluster and the related servers

1. Install ab

2. Test

Problems Encountered

1. After rebooting, Xshell could no longer connect to any server except the firewalld server

2. Pods would not start: the PVC-to-PV binding failed because the storageClassName in the PVC and PV YAML files did not match

3. When testing access, the content returned was not what had been set: the web data file mount failed even though the nginx.conf mount succeeded

4. The last pipeline step reported an error

5. The last pipeline step failed to log in to Harbor

Project Takeaways


Project Name

A comprehensive project: publishing an internal k8s cluster via SNAT + DNAT, with Jenkins + GitLab + Harbor simulating CI/CD

Project Architecture Diagram

Project Environment

centos 7.9

docker 24.0.5

docker compose 2.7.0

kubelet 1.23.6

kubeadm 1.23.6

kubectl 1.23.6

nginx 1.21.1

ansible

ingress

prometheus

grafana

zabbix

gitlab

jenkins

harbor

ab

Project Overview

Project name: a comprehensive project publishing an internal k8s cluster via SNAT + DNAT, with Jenkins + GitLab + Harbor simulating CI/CD

Project environment: centos 7.9 (11 machines: 3 k8s cluster servers with 2 cores/2 GB, 1 GitLab server with 4 cores/8 GB, 7 servers with 1 core/1 GB), docker 24.0.5, nginx 1.21.1, prometheus, grafana, gitlab, Jenkins, Harbor, zabbix, ansible, etc.

Project description: This project simulates a company's production environment. Internal services are published via SNAT + DNAT, a jump server restricts user access to the internal network, and web, nfs, ansible, harbor, zabbix, gitlab, and jenkins environments are deployed. A highly available, high-performance web cluster is built on docker + k8s, the cluster's resources are monitored and graphed with prometheus + grafana inside k8s, and a CI/CD flow is simulated to get a real feel for the highly continuous automation of application delivery.

Project steps:

  1. Plan the overall cluster architecture, deploy the firewall server, enable routing and configure the SNAT policy, and deploy the web cluster with k8s (1 master, 2 nodes)
  2. Compile and install nginx and build a custom image for the servers inside the web cluster to use
  3. Deploy NFS to provide the same data to all web cluster nodes, combine PV + PVC + NFS volume mounts to keep the data consistent, use probes (readiness and liveness) to check container state, and configure a DNAT policy so outside users can reach the web cluster's data
  4. Use HPA: when CPU usage reaches 40%, pods scale horizontally and automatically, minimum 10 and maximum 20 pods
  5. Use an Ingress object with an ingress controller to give the web service domain-based load balancing
  6. Deploy Prometheus in the k8s web cluster to monitor the web service, with Grafana as the dashboard tool
  7. Build the CI/CD environment: integrate GitLab with Jenkins and Harbor in a pipeline that automatically pulls code, builds images, and pushes images
  8. Deploy a jump server to restrict user access to the internal network
  9. Use Zabbix to monitor all servers outside the web cluster: CPU, memory, network bandwidth, and so on
  10. Stress-test the whole cluster with ab to find the system's resource bottlenecks

Project takeaways:

Planning the whole cluster architecture with a network topology diagram improved the project's execution and efficiency. I became more familiar with using k8s and deploying clusters, and gained a deeper understanding of both monitoring approaches, prometheus + grafana and zabbix. Building a pipeline by integrating GitLab with Jenkins and Harbor gave me a concrete feel for the continuous automation of the CI/CD flow. Reading logs helped enormously with debugging and sharpened my troubleshooting skills.

Project Preparation

11 Linux servers, all using bridged networking (the firewalld server needs two NICs). Configure the IP addresses, set the hostnames, and disable the firewall and SELinux (including at boot) in advance, so nothing slows the project down later.

IP address                        Role
192.168.31.69, 192.168.107.10    firewalld (firewall server)
192.168.107.11                   master
192.168.107.12                   node1
192.168.107.13                   node2
192.168.107.14                   jump_server (jump host)
192.168.107.15                   nfs
192.168.107.16                   zabbix
192.168.107.17                   gitlab
192.168.107.18                   jenkins
192.168.107.19                   harbor
192.168.107.20                   ansible

Project Steps

I. Configure each host's IP address, permanently disable the firewall and SELinux, set the hostnames, and enable routing and an SNAT policy on the firewalld server.

Configure each host's IP address and hostname. Every host in this project uses bridged networking. Note that firewalld has two NICs and needs two IP addresses configured.

1. Configure the IP address on the firewalld server, permanently disable the firewall and SELinux, and set the hostname

The inline comments are only hints; it is best to remove them when actually configuring

[root@fiewalld ~]# cd /etc/sysconfig/network-scripts
[root@fiewalld network-scripts]# ls
ifcfg-ens33  ifdown       ifdown-ippp  ifdown-post    ifdown-sit       ifdown-tunnel  ifup-bnep  ifup-ipv6  ifup-plusb  ifup-routes  ifup-TeamPort  init.ipv6-global ifdown-bnep  ifdown-ipv6  ifdown-ppp     ifdown-Team      ifup           ifup-eth   ifup-isdn  ifup-post   ifup-sit     ifup-tunnel    network-functions
ifcfg-lo     ifdown-eth   ifdown-isdn  ifdown-routes  ifdown-TeamPort  ifup-aliases   ifup-ippp  ifup-plip  ifup-ppp    ifup-Team    ifup-wireless  network-functions-ipv6
[root@fiewalld network-scripts]# vi ifcfg-ens33
BOOTPROTO="none"  #changed from dhcp to none: make the IP static so a later IP change doesn't break the setup
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.31.69   #WAN-side IP address
PREFIX=24
GATEWAY=192.168.31.1
DNS1=114.114.114.114

Then configure the IP address of this machine's second NIC

First copy ifcfg-ens33 to ifcfg-ens36 in the same path and edit the contents as below (the LAN interface needs no gateway or DNS)

[root@fiewalld network-scripts]# cp ifcfg-ens33 ifcfg-ens36
[root@fiewalld network-scripts]# ls
ifcfg-ens33  ifdown       ifdown-ippp  ifdown-post    ifdown-sit       ifdown-tunnel  ifup-bnep  ifup-ipv6  ifup-plusb  ifup-routes  ifup-TeamPort  init.ipv6-global
ifcfg-ens36  ifdown-bnep  ifdown-ipv6  ifdown-ppp     ifdown-Team      ifup           ifup-eth   ifup-isdn  ifup-post   ifup-sit     ifup-tunnel    network-functions
ifcfg-lo     ifdown-eth   ifdown-isdn  ifdown-routes  ifdown-TeamPort  ifup-aliases   ifup-ippp  ifup-plip  ifup-ppp    ifup-Team    ifup-wireless  network-functions-ipv6
[root@fiewalld network-scripts]# vi ifcfg-ens36
BOOTPROTO="none"
NAME="ens36"
DEVICE="ens36"
ONBOOT="yes"
IPADDR=192.168.107.10    #LAN-side IP address
PREFIX=24

Then restart the network

[root@fiewalld network-scripts]# service network restart

Check whether the new IP addresses took effect

As shown, the IP addresses were configured successfully!

Permanently disable the firewall and SELinux

[root@fiewalld ~]# systemctl disable firewalld  #permanently disable the firewall
[root@fiewalld ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled     #modify this line
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Modify the hostname

[root@fiewalld ~]# hostnamectl set-hostname firewalld
[root@fiewalld ~]# su - root

2. Enable routing on the firewalld server and configure an SNAT policy so the internal servers can reach the Internet

Write a script and run it

[root@fiewalld ~]# vim snat_dnat.sh
#!/bin/bash
iptables -F
iptables -t nat -F

# enable route: turn on routing
echo 1 >/proc/sys/net/ipv4/ip_forward

# enable snat: let hosts on the 192.168.107.0/24 network reach the Internet through the WAN interface
iptables -t nat -A POSTROUTING -s 192.168.107.0/24 -o ens33 -j SNAT --to-source  192.168.31.69    

Run the script

[root@fiewalld ~]# bash snat_dnat.sh

Check whether it worked

[root@fiewalld ~]# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
SNAT       all  --  192.168.107.0/24     0.0.0.0/0            to:192.168.31.69
#this rule being present means the setup succeeded
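One caveat: rules written with iptables this way live only in kernel memory and are lost on reboot, as is the /proc setting. A minimal sketch of persisting them on CentOS 7, assuming the stock iptables-services package:

[root@fiewalld ~]# yum install -y iptables-services   #provides rule save/restore at boot
[root@fiewalld ~]# iptables-save > /etc/sysconfig/iptables   #dump the in-memory rules to the rules file
[root@fiewalld ~]# systemctl enable iptables   #restore the saved rules at boot
[root@fiewalld ~]# echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf   #make ip_forward permanent too
[root@fiewalld ~]# sysctl -p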

3. Configure the IP addresses of the remaining servers, permanently disable the firewall and SELinux, and set their hostnames

One server is shown here as an example

[root@nfs ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="none"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.107.15
PREFIX=24
GATEWAY=192.168.107.10    #note: the gateway is the firewalld server's LAN IP, since traffic goes out through it
DNS1=114.114.114.114

Then restart the network

[root@nfs ~]# service network restart

Check whether the new IP address took effect

As shown, the IP address has been modified!

Test whether the server can reach the Internet

As shown, the SNAT policy on the firewalld server works and the internal server can now reach the Internet.

Permanently disable the firewall and SELinux

[root@nfs ~]# systemctl disable firewalld  #permanently disable the firewall
[root@nfs ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled     #modify this line
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Modify the hostname

[root@nfs ~]# hostnamectl set-hostname nfs
[root@nfs ~]# su - root

II. Deploy the docker + k8s environment: a k8s cluster with 1 master and 2 nodes

1. Install docker on the 3 k8s cluster servers, following the official documentation

[root@master ~]# yum remove docker \
>                   docker-client \
>                   docker-client-latest \
>                   docker-common \
>                   docker-latest \
>                   docker-latest-logrotate \
>                   docker-logrotate \
>                   docker-engine
[root@master ~]# yum install -y yum-utils
[root@master ~]# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
[root@master ~]# yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
[root@master ~]# systemctl start docker   #start docker
[root@master ~]# docker --version  #check that docker installed successfully
Docker version 24.0.5, build ced0996

2. Create the k8s cluster, installed here with kubeadm

2.1 Confirm docker is installed, start it, and enable it at boot

[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker
[root@master ~]# ps aux|grep docker
root      2190  1.4  1.5 1159376 59744 ?       Ssl  16:22   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root      2387  0.0  0.0 112824   984 pts/0    S+   16:22   0:00 grep --color=auto docker

2.2 Configure docker to use systemd as the default cgroup driver

Run this on every server, master and nodes alike

[root@master ~]# cat <<EOF > /etc/docker/daemon.json
> {
>    "exec-opts": ["native.cgroupdriver=systemd"]
> }
> EOF 
[root@master ~]# systemctl restart docker   #restart docker

2.3 Disable the swap partition

k8s does not want to store data in swap: using swap degrades performance. Run this on every server

[root@master ~]# swapoff -a   #disable temporarily
[root@master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab   #disable permanently
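A quick sanity check (just a sketch; not strictly required) that swap really is off before running kubeadm:

[root@master ~]# free -m   #the Swap line should show 0 total
[root@master ~]# swapon --show   #prints nothing when no swap device is active
[root@master ~]# grep swap /etc/fstab   #the swap entry should now be commented out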

2.4 Modify the hosts file and the kernel parameter file

The /etc/hosts file must be modified on every machine

[root@master ~]# cat >> /etc/hosts << EOF 
> 192.168.107.11 master
> 192.168.107.12 node1
> 192.168.107.13 node2
> EOF

Make this change on every machine (master and nodes), permanently

[root@master ~]# cat <<EOF >> /etc/sysctl.conf   #append to the parameter file the kernel reads
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
[root@master ~]# sysctl -p   #make the kernel re-read the parameters so they take effect
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0

2.5 Install kubeadm, kubelet, and kubectl

kubeadm is the k8s management program, run on the master, used to build the whole k8s cluster; behind the scenes it executes a large number of scripts that install k8s for us.

kubelet manages containers on the node servers (it tells docker to start containers) and handles the communication between master and nodes.
It is an agent that runs on every node in the cluster and makes sure containers are running in Pods.
kubectl is the command-line program on the master used to give orders to the nodes and control what they do.

Add the kubernetes YUM repo

It must be installed on every server in the cluster

[root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

Install kubeadm, kubelet, and kubectl

[root@master ~]# yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
#pin the version: from 1.24 onward the default container runtime is no longer docker

Enable it at boot: kubelet is the k8s agent on each node and must run from startup

[root@master ~]# systemctl enable  kubelet

2.6 Deploy the Kubernetes master

Run this only on the master host

Prepare the coredns:1.8.4 image in advance; it is needed later and must be pulled on every machine

[root@master ~]#  docker pull  coredns/coredns:1.8.4
[root@master ~]# docker tag coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4

Run the initialization on the master server

[root@master ~]# kubeadm init \
--apiserver-advertise-address=192.168.107.11 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

# 192.168.107.11 is the master's IP

#      --service-cidr string        Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")  services are exposed via DNAT

#      --pod-network-cidr string    Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.

When it succeeds, record the lines below; they are needed later when the node servers join the cluster

kubeadm join 192.168.107.11:6443 --token i25xkd.0xrlqnee2gbky4uv \
	--discovery-token-ca-cert-hash sha256:7384e64dabec0ea4eb9f0b82729aa696f90ae8c8d9f6f7b2c87c33f71c611741

After initialization, create the directory and config file on the master

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

2.7 Join the node servers to the k8s cluster

Test whether node1 can communicate with the master

[root@node1 ~]# ping master
PING master (192.168.107.24) 56(84) bytes of data.
64 bytes from master (192.168.107.24): icmp_seq=1 ttl=64 time=0.765 ms
64 bytes from master (192.168.107.24): icmp_seq=2 ttl=64 time=1.34 ms
^C
--- master ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.765/1.055/1.345/0.290 ms

Run this on all the node servers

[root@node1 ~]# kubeadm join 192.168.107.11:6443 --token i25xkd.0xrlqnee2gbky4uv \
--discovery-token-ca-cert-hash sha256:7384e64dabec0ea4eb9f0b82729aa696f90ae8c8d9f6f7b2c87c33f71c611741

Check on the master whether the nodes have joined the cluster

[root@master ~]# kubectl get node
NAME     STATUS     ROLES                  AGE    VERSION
master   NotReady   control-plane,master   5m2s   v1.23.6
node1    NotReady   <none>                 61s    v1.23.6
node2    NotReady   <none>                 58s    v1.23.6

2.8 Install the flannel network plugin

Run this on the master node

It enables pods on the master and pods on the nodes to communicate with each other

Copy the flannel file to the master host

Deploy flannel


[root@master ~]# kubectl apply -f kube-flannel.yml  #apply
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

2.9 Check the cluster status

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   9m49s   v1.23.6
node1    Ready    <none>                 5m48s   v1.23.6
node2    Ready    <none>                 5m45s   v1.23.6

This can take a while; once every node shows the Ready status, the k8s environment is up!

III. Compile and install nginx, build a custom image, and push it to Docker Hub for the nodes to pull

1. Create a one-click nginx install script on the master

[root@master ~]# mkdir /nginx
[root@master ~]# cd /nginx
[root@master nginx]# vim onekey_install_nginx.sh 
#!/bin/bash
# resolve dependencies: install the required packages
yum -y install zlib zlib-devel openssl openssl-devel pcre pcre-devel gcc gcc-c++ autoconf automake make psmisc net-tools lsof vim wget
# download nginx
mkdir /nginx
cd /nginx
curl -O http://nginx.org/download/nginx-1.21.1.tar.gz
# unpack
tar xf nginx-1.21.1.tar.gz
# enter the unpacked directory
cd nginx-1.21.1
# configure before compiling
./configure --prefix=/usr/local/nginx1 --with-http_ssl_module --with-threads --with-http_v2_module --with-http_stub_status_module --with-stream
# compile
make -j 2
# install
make install

2. Create a Dockerfile

[root@master nginx]# vim Dockerfile 
FROM centos:7                #base image
ENV NGINX_VERSION 1.21.1     #assign the value 1.21.1 to the NGINX_VERSION variable
ENV AUTHOR zhouxin           #author zhouxin
LABEL maintainer="cali<695811769@qq.com>"    #label
RUN mkdir /nginx             #command run inside the container
WORKDIR /nginx               #directory you land in when entering the container
COPY . /nginx                #copy files or folders from the host into /nginx in the container
RUN set -ex; \               #run commands in the container
    bash onekey_install_nginx.sh; \          #run the one-click nginx install script
    yum install vim iputils net-tools iproute -y      #install some tools
EXPOSE 80          #declare the exposed port
ENV PATH=/usr/local/nginx1/sbin:$PATH        #define the environment variable
STOPSIGNAL SIGQUIT           #signal used to stop the container
CMD ["nginx","-g","daemon off;"]    #start nginx in the foreground; -g "daemon off;" assigns off to the daemon directive, telling nginx not to daemonize (it runs in the background by default)

3. Build the image

[root@master nginx]# docker build -t zhouxin_nginx:1.0 .

Check the image

4. Push the image to Docker Hub for the nodes to download

The image is pushed to my Docker Hub repository so the other 2 node servers can use it. First create a Docker Hub account and a repository there; I have already created the zhouxin03/nginx repository

Tag the image on the master

[root@master nginx]# docker tag zhouxin_nginx:1.0 zhouxin03/nginx

Log in to Docker Hub

[root@master nginx]# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: zhouxin03
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Then push it to the Docker Hub repository

[root@master nginx]# docker push zhouxin03/nginx
Using default tag: latest
The push refers to repository [docker.io/zhouxin03/nginx]
52bbda705d25: Pushed 
41e872683328: Pushed 
5f70bf18a086: Pushed 
5376459cbb05: Pushed 
174f56854903: Mounted from library/centos 
latest: digest: sha256:39801c440d239b8fec21fda5a750b38f96d64a13eef695c9394ffe244c5034a6 size: 1362

Now check the image on Docker Hub

The image has been pushed to Docker Hub

5. Pull the image from Docker Hub on the nodes

[root@node1 ~]# docker pull zhouxin03/nginx:latest  #pull the image
latest: Pulling from zhouxin03/nginx
2d473b07cdd5: Pull complete 
63fe9f4e3ea7: Pull complete 
4f4fb700ef54: Pull complete 
947ca89e3d17: Pull complete 
0d4cea36d8fd: Pull complete 
Digest: sha256:39801c440d239b8fec21fda5a750b38f96d64a13eef695c9394ffe244c5034a6
Status: Downloaded newer image for zhouxin03/nginx:latest
docker.io/zhouxin03/nginx:latest
[root@node1 ~]# docker images
REPOSITORY                                           TAG       IMAGE ID       CREATED          SIZE
zhouxin03/nginx                                      latest    31274f1e297c   17 minutes ago   636MB
rancher/mirrored-flannelcni-flannel                  v0.19.2   8b675dda11bb   12 months ago    62.3MB
rancher/mirrored-flannelcni-flannel-cni-plugin       v1.1.0    fcecffc7ad4a   15 months ago    8.09MB
registry.aliyuncs.com/google_containers/kube-proxy   v1.23.6   4c0375452406   16 months ago    112MB
registry.aliyuncs.com/google_containers/coredns      v1.8.6    a4ca41631cc7   23 months ago    46.8MB
registry.aliyuncs.com/google_containers/pause        3.6       6270bb605e12   2 years ago      683kB
coredns/coredns                                      1.8.4     8d147537fb7d   2 years ago      47.6MB
registry.aliyuncs.com/google_containers/coredns      v1.8.4    8d147537fb7d   2 years ago      47.6MB

IV. Create an NFS server that provides the same web data to all nodes, combine PV + PVC with volume mounts to keep the data consistent, and use probes to check the state of the containers in the pods

1. Deploy the NFS server environment with ansible

1.1 Set up passwordless SSH from the ansible server to the k8s cluster and the NFS server

Shown here: setting up the passwordless channel to the NFS server

[root@ansible ~]# ssh-keygen   #generate a key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:GtLchZ2flfBGzV5K3yqXePoIc9f1oT1WUOZzZ0AQdpw root@ansible
The key's randomart image is:
+---[RSA 2048]----+
|            ===+o|
|         o o =E*+|
|        . +  .*=B|
|     o . . . +.oB|
|    . + S   o. +o|
|     . o    o B.=|
|      .   o .*.+o|
|           +.o. .|
|            ...  |
+----[SHA256]-----+

[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.15  # copy the public key to the server that should allow passwordless login
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.107.15 (192.168.107.15)' can't be established.
ECDSA key fingerprint is SHA256:/y4BmyQxo26qq5BDptWmP9KVykKwBX7YrugbGtSwN1Q.
ECDSA key fingerprint is MD5:8e:26:8d:24:1a:35:94:79:3e:b5:5a:1a:d3:9e:99:83.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.107.15's password:   #the first time the key is copied, enter the remote server's login password

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.107.15'"
and check to make sure that only the key(s) you wanted were added.

[root@ansible ~]# ssh root@192.168.107.15  #verify that the passwordless channel works
Last login: Sat Sep  2 16:26:00 2023 from 192.168.31.67
[root@nfs ~]# 

For the other servers, it is enough to copy the ansible server's public key to each of them

[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.11  # copy the public key to master
[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.12  # copy the public key to node1
[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.13  # copy the public key to node2

1.2 Install the ansible automation tool on the ansible server and write the host inventory

[root@ansible ~]# yum install -y epel-release
[root@ansible ~]# yum install ansible -y
[root@ansible ~]# cd /etc/ansible/
[root@ansible ansible]# ls
ansible.cfg  hosts  roles
[root@ansible ansible]# vim hosts 
[nfs]
192.168.107.15  #nfs
[web]
192.168.107.11  #master
192.168.107.12  #node1
192.168.107.13  #node2

1.3 Write the NFS install scripts

On the NFS server, the nfs packages must be installed and the nfs service enabled at boot

[root@ansible ~]# vim nfs_install.sh
yum install -y nfs-utils    #install the nfs packages
systemctl start nfs   #start nfs
systemctl enable nfs  #enable nfs at boot

On the k8s cluster, only the nfs packages need to be installed

[root@ansible ~]# vim web_nfs_install.sh
yum install -y nfs-utils    #install the nfs packages

1.4 Write a playbook to install and deploy NFS

[root@ansible ansible]# vim nfs_install.yaml
- hosts: nfs
  remote_user: root
  tasks:
  - name: install nfs in nfs
    script: /root/nfs_install.sh
- hosts: web
  remote_user: root
  tasks:
  - name: install nfs in web
    script: /root/web_nfs_install.sh

The script module copies a local script to the remote hosts and runs it there
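As a side note, the script module can also be tried ad hoc before writing the playbook; a sketch using the inventory groups defined above:

[root@ansible ~]# ansible all -m ping   #confirm connectivity to every host first
[root@ansible ~]# ansible nfs -m script -a "/root/nfs_install.sh"   #run the local script on the nfs group without a playbook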

1.5 Check the YAML file syntax

[root@ansible ansible]# ansible-playbook --syntax-check /etc/ansible/nfs_install.yaml

playbook: /etc/ansible/nfs_install.yaml

1.6 Run the YAML file

[root@ansible ansible]# ansible-playbook  nfs_install.yaml

1.7 Verify that NFS installed successfully

Check on the nfs server whether the nfsd processes are running

[root@nfs ~]# ps aux|grep nfs
root       1693  0.0  0.0      0     0 ?        S<   17:05   0:00 [nfsd4_callbacks]
root       1699  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1700  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1701  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1702  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1703  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1704  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1705  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1706  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1745  0.0  0.0 112824   976 pts/0    R+   17:06   0:00 grep --color=auto nfs

As shown, NFS has been installed and deployed successfully!

2. Mount the web data pages into the containers and use probes to check the container state

To use probes, the nginx configuration file must be modified. Readiness (readinessProbe) and liveness (livenessProbe) probes are used here, so their location blocks have to be added to the nginx configuration. That means editing nginx.conf on the NFS server first and then mounting that nginx.conf into the containers.

So two things need to be mounted here.

2.1 Create the web page data files

2.1.1 First create the shared web page data files on the NFS server
[root@nfs ~]# mkdir /web
[root@nfs ~]# cd /web
[root@nfs web]# vim index.html
<p>welcome!</p>
<h1>name:zhouxin</h1>
<h1>Hunan Agricultural University</h1>
<h1>age: 20</h1>

2.2 Create the nginx.conf configuration file

2.2.1 First install nginx on the NFS server with the earlier one-click compile-and-install script, to obtain an nginx.conf configuration file

[root@nfs nginx]# vim onekey_install_nginx.sh 
#!/bin/bash
# resolve dependencies: install the required packages
yum -y install zlib zlib-devel openssl openssl-devel pcre pcre-devel gcc gcc-c++ autoconf automake make psmisc net-tools lsof vim wget
# download nginx
mkdir /nginx
cd /nginx
curl -O http://nginx.org/download/nginx-1.21.1.tar.gz
# unpack
tar xf nginx-1.21.1.tar.gz
# enter the unpacked directory
cd nginx-1.21.1
# configure before compiling
./configure --prefix=/usr/local/nginx1 --with-http_ssl_module --with-threads --with-http_v2_module --with-http_stub_status_module --with-stream
# compile
make -j 2
# install
make install
[root@nfs nginx]# bash onekey_install_nginx.sh  #run the script
2.2.2 Modify nginx.conf to add the location blocks for the readiness and liveness probes
[root@nfs ~]# cd /usr/local
[root@nfs local]# ls
bin  etc  games  include  lib  lib64  libexec  nginx1  sbin  share  src
[root@nfs local]# cd nginx1
[root@nfs nginx1]# ls
conf  html  logs  sbin
[root@nfs nginx1]# cd conf
[root@nfs conf]# ls
fastcgi.conf          fastcgi_params          koi-utf  mime.types          nginx.conf          scgi_params          uwsgi_params          win-utf
fastcgi.conf.default  fastcgi_params.default  koi-win  mime.types.default  nginx.conf.default  scgi_params.default  uwsgi_params.default
[root@nfs conf]# vim nginx.conf

Add this inside the server block of the http section

location /healthz {
    access_log off;
    return 200 'ok';
}

location /isalive {
    access_log off;
    return 200 'ok';
}

For example:

2.3 Edit the /etc/exports file and make it take effect

[root@nfs web]# vim /etc/exports
/web 192.168.107.0/24 (rw,sync,all_squash)
/usr/local/nginx1/conf 192.168.107.0/24 (rw,sync,all_squash)

/web is the path of the folder being shared; use an absolute path
192.168.107.0/24 is the network of client machines allowed to access it
(rw,all_squash,sync) are the permission options:
      rw: read and write
      ro: read-only
      all_squash: any user coming from any client is treated as an ordinary (anonymous) user
      root_squash: when an NFS client connects as root, it is mapped to the NFS server's anonymous user
      no_root_squash: when an NFS client connects as root, it is mapped to the NFS server's root user
      sync: write data to memory and disk at the same time, guaranteeing no data loss
      async: write to memory first and flush to disk later; faster, but data can be lost

Make the /etc/exports file take effect

[root@nfs web]#  exportfs -av
exportfs: No options for /web 192.168.107.0/24: suggest 192.168.107.0/24(sync) to avoid warning
exportfs: No host name given with /web (rw,sync,all_squash), suggest *(rw,sync,all_squash) to avoid warning
exportfs: No options for /usr/local/nginx1/conf 192.168.107.0/24: suggest 192.168.107.0/24(sync) to avoid warning
exportfs: No host name given with /usr/local/nginx1/conf (rw,sync,all_squash), suggest *(rw,sync,all_squash) to avoid warning
exporting 192.168.107.0/24:/usr/local/nginx1/conf
exporting 192.168.107.0/24:/web
exporting *:/usr/local/nginx1/conf
exporting *:/web
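Before wiring these exports into PVs, it can help to verify them from one of the k8s nodes; a sketch (the mount point /mnt/test is only for this check):

[root@node1 ~]# showmount -e 192.168.107.15   #list the exports the server offers
[root@node1 ~]# mkdir -p /mnt/test
[root@node1 ~]# mount -t nfs 192.168.107.15:/web /mnt/test   #mount one export temporarily
[root@node1 ~]# cat /mnt/test/index.html   #should print the page created above
[root@node1 ~]# umount /mnt/test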

Set the permissions on the shared directories

[root@nfs web]# chown nobody:nobody /web
[root@nfs web]# ll -d /web
drwxr-xr-x 2 nobody nobody 24 9月   2 17:08 /web
[root@nfs web]# chown nobody:nobody /usr/local/nginx1/conf
[root@nfs web]# ll -d /usr/local/nginx1/conf
drwxr-xr-x 2 nobody nobody 333 9月   2 18:25 /usr/local/nginx1/conf

2.4 Mount the web page data files

2.4.1 Create a PV on the master server

[root@master ~]# mkdir /pod
[root@master ~]# cd /pod
[root@master pod]# vim pv_nfs.yaml 
apiVersion: v1
kind: PersistentVolume   #resource type
metadata:
  name: zhou-nginx-pv   #name of the PV being created
  labels:
    type: zhou-nginx-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany     #access mode: many clients can read and write
  persistentVolumeReclaimPolicy: Recycle    #reclaim policy: recyclable
  storageClassName: nfs      #storage class name; the PVC created later must use the same one
  nfs:
    path: "/web"        #path of the nfs shared directory
    server: 192.168.107.15  #ip of the nfs server
    readOnly: false      #not read-only

Apply the PV YAML file

[root@master pod]# kubectl apply -f pv_nfs.yaml
persistentvolume/zhou-nginx-pv created
[root@master pod]# kubectl get pv  #check
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
zhou-nginx-pv   5Gi        RWX            Recycle          Available           nfs                     17s
2.4.2 Create a PVC on the master server to consume the PV

[root@master pod]# vim pvc_nfs.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zhou-nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs  #note: must match the PV created above

Apply and check

[root@master pod]# kubectl apply -f pvc_nfs.yaml
persistentvolumeclaim/zhou-nginx-pvc created
[root@master pod]# kubectl get pvc #check
NAME             STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zhou-nginx-pvc   Bound    zhou-nginx-pv   5Gi        RWX            nfs            8s

2.5 Mount the nginx.conf configuration file

This could in fact also be done with a ConfigMap.

Reference: https://mp.csdn.net/mp_blog/creation/editor/129893723
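For reference, a minimal sketch of the ConfigMap alternative (the ConfigMap name zhou-nginx-conf is illustrative and not part of this project; it also assumes the prepared nginx.conf has been copied to the master first):

[root@master pod]# kubectl create configmap zhou-nginx-conf --from-file=nginx.conf=/root/nginx.conf
# in the pod spec it would then be referenced instead of the PVC, roughly:
#   volumes:
#   - name: nginx-conf
#     configMap:
#       name: zhou-nginx-conf
#   volumeMounts:
#   - name: nginx-conf
#     mountPath: /usr/local/nginx1/conf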

2.5.1 Create a PV on the master server

[root@master pod]# vim pv_nginx.yaml 
apiVersion: v1
kind: PersistentVolume   #resource type
metadata:
  name: zhou-nginx-conf-pv   #name of the PV being created
  labels:
    type: zhou-nginx-conf-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany     #access mode: many clients can read and write
  persistentVolumeReclaimPolicy: Recycle    #reclaim policy: recyclable
  storageClassName: nginx-conf      #storage class name; the PVC created later must use the same one
  nfs:
    path: "/usr/local/nginx1/conf"        #path of the nfs shared directory
    server: 192.168.107.15  #ip of the nfs server
    readOnly: false      #not read-only

Apply and check

[root@master pod]# kubectl apply -f pv_nginx.yaml 
persistentvolume/zhou-nginx-conf-pv created
[root@master pod]# kubectl get pv
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                    STORAGECLASS   REASON   AGE
zhou-nginx-conf-pv   5Gi        RWX            Recycle          Available                            nginx-conf              8s
zhou-nginx-pv        5Gi        RWX            Recycle          Bound       default/zhou-nginx-pvc   nfs                     81m
2.5.2 Create a PVC on the master server to consume the PV

[root@master pod]# vim pvc_nginx.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zhou-nginx-conf-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nginx-conf  #note: must match the PV created above

Apply and check

[root@master pod]# kubectl apply -f pvc_nginx.yaml 
persistentvolumeclaim/zhou-nginx-conf-pvc created
[root@master pod]# kubectl get pvc
NAME                  STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zhou-nginx-conf-pvc   Bound    zhou-nginx-conf-pv   5Gi        RWX            nginx-conf     3s
zhou-nginx-pvc        Bound    zhou-nginx-pv        5Gi        RWX            nfs            113m

Both show the Bound status, so this step succeeded

2.6 Create pods on the master server that use the PVCs

[root@master pod]# vim pv_pod.yaml 
apiVersion: apps/v1
kind: Deployment   #created with the deployment replica controller
metadata:
  name: nginx-deployment      #name of the deployment
  labels:
    app: zhou-nginx
spec:
  replicas: 10    #create 10 replicas
  selector:
    matchLabels:
      app: zhou-nginx
  template:      #pod replicas (instances) are created from this template
    metadata:
      labels:
        app: zhou-nginx
    spec:
      volumes:
      - name: zhou-pv-storage-nfs
        persistentVolumeClaim:
          claimName: zhou-nginx-pvc   #use the PVC created earlier
      - name: zhou-pv-storage-conf-nfs
        persistentVolumeClaim:
          claimName: zhou-nginx-conf-pvc   #use the PVC created earlier
      containers:
      - name: zhou-pv-container-nfs     #container name
        image: zhouxin03/nginx:latest       #use the custom image built earlier
        ports:
        - containerPort: 80       #port the application in the container listens on
          name: "http-server"
        volumeMounts:    #a single volumeMounts list with both mounts; two separate volumeMounts keys would collide
        - mountPath: "/usr/local/nginx1/html"     #mount point in the container: the html dir of the compiled nginx
          name: zhou-pv-storage-nfs
        - mountPath: "/usr/local/nginx1/conf"     #mount point in the container: the conf dir of the compiled nginx
          name: zhou-pv-storage-conf-nfs
        readinessProbe:    #readiness probe settings
          httpGet:       #use the httpGet check mechanism
            path: /healthz   #path defined in the nginx.conf configuration file
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
        livenessProbe:     #liveness probe settings
          httpGet:
            path: /isalive    #path defined in the nginx.conf configuration file
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10

Apply and check

[root@master pod]# kubectl apply -f pv_pod.yaml
[root@master pod]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   10/10   10           10          2m18s
[root@master pod]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-79878f849f-5gzfl   1/1     Running   0          2m46s   10.244.1.13   node1   <none>           <none>
nginx-deployment-79878f849f-6nrrf   1/1     Running   0          2m46s   10.244.2.9    node2   <none>           <none>
nginx-deployment-79878f849f-6pl8g   1/1     Running   0          2m46s   10.244.1.6    node1   <none>           <none>
nginx-deployment-79878f849f-82g94   1/1     Running   0          2m46s   10.244.1.14   node1   <none>           <none>
nginx-deployment-79878f849f-8zssk   1/1     Running   0          2m46s   10.244.1.15   node1   <none>           <none>
nginx-deployment-79878f849f-9n8ql   1/1     Running   0          2m46s   10.244.2.4    node2   <none>           <none>
nginx-deployment-79878f849f-bwp9s   1/1     Running   0          2m46s   10.244.1.10   node1   <none>           <none>
nginx-deployment-79878f849f-ct5k4   1/1     Running   0          2m46s   10.244.2.8    node2   <none>           <none>
nginx-deployment-79878f849f-hdj5f   1/1     Running   0          2m46s   10.244.1.7    node1   <none>           <none>
nginx-deployment-79878f849f-hhw4c   1/1     Running   0          2m46s   10.244.1.8    node1   <none>           <none>

It can take a while before all pods show Running with READY 1/1, which means the pods started successfully

If a pod is not Running or READY is 0/1, something went wrong; troubleshoot it with kubectl describe pod <pod-name>

Test access

[root@master pod]# curl 10.244.1.13
<p>welcome!</p>
<h1>name:zhouxin</h1>
<h1>Hunan Agricultural University</h1>
<h1>age: 20</h1>

Check whether the nginx.conf configuration file was mounted successfully

[root@master pod]# kubectl exec -it nginx-deployment-79878f849f-r4zsq -- bash
[root@nginx-deployment-79878f849f-r4zsq nginx]# cd /usr/local/nginx1/conf
[root@nginx-deployment-79878f849f-r4zsq conf]# ls
fastcgi.conf          fastcgi_params          koi-utf  mime.types          nginx.conf          scgi_params          uwsgi_params          win-utf
fastcgi.conf.default  fastcgi_params.default  koi-win  mime.types.default  nginx.conf.default  scgi_params.default  uwsgi_params.default
[root@nginx-deployment-79878f849f-r4zsq conf]# vim nginx.conf

Both location blocks are present in the configuration file, so the mount succeeded!
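The probe endpoints themselves can also be checked by hand from inside the cluster; a quick sketch against the pod IP shown earlier:

[root@master pod]# curl -i http://10.244.1.13/healthz   #readiness path; expect HTTP 200 and 'ok'
[root@master pod]# curl -i http://10.244.1.13/isalive   #liveness path; expect HTTP 200 and 'ok'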

2.7 Create a Service to publish the pods

[root@master pod]# vim my_service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nfs   #name of the service; used later when configuring ingress
  labels:
    run: my-nginx-nfs
spec:
  type: NodePort
  ports:
  - port: 8070
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: zhou-nginx   #note: select on the app label to match pv_pod.yaml above; some examples use run instead, don't mix them up

Apply and check

[root@master pod]# kubectl apply -f my_service.yaml
service/my-nginx-nfs created
[root@master pod]# kubectl get service
NAME           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP   10.1.0.1      <none>        443/TCP          46h
my-nginx-nfs   NodePort    10.1.32.204   <none>        8070:32621/TCP   9s
#32621 is the port exposed on the hosts; point a browser at this port on a host to verify

2.8 Configure a DNAT policy on the firewalld server to publish the web service

[root@fiewalld ~]# vim snat_dnat.sh
#!/bin/bash
iptables -F
iptables -t nat -F

# enable route: turn on routing
echo 1 >/proc/sys/net/ipv4/ip_forward

# enable snat: let hosts on the 192.168.107.0/24 network reach the Internet through the WAN interface
iptables -t nat -A POSTROUTING -s 192.168.107.0/24 -o ens33 -j SNAT --to-source  192.168.31.69

# add the dnat policy below
# enable dnat: let outside users reach the internal data
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.11
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.12
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.13

Check whether the configured firewall rules took effect

As shown, they are in effect!

2.9 Test access

Point a browser at port 32621 on any of the 3 k8s cluster servers; each displays the custom page served from the nfs server
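The same check works from the command line; a sketch from any host that can reach the cluster nodes (per the DNAT rules above, external clients would target the WAN IP 192.168.31.69 instead):

[root@ansible ~]# curl http://192.168.107.11:32621/   #any of the three nodes answers on the NodePort
[root@ansible ~]# curl http://192.168.107.12:32621/
[root@ansible ~]# curl http://192.168.107.13:32621/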

V. Use HPA: when CPU usage reaches 40%, the pods scale horizontally and automatically, with a minimum of 10 and a maximum of 20 pods

1. Install the metrics service

The HPA gets its metric data from the metrics service, so that must be installed first

Metrics Server collects resource metrics from the kubelets and exposes them through the Metrics API in the Kubernetes apiserver, for use by the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA): metrics such as CPU, file descriptors, memory, and request latency. metrics-server provides this data to consumers inside the k8s cluster such as kubectl, the HPA, and the scheduler. The Metrics API can also be queried with kubectl top, which makes debugging autoscaling pipelines easier

[root@master ~]# vim metrics.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        image: registry.cn-shenzhen.aliyuncs.com/zengfengjin/metrics-server:v0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Apply the file with kubectl apply -f metrics.yaml; as shown, metrics is installed successfully
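One way to confirm the API aggregation is wired up (a sketch; the raw query path is the standard Metrics API endpoint):

[root@master ~]# kubectl get apiservice v1beta1.metrics.k8s.io   #should report Available=True
[root@master ~]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes   #query the Metrics API through the apiserver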

Check the node status information

[root@master ~]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   115m         5%     1101Mi          29%       
node1    61m          3%     766Mi           20%       
node2    59m          2%     740Mi           20%      

Check pod resource consumption

[root@master pod]# kubectl top pods
NAME                                CPU(cores)   MEMORY(bytes)   
nginx-deployment-6fd9b4f959-754lc   1m           1Mi             
nginx-deployment-6fd9b4f959-94p97   1m           1Mi             
nginx-deployment-6fd9b4f959-d66t7   1m           1Mi             
nginx-deployment-6fd9b4f959-hcffl   1m           1Mi             
nginx-deployment-6fd9b4f959-hjbfb   1m           1Mi             
nginx-deployment-6fd9b4f959-k2hvs   1m           1Mi             
nginx-deployment-6fd9b4f959-mgb6m   1m           1Mi             
nginx-deployment-6fd9b4f959-nb4sd   1m           1Mi             
nginx-deployment-6fd9b4f959-rcfnj   1m           1Mi             
nginx-deployment-6fd9b4f959-tv7t4   1m           1Mi      

This command relies on data provided by the metrics-server service; without metrics installed it fails with: error: Metrics API not available

2. Configure the HPA so pods scale automatically and horizontally on CPU usage, minimum 10 and maximum 20 pods (the manifest below targets 30% average CPU utilization)

2.1 Add resource requests to the original deployment YAML file

To use HPA, resource requests must be configured in the Deployment YAML file. The earlier deployment had none, so first delete the pods created from it

[root@master ~]# cd /pod
[root@master pod]# ls
my_service.yaml  pvc_nfs.yaml  pvc_nginx.yaml  pv_nfs.yaml  pv_nginx.yaml  pv_pod.yaml
[root@master pod]# kubectl delete -f pv_pod.yaml 
deployment.apps "nginx-deployment" deleted

Modify the pv_pod.yaml configuration file and add the resource requests

[root@master pod]# vim pv_pod.yaml 
apiVersion: apps/v1
kind: Deployment   #created with the deployment replica controller
metadata:
  name: nginx-deployment      #name of the deployment
  labels:
    app: zhou-nginx
spec:
  replicas: 10    #create 10 replicas
  selector:
    matchLabels:
      app: zhou-nginx
  template:      #pod replicas (instances) are created from this template
    metadata:
      labels:
        app: zhou-nginx
    spec:
      volumes:
      - name: zhou-pv-storage-nfs
        persistentVolumeClaim:
          claimName: zhou-nginx-pvc   #use the PVC created earlier
      - name: zhou-pv-storage-conf-nfs
        persistentVolumeClaim:
          claimName: zhou-nginx-conf-pvc   #use the PVC created earlier
      containers:
      - name: zhou-pv-container-nfs     #container name
        image: zhouxin03/nginx:latest       #use the custom image built earlier
        ports:
        - containerPort: 80       #port the application in the container listens on
          name: "http-server"
        volumeMounts:
        - mountPath: "/usr/local/nginx1/html"     #mount point in the container: the html dir of the compiled nginx
          name: zhou-pv-storage-nfs
        - mountPath: "/usr/local/nginx1/conf"     #mount point in the container: the conf dir of the compiled nginx
          name: zhou-pv-storage-conf-nfs
        readinessProbe:    #readiness probe settings
          httpGet:       #use the httpGet check mechanism
            path: /healthz   #path defined in the nginx.conf configuration file
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
        livenessProbe:     #liveness probe settings
          httpGet:
            path: /isalive    #path defined in the nginx.conf configuration file
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10
        #############################content added below##############################
        resources:
          requests:
            cpu: 300m    #CPU request set to 300m
          limits:
            cpu: 500m    #CPU limit set to 500m

Apply and check

[root@master pod]# kubectl apply -f pv_pod.yaml 
deployment.apps/nginx-deployment created
[root@master pod]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6fd9b4f959-754lc   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-94p97   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-d66t7   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-hcffl   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-hjbfb   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-k2hvs   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-mgb6m   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-nb4sd   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-rcfnj   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-tv7t4   1/1     Running   0          36s
2.2 Create the HPA

[root@master ~]# vim hpa.yaml 
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment   #name of the deployment created above
  minReplicas: 10    #minimum of 10
  maxReplicas: 20    #maximum of 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 30   #target 30% average CPU utilization

Apply and check

[root@master ~]# kubectl apply -f hpa.yaml
[root@master ~]# kubectl get hpa
NAME     REFERENCE                     TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
my-hpa   Deployment/nginx-deployment   0%/30%    10        20        10         48s

It can take a moment before TARGETS shows 0%/30%

3. Stress-test the cluster

3.1 Install ab on another machine

[root@ansible pod]# yum install httpd-tools -y

3.2 Run an ab stress test against the cluster

#1000 concurrent connections, 100000000 requests

[root@ansible ~]# ab -c 1000 -n 100000000 http://192.168.107.11:32621/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)

4. Check the HPA effect and observe the changes

[root@master pod]# kubectl get hpa
NAME     REFERENCE                     TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
my-hpa   Deployment/nginx-deployment   46%/30%   10        20        17         3m4s

As shown, the HPA TARGETS reached 46%, above the target, so scaling out was needed; the pod count automatically expanded to 17

5. Observe cluster performance

Check the throughput

Across several test runs, the peak throughput was around 4480 requests per second

6. Optimize the whole web cluster

The cluster can be optimized by tuning kernel parameters or parameters in the nginx configuration file, as sketched below

Here the ulimit command is used

[root@master ~]# ulimit -n 10000
#raise the open-file limit to allow more concurrent connections

VI. Use an Ingress object with an ingress controller to load-balance the web traffic

1. Deploy the ingress environment with ansible

1.1 Copy the configuration files needed for the ingress controller to the ansible server

1.2 Write the script that pulls the ingress images

You can deploy directly from the deploy.yaml on GitHub

If the image pulls fail because of network problems, use the Docker Hub images below

Reference blog: ingress-nginx-controller deployment and tuning - 小兔幾白又白 - 博客园 (cnblogs.com)

[root@ansible ~]# vim ingress_images.sh
docker pull koala2020/ingress-nginx-controller:v1
docker pull koala2020/ingress-nginx-kube-webhook-certgen:v1

1.3 Write a playbook to install and deploy the ingress controller

Write the host inventory. The ingress-controller-deployment.yaml file only needs to go to the master; the image pull must run on the whole k8s cluster

[root@ansible etc]# vim /etc/ansible/hosts
[nfs]
192.168.107.15
[web]
192.168.107.11
192.168.107.12
192.168.107.13
[master]   #added
192.168.107.11

Write the playbook

[root@ansible ansible]# vim ingress_install.yaml 
- hosts: web
  remote_user: root
  tasks:
  - name: install ingress controller
    script: /root/ingress_images.sh
- hosts: master
  remote_user: root
  tasks:
  - name: copy ingress controller deployment file
    copy: src=/root/ingress-controller-deploy.yaml dest=/root/

Check the YAML file syntax

[root@ansible ansible]# ansible-playbook --syntax-check /etc/ansible/ingress_install.yaml

playbook: /etc/ansible/ingress_install.yaml

Run the YAML file

[root@ansible ansible]# ansible-playbook  ingress_install.yaml

1.4 Check that it succeeded

The images were pulled successfully and the file was copied to the master

2. Apply the ingress-controller-deploy.yaml file to start the ingress controller

On the master machine

[root@master ~]# kubectl apply -f ingress-controller-deploy.yaml

Check the namespaces related to the ingress controller

Check the Services related to the ingress controller

[root@k8smaster 4-4]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.99.160.10   <none>        80:30092/TCP,443:30263/TCP   91s
ingress-nginx-controller-admission   ClusterIP   10.99.138.23   <none>        443/TCP                      91s

Check the pods related to the ingress controller

[root@master ~]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-fbz67        0/1     Completed   0          110s
ingress-nginx-admission-patch-4fsjz         0/1     Completed   1          110s
ingress-nginx-controller-7cd558c647-dgfbd   1/1     Running     0          110s
ingress-nginx-controller-7cd558c647-g9vvt   1/1     Running     0          110s

3. Enable an Ingress linking the ingress controller and the Services

3.1 Write the Ingress YAML file

[root@master ~]# vim zhou_ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zhou-ingress       #name of the ingress
  annotations:
    kubernetes.io/ingress.class: nginx   #annotation tying this ingress to the ingress controller
spec:
  ingressClassName: nginx  #bind to the ingress controller
  rules:
  - host: www.zhou.com     #load balance by domain name
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-nginx-nfs  #name of the service published earlier
            port:
              number: 80
  - host: www.xin.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-nginx-nfs2  #used later when the second service is published
            port:
              number: 80

3.2 Apply the file

[root@master ~]# kubectl apply -f zhou_ingress.yaml 
ingress.networking.k8s.io/zhou-ingress created

3.3 Check the result

[root@master ~]# kubectl get ingress
NAME           CLASS   HOSTS                      ADDRESS                         PORTS   AGE
zhou-ingress   nginx   www.zhou.com,www.xin.com   192.168.107.12,192.168.107.13   80      85s

It can take a few minutes before the IP addresses appear under ADDRESS

3.4 Check whether nginx.conf inside the ingress controller contains the rules from the Ingress

[root@master ~]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-fbz67        0/1     Completed   0          12m
ingress-nginx-admission-patch-4fsjz         0/1     Completed   1          12m
ingress-nginx-controller-7cd558c647-dgfbd   1/1     Running     0          12m
ingress-nginx-controller-7cd558c647-g9vvt   1/1     Running     0          12m
[root@master ~]# kubectl exec -n ingress-nginx -it ingress-nginx-controller-7cd558c647-dgfbd -- bash
bash-5.1$ cat nginx.conf | grep zhou.com
	## start server www.zhou.com
		server_name www.zhou.com ;
	## end server www.zhou.com
bash-5.1$ cat nginx.conf | grep xin.com
	## start server www.xin.com
		server_name www.xin.com ;
	## end server www.xin.com
bash-5.1$ cat nginx.conf | grep -C3 upstream_balancer
	error_log  /var/log/nginx/error.log notice;
	upstream upstream_balancer {
		server 0.0.0.1:1234; # placeholder
		balancer_by_lua_block {

4. Test access

4.1 Get the host ports exposed by the ingress controller's Service

Accessing a host on these ports verifies whether the ingress controller load-balances correctly

[root@master ~]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.1.58.218   <none>        80:30289/TCP,443:32195/TCP   19m
ingress-nginx-controller-admission   ClusterIP   10.1.241.17   <none>        443/TCP                      19m

4.2 Access by domain name from another host or a Windows machine

Here the test is run from the ansible server

4.2.1 Modify the hosts file
[root@ansible ansible]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.107.12 www.zhou.com
192.168.107.13 www.xin.com

Because the load balancing is configured by domain name, the domain name must be used in the browser; accessing by IP address will not work.
The ingress controller also balances over HTTP, i.e. layer-7 load balancing.
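If editing the hosts file is inconvenient, curl can inject the Host header directly; a sketch using the NodePort shown in 4.1 (30289 on this run):

[root@ansible ~]# curl -H "Host: www.zhou.com" http://192.168.107.12:30289/   #same layer-7 routing, no /etc/hosts change
[root@ansible ~]# curl -H "Host: www.xin.com" http://192.168.107.12:30289/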

4.2.2 Test access
[root@ansible ansible]# curl  www.zhou.com
<p>welcome!</p>
<h1>name:zhouxin</h1>
<h1>Hunan Agricultural University</h1>
<h1>age: 20</h1>

[root@ansible ansible]# curl  www.xin.com
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@ansible ansible]# 

As shown, www.zhou.com is served correctly, while www.xin.com returns a 503 error, because only one of the two Services has been published; the other has not been created yet

5. Start the second Service and pods

[root@master ~]# vim zhou_nginx_svc.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zhou-nginx-deploy
  labels:
    app: zhou-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zhou-nginx
  template:
    metadata:
      labels:
        app: zhou-nginx
    spec:
      containers:
      - name: zhou-nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nfs2  #must match the name used earlier in zhou_ingress.yaml
  labels:
    app: my-nginx-nfs2
spec:
  selector:
    app: zhou-nginx
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80

Apply and check

[root@master ~]# kubectl apply -f zhou_nginx_svc.yaml 
deployment.apps/zhou-nginx-deploy created
service/my-nginx-nfs2 created
[root@master ~]# kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.1.0.1       <none>        443/TCP          2d1h
my-nginx-nfs    NodePort    10.1.32.204    <none>        8070:32621/TCP   173m
my-nginx-nfs2   ClusterIP   10.1.202.196   <none>        80/TCP           43s
[root@master ~]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.1.58.218   <none>        80:30289/TCP,443:32195/TCP   33m
ingress-nginx-controller-admission   ClusterIP   10.1.241.17   <none>        443/TCP                      33m
[root@master ~]# kubectl get ingress
NAME           CLASS   HOSTS                      ADDRESS                         PORTS   AGE
zhou-ingress   nginx   www.zhou.com,www.xin.com   192.168.107.12,192.168.107.13   80      23m

6. Test again to check whether www.xin.com is now reachable

[root@ansible ansible]# curl  www.zhou.com
<p>welcome!</p>
<h1>name:zhouxin</h1>
<h1>Hunan Agricultural University</h1>
<h1>age: 20</h1>
[root@ansible ansible]# curl  www.xin.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

This time the request succeeds: the ingress load balancing is configured correctly!

VII. Deploy Prometheus in the k8s cluster to monitor the web service, and visualize the data with Grafana

This part references https://blog.csdn.net/rzy1248873545/article/details/125758153

To monitor node resources, run a node_exporter on each node. node_exporter is the Linux host collector: once it is in place it gathers the node's CPU, memory, network I/O, and other metrics.

To monitor containers, k8s ships the cAdvisor collector; pod and container metrics are all exposed by it. It is built in, so nothing extra needs to be deployed; we only need to know how to reach cAdvisor.

To monitor k8s resource objects, deploy the kube-state-metrics service, which periodically pulls these metrics from the API server so Prometheus can store them. For alerting, Alertmanager sends notifications to the configured receivers, and Grafana provides the visualization.
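
For instance, the built-in cAdvisor endpoint can be reached through the API server proxy path that the Prometheus config below scrapes. A quick manual check from the master (a sketch; the node name node1 is an assumption about this cluster):

kubectl get --raw /api/v1/nodes/node1/proxy/metrics/cadvisor | head
# prints the first cAdvisor metrics for that node, e.g. container_cpu_* series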

1. Deploy Prometheus to monitor the k8s cluster

1.1 Deploy node-exporter as a DaemonSet

[root@master /]# mkdir /prometheus
[root@master /]# cd /prometheus
[root@master prometheus]# vim node_exporter.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter

Apply it:

[root@master prometheus]# kubectl apply -f node_exporter.yaml
daemonset.apps/node-exporter created
service/node-exporter created
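
A quick verification that one node-exporter pod runs per node and that the metrics endpoint answers (a sketch; 192.168.107.11 is one of the node IPs used throughout this setup):

kubectl get ds,pods -n kube-system -o wide | grep node-exporter
curl -s http://192.168.107.11:31672/metrics | head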

1.2 Deploy Prometheus

[root@master prometheus]# vim prometheus_rbac.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
[root@master prometheus]# vim prometheus_config.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval:     15s
      evaluation_interval: 15s
    scrape_configs:
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
[root@master prometheus]# vim prometheus_deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config
[root@master prometheus]# vim prometheus_service.yaml 
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus

Apply the files:

[root@master prometheus]# kubectl apply -f prometheus_rbac.yaml 
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
[root@master prometheus]# kubectl apply -f prometheus_config.yaml 
configmap/prometheus-config created
[root@master prometheus]# kubectl apply -f prometheus_deployment.yaml 
deployment.apps/prometheus created
[root@master prometheus]# kubectl apply -f prometheus_service.yaml 
service/prometheus created

Check the services:

[root@master prometheus]# kubectl get service -A
NAMESPACE       NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
default         kubernetes                           ClusterIP   10.1.0.1       <none>        443/TCP                      2d1h
default         my-nginx-nfs                         NodePort    10.1.32.204    <none>        8070:32621/TCP               3h9m
default         my-nginx-nfs2                        ClusterIP   10.1.202.196   <none>        80/TCP                       15m
ingress-nginx   ingress-nginx-controller             NodePort    10.1.58.218    <none>        80:30289/TCP,443:32195/TCP   47m
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.1.241.17    <none>        443/TCP                      47m
kube-system     kube-dns                             ClusterIP   10.1.0.10      <none>        53/UDP,53/TCP,9153/TCP       2d1h
kube-system     metrics-server                       ClusterIP   10.1.33.66     <none>        443/TCP                      152m
kube-system     node-exporter                        NodePort    10.1.199.144   <none>        9100:31672/TCP               6m14s
kube-system     prometheus                           NodePort    10.1.178.35    <none>        9090:30003/TCP               98s
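
Besides the web UI, the Prometheus HTTP API can confirm that targets are being scraped. A sketch (192.168.107.11 and 30003 are the master IP and NodePort from above):

curl -s 'http://192.168.107.11:30003/api/v1/query?query=up' | head
# every healthy scrape target reports the metric up == 1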

1.3 Test

Open 192.168.107.11:31672 in a browser to see the raw data collected by node-exporter.

Open 192.168.107.11:30003 for the Prometheus page; click Status -> Targets to confirm that Prometheus has successfully connected to the k8s apiserver.

2. Set up Grafana with Prometheus to draw dashboards

2.1 Deploy Grafana

[root@master prometheus]# vim grafana_deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:6.1.4
        name: grafana-core
        imagePullPolicy: IfNotPresent
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
          # The following env variables set up basic auth with the default admin user and admin password.
          - name: GF_AUTH_BASIC_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "false"
          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          #   value: Admin
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
        #volumeMounts:   # no persistent storage mounted for now
        #- name: grafana-persistent-storage
        #  mountPath: /var
      #volumes:
      #- name: grafana-persistent-storage
      #  emptyDir: {}
[root@master prometheus]# vim grafana_svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
  - port: 3000
  selector:
    app: grafana
    component: core
[root@master prometheus]# vim grafana_ing.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: kube-system
spec:
  rules:
  - host: k8s.grafana
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port: 
              number: 3000

Apply:

[root@master prometheus]# kubectl apply -f grafana_deploy.yaml 
deployment.apps/grafana-core created
[root@master prometheus]# kubectl apply -f grafana_svc.yaml 
service/grafana created
[root@master prometheus]# kubectl apply -f grafana_ing.yaml 
ingress.networking.k8s.io/grafana created

Check:

[root@master prometheus]# kubectl get service -A
NAMESPACE       NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
default         kubernetes                           ClusterIP   10.1.0.1       <none>        443/TCP                      2d1h
default         my-nginx-nfs                         NodePort    10.1.32.204    <none>        8070:32621/TCP               3h17m
default         my-nginx-nfs2                        ClusterIP   10.1.202.196   <none>        80/TCP                       24m
ingress-nginx   ingress-nginx-controller             NodePort    10.1.58.218    <none>        80:30289/TCP,443:32195/TCP   56m
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.1.241.17    <none>        443/TCP                      56m
kube-system     grafana                              NodePort    10.1.254.118   <none>        3000:30276/TCP               71s
kube-system     kube-dns                             ClusterIP   10.1.0.10      <none>        53/UDP,53/TCP,9153/TCP       2d1h
kube-system     metrics-server                       ClusterIP   10.1.33.66     <none>        443/TCP                      160m
kube-system     node-exporter                        NodePort    10.1.199.144   <none>        9100:31672/TCP               14m
kube-system     prometheus                           NodePort    10.1.178.35    <none>        9090:30003/TCP               9m55s
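
The grafana Ingress uses the host k8s.grafana, so it can also be checked through the ingress controller's NodePort with a Host header. A sketch, assuming ingress-nginx picks up this Ingress (it sets no explicit ingressClassName) and using the port 30289 from section 4.1:

curl -s -H "Host: k8s.grafana" http://192.168.107.11:30289/login | head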

2.2 Test

Open 192.168.107.11:30276, the Grafana page; the default username and password are both admin.

2.2.1 Add Prometheus as a data source
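
For reference, the data source form is filled in roughly like this (a sketch; the URL assumes Grafana reaches Prometheus through the cluster DNS name of the Service created above):

    Type:   Prometheus
    URL:    http://prometheus.kube-system.svc.cluster.local:9090
    Access: Server (default)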

2.2.2 Import a dashboard template

Enter a template ID; dashboards can be browsed at this site:

Dashboards | Grafana Labs

2.3 Dashboard results

VIII. Build a CI/CD environment: integrate GitLab with Jenkins and Harbor in a pipeline job that automatically pulls code, builds images, and pushes them to the registry

1. Deploy the gitlab environment

1.1 Install gitlab

This part references: https://blog.csdn.net/weixin_56270746/article/details/125427722

1.1.1 Configure the gitlab yum repository (install GitLab from the Tsinghua mirror)

gitlab-ce is the community edition; gitlab-ee is the enterprise edition, which is paid.

Create gitlab-ce.repo under /etc/yum.repos.d/:

[root@gitlab ~]# cd /etc/yum.repos.d/
[root@gitlab yum.repos.d]# vim gitlab-ce.repo
[gitlab-ce]
name=gitlab-ce
baseurl=https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/
gpgcheck=0
enabled=1
[root@gitlab yum.repos.d]# yum clean all && yum makecache
1.1.2 Install gitlab

Install the latest version directly:

[root@gitlab yum.repos.d]# yum install -y gitlab-ce

After a successful installation, gitlab-ce prints an ASCII-art logo.

1.1.3 Configure the GitLab site URL

GitLab's default configuration file path is /etc/gitlab/gitlab.rb.

The default site URL setting is: external_url 'http://gitlab.example.com'

Here I change the GitLab site URL to http://192.168.107.17:8000

[root@gitlab gitlab]# cd /etc/gitlab
[root@gitlab gitlab]# vim gitlab.rb 
external_url 'http://192.168.107.17:8000'   # change this line

1.2 Start and access GitLab

1.2.1 Reconfigure and start
[root@gitlab gitlab]# gitlab-ctl reconfigure

When it finishes you will see output like the following.

1.2.2 Add a DNAT rule on the firewalld server so Windows machines can reach GitLab
[root@fiewalld ~]# vim snat_dnat.sh 
#!/bin/bash
iptables -F
iptables -t nat -F

# enable route
echo 1 > /proc/sys/net/ipv4/ip_forward

# enable snat
iptables -t nat -A POSTROUTING -s 192.168.107.0/24 -o ens33 -j SNAT --to-source 192.168.31.69

# enable dnat
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.11
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.12
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.13

# add the rule below; note the port is 8000
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 8000 -j DNAT --to-destination 192.168.107.17
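
After editing, re-run the script and check that the rules actually loaded, for example:

bash /root/snat_dnat.sh
iptables -t nat -nL PREROUTING --line-numbers
# note: of the several identical --dport 80 rules, only the first match ever takes effect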
1.2.3 Access from Windows

Open the gitlab server address in a browser and register a user, as shown below.

Register a user.

After that, logging in at http://192.168.107.17:8000 requires an account and password; a freshly registered account fails to log in because it must first be initialized/approved with the administrator account.

1.2.4 Set the default access password
[root@gitlab gitlab]# cd /opt/gitlab/bin/     # switch to the directory the command runs from
[root@gitlab bin]# gitlab-rails console -e production    # initialize the password
--------------------------------------------------------------------------------
 Ruby:         ruby 3.0.6p216 (2023-03-30 revision 23a532679b) [x86_64-linux]
 GitLab:       16.3.1 (ea817127f2a) FOSS
 GitLab Shell: 14.26.0
 PostgreSQL:   13.11
------------------------------------------------------------[ booted in 62.10s ]
Loading production environment (Rails 7.0.6)
irb(main):001:0> u=User.where(id:1).first
=> #<User id:1 @root>
irb(main):002:0> u.password='sc123456'
=> "sc123456"
irb(main):003:0> u.password_confirmation='sc123456'
=> "sc123456"
irb(main):004:0> u.save!
=> true
irb(main):005:0> exit

The "true" output means the password was set successfully.

We can now log in to the page as root/sc123456.

1.2.5 Log in

The root user logs in successfully.

1.3 Log in with the user you created

The new user must first be approved by the root account.

Then log in again with that user; this time it succeeds!

At this point the gitlab environment is up!

2. Deploy the jenkins environment

2.1 Download the generic Java war package from the official site; the LTS (long-term support) release is recommended

Download address: 

https://www.jenkins.io/download/

Download the generic war package here.

2.2 Download and install Java (JDK 11 or newer), then configure the JDK environment variables

Reference: https://blog.csdn.net/m0_37048012/article/details/120519348

2.2.1 Install with yum
[root@jenkins javadoc]# yum install -y java-11-openjdk java-11-openjdk-devel		# install
[root@jenkins javadoc]# java -version  # check the installation
openjdk version "11.0.20" 2023-07-18 LTS
OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1.el7_9) (build 11.0.20+8-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-11.0.20.0.8-1.el7_9) (build 11.0.20+8-LTS, mixed mode, sharing)
2.2.2 Find the Java installation directory
[root@jenkins javadoc]# whereis java
java: /usr/bin/java /usr/lib/java /etc/java /usr/share/java /usr/share/man/man1/java.1.gz

If it shows /usr/bin/java, follow the symlinks with the commands below:

[root@jenkins javadoc]# ls -lr /usr/bin/java
lrwxrwxrwx 1 root root 22 Sep  3 19:46 /usr/bin/java -> /etc/alternatives/java
[root@jenkins javadoc]# ls -lrt /etc/alternatives/java
lrwxrwxrwx 1 root root 64 Sep  3 19:46 /etc/alternatives/java -> /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64/bin/java
2.2.3 Configure environment variables
[root@jenkins ~]# vim /etc/profile
####### append the following ########
#JAVA environment
JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64
JRE_HOME=$JAVA_HOME/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
#PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME PATH CLASS_PATH

Make the variables take effect:

[root@jenkins ~]# source /etc/profile

2.3 Upload the downloaded jenkins.war package to the server

2.4 Start the jenkins service

[root@jenkins ~]# nohup java -jar jenkins.war &

Let it run in the background:

[root@jenkins local]# ps aux|grep jenkins
root      11790  106 13.6 2492292 136172 pts/0  Sl   20:40   0:06 java -jar jenkins.war
root      11824  0.0  0.0 112824   980 pts/1    R+   20:40   0:00 grep --color=auto jenkins

By default jenkins listens on port 8080. To start it on another port, pass it on the command line, e.g. "java -jar jenkins.war --httpPort=80".
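
For instance, a sketch that runs Jenkins on port 8090 in the background and keeps its log in a file (the log path is an assumption):

nohup java -jar jenkins.war --httpPort=8090 > /root/jenkins.log 2>&1 &
tail -f /root/jenkins.log   # watch startup progress and the initial admin password hint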

2.5 Test access

Browse to the jenkins server address on port 8080.

This step takes a while.

The "Unlock Jenkins" page means the jenkins deployment is complete; it asks for the administrator password.

The page states that the administrator password is in /root/.jenkins/secrets/initialAdminPassword; read that file and enter the password.

[root@jenkins local]# cat /root/.jenkins/secrets/initialAdminPassword
80e0160b23cf4187a0abe4974e6e9ac1

Clicking "Continue" shows the plugin installation page.

Wait for all plugins to install. Some plugins fail on the first pass because they have prerequisites; when the run finishes, click "Retry" at the bottom right to install the rest. When everything is installed, click "Continue".

Create a user.

Jenkins is installed; the continuous integration journey can begin!

3. Deploy the harbor environment

3.1 Install docker and docker-compose

3.1.1 Install docker
[root@harbor ~]# yum install -y yum-utils
[root@harbor ~]# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
[root@harbor ~]# yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
[root@harbor ~]# systemctl start docker
[root@harbor ~]# docker -v  # check that docker installed successfully
Docker version 24.0.5, build ced0996
3.1.2 Install docker-compose

Download and install the compose CLI plugin:

[root@harbor ~]# DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
[root@harbor ~]# echo $DOCKER_CONFIG
/root/.docker
[root@harbor ~]# mkdir -p $DOCKER_CONFIG/cli-plugins
[root@harbor ~]# 

Upload the docker-compose binary to the host and put it in /root/.docker/cli-plugins/:

[root@harbor ~]# mv docker-compose /root/.docker/cli-plugins/
[root@harbor ~]# cd /root/.docker/cli-plugins/
[root@harbor cli-plugins]# ls
docker-compose
[root@harbor cli-plugins]# chmod +x docker-compose  # make it executable
[root@harbor cli-plugins]# cp docker-compose /usr/bin/  # also place docker-compose in a PATH directory
[root@harbor cli-plugins]# docker-compose --version  # check the installation
Docker Compose version v2.7.0

3.2 Install harbor

3.2.1 Download the harbor release and upload it to the server

3.2.2 Unpack it and edit the configuration
[root@harbor ~]# tar xf harbor-offline-installer-v2.1.0.tgz
[root@harbor ~]# ls
anaconda-ks.cfg  harbor  harbor-offline-installer-v2.1.0.tgz
[root@harbor ~]# cd harbor
[root@harbor harbor]# ls
common.sh  harbor.v2.1.0.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml
[root@harbor harbor]# vim harbor.yml

Change the two settings below (hostname and http port) and comment out the https block.
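
A sketch of the relevant harbor.yml changes, with the values assumed from the rest of this setup (the harbor host's IP and port 8089):

hostname: 192.168.107.19       # the IP or domain used to reach this Harbor
http:
  port: 8089                   # matches the 192.168.107.19:8089 address used below
# https:                       # comment out the whole https block
#   port: 443
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path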

3.3 Log in to harbor

[root@harbor harbor]# ./install.sh

Open the site on a Windows machine to configure harbor:
http://192.168.107.19:8089/

The default login username and password:
admin
Harbor12345

At this point the whole environment is deployed!

4. Integrate gitlab with jenkins and harbor in a pipeline job that pulls code, builds an image, and pushes the image

Reference: https://www.cnblogs.com/linanjie/p/13986198.html

The pipeline job in jenkins pulls code from GitLab, packages it with maven, builds a docker image, and pushes the image to harbor.

4.1 The jenkins server needs docker installed and must be able to log in to the Harbor service

4.1.1 Install docker on the jenkins server
[root@jenkins ~]# yum install -y yum-utils
[root@jenkins ~]# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
[root@jenkins ~]# yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
[root@jenkins ~]# systemctl start docker
[root@jenkins ~]# docker -v  # check that docker installed successfully
Docker version 24.0.5, build ced0996
4.1.2 Configure the jenkins server to log in to the Harbor service
[root@jenkins local]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"insecure-registries" : ["192.168.107.19:8089"]
}

重啟docker

[root@jenkins local]# systemctl daemon-reload
[root@jenkins local]# systemctl restart docker
4.1.3 測試登錄
[root@jenkins local]# docker login 192.168.107.19:8089
Username: admin   #這里使用前面的那個默認(rèn)用戶名和密碼
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-storeLogin Succeeded

可見,登錄成功!

4.2 Install git on jenkins

[root@jenkins .ssh]# yum install -y git

4.3 Install maven on jenkins

Reference: https://blog.csdn.net/liu_chen_yang/article/details/130106529

4.3.1 Download the package

Open the Tsinghua open-source mirror site.

Search for apache.

Enter the apache directory, find maven, and download it.

Pick the version you need; the top level lists major versions, each containing minor versions.

I picked the newest maven-4, then 4.0.0-alpha-7, then binaries, and chose the zip package.

After downloading, upload it to the server and unpack it.

4.3.2 Unpack the downloaded package
[root@jenkins ~]# mkdir -p /usr/local/maven
[root@jenkins ~]# ls
anaconda-ks.cfg  apache-maven-4.0.0-alpha-7-bin.zip  jenkins.war  nohup.out
[root@jenkins ~]# mv apache-maven-4.0.0-alpha-7-bin.zip /usr/local/maven
[root@jenkins ~]# cd /usr/local/maven
[root@jenkins maven]# yum install unzip -y
[root@jenkins maven]# unzip apache-maven-4.0.0-alpha-7-bin.zip
4.3.3 Configure environment variables
[root@jenkins ~]# vim /etc/profile
###### append the following
MAVEN_HOME=/usr/local/maven/apache-maven-4.0.0-alpha-7
export PATH=${MAVEN_HOME}/bin:${PATH}

Make the variables take effect:

[root@jenkins ~]# source /etc/profile
4.3.4 Verify mvn
[root@jenkins ~]# mvn -v
Unable to find the root directory. Create a .mvn directory in the root directory or add the root="true" attribute on the root project's model to identify it.
Apache Maven 4.0.0-alpha-7 (bf699a388cc04b8e4088226ba09a403b68de6b7b)
Maven home: /usr/local/maven/apache-maven-4.0.0-alpha-7
Java version: 11.0.20, vendor: Red Hat, Inc., runtime: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64
Default locale: zh_CN, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-1160.el7.x86_64", arch: "amd64", family: "unix"

The output above shows that maven installed successfully (the "Unable to find the root directory" line is only a warning from running mvn outside a project directory).

4.4 Create a test project in gitlab

Reference: https://www.cnblogs.com/linanjie/p/13986198.html

I create a Spring project from a template here; the project name is up to you.

The template project is created successfully!

4.5 Create a dev project in harbor

4.6 Configure the JDK and Maven in the Jenkins UI

After editing, click Apply, then Save.

4.7 Create a pipeline job in the Jenkins dashboard

The required jenkins plugins are:

Pipeline, docker-build-step, Docker Pipeline, Docker plugin, Role-based Authorization Strategy

Make sure the plugins above are installed in jenkins.

4.7.1 A pipeline job needs a pipeline script, and the script's first step is pulling the project from gitlab

Click "Pipeline Syntax":

Then click Add and select the credential created just now.

Record the generated snippet: git credentialsId: '0e0ecf12-6c3d-449b-a957-124d18f2fbb7', url: 'http://192.168.107.17:8001/zhouxin/spring.git'

4.7.2 Write the pipeline
pipeline{
    agent any
    environment {
        // Harbor address
        HARBOR_HOST = "192.168.107.19:8089"
        BUILD_VERSION = createVersion()
    }
    tools{
        // tool names must match the aliases defined in the Jenkins global configuration
        jdk 'jdk11'
        maven 'maven4.0.0'
    }
    stages{
        stage("Pull code"){
            //check CODE
            steps {
                // use the credential generated earlier
                git credentialsId: 'f7c7796f-810c-4ba5-83cb-573f1be3e707', url: 'http://192.168.107.17:8001/zhouxin/my-spring.git'
            }
        }
        stage("Maven build"){
            steps {
                sh "mvn clean package -Dmaven.test.skip=true"
            }
        }
        stage("Build the docker image and push it to harbor"){
            //docker push
            steps {
                sh '''
                docker build -t springproject:$BUILD_VERSION .
                docker tag springproject:$BUILD_VERSION ${HARBOR_HOST}/dev/springproject:$BUILD_VERSION
                '''
                // use your own harbor username and password
                sh "docker login -u admin -p Harbor12345" + " ${HARBOR_HOST}"
                sh "docker push ${HARBOR_HOST}/dev/springproject:$BUILD_VERSION"
            }
        }
    }
}

def createVersion() {
    // build a version string for this run, e.g. 20201116165759_1
    return new Date().format('yyyyMMddHHmmss') + "_${env.BUILD_ID}"
}

Make sure the dev project already exists in Harbor. Pipeline syntax can be learned online. A script should avoid plaintext passwords; for demonstration I used the harbor password in plaintext here, but properly you would create another credential holding the harbor username and password and let the script read it, as sketched below.
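
A minimal sketch of that approach, assuming a Jenkins "Username with password" credential with the hypothetical id 'harbor-auth' and the Credentials Binding plugin:

withCredentials([usernamePassword(credentialsId: 'harbor-auth',
                                  usernameVariable: 'HARBOR_USER',
                                  passwordVariable: 'HARBOR_PASS')]) {
    // --password-stdin keeps the password out of the process list and the build log
    sh 'echo "$HARBOR_PASS" | docker login -u "$HARBOR_USER" --password-stdin ${HARBOR_HOST}'
}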

After editing, click Apply, then Save.

Back on the dashboard view, build the pipeline job you just created.

The first build takes relatively long because maven has to download the project's dependencies; be patient. Mine finished quickly because the dependencies were already cached.

After a few attempts and some troubleshooting (the errors are written up at the end of this article), it succeeded!

5. Verify

Check in harbor: the image has been uploaded.

The pipeline workflow is complete!

IX. Deploy a jump server to limit users' access to the internal network

1. Add a DNAT rule on firewalld so that users who ssh to the firewalld server are automatically forwarded to the jump server

[root@fiewalld ~]# vim snat_dnat.sh 
######### append the rule below #####
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 22 -j DNAT --to-destination 192.168.107.14:22

Test: ssh from Windows to the firewalld server and check whether the session lands on the jump server.

It does: the forwarding works!

2. Configure the jump server so that only the 192.168.31.0/24 network can ssh in

[root@jump_server ~]# yum install iptables -y
[root@jump_server ~]# iptables -A INPUT -p tcp --dport 22 -s 192.168.31.0/24 -j ACCEPT
[root@jump_server ~]# iptables -A INPUT -p tcp --dport 22 -j DROP   # needed as well: the ACCEPT rule alone does not block other sources
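
These rules live in memory and vanish on reboot. One way to persist them, assuming the iptables-services package on CentOS 7:

yum install -y iptables-services
service iptables save        # writes the current rules to /etc/sysconfig/iptables
systemctl enable iptables    # reloads them at boot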

3. Set up password-free ssh from the jump server to every other internal server

Only one server is shown here; the others are identical: push the public key to each server in turn (see the loop sketch after this transcript).

[root@jump_server ~]# ssh-keygen  # generate the key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:9axtEvUoH+VNh2MQCRO7UgwHn8CV6M05XOeQeCVgPg0 root@jump_server
The key's randomart image is:
+---[RSA 2048]----+
|       .++E*+=.  |
|        oOo**o.. |
|       . +X+o+= o|
|        .o** *.+.|
|        S +.= o .|
|         . * .   |
|          o +    |
|           o     |
|                 |
+----[SHA256]-----+
[root@jump_server ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.19  # push the public key to the target server
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.107.19 (192.168.107.19)' can't be established.
ECDSA key fingerprint is SHA256:YeJAjO9gERUBkV531t5TE3PJy74ezOWN5XlC98sMqxQ.
ECDSA key fingerprint is MD5:04:ab:31:bc:ad:88:80:7c:53:3d:77:95:55:01:9c:b0.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.107.19's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.107.19'"
and check to make sure that only the key(s) you wanted were added.

[root@jump_server ~]# ssh root@192.168.107.19  # test the password-free login
Last login: Mon Sep  4 20:41:37 2023 from 192.168.31.67
[root@harbor ~]# 
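
A sketch that pushes the key to every internal server in one go (the IP list is an assumption about this topology; each first run still prompts for the root password):

for ip in 192.168.107.11 192.168.107.12 192.168.107.13 192.168.107.17 192.168.107.19; do
    ssh-copy-id -i /root/.ssh/id_rsa.pub root@$ip
done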

4. Verify

Log in to firewalld from a server on the 192.168.107.0/24 network and check whether the session is forwarded to the jump server.

It is not forwarded to the jump server.

Now log in to firewalld from a machine on the 192.168.31.0/24 network.

This session is forwarded to the jump server.

The jump server is set up successfully!

X. Install zabbix to monitor all the servers: CPU, memory, network bandwidth, and so on

XI. Use the ab tool to stress test the whole k8s cluster and the related servers

The stress tests are run from the ansible server.

1. Install ab

[root@ansible ~]# yum install httpd-tools -y

2. Test

The test against one server is shown here; the other servers are tested the same way.

[root@ansible ~]# ab -n 1000 -c 1000 -r http://192.168.31.69/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.31.69 (be patient)   # progress of the run
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:                           # server software version
Server Hostname:        192.168.31.69      # server hostname
Server Port:            80                 # server port

Document Path:          /                  # page under test
Document Length:        0 bytes            # size of the page in bytes

Concurrency Level:      1000               # number of concurrent clients
Time taken for tests:   0.384 seconds      # total time the whole test took
Complete requests:      1000               # number of completed requests
Failed requests:        2000               # failed checks (one request can fail more than one check)
   (Connect: 0, Receive: 1000, Length: 0, Exceptions: 1000)
Write errors:           0
Total transferred:      0 bytes            # total data transferred (headers included)
HTML transferred:       0 bytes            # actual HTML bytes transferred
Requests per second:    2604.40 [#/sec] (mean)   # requests handled per second, the key throughput number ("mean" marks an average)
Time per request:       383.966 [ms] (mean)      # average response time per request ("mean" marks an average)
Time per request:       0.384 [ms] (mean, across all concurrent requests)
Transfer rate:          0.00 [Kbytes/sec] received   # average traffic per second; helps rule out response times inflated by network volume

# Since the CPU processes concurrent requests in time slices rather than truly
# simultaneously, the first "Time per request" is roughly the second one
# multiplied by the concurrency level.

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     0    0   1.0      0       7
Waiting:        0    0   0.0      0       0
Total:          0    0   1.0      0       7

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      3
  98%      5
  99%      5
 100%      7 (longest request)
[root@ansible ~]# 
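
The same tool can exercise the ingress path directly. A sketch that targets the ingress-nginx NodePort from section 4.1, setting the Host header so the domain rule matches:

ab -n 1000 -c 100 -H "Host: www.zhou.com" http://192.168.107.11:30289/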


Problems encountered during the project

1. After rebooting the servers, xshell could no longer connect to any server except firewalld

Troubleshooting:

Check whether the ssh process is running.

It is running, so no problem there.

Check the firewall rules on the firewalld server.

The SNAT configured earlier is gone: the script that sets it up does not survive a reboot.

Fix: bash snat_dnat.sh

Check the firewall rules again.

The SNAT policy is back, and xshell can connect to the other servers again.

To make the SNAT rules survive every reboot, add bash snat_dnat.sh to a boot-time script.

The steps are as follows:

[root@fiewalld ~]# chmod +x /root/snat_dnat.sh   # make the script executable
[root@fiewalld ~]# vi /etc/rc.d/rc.local 
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
/root/snat_dnat.sh   # add this line
[root@fiewalld ~]# chmod +x /etc/rc.d/rc.local   # on CentOS 7 the permissions of /etc/rc.d/rc.local were reduced, so it must be made executable

2. A pod would not start; the pvc-to-pv binding had failed because the storageClassName in the pvc and pv yaml files did not match

3. During test access, the returned content was not what had been configured: the web data volume had failed to mount, although the nginx.conf volume mounted fine

4. The last pipeline stage failed

Check the error message.

Cause: docker was not running.

Fix: start docker on the jenkins server.

[root@jenkins ~]# service docker start
Redirecting to /bin/systemctl start docker.service
[root@jenkins ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

5. The last pipeline stage failed again: it could not log in to harbor

The error message shows the login failure.

Cause: docker logs in to port 443 by default, and https was not enabled.

Fix: restart harbor.

[root@harbor ~]# cd harbor
[root@harbor harbor]# ./install.sh 

Test:

[root@jenkins ~]# docker login -u admin -p Harbor12345 192.168.107.19:8089
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

The login succeeds!


Project takeaways

  1. Much more familiar with how SNAT and DNAT policies work and how to use them
  2. More familiar with using k8s and deploying a cluster
  3. Reading logs helps a lot when troubleshooting
  4. Plan the architecture diagram before starting, and deploy the environment carefully
  5. Deeper understanding of the docker + k8s techniques used: pv + pvc + nfs volume mounts for data consistency, image building, and probes
  6. Observed HPA in action and now deeply understand its purpose and mechanism
  7. Deeper understanding of both Prometheus and zabbix as monitoring approaches
  8. Deploying the CI/CD pipeline took several attempts before it worked; its usage is much clearer now
  9. Running many VMs at once can make the computer sluggish; be patient, don't rush
  10. If troubleshooting keeps failing, don't panic; approach the problem from several angles
  11. More familiar with implementing load balancing with ingress
  12. Deeper understanding of the gitlab + jenkins + harbor pipeline workflow, the principles behind it, and how the three are wired together; it failed many times and took over ten attempts, so stay calm and don't give up
  13. Deep understanding of how a jump server works
  14. Learned the point of stress testing