Table of Contents
- k8s comprehensive project
- 1. Project architecture diagram
- 2. Project description
- 3. Project environment
- 4. Preliminary preparation
- 4.1 Environment preparation
- 4.2 IP allocation
- 4.3 Configure static IP addresses
- 4.4 Change the hostnames
- 4.5 Deploy the k8s cluster
- 4.5.1 Disable the firewall and SELinux
- 4.5.2 Upgrade the system
- 4.5.3 Configure the hosts file on every host so they can reach each other by hostname
- 4.5.4 Set up passwordless SSH between the master and the nodes
- 4.5.5 Disable the swap partition to improve performance (on all three machines)
- 4.5.6 Why disable the swap partition?
- 4.5.7 Adjust kernel parameters (on all three machines)
- 4.5.8 Configure the Aliyun repo sources (all three machines)
- 4.5.9 Configure time synchronization (all three machines)
- 4.5.10 Install the Docker service (all three machines)
- 4.5.11 Install the latest version of Docker
- 4.5.12 Configure registry mirrors
- 4.5.13 Continue configuring Kubernetes
- 4.5.14 Install the packages needed to initialize k8s (all three machines)
- 4.5.15 Initialize the k8s cluster with kubeadm
- 4.5.16 Initialize k8s from the kubeadm.yaml file
- 4.5.17 Change the node role to worker
- 4.5.18 Install the network plugin
- 4.5.19 Install kubectl top node
- 4.5.20 Let the node machines also run kubectl get node
- 5. Build the in-cluster pieces first
- 5.1 Set up the NFS server to provide website data for the web service, and create the related PV and PVC
- 5.1.1 Configure the shared directory
- 5.1.2 Create the shared directory
- 5.1.3 Refresh NFS or re-export the shared directories
- 5.1.4 Create a PV that uses the directory shared by the NFS server
- 5.1.5 Apply it
- 5.1.6 Create a PVC that uses the storage class: nfs
- 5.1.7 Create a pod that uses the PVC
- 5.1.8 Test
- 5.2 Pull the Go application image from the Harbor registry
- 5.2.1 Build the Go code into an image
- 5.2.2 Push it to the Harbor registry
- Pull the ghweb image on the node machines
- 5.3 Deploy the web pods with HPA enabled: scale horizontally when CPU usage reaches 50%, with a minimum of 10 and a maximum of 20 business pods.
- 5.5 Use probes (liveness, readiness, startup) with httpGet/exec to monitor the web pods and restart them as soon as problems appear, improving pod reliability.
- 5.6 Deploy the ingress controller and ingress rules to give the web service domain-based load balancing
- 5.7 Deploy and access the Kubernetes Dashboard
- 5.8 Load-test the web service in the k8s cluster with ab
- 6. Set up the Prometheus server
- 6.1 Deploy Ansible on the bastion host to make operating many machines easier
- 6.2 Set up the Prometheus server and Grafana to monitor all servers
- 6.2.1 Install the exporter
- 6.2.2 Add the monitored servers on the Prometheus server
- 6.2.3 Install Grafana for dashboards
- 7. Configure the jump host and the firewall
- 7.1 Configure TCP wrappers on the k8s machines and the NFS server so that only the bastion host may SSH in; reject SSH from every other machine
- 7.2 Set up the firewall server
- 7.3 Write the DNAT and SNAT policies
- 7.4 Set the gateway of every server in the k8s cluster to the firewall's LAN IP (192.168.182.177)
- 7.5 Test SNAT
- 7.6 Test DNAT
- 7.7 Test publishing the bastion host
- 8. Project takeaways
k8s comprehensive project
1. Project architecture diagram
2. Project description
Project description / features: simulate an enterprise k8s production environment; deploy web, NFS, Harbor, Prometheus, Grafana and other applications to build a highly available, high-performance web system, while monitoring resource usage across the whole k8s cluster.
3. Project environment
CentOS 7.9, ansible 2.9.27, Docker 2.6.0.0, Docker Compose 2.18.1, Kubernetes 1.20.6, Harbor 2.1.0, NFS v4, metrics-server 0.6.0, ingress-nginx-controller v1.1.0, kube-webhook-certgen v1.1.0, Dashboard v2.5.0, Prometheus 2.44.0, Grafana 9.5.1.
4. Preliminary preparation
4.1 Environment preparation
9 fresh Linux servers: disable firewalld and SELinux, configure static IP addresses, change the hostnames, and add hosts entries.
However, my computer only has 16 GB of RAM and cannot run 9 VMs, so I put Prometheus, Ansible and the bastion host on one server, and the NFS server and the Harbor registry on another.
4.2 IP allocation
Hostname | IP |
---|---|
firewall | 192.168.40.87 |
bastion/jump host + prometheus + ansible | 192.168.182.141 |
NFS server + harbor registry | 192.168.182.140 |
master | 192.168.182.142 |
node-1 | 192.168.182.143 |
node-2 | 192.168.182.144 |
4.3 Configure static IP addresses
'Take the master as an example'
[root@master ~]# cd /etc/sysconfig/network-scripts/
[root@master network-scripts]# ls
ifcfg-ens33 ifdown-eth ifdown-post ifdown-Team ifup-aliases ifup-ipv6 ifu
ifcfg-lo ifdown-ippp ifdown-ppp ifdown-TeamPort ifup-bnep ifup-isdn ifu
ifdown ifdown-ipv6 ifdown-routes ifdown-tunnel ifup-eth ifup-plip ifu
ifdown-bnep ifdown-isdn ifdown-sit ifup ifup-ippp ifup-plusb ifu
[root@master network-scripts]# vim ifcfg-ens33
[root@master network-scripts]# cat ifcfg-ens33
BOOTPROTO="none"
DEFROUTE="yes"
NAME="ens33"
UUID="9c5e3120-2fcf-4124-b924-f2976d52512f"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.182.142
PREFIX=24
GATEWAY=192.168.182.2
DNS1=114.114.114.114
[root@master network-scripts]#
[root@master network-scripts]# service network restart
Restarting network (via systemctl): [ 確定 ]
[root@master network-scripts]# ping www.baidu.com
PING www.a.shifen.com (183.2.172.42) 56(84) bytes of data.
64 bytes from 183.2.172.42 (183.2.172.42): icmp_seq=1 ttl=128 time=18.1 ms
64 bytes from 183.2.172.42 (183.2.172.42): icmp_seq=2 ttl=128 time=17.7 ms
^C
--- www.a.shifen.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 17.724/17.956/18.188/0.232 ms
4.4 Change the hostnames
hostnamectl set-hostname master && bash
hostnamectl set-hostname node-1 && bash
hostnamectl set-hostname node-2 && bash
hostnamectl set-hostname nfs && bash
hostnamectl set-hostname firewalld && bash
hostnamectl set-hostname jump && bash
4.5 Deploy the k8s cluster
4.5.1 Disable the firewall and SELinux
[root@localhost ~]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@localhost ~]#
4.5.2 Upgrade the system
yum update -y
4.5.3 Configure the hosts file on every host so they can reach each other by hostname
'Add these three lines'
192.168.182.142 master
192.168.182.143 node-1
192.168.182.144 node-2
[root@master ~]# vim /etc/hosts
[root@master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.182.142 master
192.168.182.143 node-1
192.168.182.144 node-2
[root@master ~]#
4.5.4 Set up passwordless SSH between the master and the nodes
ssh-keygen
cd /root/.ssh/
ssh-copy-id -i id_rsa.pub root@node-1
ssh-copy-id -i id_rsa.pub root@node-2
4.5.5 Disable the swap partition to improve performance (on all three machines)
[root@master .ssh]# swapoff -a
To disable it permanently, comment out the swap mount in /etc/fstab by adding a # at the beginning of the swap line:
[root@master .ssh]# vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
4.5.6 Why disable the swap partition?
Swap is the swap partition: when a machine runs low on memory it falls back to swap, but swap is much slower than RAM. To keep performance predictable, k8s does not allow swap by default, and kubeadm checks during initialization whether swap is disabled; if it is not, initialization fails. If you do not want to disable swap, you can pass --ignore-preflight-errors=Swap when installing k8s.
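For example, a hedged sketch of how that flag would be passed (this walkthrough disables swap instead, so the flag is not actually used here):
# skip only the swap preflight check; all other preflight checks still run
kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=Swap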
4.5.7 Adjust kernel parameters (on all three machines)
[root@master .ssh]# modprobe br_netfilter
[root@master .ssh]# echo "modprobe br_netfilter" >> /etc/profile
[root@master .ssh]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@master .ssh]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@master .ssh]#
4.5.8 Configure the Aliyun repo sources (all three machines)
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm
Configure the Aliyun repo needed to install the k8s components:
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
4.5.9 Configure time synchronization (all three machines)
[root@master ~]# yum install ntpdate -y
[root@master ~]# ntpdate cn.pool.ntp.org
3 Mar 10:15:12 ntpdate[73056]: adjust time server 84.16.67.12 offset 0.007718 sec
[root@master ~]#
Add a cron job
[root@master ~]# crontab -e
no crontab for root - using an empty one
crontab: installing new crontab
[root@master ~]# crontab -l
* */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@master ~]#
[root@master ~]# service crond restart
Redirecting to /bin/systemctl restart crond.service
[root@master ~]#
4.5.10 Install the Docker service (all three machines)
4.5.11 Install the latest version of Docker
[root@master ~]# sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
[root@master ~]# systemctl start docker && systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
您在 /var/spool/mail/root 中有新郵件
[root@master ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@master ~]#
4.5.12 Configure registry mirrors
[root@master ~]# vim /etc/docker/daemon.json
{"registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],"exec-opts": ["native.cgroupdriver=systemd"]
} [root@master ~]#
[root@master ~]# systemctl daemon-reload
您在 /var/spool/mail/root 中有新郵件
[root@master ~]# systemctl restart docker
[root@master ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@master ~]#
4.5.13 Continue configuring Kubernetes
4.5.14 Install the packages needed to initialize k8s (all three machines)
Starting with k8s 1.24, docker is no longer used as the underlying container runtime; containerd is used instead.
[root@master ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
kubeadm: a tool used to initialize the k8s cluster
kubelet: installed on every node of the cluster; it starts the Pods
kubectl: used to deploy and manage applications, inspect resources, and create, delete and update components
[root@master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
您在 /var/spool/mail/root 中有新郵件
[root@master ~]#
4.5.15 Initialize the k8s cluster with kubeadm
Upload the offline image bundle needed to initialize the cluster to the master, node-1 and node-2 machines and unpack it manually.
Use xftp to upload it to root's home directory on the master,
then copy it to node-1 and node-2 with scp (the passwordless SSH channel was set up earlier).
[root@master ~]# scp k8simage-1-20-6.tar.gz root@node-1:/root
k8simage-1-20-6.tar.gz 100% 1033MB 129.0MB/s 00:08
[root@master ~]# scp k8simage-1-20-6.tar.gz root@node-2:/root
k8simage-1-20-6.tar.gz 100% 1033MB 141.8MB/s 00:07
[root@master ~]#
Import the images (all three machines)
[root@master ~]# docker load -i k8simage-1-20-6.tar.gz
Generate a yaml file (on the master)
[root@master ~]# kubeadm config print init-defaults > kubeadm.yaml
您在 /var/spool/mail/root 中有新郵件
[root@master ~]# ls
anaconda-ks.cfg k8simage-1-20-6.tar.gz kubeadm.yaml
[root@master ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
[root@master ~]#
'Modified contents'
[root@master ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.182.142
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
[root@master ~]#
4.5.16 Initialize k8s from the kubeadm.yaml file
[root@master ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
[root@master ~]# mkdir -p $HOME/.kube
您在 /var/spool/mail/root 中有新郵件
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]#
Next, run the join command on node-1 and node-2:
[root@node-1 ~]# kubeadm join 192.168.182.142:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f655c7887580b8aae5a4b510253c14c76615b0ccc2d8a84aa9759fd02d278f41
Check on the master whether it worked:
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master NotReady control-plane,master 8m22s v1.20.6
node-1 NotReady <none> 67s v1.20.6
node-2 NotReady <none> 61s v1.20.6
[root@master ~]#
4.5.17 Change the node role to worker
[root@master ~]# kubectl label node node-1 node-role.kubernetes.io/worker=worker
node/node-1 labeled
[root@master ~]# kubectl label node node-2 node-role.kubernetes.io/worker=worker
node/node-2 labeled
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master NotReady control-plane,master 15m v1.20.6
node-1 NotReady worker 8m12s v1.20.6
node-2 NotReady worker 8m6s v1.20.6
[root@master ~]#
4.5.18 Install the network plugin
First upload calico.yaml to /root/ with xftp.
[root@master ~]# kubectl apply -f calico.yaml
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 4h27m v1.20.6
node-1 Ready worker 4h25m v1.20.6
node-2 Ready worker 4h25m v1.20.6
[root@master ~]#
STATUS的狀態(tài)變?yōu)?#xff1a;Ready —>成功了
4.5.19 Install kubectl top node
First install metrics-server, which provides CPU and memory usage for pods and nodes.
- Download the metrics-server yaml bundle, upload it to the VM and unpack it:
[root@master pod]# unzip metrics-server.zip
[root@master pod]#
- Enter the metrics-server folder and copy the image tar to the node machines:
[root@master metrics-server]# ls
components.yaml  metrics-server-v0.6.3.tar
[root@master metrics-server]# scp metrics-server-v0.6.3.tar node-1:/root
metrics-server-v0.6.3.tar                     100%   67MB 150.8MB/s   00:00
[root@master metrics-server]# scp metrics-server-v0.6.3.tar node-2:/root
metrics-server-v0.6.3.tar                     100%   67MB 151.7MB/s   00:00
[root@master metrics-server]#
- Import the image on all three machines:
[root@node-1 ~]# docker load -i metrics-server-v0.6.3.tar
- Start the metrics-server pod:
[root@master metrics-server]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@master metrics-server]#
[root@master metrics-server]# kubectl get pod -n kube-system|grep metrics
metrics-server-769f6c8464-ctxl7           1/1     Running   0          49s
[root@master metrics-server]#
- Check that it works:
[root@master metrics-server]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   118m         5%     1180Mi          68%
node-1   128m         6%     985Mi           57%
node-2   60m          3%     634Mi           36%
[root@master metrics-server]#
4.5.20 Let the node machines also run kubectl get node
'On the master, copy the admin config to the nodes'
[root@master ~]# scp /etc/kubernetes/admin.conf node-1:/root
admin.conf 100% 5567 5.4MB/s 00:00
[root@master ~]# scp /etc/kubernetes/admin.conf node-2:/root
admin.conf 100% 5567 7.4MB/s 00:00
[root@master ~]#
'Then run the following on each node'
mkdir -p $HOME/.kube
sudo cp -i /root/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@node-1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 28m v1.20.6
node-1 Ready worker 27m v1.20.6
node-2 Ready worker 27m v1.20.6
[root@node-1 ~]#
5. Build the in-cluster pieces first
5.1 Set up the NFS server to provide website data for the web service, and create the related PV and PVC
'Install it on every machine'
'It is recommended to install nfs-utils on all nodes of the k8s cluster, because NFS-backed volumes created on the node servers need NFS support. Install nfs-utils on node-1 and node-2 as well; the nfs service does not need to run there, the nodes only need to mount the folder shared by the NFS server.'
yum install nfs-utils -y
The nfs service only needs to be started on the NFS server:
[root@nfs ~]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service
[root@nfs ~]#
The firewall and SELinux on the NFS server are disabled.
5.1.1 Configure the shared directory
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/web 192.168.182.0/24(rw,sync,all_squash)
[root@nfs ~]#
5.1.2 Create the shared directory
[root@nfs ~]# mkdir /web
[root@nfs ~]# cd /web/
[root@nfs web]# echo "welcome to my-web" >index.html
[root@nfs web]# cat index.html
welcome to my-web
[root@nfs web]#
'Set permissions on /web so that other hosts can read and write'
[root@nfs web]# chmod 777 /web
[root@nfs web]# chown nfsnobody:nfsnobody /web
[root@nfs web]# ll -d /web
drwxrwxrwx. 2 nfsnobody nfsnobody 24 3月 27 18:21 /web
[root@nfs web]#
5.1.3 Refresh NFS or re-export the shared directories
exportfs -a    export all shared directories
exportfs -v    show the exported directories
exportfs -r    re-export all shared directories
[root@nfs web]# exportfs -rv
exporting 192.168.182.0/24:/web
[root@nfs web]#
'Or run'
[root@nfs web]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service
[root@nfs web]#
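To double-check from a k8s node that the export is actually visible (a hedged verification step; showmount ships with the nfs-utils package installed above):
[root@master ~]# showmount -e 192.168.182.140
The output should list /web 192.168.182.0/24.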
5.1.4 Create a PV that uses the directory shared by the NFS server
[root@master storage]# vim nfs-pv.yaml
[root@master storage]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sc-nginx-pv-2
  labels:
    type: sc-nginx-pv-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs          # name of the storage class
  nfs:
    path: "/web"                 # directory shared by the nfs server
    server: 192.168.182.140      # ip address of the nfs server
    readOnly: false              # access mode
[root@master storage]#
5.1.5 Apply it
[root@master storage]# kubectl apply -f nfs-pv.yaml
persistentvolume/sc-nginx-pv-2 created
[root@master storage]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
sc-nginx-pv-2 5Gi RWX Retain Bound default/sc-nginx-pvc-2 nfs 5s
task-pv-volume 10Gi RWO Retain Bound default/task-pv-claim manual 6h17m
[root@master storage]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
sc-nginx-pvc-2 Bound sc-nginx-pv-2 5Gi RWX nfs 9m19s
task-pv-claim Bound task-pv-volume 10Gi RWO manual 5h58m
[root@master storage]#
5.1.6 Create a PVC that uses the storage class: nfs
[root@master storage]# vim pvc-sc.yaml
[root@master storage]# cat pvc-sc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-nginx-pvc-2
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
[root@master storage]#
[root@master storage]# kubectl apply -f pvc-sc.yaml
persistentvolumeclaim/sc-nginx-pvc-2 created
[root@master storage]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
sc-nginx-pvc-2 Pending nfs 8s
task-pv-claim Bound task-pv-volume 10Gi RWO manual 5h49m
[root@master storage]#
5.1.7 Create a pod that uses the PVC
[root@master storage]# vim pod-nfs.yaml
[root@master storage]# cat pod-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sc-pv-pod-nfs
spec:
  volumes:
  - name: sc-pv-storage-nfs
    persistentVolumeClaim:
      claimName: sc-nginx-pvc-2
  containers:
  - name: sc-pv-container-nfs
    image: nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: sc-pv-storage-nfs
[root@master storage]#
Apply it:
[root@master storage]# kubectl apply -f pod-nfs.yaml
pod/sc-pv-pod-nfs created
[root@master storage]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
sc-pv-pod-nfs 1/1 Running 0 63s 10.244.84.130 node-1 <none> <none>
您在 /var/spool/mail/root 中有新郵件
[root@master storage]#
[root@master storage]#
5.1.8 Test
[root@master storage]# curl 10.244.84.130
welcome to my-web
[root@master storage]#
Modify index.html on the NFS server and check the result from the master:
[root@nfs web]# vim index.html
welcome to my-web
welcome to changsha
[root@nfs-server web]#
[root@master storage]# curl 10.244.84.130
welcome to my-web
welcome to changsha
您在 /var/spool/mail/root 中有新郵件
[root@master storage]#
5.2 Pull the Go application image from the Harbor registry
5.2.1 Build the Go code into an image
[root@docker ~]# mkdir /go
[root@docker ~]# cd /go
[root@docker go]# ls
[root@docker go]# vim server.go
[root@docker go]# cat server.go
package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    r := gin.Default()
    r.GET("/", func(c *gin.Context) {
        c.JSON(http.StatusOK, gin.H{
            "message": "Halou, gaohui 2024 Fighting!",
        })
    })
    r.Run()
}
[root@master go]#
[root@docker go]# go mod init web
go: creating new go.mod: module web
go: to add module requirements and sums:
	go mod tidy
[root@docker go]# go env -w GOPROXY=https://goproxy.cn,direct
[root@docker go]#
[root@docker go]# go mod tidy
[root@docker go]# go run server.go
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)
[GIN-debug] GET    /    --> main.main.func1 (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Environment variable PORT is undefined. Using port :8080 by default
[GIN-debug] Listening and serving HTTP on :8080
[GIN] 2024/02/01 - 19:05:04 | 200 |     137.998μs |   192.168.153.1 | GET      "/"
'Now compile server.go into a binary (for testing)'
[root@docker go]# go build -o ghweb .
[root@docker go]# ls
ghweb go.mod go.sum server.go
[root@docker go]# ./ghweb
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)
[GIN-debug] GET    /    --> main.main.func1 (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Environment variable PORT is undefined. Using port :8080 by default
[GIN-debug] Listening and serving HTTP on :8080
^C
[root@docker go]#
'Write the Dockerfile'
[root@docker go]# vim Dockerfile
FROM centos:7
WORKDIR /go
COPY . /go
RUN ls /go && pwd
ENTRYPOINT ["/go/ghweb"]
[root@docker go]#
Build the image:
[root@docker go]# docker build -t ghweb:1.0 .
[+] Building 29.2s (9/9) FINISHED docker:default=> [internal] load build definition from Dockerfile 0.0s=> => transferring dockerfile: 117B 0.0s=> [internal] load metadata for docker.io/library/centos:7 21.0s=> [internal] load .dockerignore 0.0s=> => transferring context: 2B 0.0s=> [1/4] FROM docker.io/library/centos:7@sha256:9d4bcbbb213dfd745b58be38b13b99 7.8s=> => resolve docker.io/library/centos:7@sha256:9d4bcbbb213dfd745b58be38b13b99 0.0s=> => sha256:9d4bcbbb213dfd745b58be38b13b996ebb5ac315fe75711bd 1.20kB / 1.20kB 0.0s=> => sha256:dead07b4d8ed7e29e98de0f4504d87e8880d4347859d839686a31 529B / 529B 0.0s=> => sha256:eeb6ee3f44bd0b5103bb561b4c16bcb82328cfe5809ab675b 2.75kB / 2.75kB 0.0s=> => sha256:2d473b07cdd5f0912cd6f1a703352c82b512407db6b05b4 76.10MB / 76.10MB 4.2s=> => extracting sha256:2d473b07cdd5f0912cd6f1a703352c82b512407db6b05b43f25537 3.4s=> [internal] load build context 0.0s=> => transferring context: 11.61MB 0.0s=> [2/4] WORKDIR /go 0.1s=> [3/4] COPY . /go 0.1s=> [4/4] RUN ls /go && pwd 0.3s=> exporting to image 0.0s=> => exporting layers 0.0s=> => writing image sha256:59a5509da737328cc0dbe6c91a33409b7cdc5e5eeb8a46efa7d 0.0s=> => naming to docker.io/library/ghweb:1.0 0.0s
[root@master go]#
'Check the image'
[root@docker go]# docker images|grep ghweb
ghweb 1.0 458531408d3b 11 seconds ago 216MB
[root@docker go]#
5.2.2 Push it to the Harbor registry
https://github.com/goharbor/harbor/releases/tag/v2.1.0   First download version 2.1.0 from the official releases page.
'Create a harbor folder and unpack the archive there'
[root@nfs ~]# mkdir /harbor
[root@nfs ~]# cd /harbor/
[root@nfs harbor]#
[root@nfs harbor]# tar xf harbor-offline-installer-v2.1.0.tgz
[root@nfs harbor]# ls
docker-compose harbor harbor-offline-installer-v2.1.0.tgz
[root@nfs harbor]#
[root@nfs harbor]# cd harbor
[root@nfs harbor]# ls
common.sh harbor.v2.1.0.tar.gz harbor.yml.tmpl install.sh LICENSE prepare
[root@nfs harbor]# cp harbor.yml.tmpl harbor.yml
'Edit harbor.yml: set the hostname and port, and comment out the https section (to keep things simple)'
[root@nfs harbor]# vim harbor.yml
Then copy the docker-compose binary into the current directory and make it executable:
[root@nfs harbor]# cp ../docker-compose .
[root@nfs harbor]# ls
common.sh docker-compose harbor.v2.1.0.tar.gz harbor.yml harbor.yml.tmpl install.sh LICENSE prepare
[root@nfs harbor]# chmod +x docker-compose
[root@nfs harbor]# cp docker-compose /usr/bin/
[root@nfs harbor]# ./install.sh
docker-compose is copied to /usr/bin so that it can be found via the PATH environment variable.
'Check whether it succeeded'
[root@nfs harbor]# docker compose ls
NAME STATUS CONFIG FILES
harbor running(9) /harbor/harbor/docker-compose.yml
您在 /var/spool/mail/root 中有新郵件
[root@nfs harbor]#
First create a project in the Harbor web UI,
then create a user:
account: gh
password: Sc123456
Then add the user as a member of the project and log in as gh.
Push the image to the registry:
[root@nfs-server docker]# docker login 192.168.182.140:8089
Username: gh
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
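The docker images listing below already shows the image with the registry prefix 192.168.182.140:8089/gao/ghweb; the transcript does not show that step, but presumably it was created with docker tag first, e.g.:
# give the local image a registry-prefixed tag so it can be pushed to Harbor
docker tag ghweb:1.0 192.168.182.140:8089/gao/ghweb:1.0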
[root@nfs-server docker]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.182.140:8089/gao/ghweb 1.0 458531408d3b 12 minutes ago 216MB
ghweb 1.0 458531408d3b 12 minutes ago 216MB
registry.cn-beijing.aliyuncs.com/google_registry/hpa-example latest 4ca4c13a6d7c 8 years ago 481MB
[root@nfs-server docker]# docker push 192.168.182.140:8089/gao/ghweb:1.0
The push refers to repository [192.168.182.140:8089/gao/ghweb]
aed658a8d439: Pushed
3e7a541e1360: Pushed
a72a96e845e5: Pushed
174f56854903: Pushed
1.0: digest: sha256:53ad51fdfd846e8494c547609d2f45331150d2da5081c2f7867affdc65c55cfd size: 1153
[root@nfs-server docker]#
Pull the ghweb image on the node machines
'Log every node of the k8s cluster into Harbor so that images can be pulled from it.'
[root@master ~]# vim /etc/docker/daemon.json
您在 /var/spool/mail/root 中有新郵件
[root@master ~]# cat /etc/docker/daemon.json
{"registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],"insecure-registries":["192.168.182.140:8089"],"exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master ~]#
'Reload the configuration and restart docker'
[root@master ~]# systemctl daemon-reload && systemctl restart docker
'Log in to harbor'
[root@master ~]# docker login 192.168.182.140:8089
Username: gh
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
您在 /var/spool/mail/root 中有新郵件
[root@master ~]#
'Pull the image from harbor'
[root@master ~]# docker pull 192.168.182.140:8089/gao/ghweb:1.0
1.0: Pulling from gao/ghweb
2d473b07cdd5: Pull complete
deb4bb5a3691: Pull complete
880231ee488c: Pull complete
ec220df6aef4: Pull complete
Digest: sha256:53ad51fdfd846e8494c547609d2f45331150d2da5081c2f7867affdc65c55cfd
Status: Downloaded newer image for 192.168.182.140:8089/gao/ghweb:1.0
192.168.182.140:8089/gao/ghweb:1.0
[root@master ~]# docker images|grep ghweb
192.168.182.140:8089/gao/ghweb 1.0 458531408d3b 20 minutes ago 216MB
[root@master ~]#
5.3 Deploy the web pods with HPA enabled: when CPU usage reaches 50%, scale horizontally between a minimum of 10 and a maximum of 20 business pods.
[root@master ~]# mkdir /hpa
您在 /var/spool/mail/root 中有新郵件
[root@master ~]# cd /hpa
[root@master hpa]# vim my-web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.182.140:8089/gao/ghweb:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8089
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8089
    protocol: TCP
    targetPort: 8089
    nodePort: 30001
[root@master hpa]#
[root@master hpa]# kubectl apply -f my-web.yaml
deployment.apps/myweb created
service/myweb-svc created
[root@master hpa]#
Create the HPA
[root@master hpa]# kubectl autoscale deployment myweb --cpu-percent=50 --min=10 --max=20
horizontalpodautoscaler.autoscaling/myweb autoscaled
您在 /var/spool/mail/root 中有新郵件
[root@master hpa]#
[root@master hpa]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myweb-7558d9fbc4-869f5 1/1 Running 0 12s
myweb-7558d9fbc4-c5wdr 1/1 Running 0 12s
myweb-7558d9fbc4-dgdbs 1/1 Running 0 82s
myweb-7558d9fbc4-hmt62 1/1 Running 0 12s
myweb-7558d9fbc4-r84bc 1/1 Running 0 12s
myweb-7558d9fbc4-rld88 1/1 Running 0 82s
myweb-7558d9fbc4-s82vh 1/1 Running 0 82s
myweb-7558d9fbc4-sn5dp 1/1 Running 0 12s
myweb-7558d9fbc4-t9pvl 1/1 Running 0 12s
myweb-7558d9fbc4-vzlnb 1/1 Running 0 12s
sc-pv-pod-nfs 1/1 Running 1 7h27m
[root@master hpa]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myweb Deployment/myweb 0%/50% 10 20 10 33s
[root@master hpa]#
Access port 30001 on a node.
5.5 Use probes (liveness, readiness, startup) with httpGet/exec to monitor the web pods and restart them as soon as problems appear, improving pod reliability.
[root@master hpa]# vim my-web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.182.140:8089/gao/ghweb:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
        livenessProbe:
          exec:
            command:
            - ls
            - /
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          exec:
            command:
            - ls
            - /
          initialDelaySeconds: 5
          periodSeconds: 5
        startupProbe:
          exec:
            command:
            - ls
            - /
          failureThreshold: 3
          periodSeconds: 10
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Container started"]
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    nodePort: 30001
[root@master hpa]#
[root@master hpa]# kubectl apply -f my-web.yaml
deployment.apps/myweb configured
service/myweb-svc unchanged
[root@master hpa]#
    Liveness:   exec [ls /] delay=5s timeout=1s period=5s #success=1 #failure=3
    Readiness:  exec [ls /] delay=5s timeout=1s period=5s #success=1 #failure=3
    Startup:    exec [ls /] delay=0s timeout=1s period=10s #success=1 #failure=3
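The section title also mentions httpGet probes, while the manifest above only uses exec. A minimal hedged sketch of an equivalent httpGet liveness probe, assuming the Go app answers on / at port 8080 as shown in section 5.2.1:
        livenessProbe:
          httpGet:
            path: /          # the gin handler registered on "/"
            port: 8080       # gin's default listen port
          initialDelaySeconds: 5
          periodSeconds: 5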
5.6 Deploy the ingress controller and ingress rules to give the web service domain-based load balancing
'The ingress controller is essentially an nginx instance used for load balancing.'
'An ingress is the k8s object that manages the nginx configuration (nginx.conf) inside the controller; it passes the routing rules to the ingress controller.'
[root@master ingress]# ls
ingress-controller-deploy.yaml nfs-pvc.yaml sc-ingress.yaml
ingress_nginx_controller.tar nfs-pv.yaml sc-nginx-svc-1.yaml
kube-webhook-certgen-v1.1.0.tar.gz nginx-deployment-nginx-svc-2.yaml
[root@master ingress]#
- ingress-controller-deploy.yaml: yaml used to deploy the ingress controller
- ingress_nginx_controller.tar: the ingress-nginx-controller image
- kube-webhook-certgen-v1.1.0.tar.gz: the kube-webhook-certgen image
- sc-ingress.yaml: configuration file that creates the ingress
- sc-nginx-svc-1.yaml: yaml that starts the sc-nginx-svc service and its pods
- nginx-deployment-nginx-svc-2.yaml: yaml that starts the sc-nginx-svc-2 service and its pods
[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz node-2:/root
kube-webhook-certgen-v1.1.0.tar.gz 100% 47MB 123.6MB/s 00:00
[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz node-1:/root
kube-webhook-certgen-v1.1.0.tar.gz 100% 47MB 144.4MB/s 00:00
[root@master ingress]# scp ingress_nginx_controller.tar node-1:/root
ingress_nginx_controller.tar 100% 276MB 129.2MB/s 00:02
[root@master ingress]# scp ingress_nginx_controller.tar node-2:/root
ingress_nginx_controller.tar 100% 276MB 129.8MB/s 00:02
[root@master ingress]# docker load -i ingress_nginx_controller.tar
e2eb06d8af82: Loading layer 5.865MB/5.865MB
ab1476f3fdd9: Loading layer 120.9MB/120.9MB
ad20729656ef: Loading layer 4.096kB/4.096kB
0d5022138006: Loading layer 38.09MB/38.09MB
8f757e3fe5e4: Loading layer 21.42MB/21.42MB
a933df9f49bb: Loading layer 3.411MB/3.411MB
7ce1915c5c10: Loading layer 309.8kB/309.8kB
986ee27cd832: Loading layer 6.141MB/6.141MB
b94180ef4d62: Loading layer 38.37MB/38.37MB
d36a04670af2: Loading layer 2.754MB/2.754MB
2fc9eef73951: Loading layer 4.096kB/4.096kB
1442cff66b8e: Loading layer 51.67MB/51.67MB
1da3c77c05ac: Loading layer 3.584kB/3.584kB
Loaded image: registry.cn-hangzhou.aliyuncs.com/yutao517/ingress_nginx_controller:v1.1.0
您在 /var/spool/mail/root 中有新郵件
[root@master ingress]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
c0d270ab7e0d: Loading layer 3.697MB/3.697MB
ce7a3c1169b6: Loading layer 45.38MB/45.38MB
Loaded image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
[root@master ingress]#
'Start the ingress controller with the ingress-controller-deploy.yaml file'
[root@master ingress]# kubectl apply -f ingress-controller-deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
您在 /var/spool/mail/root 中有新郵件
[root@master ingress]#
[root@master ingress]# kubectl get ns
NAME STATUS AGE
default Active 46h
ingress-nginx Active 26s
kube-node-lease Active 46h
kube-public Active 46h
kube-system Active 46h
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.99.32.216 <none> 80:32351/TCP,443:32209/TCP 52s
ingress-nginx-controller-admission ClusterIP 10.108.207.217 <none> 443/TCP 52s
[root@master ingress]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-6wrfc 0/1 Completed 0 59s
ingress-nginx-admission-patch-z4hwb 0/1 Completed 1 59s
ingress-nginx-controller-589dccc958-9cbht 1/1 Running 0 59s
ingress-nginx-controller-589dccc958-r79rt 1/1 Running 0 59s
[root@master ingress]#
Next: create the pods and the service that exposes them
[root@master ingress]# cat sc-nginx-svc-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sc-nginx-deploy
  labels:
    app: sc-nginx-feng
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng
  template:
    metadata:
      labels:
        app: sc-nginx-feng
    spec:
      containers:
      - name: sc-nginx-feng
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sc-nginx-svc
  labels:
    app: sc-nginx-svc
spec:
  selector:
    app: sc-nginx-feng
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80
[root@master ingress]#
[root@master ingress]# kubectl apply -f sc-nginx-svc-1.yaml
deployment.apps/sc-nginx-deploy created
service/sc-nginx-svc created
[root@master ingress]#
[root@master ingress]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
sc-nginx-deploy 3/3 3 3 8m7s
[root@master ingress]#
[root@master ingress]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 47h
myweb-svc NodePort 10.98.10.240 <none> 8080:30001/TCP 22h
sc-nginx-svc ClusterIP 10.111.4.156 <none> 80/TCP 9m27s
[root@master ingress]#
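The contents of sc-ingress.yaml are not shown in the capture. A minimal hedged sketch of what such a rule could look like, assuming the hosts www.feng.com and www.zhang.com route to sc-nginx-svc and sc-nginx-svc-2 on port 80 (names taken from the outputs below):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sc-ingress
spec:
  ingressClassName: nginx          # the IngressClass created by the controller deploy yaml
  rules:
  - host: www.feng.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc
            port:
              number: 80
  - host: www.zhang.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-2
            port:
              number: 80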
[root@master ingress]# kubectl apply -f sc-ingress.yaml
ingress.networking.k8s.io/sc-ingress created
[root@master ingress]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
sc-ingress nginx www.feng.com,www.zhang.com 80 8s
[root@master ingress]#
[root@master ingress]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
sc-ingress nginx www.feng.com,www.zhang.com 192.168.182.143,192.168.182.144 80 27s
您在 /var/spool/mail/root 中有新郵件
[root@master ingress]#
'Check whether the nginx.conf inside the ingress controller contains the rules from the ingress'
[root@master ingress]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-6wrfc 0/1 Completed 0 68m
ingress-nginx-admission-patch-z4hwb 0/1 Completed 1 68m
ingress-nginx-controller-589dccc958-9cbht 1/1 Running 0 68m
ingress-nginx-controller-589dccc958-r79rt 1/1 Running 0 68m
您在 /var/spool/mail/root 中有新郵件
[root@master ingress]#
[root@master ingress]# kubectl exec -it ingress-nginx-controller-589dccc958-9cbht -n ingress-nginx -- bash
bash-5.1$ cat nginx.conf|grep zhang.com
	## start server www.zhang.com
		server_name www.zhang.com ;
	## end server www.zhang.com
bash-5.1$
Access the service by domain name from another host (the nfs server) or from a Windows machine.
'First add the hosts entries'
[root@nfs ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.182.143 www.feng.com
192.168.182.144 www.zhang.com
[root@nfs-server ~]#
[root@nfs etc]# curl www.feng.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@nfs etc]#
'Start the two services and pods, backed by PV + PVC + NFS'
[root@master ingress]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sc-nginx-pv
  labels:
    type: sc-nginx-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: "/web"                 # directory shared by the nfs server
    server: 192.168.182.140      # ip address of the nfs server
    readOnly: false
[root@master ingress]#
[root@master ingress]# kubectl apply -f nfs-pv.yaml
persistentvolume/sc-nginx-pv created
[root@master ingress]# cat nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-nginx-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs          # use the nfs storage class
[root@master ingress]# kubectl apply -f nfs-pvc.yaml
persistentvolumeclaim/sc-nginx-pvc created
[root@master ingress]#
[root@master ingress]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/sc-nginx-pv 10Gi RWX Retain Bound default/sc-nginx-pvc nfs 31s
persistentvolume/sc-nginx-pv-2   5Gi        RWX            Retain           Bound    default/sc-nginx-pvc-2   nfs                     31h
NAME                                 STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/sc-nginx-pvc Bound sc-nginx-pv 10Gi RWX nfs 28s
persistentvolumeclaim/sc-nginx-pvc-2 Bound sc-nginx-pv-2 5Gi RWX nfs 31h
[root@master ingress]#
Start the second pod and the second service (a hedged sketch of the yaml follows).
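The file nginx-deployment-nginx-svc-2.yaml is not shown in the capture. A minimal hedged sketch of what it could contain, assuming an nginx Deployment named nginx-deployment that mounts the sc-nginx-pvc claim (which would explain why www.zhang.com serves the NFS index page below) plus the sc-nginx-svc-2 Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: sc-nginx-feng-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng-2
  template:
    metadata:
      labels:
        app: sc-nginx-feng-2
    spec:
      volumes:
      - name: sc-pv-storage-nfs
        persistentVolumeClaim:
          claimName: sc-nginx-pvc
      containers:
      - name: sc-nginx-feng-2
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: sc-pv-storage-nfs
---
apiVersion: v1
kind: Service
metadata:
  name: sc-nginx-svc-2
  labels:
    app: sc-nginx-svc-2
spec:
  selector:
    app: sc-nginx-feng-2
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80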
[root@master ingress]# kubectl apply -f nginx-deployment-nginx-svc-2.yaml
deployment.apps/nginx-deployment created
service/sc-nginx-svc-2 created
[root@master ingress]#
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.99.32.216 <none> 80:32351/TCP,443:32209/TCP 81m
ingress-nginx-controller-admission ClusterIP 10.108.207.217 <none> 443/TCP 81m
[root@master ingress]#
Either the NodePort 32351 exposed on the hosts or port 80 via the domain names works.
[root@nfs ~]# curl www.zhang.com
welcome to my-web
welcome to changsha
Halou-gh
[root@nfs ~]#
[root@nfs ~]# curl www.feng.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@nfs ~]#
5.7 Deploy and access the Kubernetes Dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
The dashboard version used is v2.7.0.
'Download the yaml file'
recommended.yaml
'Edit the configuration file and set the service type to NodePort'
[root@master dashboard]# vim recommended.yaml
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # specify the type
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30088     # specify the host port
  selector:
    k8s-app: kubernetes-dashboard
---
Leave the rest of the configuration unchanged, apply it, and start the dashboard instances.
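One optional tweak (mentioned again in the project takeaways): the login token expires after 15 minutes by default. If that is too short, the kubernetes-dashboard container args in recommended.yaml can presumably be extended with --token-ttl (value in seconds, 0 disables the timeout); a hedged sketch of that part of the Deployment:
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --token-ttl=43200     # assumption: 12 hours instead of the default 900 seconds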
'Start the dashboard'
[root@master dashboard]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@master dashboard]#
'Check whether the dashboard pods are running'
[root@master dashboard]# kubectl get pod --all-namespaces|grep dashboard
kubernetes-dashboard dashboard-metrics-scraper-66dd8bdd86-nvwsj 1/1 Running 0 2m20s
kubernetes-dashboard kubernetes-dashboard-785c75749d-nqsm7 1/1 Running 0 2m20s
[root@master dashboard]#
Check whether the service is running
[root@master dashboard]# kubectl get svc --all-namespaces|grep dash
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.103.48.55 <none> 8000/TCP 3m45s
kubernetes-dashboard kubernetes-dashboard NodePort 10.96.62.20 <none> 443:30088/TCP 3m45s
您在 /var/spool/mail/root 中有新郵件
[root@master dashboard]#
In a browser, access port 30088 over https:
https://192.168.182.142:30088/
A login page appears and asks for a token.
Get the name of the dashboard secret:
kubectl get secret -n kubernetes-dashboard|grep dashboard-token
[root@master dashboard]# kubectl get secret -n kubernetes-dashboard|grep dashboard-token
kubernetes-dashboard-token-9hsh5 kubernetes.io/service-account-token 3 6m58s
您在 /var/spool/mail/root 中有新郵件
[root@master dashboard]#
kubectl describe secret kubernetes-dashboard-token-9hsh5 -n kubernetes-dashboard
[root@master dashboard]# kubectl describe secret kubernetes-dashboard-token-9hsh5 -n kubernetes-dashboard
Name: kubernetes-dashboard-token-9hsh5
Namespace: kubernetes-dashboard
Labels: <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: d05961ce-a39b-4445-bc1b-643439b59f41
Type:  kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkRNdlRFVE9XeDFPdU95Q3FEcEtYUXJHZ0dvcnJPdlBUdEp3MEVtSzF5MHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi05aHNoNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImQwNTk2MWNlLWEzOWItNDQ0NS1iYzFiLTY0MzQzOWI1OWY0MSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.sbnbgil-sHV71WF1K4nOKTQKOOXNIam-NbUTFqfCdx6lBNN3IVnQFiISdXsjmDELi3q6kmVfpw000KPdavZ307Em2cGLI2F7aOy281dafcelZzIBjdMhw5KHrlzc0JkbL-jQfDvgk7t6T5zABqKfC8LsdButSsMviw8N0eFC5Iz9gSlxDieZDzzPCXVXUnCBWmAxcpOhUfJn81HyoFk6deVK71lwR5zm_KnbjCoTQAYbaCXfoB8fjn3-cyVFMtHbt0rU3mPyV5kYJEuH4WlGGYYMxQfrm0I8elQbyyENKtlI0DK_15Am_wp0I1Gw81eLg53h67FFQrSKHe9QxPx6Cw
[root@master dashboard]#
After logging in, the dashboard cannot show any resources because the account has no permissions; RBAC authorization is needed.
Grant the kubernetes-dashboard service account cluster-admin so that namespace resources can be found:
kubectl create clusterrolebinding serviceaccount-cluster-admin --clusterrole=cluster-admin --user=system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard
[root@master dashboard]# kubectl create clusterrolebinding serviceaccount-cluster-admin --clusterrole=cluster-admin --user=system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/serviceaccount-cluster-admin created
您在 /var/spool/mail/root 中有新郵件
[root@master dashboard]#
Refresh the page and the resources appear.
To delete the role binding later:
[root@master ~]#kubectl delete clusterrolebinding serviceaccount-cluster-admin
5.8 Load-test the web service in the k8s cluster with ab
Load-testing tool: ab
ab ships with Apache; it stress-tests a URL via the ab command and its options.
ab is best run on Linux.
Main ab options:
The two options used most are -n and -c.
The other options can be listed with ab -h.
The command format is: ab -n10 -c10 URL
'Write the yaml file'
[root@master hpa]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ab-nginx
spec:
  selector:
    matchLabels:
      run: ab-nginx
  template:
    metadata:
      labels:
        run: ab-nginx
    spec:
      nodeName: node-2
      containers:
      - name: ab-nginx
        image: 192.168.182.140:8089/gao/ghweb:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 50m
---
apiVersion: v1
kind: Service
metadata:
  name: ab-nginx-svc
  labels:
    run: ab-nginx-svc
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31000
  selector:
    run: ab-nginx
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ab-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ab-nginx
  minReplicas: 5
  maxReplicas: 20
  targetCPUUtilizationPercentage: 50
[root@master hpa]#
Start the nginx deployment with HPA enabled
[root@master hpa]# kubectl apply -f nginx.yaml
deployment.apps/ab-nginx unchanged
service/ab-nginx-svc created
horizontalpodautoscaler.autoscaling/ab-nginx created
[root@master hpa]#
[root@master hpa]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
ab-nginx 5/5 5 5 30s
myweb 3/3 3 3 31m
nginx-deployment 3/3 3 3 44m
sc-nginx-deploy    3/3     3            3           70m
[root@master hpa]# kubectl get pod
NAME READY STATUS RESTARTS AGE
ab-nginx-6d7db4b69f-2j5dz 1/1 Running 0 27s
ab-nginx-6d7db4b69f-6dwcq 1/1 Running 0 27s
ab-nginx-6d7db4b69f-7wkkd 1/1 Running 0 27s
ab-nginx-6d7db4b69f-8mjp6 1/1 Running 0 27s
ab-nginx-6d7db4b69f-gfmsq 1/1 Running 0 43s
myweb-69786769dc-jhsf8 1/1 Running 0 31m
myweb-69786769dc-kfjgk 1/1 Running 0 31m
myweb-69786769dc-msxrf 1/1 Running 0 31m
nginx-deployment-6c685f999-dkkfg 1/1 Running 0 44m
nginx-deployment-6c685f999-khjsp 1/1 Running 0 44m
nginx-deployment-6c685f999-svcvz 1/1 Running 0 44m
sc-nginx-deploy-7bb895f9f5-pmbcd 1/1 Running 0 70m
sc-nginx-deploy-7bb895f9f5-wf55g 1/1 Running 0 70m
sc-nginx-deploy-7bb895f9f5-zbjr9 1/1 Running 0 70m
sc-pv-pod-nfs 1/1 Running 1 31h
[root@master hpa]#
[root@master hpa]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
ab-nginx Deployment/ab-nginx 0%/50% 5 20 5 84s
您在 /var/spool/mail/root 中有新郵件
[root@master hpa]#
Access port 31000.
Install the ab load-testing tool on the nfs machine (not inside the cluster):
[root@nfs ~]# yum install httpd-tools -y
'Keep watching the hpa on the master'
[root@master hpa]# kubectl get hpa --watch
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
ab-nginx Deployment/ab-nginx 0%/50% 5 20 5 5m2s
'Open another session on the master to watch the pods change'
[root@master hpa]# kubectl get pod --watch
NAME READY STATUS RESTARTS AGE
ab-nginx-6d7db4b69f-2j5dz 1/1 Running 0 5m13s
ab-nginx-6d7db4b69f-6dwcq 1/1 Running 0 5m13s
ab-nginx-6d7db4b69f-7wkkd 1/1 Running 0 5m13s
ab-nginx-6d7db4b69f-8mjp6 1/1 Running 0 5m13s
ab-nginx-6d7db4b69f-gfmsq 1/1 Running 0 5m29s
myweb-69786769dc-jhsf8 1/1 Running 0 36m
myweb-69786769dc-kfjgk 1/1 Running 0 36m
myweb-69786769dc-msxrf 1/1 Running 0 36m
nginx-deployment-6c685f999-dkkfg 1/1 Running 0 49m
nginx-deployment-6c685f999-khjsp 1/1 Running 0 49m
nginx-deployment-6c685f999-svcvz 1/1 Running 0 49m
sc-nginx-deploy-7bb895f9f5-pmbcd 1/1 Running 0 75m
sc-nginx-deploy-7bb895f9f5-wf55g 1/1 Running 0 75m
sc-nginx-deploy-7bb895f9f5-zbjr9 1/1 Running 0 75m
sc-pv-pod-nfs 1/1 Running 1 31h
'Start the test'
ab -n1000 -c50 http://192.168.182.142:31000/
'The replica count keeps increasing'
ab-nginx Deployment/ab-nginx 60%/50% 5 20 5 7m54s
ab-nginx Deployment/ab-nginx 86%/50% 5 20 6 8m9s
ab-nginx Deployment/ab-nginx 83%/50% 5 20 9 8m25s
ab-nginx Deployment/ab-nginx 69%/50% 5 20 9 8m40s
ab-nginx Deployment/ab-nginx 55%/50% 5 20 9 8m56s
ab-nginx Deployment/ab-nginx 55%/50% 5 20 10 9m11s
ab-nginx Deployment/ab-nginx 14%/50% 5 20 10 9m41s
'New pods are created once usage exceeds 50%'
ab-nginx-6d7db4b69f-zdv2h 0/1 ContainerCreating 0 0s
ab-nginx-6d7db4b69f-zdv2h 0/1 ContainerCreating 0 1s
ab-nginx-6d7db4b69f-zdv2h 1/1 Running 0 2s
ab-nginx-6d7db4b69f-l4vbw 0/1 Pending 0 0s
ab-nginx-6d7db4b69f-5qb9p 0/1 Pending 0 0s
ab-nginx-6d7db4b69f-vzcn7 0/1 Pending 0 0s
ab-nginx-6d7db4b69f-l4vbw 0/1 ContainerCreating 0 0s
ab-nginx-6d7db4b69f-5qb9p 0/1 ContainerCreating 0 0s
ab-nginx-6d7db4b69f-vzcn7 0/1 ContainerCreating 0 0s
ab-nginx-6d7db4b69f-l4vbw 0/1 ContainerCreating 0 1s
ab-nginx-6d7db4b69f-5qb9p 0/1 ContainerCreating 0 1s
ab-nginx-6d7db4b69f-vzcn7 0/1 ContainerCreating 0 2s
ab-nginx-6d7db4b69f-l4vbw 1/1 Running 0 2s
ab-nginx-6d7db4b69f-vzcn7 1/1 Running 0 2s
ab-nginx-6d7db4b69f-5qb9p 1/1 Running 0 2s
When the load test stops, the replica count slowly scales back down.
6. Set up the Prometheus server
6.1 Deploy Ansible on the bastion host to make operating many machines easier
'Install ansible'
[root@jump ~]# yum install epel-release -y
[root@jump ~]# yum install ansible -y
'Edit the hosts (inventory) file'
[root@jump ~]# cd /etc/ansible/
[root@jump ansible]# ls
ansible.cfg hosts roles
[root@jump ansible]# vim hosts    # add the machines to be managed
[k8s]
192.168.182.142
192.168.182.143
192.168.182.144
[nfs]
192.168.182.140
[firewall]
192.168.182.177
'Set up passwordless SSH (a one-way trust) between the ansible server and the other servers'
1. Generate the key pair:
ssh-keygen
2. Copy the public key to the other servers:
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.182.142
3. Test that the ansible server can manage all the servers:
[root@jump ansible]# ansible all -m shell -a 'ip add'
6.2 Set up the Prometheus server and Grafana to monitor all servers
'1. Download the required software in advance'
[root@jump prom]# ls
grafana-enterprise-9.5.1-1.x86_64.rpm prometheus-2.44.0-rc.1.linux-amd64.tar.gz
node_exporter-1.7.0.linux-amd64.tar.gz
[root@jump prom]#
'2. Unpack the tarball'
[root@jump prom]# tar xf prometheus-2.44.0-rc.1.linux-amd64.tar.gz
'Rename it'
[root@jump prom]# mv prometheus-2.44.0-rc.1.linux-amd64 prometheus
[root@jump prom]# ls
grafana-enterprise-9.5.1-1.x86_64.rpm prometheus-2.44.0-rc.1.linux-amd64.tar.gz
node_exporter-1.7.0.linux-amd64.tar.gz
prometheus
[root@jump prom]#
'Add the prometheus directory to PATH, both temporarily and permanently'
[root@jump prom]# PATH=/prom/prometheus:$PATH
[root@jump prom]# echo 'PATH=/prom/prometheus:$PATH' >>/etc/profile
[root@jump prom]# which prometheus
/prom/prometheus/prometheus
[root@jump prom]#
'Manage prometheus as a systemd service; this makes later maintenance and use much easier'
[root@jump prom]# vim /usr/lib/systemd/system/prometheus.service
[Unit]
Description=prometheus
[Service]
ExecStart=/prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
'Reload systemd so it picks up the Prometheus unit file'
[root@jump prom]# systemctl daemon-reload
'Start the Prometheus service'
[root@jump prom]# systemctl start prometheus
[root@jump prom]# systemctl restart prometheus
[root@jump prom]#
'Check that the service is running'
[root@jump prom]# ps aux|grep prome
root 17551 0.1 2.2 796920 42344 ? Ssl 13:15 0:00 /prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
root 17561 0.0 0.0 112824 972 pts/0 S+ 13:16 0:00 grep --color=auto prome
[root@jump prom]#
'Enable it at boot'
[root@jump prom]# systemctl enable prometheus
Created symlink from /etc/systemd/system/multi-user.target.wants/prometheus.service to /usr/lib/systemd/system/prometheus.service.
[root@jump prom]#
'Now visit port 9090 in a browser'
6.2.1 Install the exporter
'Copy node_exporter to /root on every server'
ansible all -m copy -a 'src=node_exporter-1.7.0.linux-amd64.tar.gz dest=/root/'
[root@jump prom]# ansible all -m copy -a 'src=node_exporter-1.7.0.linux-amd64.tar.gz dest=/root/'
'Write the script that installs node_exporter on the other machines'
[root@jump prom]# vim install_node_exporter.sh
#!/bin/bash
tar xf /root/node_exporter-1.7.0.linux-amd64.tar.gz -C /
cd /
mv node_exporter-1.7.0.linux-amd64/ node_exporter
cd /node_exporter/
PATH=/node_exporter/:$PATH
echo 'PATH=/node_exporter/:$PATH' >>/etc/profile
# generate the node_exporter.service unit file
cat >/usr/lib/systemd/system/node_exporter.service <<EOF
[Unit]
Description=node_exporter
[Service]
ExecStart=/node_exporter/node_exporter --web.listen-address 0.0.0.0:9090
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
# let systemd pick up the node_exporter service
systemctl daemon-reload
# enable it at boot
systemctl enable node_exporter
# start node_exporter
systemctl start node_exporter
'Run the node_exporter install script from the ansible server'
[root@jump prom]# ansible all -m script -a "/prom/install_node_exporter.sh"
'Check on the other servers that node_exporter was installed successfully'
[root@firewalld ~]# ps aux|grep node
root 24717 0.0 0.4 1240476 9200 ? Ssl 13:24 0:00 /node_exporter/node_exporter --web.listen-address 0.0.0.0:9090
root 24735 0.0 0.0 112824 972 pts/0 S+ 13:29 0:00 grep --color=auto node
[root@firewalld ~]#
6.2.2 Add the monitored servers on the Prometheus server
'On the prometheus server, add scrape configs for the node servers; the scraped data is stored in the time-series database'
[root@jump prometheus]# ls
console_libraries consoles LICENSE NOTICE prometheus prometheus.yml promtool
[root@jump prometheus]# vim prometheus.yml
# add the following scrape configs
  - job_name: "master"
    static_configs:
      - targets: ["192.168.182.142:9090"]
  - job_name: "node1"
    static_configs:
      - targets: ["192.168.182.143:9090"]
  - job_name: "node2"
    static_configs:
      - targets: ["192.168.182.144:9090"]
  - job_name: "nfs"
    static_configs:
      - targets: ["192.168.182.140:9090"]
  - job_name: "firewalld"
    static_configs:
      - targets: ["192.168.182.177:9090"]
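Optionally, the edited file can be validated before the restart with promtool, which ships in the same directory (a hedged check; it only verifies syntax, not reachability of the targets):
[root@jump prometheus]# ./promtool check config prometheus.yml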
'Restart the Prometheus service'
[root@jump prometheus]# service prometheus restart
Redirecting to /bin/systemctl restart prometheus.service
[root@jump prometheus]#
6.2.3 Install Grafana for dashboards
'Install it'
[root@jump prom]# yum install grafana-enterprise-9.5.1-1.x86_64.rpm
'Start grafana'
[root@jump prom]# systemctl start grafana-server
'Enable it at boot'
[root@jump prom]# systemctl enable grafana-server
Created symlink from /etc/systemd/system/multi-user.target.wants/grafana-server.service to /usr/lib/systemd/system/grafana-server.service.
'Check that it is running'
[root@jump prom]# ps aux|grep grafana
grafana 17968 2.8 5.5 1288920 103588 ? Ssl 13:41 0:02 /usr/share/grafana/bin/grafana server --config=/etc/grafana/grafana.ini --pidfile=/var/run/grafana/grafana-server.pid --packaging=rpm cfg:default.paths.logs=/var/log/grafana cfg:default.paths.data=/var/lib/grafana cfg:default.paths.plugins=/var/lib/grafana/plugins cfg:default.paths.provisioning=/etc/grafana/provisioning
root 18030 0.0 0.0 112824 972 pts/0 S+ 13:43 0:00 grep --color=auto grafana
[root@jump prom]# netstat -antplu|grep grafana
tcp 0 0 192.168.182.141:42942 34.120.177.193:443 ESTABLISHED 17968/grafana
tcp6 0 0 :::3000 :::* LISTEN 17968/grafana
[root@jump prom]#
Log in from a browser on port 3000; the initial account and password are both admin.
Add the data source:
Pick a dashboard template:
7. Configure the jump host and the firewall
7.1 Configure TCP wrappers on the k8s machines and the NFS server so that only the bastion host can SSH in; reject SSH from all other machines.
[root@jump ~]# cd /prom
[root@jump prom]# vim set_tcp_wrappers.sh
[root@jump prom]# cat set_tcp_wrappers.sh
#!/bin/bash
# set /etc/hosts.allow so that only the bastion host may reach the sshd service
echo 'sshd:192.168.182.141' >>/etc/hosts.allow
# additionally allow my own Windows machine
echo 'sshd:192.168.40.93' >>/etc/hosts.allow
# deny sshd to every other machine
echo 'sshd:ALL' >>/etc/hosts.deny
[root@jump prom]#
ansible k8s -m script -a "/prom/set_tcp_wrappers.sh"
ansible nfs -m script -a "/prom/set_tcp_wrappers.sh"
'Test that it works: only the bastion host can SSH in'
'Try to SSH from nfs to the master'
[root@nfs ~]# ssh root@192.168.182.142
ssh_exchange_identification: read: Connection reset by peer
[root@nfs ~]#
'Now try from the jump host'
[root@jump prom]# ssh root@192.168.182.142
Last login: Sat Mar 30 13:55:24 2024 from 192.168.182.141
[root@master ~]# exit
登出
Connection to 192.168.182.142 closed.
[root@jump prom]#
7.2 Set up the firewall server
'Disable firewalld and selinux'
service firewalld stop
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
'Configure the IP addresses'
'One WAN interface: 192.168.40.87'
'One LAN interface: 192.168.182.177'
[root@firewalld network-scripts]# cat ifcfg-ens33
BOOTPROTO="none"
DEFROUTE="yes"
NAME="ens33"
UUID="e3072a9e-9e43-4855-9941-cabf05360e32"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.40.87
PREFIX=24
GATEWAY=192.168.40.166
DNS1=114.114.114.114
[root@firewalld network-scripts]#
[root@firewalld network-scripts]# cat ifcfg-ens34
BOOTPROTO=none
DEFROUTE=yes
NAME=ens34
UUID=0d04766d-7a98-4a68-b9a9-eb7377a4df80
DEVICE=ens34
ONBOOT=yes
IPADDR=192.168.182.177
PREFIX=24
[root@firewalld network-scripts]#
'Enable IP forwarding permanently'
[root@firewalld ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@firewalld ~]# sysctl -p      # make the kernel read the config file and enable routing
net.ipv4.ip_forward = 1
[root@firewalld ~]#
7.3 Write the DNAT and SNAT policies
'Write the snat and dnat policies'
[root@firewalld ~]# mkdir /nat
[root@firewalld ~]# cd /nat/
[root@firewalld nat]# vim set_snat_dnat.sh
[root@firewalld nat]# cat set_snat_dnat.sh
#!/bin/bash
# enable routing
echo 1 >/proc/sys/net/ipv4/ip_forward
# also add the following line to /etc/sysctl.conf
#net.ipv4.ip_forward = 1

# clear existing firewall rules
iptables=/usr/sbin/iptables
$iptables -F
$iptables -t nat -F

# set snat policy
$iptables -t nat -A POSTROUTING -s 192.168.182.0/24 -o ens33 -j MASQUERADE

# set dnat policy: publish the web service
$iptables -t nat -A PREROUTING -d 192.168.40.87 -i ens33 -p tcp --dport 30001 -j DNAT --to-destination 192.168.182.142:30001
$iptables -t nat -A PREROUTING -d 192.168.40.87 -i ens33 -p tcp --dport 31000 -j DNAT --to-destination 192.168.182.142:31000

# publish the bastion host: port 2233 on the firewall forwards to port 22 on the bastion
$iptables -t nat -A PREROUTING -d 192.168.40.87 -i ens33 -p tcp --dport 2233 -j DNAT --to-destination 192.168.182.141:22
[root@firewalld nat]#
'Run it'
[root@firewalld nat]# bash set_snat_dnat.sh
'Check the result'
iptables -L -t nat -n
[root@firewalld nat]# iptables -L -t nat -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 192.168.40.87 tcp dpt:30001 to:192.168.182.142:30001
DNAT tcp -- 0.0.0.0/0 192.168.40.87 tcp dpt:31000 to:192.168.182.142:31000
DNAT       tcp  --  0.0.0.0/0            192.168.40.87        tcp dpt:2233 to:192.168.182.141:22
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 192.168.182.0/24 0.0.0.0/0
[root@firewalld nat]#
'Save the rules'
[root@firewalld nat]# iptables-save >/etc/sysconfig/iptables_rules
'Apply the snat and dnat policies at boot'
[root@firewalld nat]# vim /etc/rc.local
iptables-restore </etc/sysconfig/iptables_rules
[root@firewalld nat]#
[root@firewalld nat]# chmod +x /etc/rc.d/rc.local
7.4 Set the gateway of every server in the k8s cluster to the firewall's LAN IP (192.168.182.177)
'Take the master as an example'
[root@master network-scripts]# cat ifcfg-ens33
BOOTPROTO="none"
DEFROUTE="yes"
NAME="ens33"
UUID="e2cd1765-6b1c-4ff5-88e0-a2bf8bd4203e"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.182.142
PREFIX=24
GATEWAY=192.168.182.177
DNS1=114.114.114.114
[root@master network-scripts]#
[root@master network-scripts]# service network restart
Restarting network (via systemctl): [ 確定 ]
7.5 Test SNAT
[root@master ~]# ping www.baidu.com
PING www.a.shifen.com (183.2.172.185) 56(84) bytes of data.
64 bytes from 183.2.172.185 (183.2.172.185): icmp_seq=1 ttl=50 time=40.0 ms
64 bytes from 183.2.172.185 (183.2.172.185): icmp_seq=2 ttl=50 time=33.0 ms
64 bytes from 183.2.172.185 (183.2.172.185): icmp_seq=3 ttl=50 time=34.7 ms
^C
--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 33.048/35.938/40.021/2.972 ms
[root@master ~]#
7.6 Test DNAT
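The original screenshot is not included; a hedged way to verify from a machine on the WAN side (192.168.40.0/24), based on the DNAT rules written above:
# should reach the NodePort services inside the cluster through the firewall's WAN address
curl http://192.168.40.87:30001/
curl http://192.168.40.87:31000/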
7.7 Test publishing the bastion host
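Again no screenshot; a hedged check from the WAN side, based on the 2233 -> 22 DNAT rule above:
# SSH to the firewall's WAN address on port 2233 should land on the bastion host (192.168.182.141)
ssh -p 2233 root@192.168.40.87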
Success!
8. Project takeaways
- While configuring SNAT and DNAT I tried to switch the whole environment to host-only networking; the services broke and I had to switch back to NAT.
- During the project, suspending the VMs broke the connection between the master and the nodes, and I had to restore snapshots and redo the work.
- Whether deploying k8s or the firewall, you have to be careful at every step; missing a single step makes troubleshooting very painful.
- Many images are hosted abroad and cannot be pulled directly.
- The workaround is to find domestic replacements,
- or to rent a Singapore server, pull the images there, and export them.
- When deploying the Kubernetes Dashboard, login requires a token that is only valid for 15 minutes.
- Fix: edit recommended.yaml and add a parameter to extend the lifetime (see the sketch in section 5.7).
- The master node does not run business pods because it carries a taint.
- When starting new pods, delete the unimportant ones first, otherwise the nodes fill up and the new pods cannot start.