1 Kubernetes (k8s) Cluster Upgrade Process
Kubernetes uses the kubeadm tool to manage upgrades of the cluster components. At the node level, upgrading a Kubernetes (k8s) cluster consists of the following steps:
1) Check whether the current environment and configuration meet the upgrade requirements.
2) Upgrade the master (control-plane) nodes (with multiple masters, upgrade them one at a time).
3) Upgrade the worker nodes.
4) Upgrade the network plugin.
At the software level, upgrading a Kubernetes (k8s) cluster consists of the following steps:
1) Upgrade kubeadm.
2) Drain the node.
3) Upgrade the components.
4) Undo the drain (uncordon the node).
5) Upgrade kubelet and kubectl.
Note: a Kubernetes (k8s) cluster cannot be upgraded across minor versions. For example, a 1.19.x cluster can be upgraded to 1.20.y, but a 1.19.x cluster cannot be upgraded directly to 1.21.y. You can only move from one minor version to the next, or upgrade the patch version within the same minor version. In other words, minor versions must not be skipped: you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2.
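Before starting, it is worth confirming the versions currently in play so the planned jump stays within one minor version. A minimal pre-check sketch (commands only; the node output shown later in this article comes from these commands):
$ kubectl get nodes -o wide      # kubelet version reported by every node
$ kubeadm version -o short       # kubeadm version installed locally
$ kubectl version --short        # client and API server versions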
2 Upgrading the master (control-plane) node
2.1 Upgrading kubeadm
The Kubernetes (k8s) cluster is currently at version v1.21.0.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ops-master-1 Ready control-plane,master 33m v1.21.0
ops-worker-1 Ready <none> 30m v1.21.0
ops-worker-2 Ready <none> 30m v1.21.0
List the available kubeadm versions:
$ yum list --showduplicates kubeadm --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks, releasever-adapter, update-motd
Loading mirror speeds from cached hostfile
Installed Packages
kubeadm.x86_64 1.21.0-0 @k8s
Available Packages
kubeadm.x86_64 1.6.0-0 k8s
kubeadm.x86_64 1.6.1-0 k8s
kubeadm.x86_64 1.6.2-0 k8s
kubeadm.x86_64 1.6.3-0 k8s
kubeadm.x86_64 1.6.4-0 k8s
kubeadm.x86_64 1.6.5-0 k8s
kubeadm.x86_64 1.6.6-0 k8s
kubeadm.x86_64 1.6.7-0 k8s
kubeadm.x86_64 1.6.8-0 k8s
kubeadm.x86_64 1.6.9-0 k8s
kubeadm.x86_64 1.6.10-0 k8s
kubeadm.x86_64 1.6.11-0 k8s
kubeadm.x86_64 1.6.12-0 k8s
kubeadm.x86_64 1.6.13-0 k8s
kubeadm.x86_64 1.7.0-0 k8s
kubeadm.x86_64 1.7.1-0 k8s
kubeadm.x86_64 1.7.2-0 k8s
kubeadm.x86_64 1.7.3-1 k8s
kubeadm.x86_64 1.7.4-0 k8s
kubeadm.x86_64 1.7.5-0 k8s
kubeadm.x86_64 1.7.6-1 k8s
kubeadm.x86_64 1.7.7-1 k8s
kubeadm.x86_64 1.7.8-1 k8s
kubeadm.x86_64 1.7.9-0 k8s
kubeadm.x86_64 1.7.10-0 k8s
kubeadm.x86_64 1.7.11-0 k8s
kubeadm.x86_64 1.7.14-0 k8s
kubeadm.x86_64 1.7.15-0 k8s
kubeadm.x86_64 1.7.16-0 k8s
kubeadm.x86_64 1.8.0-0 k8s
kubeadm.x86_64 1.8.0-1 k8s
kubeadm.x86_64 1.8.1-0 k8s
kubeadm.x86_64 1.8.2-0 k8s
kubeadm.x86_64 1.8.3-0 k8s
kubeadm.x86_64 1.8.4-0 k8s
kubeadm.x86_64 1.8.5-0 k8s
kubeadm.x86_64 1.8.6-0 k8s
kubeadm.x86_64 1.8.7-0 k8s
kubeadm.x86_64 1.8.8-0 k8s
kubeadm.x86_64 1.8.9-0 k8s
kubeadm.x86_64 1.8.10-0 k8s
kubeadm.x86_64 1.8.11-0 k8s
kubeadm.x86_64 1.8.12-0 k8s
kubeadm.x86_64 1.8.13-0 k8s
kubeadm.x86_64 1.8.14-0 k8s
kubeadm.x86_64 1.8.15-0 k8s
kubeadm.x86_64 1.9.0-0 k8s
kubeadm.x86_64 1.9.1-0 k8s
kubeadm.x86_64 1.9.2-0 k8s
kubeadm.x86_64 1.9.3-0 k8s
kubeadm.x86_64 1.9.4-0 k8s
kubeadm.x86_64 1.9.5-0 k8s
kubeadm.x86_64 1.9.6-0 k8s
kubeadm.x86_64 1.9.7-0 k8s
kubeadm.x86_64 1.9.8-0 k8s
kubeadm.x86_64 1.9.9-0 k8s
kubeadm.x86_64 1.9.10-0 k8s
kubeadm.x86_64 1.9.11-0 k8s
kubeadm.x86_64 1.10.0-0 k8s
kubeadm.x86_64 1.10.1-0 k8s
kubeadm.x86_64 1.10.2-0 k8s
kubeadm.x86_64 1.10.3-0 k8s
kubeadm.x86_64 1.10.4-0 k8s
kubeadm.x86_64 1.10.5-0 k8s
kubeadm.x86_64 1.10.6-0 k8s
kubeadm.x86_64 1.10.7-0 k8s
kubeadm.x86_64 1.10.8-0 k8s
kubeadm.x86_64 1.10.9-0 k8s
kubeadm.x86_64 1.10.10-0 k8s
kubeadm.x86_64 1.10.11-0 k8s
kubeadm.x86_64 1.10.12-0 k8s
kubeadm.x86_64 1.10.13-0 k8s
kubeadm.x86_64 1.11.0-0 k8s
kubeadm.x86_64 1.11.1-0 k8s
kubeadm.x86_64 1.11.2-0 k8s
kubeadm.x86_64 1.11.3-0 k8s
kubeadm.x86_64 1.11.4-0 k8s
kubeadm.x86_64 1.11.5-0 k8s
kubeadm.x86_64 1.11.6-0 k8s
kubeadm.x86_64 1.11.7-0 k8s
kubeadm.x86_64 1.11.8-0 k8s
kubeadm.x86_64 1.11.9-0 k8s
kubeadm.x86_64 1.11.10-0 k8s
kubeadm.x86_64 1.12.0-0 k8s
kubeadm.x86_64 1.12.1-0 k8s
kubeadm.x86_64 1.12.2-0 k8s
kubeadm.x86_64 1.12.3-0 k8s
kubeadm.x86_64 1.12.4-0 k8s
kubeadm.x86_64 1.12.5-0 k8s
kubeadm.x86_64 1.12.6-0 k8s
kubeadm.x86_64 1.12.7-0 k8s
kubeadm.x86_64 1.12.8-0 k8s
kubeadm.x86_64 1.12.9-0 k8s
kubeadm.x86_64 1.12.10-0 k8s
kubeadm.x86_64 1.13.0-0 k8s
kubeadm.x86_64 1.13.1-0 k8s
kubeadm.x86_64 1.13.2-0 k8s
kubeadm.x86_64 1.13.3-0 k8s
kubeadm.x86_64 1.13.4-0 k8s
kubeadm.x86_64 1.13.5-0 k8s
kubeadm.x86_64 1.13.6-0 k8s
kubeadm.x86_64 1.13.7-0 k8s
kubeadm.x86_64 1.13.8-0 k8s
kubeadm.x86_64 1.13.9-0 k8s
kubeadm.x86_64 1.13.10-0 k8s
kubeadm.x86_64 1.13.11-0 k8s
kubeadm.x86_64 1.13.12-0 k8s
kubeadm.x86_64 1.14.0-0 k8s
kubeadm.x86_64 1.14.1-0 k8s
kubeadm.x86_64 1.14.2-0 k8s
kubeadm.x86_64 1.14.3-0 k8s
kubeadm.x86_64 1.14.4-0 k8s
kubeadm.x86_64 1.14.5-0 k8s
kubeadm.x86_64 1.14.6-0 k8s
kubeadm.x86_64 1.14.7-0 k8s
kubeadm.x86_64 1.14.8-0 k8s
kubeadm.x86_64 1.14.9-0 k8s
kubeadm.x86_64 1.14.10-0 k8s
kubeadm.x86_64 1.15.0-0 k8s
kubeadm.x86_64 1.15.1-0 k8s
kubeadm.x86_64 1.15.2-0 k8s
kubeadm.x86_64 1.15.3-0 k8s
kubeadm.x86_64 1.15.4-0 k8s
kubeadm.x86_64 1.15.5-0 k8s
kubeadm.x86_64 1.15.6-0 k8s
kubeadm.x86_64 1.15.7-0 k8s
kubeadm.x86_64 1.15.8-0 k8s
kubeadm.x86_64 1.15.9-0 k8s
kubeadm.x86_64 1.15.10-0 k8s
kubeadm.x86_64 1.15.11-0 k8s
kubeadm.x86_64 1.15.12-0 k8s
kubeadm.x86_64 1.16.0-0 k8s
kubeadm.x86_64 1.16.1-0 k8s
kubeadm.x86_64 1.16.2-0 k8s
kubeadm.x86_64 1.16.3-0 k8s
kubeadm.x86_64 1.16.4-0 k8s
kubeadm.x86_64 1.16.5-0 k8s
kubeadm.x86_64 1.16.6-0 k8s
kubeadm.x86_64 1.16.7-0 k8s
kubeadm.x86_64 1.16.8-0 k8s
kubeadm.x86_64 1.16.9-0 k8s
kubeadm.x86_64 1.16.10-0 k8s
kubeadm.x86_64 1.16.11-0 k8s
kubeadm.x86_64 1.16.11-1 k8s
kubeadm.x86_64 1.16.12-0 k8s
kubeadm.x86_64 1.16.13-0 k8s
kubeadm.x86_64 1.16.14-0 k8s
kubeadm.x86_64 1.16.15-0 k8s
kubeadm.x86_64 1.17.0-0 k8s
kubeadm.x86_64 1.17.1-0 k8s
kubeadm.x86_64 1.17.2-0 k8s
kubeadm.x86_64 1.17.3-0 k8s
kubeadm.x86_64 1.17.4-0 k8s
kubeadm.x86_64 1.17.5-0 k8s
kubeadm.x86_64 1.17.6-0 k8s
kubeadm.x86_64 1.17.7-0 k8s
kubeadm.x86_64 1.17.7-1 k8s
kubeadm.x86_64 1.17.8-0 k8s
kubeadm.x86_64 1.17.9-0 k8s
kubeadm.x86_64 1.17.11-0 k8s
kubeadm.x86_64 1.17.12-0 k8s
kubeadm.x86_64 1.17.13-0 k8s
kubeadm.x86_64 1.17.14-0 k8s
kubeadm.x86_64 1.17.15-0 k8s
kubeadm.x86_64 1.17.16-0 k8s
kubeadm.x86_64 1.17.17-0 k8s
kubeadm.x86_64 1.18.0-0 k8s
kubeadm.x86_64 1.18.1-0 k8s
kubeadm.x86_64 1.18.2-0 k8s
kubeadm.x86_64 1.18.3-0 k8s
kubeadm.x86_64 1.18.4-0 k8s
kubeadm.x86_64 1.18.4-1 k8s
kubeadm.x86_64 1.18.5-0 k8s
kubeadm.x86_64 1.18.6-0 k8s
kubeadm.x86_64 1.18.8-0 k8s
kubeadm.x86_64 1.18.9-0 k8s
kubeadm.x86_64 1.18.10-0 k8s
kubeadm.x86_64 1.18.12-0 k8s
kubeadm.x86_64 1.18.13-0 k8s
kubeadm.x86_64 1.18.14-0 k8s
kubeadm.x86_64 1.18.15-0 k8s
kubeadm.x86_64 1.18.16-0 k8s
kubeadm.x86_64 1.18.17-0 k8s
kubeadm.x86_64 1.18.18-0 k8s
kubeadm.x86_64 1.18.19-0 k8s
kubeadm.x86_64 1.18.20-0 k8s
kubeadm.x86_64 1.19.0-0 k8s
kubeadm.x86_64 1.19.1-0 k8s
kubeadm.x86_64 1.19.2-0 k8s
kubeadm.x86_64 1.19.3-0 k8s
kubeadm.x86_64 1.19.4-0 k8s
kubeadm.x86_64 1.19.5-0 k8s
kubeadm.x86_64 1.19.6-0 k8s
kubeadm.x86_64 1.19.7-0 k8s
kubeadm.x86_64 1.19.8-0 k8s
kubeadm.x86_64 1.19.9-0 k8s
kubeadm.x86_64 1.19.10-0 k8s
kubeadm.x86_64 1.19.11-0 k8s
kubeadm.x86_64 1.19.12-0 k8s
kubeadm.x86_64 1.19.13-0 k8s
kubeadm.x86_64 1.19.14-0 k8s
kubeadm.x86_64 1.19.15-0 k8s
kubeadm.x86_64 1.19.16-0 k8s
kubeadm.x86_64 1.20.0-0 k8s
kubeadm.x86_64 1.20.1-0 k8s
kubeadm.x86_64 1.20.2-0 k8s
kubeadm.x86_64 1.20.4-0 k8s
kubeadm.x86_64 1.20.5-0 k8s
kubeadm.x86_64 1.20.6-0 k8s
kubeadm.x86_64 1.20.7-0 k8s
kubeadm.x86_64 1.20.8-0 k8s
kubeadm.x86_64 1.20.9-0 k8s
kubeadm.x86_64 1.20.10-0 k8s
kubeadm.x86_64 1.20.11-0 k8s
kubeadm.x86_64 1.20.12-0 k8s
kubeadm.x86_64 1.20.13-0 k8s
kubeadm.x86_64 1.20.14-0 k8s
kubeadm.x86_64 1.20.15-0 k8s
kubeadm.x86_64 1.21.0-0 k8s
kubeadm.x86_64 1.21.1-0 k8s
kubeadm.x86_64 1.21.2-0 k8s
kubeadm.x86_64 1.21.3-0 k8s
kubeadm.x86_64 1.21.4-0 k8s
kubeadm.x86_64 1.21.5-0 k8s
kubeadm.x86_64 1.21.6-0 k8s
kubeadm.x86_64 1.21.7-0 k8s
kubeadm.x86_64 1.21.8-0 k8s
kubeadm.x86_64 1.21.9-0 k8s
kubeadm.x86_64 1.21.10-0 k8s
kubeadm.x86_64 1.21.11-0 k8s
kubeadm.x86_64 1.21.12-0 k8s
kubeadm.x86_64 1.21.13-0 k8s
kubeadm.x86_64 1.21.14-0 k8s
kubeadm.x86_64 1.22.0-0 k8s
kubeadm.x86_64 1.22.1-0 k8s
kubeadm.x86_64 1.22.2-0 k8s
kubeadm.x86_64 1.22.3-0 k8s
kubeadm.x86_64 1.22.4-0 k8s
kubeadm.x86_64 1.22.5-0 k8s
kubeadm.x86_64 1.22.6-0 k8s
kubeadm.x86_64 1.22.7-0 k8s
kubeadm.x86_64 1.22.8-0 k8s
kubeadm.x86_64 1.22.9-0 k8s
kubeadm.x86_64 1.22.10-0 k8s
kubeadm.x86_64 1.22.11-0 k8s
kubeadm.x86_64 1.22.12-0 k8s
kubeadm.x86_64 1.22.13-0 k8s
kubeadm.x86_64 1.22.14-0 k8s
kubeadm.x86_64 1.22.15-0 k8s
kubeadm.x86_64 1.22.16-0 k8s
kubeadm.x86_64 1.22.17-0 k8s
kubeadm.x86_64 1.23.0-0 k8s
kubeadm.x86_64 1.23.1-0 k8s
kubeadm.x86_64 1.23.2-0 k8s
kubeadm.x86_64 1.23.3-0 k8s
kubeadm.x86_64 1.23.4-0 k8s
kubeadm.x86_64 1.23.5-0 k8s
kubeadm.x86_64 1.23.6-0 k8s
kubeadm.x86_64 1.23.7-0 k8s
kubeadm.x86_64 1.23.8-0 k8s
kubeadm.x86_64 1.23.9-0 k8s
kubeadm.x86_64 1.23.10-0 k8s
kubeadm.x86_64 1.23.11-0 k8s
kubeadm.x86_64 1.23.12-0 k8s
kubeadm.x86_64 1.23.13-0 k8s
kubeadm.x86_64 1.23.14-0 k8s
kubeadm.x86_64 1.23.15-0 k8s
kubeadm.x86_64 1.23.16-0 k8s
kubeadm.x86_64 1.23.17-0 k8s
kubeadm.x86_64 1.24.0-0 k8s
kubeadm.x86_64 1.24.1-0 k8s
kubeadm.x86_64 1.24.2-0 k8s
kubeadm.x86_64 1.24.3-0 k8s
kubeadm.x86_64 1.24.4-0 k8s
kubeadm.x86_64 1.24.5-0 k8s
kubeadm.x86_64 1.24.6-0 k8s
kubeadm.x86_64 1.24.7-0 k8s
kubeadm.x86_64 1.24.8-0 k8s
kubeadm.x86_64 1.24.9-0 k8s
kubeadm.x86_64 1.24.10-0 k8s
kubeadm.x86_64 1.24.11-0 k8s
kubeadm.x86_64 1.24.12-0 k8s
kubeadm.x86_64 1.24.13-0 k8s
kubeadm.x86_64 1.24.14-0 k8s
kubeadm.x86_64 1.24.15-0 k8s
kubeadm.x86_64 1.24.16-0 k8s
kubeadm.x86_64 1.24.17-0 k8s
kubeadm.x86_64 1.25.0-0 k8s
kubeadm.x86_64 1.25.1-0 k8s
kubeadm.x86_64 1.25.2-0 k8s
kubeadm.x86_64 1.25.3-0 k8s
kubeadm.x86_64 1.25.4-0 k8s
kubeadm.x86_64 1.25.5-0 k8s
kubeadm.x86_64 1.25.6-0 k8s
kubeadm.x86_64 1.25.7-0 k8s
kubeadm.x86_64 1.25.8-0 k8s
kubeadm.x86_64 1.25.9-0 k8s
kubeadm.x86_64 1.25.10-0 k8s
kubeadm.x86_64 1.25.11-0 k8s
kubeadm.x86_64 1.25.12-0 k8s
kubeadm.x86_64 1.25.13-0 k8s
kubeadm.x86_64 1.25.14-0 k8s
kubeadm.x86_64 1.26.0-0 k8s
kubeadm.x86_64 1.26.1-0 k8s
kubeadm.x86_64 1.26.2-0 k8s
kubeadm.x86_64 1.26.3-0 k8s
kubeadm.x86_64 1.26.4-0 k8s
kubeadm.x86_64 1.26.5-0 k8s
kubeadm.x86_64 1.26.6-0 k8s
kubeadm.x86_64 1.26.7-0 k8s
kubeadm.x86_64 1.26.8-0 k8s
kubeadm.x86_64 1.26.9-0 k8s
kubeadm.x86_64 1.27.0-0 k8s
kubeadm.x86_64 1.27.1-0 k8s
kubeadm.x86_64 1.27.2-0 k8s
kubeadm.x86_64 1.27.3-0 k8s
kubeadm.x86_64 1.27.4-0 k8s
kubeadm.x86_64 1.27.5-0 k8s
kubeadm.x86_64 1.27.6-0 k8s
kubeadm.x86_64 1.28.0-0 k8s
kubeadm.x86_64 1.28.1-0 k8s
kubeadm.x86_64 1.28.2-0 k8s
Upgrade kubeadm to version 1.21.9-0:
$ yum install -y kubeadm-1.21.9-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks, releasever-adapter, update-motd
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.21.0-0 will be updated
---> Package kubeadm.x86_64 0:1.21.9-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
 Package                Arch                 Version                  Repository             Size
====================================================================================================
Updating:
 kubeadm                x86_64               1.21.9-0                 k8s                    9.1 M

Transaction Summary
====================================================================================================
Upgrade  1 Package

Total download size: 9.1 M
Downloading packages:
No Presto metadata available for k8s
f41c806d2113e9b88efd9f70e3a07da3cd0597f5361f7842058e06b9601ff7fc-kubeadm-1.21.9-0.x86_64.rpm | 9.1 MB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubeadm-1.21.9-0.x86_64                                                          1/2
  Cleanup    : kubeadm-1.21.0-0.x86_64                                                          2/2
  Verifying  : kubeadm-1.21.9-0.x86_64                                                          1/2
  Verifying  : kubeadm-1.21.0-0.x86_64                                                          2/2

Updated:
  kubeadm.x86_64 0:1.21.9-0

Complete!
Run kubeadm upgrade plan to validate the upgrade plan. The COMPONENT / CURRENT / TARGET columns show the version each component can be upgraded to from its current version.
$ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.9
I1210 20:34:17.564937 5980 version.go:254] remote version is much newer: v1.28.4; falling back to: stable-1.21
[upgrade/versions] Target version: v1.21.14
[upgrade/versions] Latest version in the v1.21 series: v1.21.14

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.21.0   v1.21.14

Upgrade to the latest version in the v1.21 series:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.21.0    v1.21.14
kube-controller-manager   v1.21.0    v1.21.14
kube-scheduler            v1.21.0    v1.21.14
kube-proxy                v1.21.0    v1.21.14
CoreDNS                   v1.8.0     v1.8.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.21.14

Note: Before you can perform this upgrade, you have to update kubeadm to v1.21.14.

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
2.2 Upgrading the components
The previous step upgraded kubeadm; next, upgrade the individual components (kube-apiserver, kube-controller-manager, and so on). Running kubeadm upgrade apply v1.21.9 upgrades the components to version 1.21.9. If you do not want to upgrade the etcd component, add the option: kubeadm upgrade apply v1.21.9 --etcd-upgrade=false.
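It is also common to take an etcd snapshot as a safety net before running kubeadm upgrade apply. A minimal sketch, assuming a stacked (local) etcd with the default kubeadm certificate paths, etcdctl installed on the node, and a hypothetical backup directory /var/lib/etcd-backup:
$ mkdir -p /var/lib/etcd-backup
$ ETCDCTL_API=3 etcdctl snapshot save /var/lib/etcd-backup/snapshot-$(date +%F).db \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key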
You can drain the node beforehand, or do it later. Drain the node: prepare it for the upgrade by marking it unschedulable and evicting its workloads: kubectl drain <node-name> --ignore-daemonsets.
$ kubectl drain ops-master-1 --ignore-daemonsets
node/ops-master-1 cordoned
error: unable to drain node "ops-master-1", aborting command...

There are pending nodes to be drained:
 ops-master-1
error: cannot delete Pods with local storage (use --delete-emptydir-data to override): kube-system/metrics-server-bcfb98c76-j4gs8, kubernetes-dashboard/dashboard-metrics-scraper-7f458d9467-9knf9
Because some Pods use local (emptyDir) data, the --delete-emptydir-data option must be added.
$ kubectl drain ops-master-1 --ignore-daemonsets --delete-emptydir-data
node/ops-master-1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-whfb8, kube-system/kube-proxy-srhgz
node/ops-master-1 drained
Upgrade the components; --etcd-upgrade=false means the etcd database is not upgraded.
$ kubeadm upgrade apply v1.21.9 --etcd-upgrade=false
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.9"
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.9
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.21.9"...
Static pod: kube-apiserver-ops-master-1 hash: 94a1393eb20a9652cea984f5907ed147
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests053748490"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-10-20-38-23/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-ops-master-1 hash: 94a1393eb20a9652cea984f5907ed147
Static pod: kube-apiserver-ops-master-1 hash: 94a1393eb20a9652cea984f5907ed147
Static pod: kube-apiserver-ops-master-1 hash: 26e768b6e5881ba632db749b44ed30c5
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-10-20-38-23/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-controller-manager-ops-master-1 hash: 0074b909bb20d0a0ab9caa7f07b66191
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-10-20-38-23/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
Static pod: kube-scheduler-ops-master-1 hash: 07614ccea2a58fd20ecb645d2901f0c9
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upgrade/postupgrade] Applying label node.kubernetes.io/exclude-from-external-load-balancers='' to control plane Nodes
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.9". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
At this point the ops-master-1 node is still in the unschedulable state.
$ kubectl get node
NAME STATUS ROLES AGE VERSION
ops-master-1 Ready,SchedulingDisabled control-plane,master 49m v1.21.0
ops-worker-1 Ready <none> 46m v1.21.0
ops-worker-2 Ready <none> 46m v1.21.0
Uncordon the ops-master-1 node: mark it schedulable again so it comes back online.
$ kubectl uncordon ops-master-1
node/ops-master-1 uncordoned
The ops-master-1 node is now in the Ready state.
$ kubectl get node
NAME STATUS ROLES AGE VERSION
ops-master-1 Ready control-plane,master 53m v1.21.0
ops-worker-1 Ready <none> 50m v1.21.0
ops-worker-2 Ready <none> 50m v1.21.0
2.3 Upgrading kubelet and kubectl
Upgrade kubelet and kubectl to version 1.21.9.
$ yum install -y kubelet-1.21.9-0 kubectl-1.21.9-0 --disableexcludes=kubernetes
Reload the systemd configuration and restart kubelet.
$ systemctl daemon-reload ;systemctl restart kubelet
The ops-master-1 node now reports v1.21.9, so the master node of the k8s cluster has been upgraded successfully. With multiple masters the steps are the same, except that the additional master nodes do not run kubeadm upgrade apply v1.21.9; on those nodes, replace kubeadm upgrade apply v1.21.9 with kubeadm upgrade node (a condensed sketch for an additional control-plane node follows the output below).
$ kubectl get node
NAME STATUS ROLES AGE VERSION
ops-master-1 Ready control-plane,master 143m v1.21.9
ops-worker-1 Ready <none> 140m v1.21.0
ops-worker-2 Ready <none> 140m v1.21.0
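For reference, a condensed sketch of the same procedure on an additional control-plane node; the node name ops-master-2 is hypothetical, since this article's cluster has a single master. The only substantive difference from ops-master-1 is kubeadm upgrade node in place of kubeadm upgrade apply:
# On ops-master-2 (hypothetical additional control-plane node):
$ yum install -y kubeadm-1.21.9-0 --disableexcludes=kubernetes
$ kubectl drain ops-master-2 --ignore-daemonsets --delete-emptydir-data
$ kubeadm upgrade node                     # instead of 'kubeadm upgrade apply v1.21.9'
$ yum install -y kubelet-1.21.9-0 kubectl-1.21.9-0 --disableexcludes=kubernetes
$ systemctl daemon-reload ; systemctl restart kubelet
$ kubectl uncordon ops-master-2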
3 Upgrading the worker nodes
3.1 Upgrading kubeadm
$ yum install -y kubeadm-1.21.9-0 --disableexcludes=kubernetes
Prepare the node for the upgrade by marking it unschedulable and draining it.
If the node has local (emptyDir) data, also add the --delete-emptydir-data option.
$ kubectl drain ops-worker-1 --ignore-daemonsets
node/ops-worker-1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-g6ntb, kube-system/kube-proxy-nf2rm
evicting pod kube-system/coredns-59d64cd4d4-qf9rx
pod/coredns-59d64cd4d4-qf9rx evicted
node/ops-worker-1 evicted
$ kubectl get node
NAME STATUS ROLES AGE VERSION
ops-master-1 Ready control-plane,master 144m v1.21.9
ops-worker-1 Ready,SchedulingDisabled <none> 141m v1.21.0
ops-worker-2 Ready <none> 141m v1.21.0
On a worker node, the kubeadm upgrade node command upgrades the local kubelet configuration.
$ kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
Mark the ops-worker-1 node schedulable again so it comes back online.
$ kubectl uncordon ops-worker-1
node/ops-worker-1 uncordoned
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ops-master-1 Ready control-plane,master 145m v1.21.9
ops-worker-1 Ready <none> 142m v1.21.0
ops-worker-2 Ready <none> 142m v1.21.0
3.2 Upgrading kubelet and kubectl
Upgrade kubelet and kubectl to version 1.21.9.
$ yum install -y kubelet-1.21.9-0 kubectl-1.21.9-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks, releasever-adapter, update-motd
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package kubectl.x86_64 0:1.21.0-0 will be updated
---> Package kubectl.x86_64 0:1.21.9-0 will be an update
---> Package kubelet.x86_64 0:1.21.0-0 will be updated
---> Package kubelet.x86_64 0:1.21.9-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
 Package                Arch                 Version                  Repository             Size
====================================================================================================
Updating:
 kubectl                x86_64               1.21.9-0                 k8s                    9.6 M
 kubelet                x86_64               1.21.9-0                 k8s                     20 M

Transaction Summary
====================================================================================================
Upgrade  2 Packages

Total download size: 30 M
Downloading packages:
No Presto metadata available for k8s
(1/2): f53d5be18ac04fa2eebe0f27a984fbc1197a31f1ed4e92c3762f0f584fcd502c-kubectl-1.21.9-0.x86_64.rpm | 9.6 MB 00:00:00
(2/2): 6e68c2e2eb926e163f53a7d64000334c6cae982841fffee350f5003793a63a9c-kubelet-1.21.9-0.x86_64.rpm | 20 MB 00:00:01
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 28 MB/s | 30 MB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubelet-1.21.9-0.x86_64                                                          1/4
  Updating   : kubectl-1.21.9-0.x86_64                                                          2/4
  Cleanup    : kubectl-1.21.0-0.x86_64                                                          3/4
  Cleanup    : kubelet-1.21.0-0.x86_64                                                          4/4
  Verifying  : kubectl-1.21.9-0.x86_64                                                          1/4
  Verifying  : kubelet-1.21.9-0.x86_64                                                          2/4
  Verifying  : kubelet-1.21.0-0.x86_64                                                          3/4
  Verifying  : kubectl-1.21.0-0.x86_64                                                          4/4

Updated:
  kubectl.x86_64 0:1.21.9-0                 kubelet.x86_64 0:1.21.9-0

Complete!
Reload the systemd configuration and restart kubelet.
$ systemctl daemon-reload ;systemctl restart kubelet
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ops-master-1 Ready control-plane,master 145m v1.21.9
ops-worker-1 Ready <none> 142m v1.21.9
ops-worker-2 Ready <none> 142m v1.21.0
The upgrade steps for the ops-worker-2 node are exactly the same as for ops-worker-1; a condensed sketch of the sequence follows.
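A minimal sketch of the same sequence for ops-worker-2, condensed from the steps shown above for ops-worker-1:
# On ops-worker-2:
$ yum install -y kubeadm-1.21.9-0 --disableexcludes=kubernetes
# From a node with kubectl access:
$ kubectl drain ops-worker-2 --ignore-daemonsets     # add --delete-emptydir-data if eviction complains about local data
# Back on ops-worker-2:
$ kubeadm upgrade node
$ yum install -y kubelet-1.21.9-0 kubectl-1.21.9-0 --disableexcludes=kubernetes
$ systemctl daemon-reload ; systemctl restart kubelet
# From a node with kubectl access:
$ kubectl uncordon ops-worker-2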
Once ops-worker-2 has been upgraded, the whole Kubernetes (k8s) cluster upgrade is complete and every node is at v1.21.9.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ops-master-1 Ready control-plane,master 147m v1.21.9
ops-worker-1 Ready <none> 144m v1.21.9
ops-worker-2 Ready <none> 144m v1.21.9