I. Preparation
1. Disable SELinux
# Disable temporarily
setenforce 0
# Disable permanently
sed -i 's/enforcing/disabled/' /etc/selinux/config
# Check whether SELinux is disabled
sestatus
2. Disable swap
# Disable temporarily from the command line
swapoff -a
# Disable permanently
vim /etc/fstab
Comment out the line that contains "swap", then reboot.
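If you would rather not edit the file by hand, a one-line sketch that comments out any uncommented fstab entry mentioning swap (assumes the entry actually contains the word "swap"):
# Comment out every uncommented line that mentions swap, then verify nothing is swapped in
sed -ri 's/^([^#].*swap.*)$/#\1/' /etc/fstab
swapoff -a && free -h | grep -i Swap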
3. Allow iptables forwarding and enable the br_netfilter module
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
echo 1 > /proc/sys/net/ipv4/ip_forward
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
bridge
br_netfilter
EOF
sysctl --system
# Stop the firewall
systemctl stop firewalld
systemctl disable firewalld
4. Set the hostname so that every server's hostname is unique
hostnamectl set-hostname server-xxxxx
# Map the newly set hostname to the server's IP address
vim /etc/hosts
127.0.0.1 server-xxxxx
or
<LAN IP> server-xxxxx
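For a multi-node cluster it is handy to map every node on every machine; a sketch with placeholder addresses and hostnames (replace the 192.168.1.x addresses and the server-* names with your own):
cat >> /etc/hosts << EOF
192.168.1.101 server-master01
192.168.1.102 server-worker01
192.168.1.103 server-worker02
EOF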
II. Installation
1. Install containerd
CentOS
yum install -y yum-utils device-mapper-persistent-data lvm2
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum makecache && yum -y install containerd.io
Ubuntu
apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update && apt install -y containerd.io
Debian
apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/debian $(lsb_release -cs) stable"
apt update && apt install -y containerd.io
Modify the containerd configuration
containerd config default > /etc/containerd/config.toml
sed -i 's/registry.k8s.io\/pause:[0-9].[0-9]/registry.aliyuncs.com\/google_containers\/pause:3.9/g' /etc/containerd/config.toml
systemctl restart containerd
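kubeadm 1.28 defaults the kubelet to the systemd cgroup driver, so it is usually also worth switching containerd over; a sketch (only needed if your config.toml still contains the generated default SystemdCgroup = false):
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd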
Modify the containerd registry mirrors
vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://atomhub.openatom.cn"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io/library"]
    endpoint = ["https://atomhub.openatom.cn/library"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
    endpoint = ["https://registry.aliyuncs.com/google_containers"]
systemctl restart containerd
2. Install containerd offline
Download
wget https://github.com/containerd/containerd/releases/download/v1.7.21/containerd-1.7.21-linux-amd64.tar.gz
tar zxvf containerd-1.7.21-linux-amd64.tar.gz
chmod 755 bin/*
cp -n bin/* /usr/bin/
Start the service
cat > /usr/lib/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
systemctl start containerd && systemctl enable containerd
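A quick check that the offline-installed containerd is running and answering on its socket:
systemctl is-active containerd
ctr version   # prints both client and server versions when the daemon is reachable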
3. Install Docker
CentOS
yum install -y yum-utils device-mapper-persistent-data lvm2
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum makecache && yum -y install docker-ce
Ubuntu
apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update && apt install -y docker-ce
Debian
apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/debian $(lsb_release -cs) stable"
apt update && apt install -y docker-ce
Modify the Docker configuration
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "http://mirrors.ustc.edu.cn/",
    "http://docker.jx42.com",
    "https://0c105db5188026850f80c001def654a0.mirror.swr.myhuaweicloud.com",
    "https://5tqw56kt.mirror.aliyuncs.com",
    "https://docker.1panel.live",
    "http://mirror.azure.cn/",
    "https://hub.rat.dev/",
    "https://docker.ckyl.me/",
    "https://docker.chenby.cn",
    "https://docker.hpcloud.cloud"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl enable docker && systemctl start docker
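To confirm Docker picked up daemon.json, the cgroup driver and the mirror list should show up in docker info; a quick check:
docker info | grep -i 'cgroup driver'        # expect: systemd
docker info | grep -iA12 'registry mirrors'  # expect the mirrors configured above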
4. Install Docker offline
CentOS
yum install -y yum-utils device-mapper-persistent-data lvm2
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum makecache && yum -y install conntrack cri-tools ebtables ethtool kubernetes-cni socat
Ubuntu
apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update && apt install -y conntrack cri-tools ebtables ethtool kubernetes-cni socat
Debian
apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/debian $(lsb_release -cs) stable"
apt update && apt install -y conntrack cri-tools ebtables ethtool kubernetes-cni socat
Extract the binary package
wget https://download.docker.com/linux/static/stable/x86_64/docker-27.2.0.tgz
tar zxvf docker-27.2.0.tgz -C ./
cp -n ./docker/* /usr/bin/
Add the auto-start (systemd) configuration
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutStartSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Older systemd versions default to a LimitNOFILE of 1024:1024, which is insufficient for many
# applications including dockerd itself and will be inherited. Raise the hard limit, while
# preserving the soft limit for select(2).
LimitNOFILE=1024:524288

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
vim /usr/lib/systemd/system/docker.socket
[Unit]
Description=Docker Socket for the API
PartOf=docker.service

[Socket]
# If /var/run is not implemented as a symlink to /run, you may need to
# specify ListenStream=/var/run/docker.sock instead.
ListenStream=/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
Start
groupadd docker
systemctl daemon-reload
systemctl enable docker && systemctl start docker
5. Install cri-dockerd
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.15/cri-dockerd-0.3.15.amd64.tgz
tar -xf cri-dockerd-0.3.15.amd64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/cri-dockerd
curl https://github.com/Mirantis/cri-dockerd/raw/master/packaging/systemd/cri-docker.service -L -o /usr/lib/systemd/system/cri-docker.service
curl https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket -L -o /usr/lib/systemd/system/cri-docker.socket
# Modify the cri-docker configuration
vim /usr/lib/systemd/system/cri-docker.service
# Modify ExecStart to add the pod-infra-container-image parameter
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
systemctl daemon-reload
systemctl start cri-docker
# View the cri-docker info; the crictl command only becomes available after k8s has been installed
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info
6. Install ipvs
# CentOS
yum -y install ipvsadm ipset
# Ubuntu & Debian
apt -y install ipvsadm ipset
# If /etc/sysconfig/modules/ipvs.modules does not exist, create it
mkdir -p /etc/sysconfig/modules
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
#modprobe -- nf_conntrack_ipv4 # kernels above 4.x no longer have the ipv4-specific module
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
sh /etc/sysconfig/modules/ipvs.modules
# Check whether the modules are loaded
lsmod | grep ip_vs
7. Install Kubernetes
CentOS
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache
# List all installable kubelet versions
yum list --showduplicates kubelet
yum install -y kubelet-1.28.2-0 kubeadm-1.28.2-0 kubectl-1.28.2-0
Ubuntu
apt update && apt install -y apt-transport-https ca-certificates gnupg
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat << EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt update
# List all installable kubelet versions
apt-cache madison kubelet
apt install -y kubelet=1.28.2-00 kubeadm=1.28.2-00 kubectl=1.28.2-00
Enable all components to start on boot
systemctl enable containerd
systemctl enable docker
systemctl enable cri-docker
systemctl enable kubelet
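A quick sanity check that the three tools are installed and report the expected version:
kubeadm version -o short
kubelet --version
kubectl version --client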
8. Install Kubernetes offline
Install crictl (required by kubeadm/kubelet for the Container Runtime Interface (CRI))
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-amd64.tar.gz
tar zxvf crictl-v1.28.0-linux-amd64.tar.gz -C /usr/local/bin/
Install kubeadm, kubelet, and kubectl, and add the kubelet systemd service
wget https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm
wget https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet
wget https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl
chmod 755 kubeadm kubelet kubectl
cp kubeadm kubelet kubectl /usr/bin/
curl -sSL https://raw.githubusercontent.com/kubernetes/release/v0.16.2/cmd/krel/templates/latest/kubelet/kubelet.service -o /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL https://raw.githubusercontent.com/kubernetes/release/v0.16.2/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf -o /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
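After dropping in the unit files, reload systemd and enable kubelet, mirroring what the package installs above do automatically (kubelet will crash-loop until kubeadm init/join has written its configuration; that is expected):
systemctl daemon-reload
systemctl enable --now kubelet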
III. Cluster setup
1. Initialize the master node
# If the kubelet service is already running, stop it first
systemctl stop kubelet
# You can pull the images first to rule out image-pull problems
kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.2 \
--cri-socket=unix:///var/run/cri-dockerd.sock
*If pulling the images is slow or something seems wrong, check the service logs.
Check the cri-docker service log
journalctl -xefu cri-docker
# If you have initialized before, or initialized with the wrong parameters, you can reset the cluster back to an uninitialized state
kubeadm reset -f \
--cri-socket=unix:///var/run/cri-dockerd.sock
# Start the initialization
kubeadm init \
--apiserver-advertise-address=<server internal IP> \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.2 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///var/run/cri-dockerd.sock
# To skip cri-docker and let k8s talk to containerd directly, change that parameter to
--cri-socket=unix:///run/containerd/containerd.sock
The default k8s network proxy uses iptables; switching to ipvs gives better performance.
kubectl edit -n kube-system cm kube-proxy
Change mode: "ipvs"
# Delete the kube-proxy pods; k8s will recreate them automatically
kubectl get pod -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl -n kube-system delete pod
# Then check the logs; a "Using ipvs Proxier" line means the switch worked
kubectl get pod -n kube-system | grep kube-proxy
kubectl logs -n kube-system kube-proxy-xxxxx
# Check the forwarding rules
ipvsadm -Ln
Configure the environment variables
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown root:root $HOME/.kube/config
echo export KUBECONFIG=/etc/kubernetes/admin.conf >> /etc/profile
systemctl daemon-reload
systemctl restart kubelet
Check the master node
kubectl get nodes
You will see one control-plane node, but it is in NotReady state; a network plugin needs to be installed.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After installing it, wait a while and run kubectl get nodes again; the node will then be Ready.
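To check the plugin itself, look at the flannel pods; a hedged check (recent kube-flannel.yml manifests deploy into the kube-flannel namespace, older ones into kube-system):
kubectl get pods -n kube-flannel -o wide
kubectl get pods -n kube-system | grep flannel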
Installing the other network plugin, Calico
- Flannel vs Calico: the choice mainly depends on your needs. If the cluster is small and you do not need many complex network features, Flannel is a good fit. If you need a powerful network plugin that supports large clusters and complex network policies, Calico is probably the better choice.
- Best practice: for clusters that may need to scale or integrate more devices and policies in the future, Calico is recommended for its better scalability and richer feature set. For small clusters or test environments, Flannel may be the simpler, easier option.
kubectl apply -f https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
kubectl get pods --namespace=kube-system | grep calico-node
If the output shows the calico-node Pod in Running state, Calico has been installed successfully.
2. Join other nodes to the cluster
Check these files on the worker node; if any is missing, copy it over from the master.
# Network plugin configuration
scp /etc/cni/net.d/* <worker IP>:/etc/cni/net.d/
# Master cluster configuration
scp /etc/kubernetes/admin.conf <worker IP>:/etc/kubernetes/
# Startup parameters
scp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <worker IP>:/etc/systemd/system/kubelet.service.d/
*/etc/cni/net.d/*: for example, if the master already has a network plugin installed and it is Flannel, copy /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
*If the network plugin is Calico, copy /etc/cni/net.d/10-calico.conflist instead
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "log_file_path": "/var/log/calico/cni/cni.log",
      "datastore_type": "kubernetes",
      "nodename": "server-180",
      "mtu": 0,
      "ipam": { "type": "calico-ipam" },
      "policy": { "type": "k8s" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": { "portMappings": true }
    },
    {
      "type": "bandwidth",
      "capabilities": { "bandwidth": true }
    }
  ]
}
*If /etc/systemd/system/kubelet.service.d/10-kubeadm.conf does not exist on the master either, save it with the following content
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
Add a worker node
# Configure the environment variables on the worker node too, so kubectl can also be used there
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo export KUBECONFIG=/etc/kubernetes/admin.conf >> /etc/profile
systemctl daemon-reload
systemctl restart kubelet
# Run this command on the master server
kubeadm token create --print-join-command
This generates a kubeadm join command. Copy it and run it on the new worker node; the node will then join the cluster as a worker.
*Note: append the cri-socket parameter to the generated kubeadm join command, for example: kubeadm join 10.1.3.178:6443 --token z994lz.s0ogba045j84195c --discovery-token-ca-cert-hash sha256:89d69bc4b7c03bc8328713794c7aa4af798b0e65a64021a329bb9bf1d7afd23e --cri-socket=unix:///var/run/cri-dockerd.sock
Add additional master nodes
The procedure is the same as adding a worker node, except the join command takes one extra parameter, --control-plane, for example:
kubeadm join 10.1.3.178:6443 --token z994lz.s0ogba045j84195c --discovery-token-ca-cert-hash sha256:89d69bc4b7c03bc8328713794c7aa4af798b0e65a64021a329bb9bf1d7afd23e --cri-socket=unix:///var/run/cri-dockerd.sock --control-plane
*Note: to run multiple master nodes in the cluster, you also need to create the certificates and share them with every master node.
@todo
List all nodes that have joined the cluster
kubectl get nodes
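Freshly joined workers show an empty ROLES column; you can optionally label them (server-worker01 is a placeholder node name):
kubectl label node server-worker01 node-role.kubernetes.io/worker=
kubectl get nodes -o wide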
3. CNI plugins
CNI plugins are the lower layer of the network plugins used above. If kubeadm init fails (or kubectl keeps hanging after a successful init), use journalctl to check the individual service logs; you may see errors that some binary under /opt/cni/bin/ does not exist (for example portmap or flannel). Check whether the executable is present under /opt/cni/bin/ and download it if it is not.
# The base plugins
mkdir -p /opt/cni/bin/
wget https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz
tar zxvf cni-plugins-linux-amd64-v1.5.1.tgz -C /opt/cni/bin/
# The flannel plugin
wget https://github.com/flannel-io/cni-plugin/releases/download/v1.5.1-flannel2/cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz
tar zxvf cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz -C /opt/cni/bin/
mv /opt/cni/bin/flannel-amd64 /opt/cni/bin/flannel
systemctl daemon-reload
systemctl restart containerd
systemctl restart docker
systemctl restart cri-docker
systemctl restart kubelet
IV. Core components
All of the k8s core components run as pods; kubectl get pods -n kube-system shows them all.
Reference:
k8s–多master高可用集群環(huán)境搭建_kubernetes高可用多master搭建-CSDN博客
Master (control-plane) node
etcd (configuration store)
The etcd service is the default storage system provided by Kubernetes; it stores all cluster data, so you need a backup plan for the etcd data.
kube-apiserver (the brain of the k8s cluster)
kube-apiserver exposes the Kubernetes API. Every resource request or operation goes through the interfaces provided by kube-apiserver.
Provides the REST API for cluster management (including authentication/authorization, data validation, and cluster state changes)
Handles data exchange between the other modules, serving as the communication hub
Is the entry point for resource quota control
Provides a complete cluster security mechanism
kube-controller-manager (controller manager)
Runs the management controllers, the background threads that handle routine tasks in the cluster. Logically each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
Made up of a series of controllers that watch the state of the whole cluster through the apiserver and keep it in the desired working state:
● Node Controller
● Deployment Controller
● Service Controller
● Volume Controller
● Endpoint Controller
● Garbage Collector
● Namespace Controller
● Job Controller
● Resource Quota Controller
Scheduler (the scheduler; watches node resource status)
Its main job is to schedule pods onto suitable compute nodes
● Predicate policies (predicates)
● Priority policies (priorities)
Worker node
kubelet (the container daemon)
Starts and tears down containers, owns the pod lifecycle, and runs on the node
Simply put, kubelet's main job is to periodically fetch, from somewhere, the desired state of the pods on its node (which containers to run, how many replicas, how the network or storage is configured, and so on) and call the corresponding container platform interfaces to reach that state
Periodically reports the current node's status to the apiserver for use during scheduling
Cleans up images and containers so that images do not fill the node's disk and exited containers do not hold on to too many resources
kube-proxy (network proxy and load balancer)
Runs on the node; it originally used iptables, but ipvs is now the popular and more convenient choice
kube-proxy is the network proxy K8S runs on every node, and it is the carrier of the Service resource
● Establishes the relationship between the pod network and the cluster network (ClusterIP -> Pod IP)
● Three commonly used traffic scheduling modes:
● Userspace (deprecated)
● iptables (deprecated)
● IPVS (recommended)
● Responsible for creating, deleting, and updating scheduling rules, notifying the apiserver of its own updates, and fetching the scheduling-rule changes of other kube-proxy instances from the apiserver to update itself; the Endpoint Controller maintains the mapping between Services and Pods
kube-proxy implements Services, i.e. access from pods to Services inside K8s and from NodePorts to Services from outside
Note: the pod network is provided by the kubelet, not directly by kube-proxy
Workflow across the components:
User (via the kubectl command) -> API server (responds, dispatches to the Scheduler) -> Scheduler (schedules) -> Controller Manager (creates the various resources) -> etcd (state written) -> the cluster is searched (which node has resources; via the Scheduler, the pod is created on the chosen node)
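A minimal way to watch this flow end to end from the command line (the deployment name demo-nginx and the nginx image are arbitrary examples):
kubectl create deployment demo-nginx --image=nginx --replicas=2   # API server accepts it, controller manager creates the ReplicaSet and pods
kubectl get pods -o wide                                          # the scheduler has assigned each pod to a node
kubectl describe pod -l app=demo-nginx | grep -A5 Events          # the kubelet on that node pulls the image and starts the container
kubectl delete deployment demo-nginx                              # clean up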
V. Follow-up
1. Install Helm
https://github.com/helm/helm
Helm | Installing Helm
Helm is the package manager for Kubernetes; more and more components will be deployed with Helm going forward.
wget https://get.helm.sh/helm-v3.15.4-linux-amd64.tar.gz
tar zxvf helm-v3.15.4-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/
chmod +x /usr/local/bin/helm
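Confirm the binary works and can reach the cluster through your kubeconfig:
helm version --short
helm ls -A   # lists releases in all namespaces; empty output is fine on a fresh cluster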
2. Install the dashboard management UI
https://github.com/kubernetes/dashboard
Helm installation
# Add the kubernetes-dashboard repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# Deploy a "kubernetes-dashboard" release
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
Non-Helm installation
# Fetch the dashboard resource file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml -O kubernetes-dashboard.yaml
# Modify the yaml file to expose a NodePort
spec:
  type: NodePort        # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30100   # added
  selector:
    k8s-app: kubernetes-dashboard
# Apply it
kubectl apply -f kubernetes-dashboard.yaml
# Create the dashboard-admin user
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
# Bind the clusterrolebinding
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# Create a login token
kubectl create token dashboard-admin -n kubernetes-dashboard
# Access the dashboard
https://<server IP>:30100
Paste the login token created above into the token input box to sign in.
3. Completely remove k8s
# Clear the K8S cluster settings
kubeadm reset -f --cri-socket=unix:///var/run/cri-dockerd.sock
# If you used ipvs
ipvsadm --clear
# Stop K8S
systemctl stop kubelet
systemctl stop cri-docker.socket cri-docker
systemctl stop docker.socket docker
# Remove the K8S-related packages
yum -y remove kubelet kubeadm kubectl docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras
# If Docker was installed offline
yum -y remove kubelet kubeadm kubectl containerd.io
rm -rf /usr/bin/docker* /usr/lib/systemd/system/docker.service /usr/lib/systemd/system/docker.socket
# Manually delete all images, containers, and volumes
rm -rf /var/lib/docker
rm -rf /var/lib/containerd
# Completely remove the remaining related files
rm -rf $HOME/.kube ~/.kube/ /etc/kubernetes/ /etc/systemd/system/kubelet.service.d /usr/lib/systemd/system/kubelet.service /usr/lib/systemd/system/cri-docker.service /usr/bin/kube* /etc/cni /opt/cni /var/lib/etcd /etc/docker/daemon.json /etc/containerd/config.toml /usr/lib/systemd/system/containerd.service
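An optional final check that nothing Kubernetes-related is still listening (6443 = apiserver, 10250 = kubelet):
ss -lntp | grep -E '6443|10250' || echo 'no kubernetes ports in use'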
VI. Tips
1. Some useful commands
# List images with the containerd CLI
ctr image list
# List the images k8s pulled under containerd
ctr -n k8s.io image list
# List images with the cri-docker CLI
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock image
# Force-delete a pod
kubectl delete pod <pod> -n <namespace> --grace-period=0 --force
# View the iptables forwarding rules
iptables -L
# View the ipvs forwarding rules
ipvsadm -Ln
2. Image pulls are too slow
Reference:
K8S Containerd導(dǎo)入Docker image鏡像_containerd導(dǎo)入docker鏡像-CSDN博客
When K8S creates containers, some images will inevitably fail to pull (network issues and the like).
Back when we were still using Docker Engine, we could simply pull a third-party mirrored image, tag it with the needed version, and let K8S pick up the desired image locally.
Because Docker donated its container format and the runC runtime to the OCI (Open Container Initiative), the OCI has standardized a large number of interfaces between container tools and the underlying implementations.
Take speeding up the Calico network plugin pulls as an example.
# Pull the docker images
docker pull calico/cni:v3.25.0
docker pull calico/node:v3.25.0
# Tag the images with the tags k8s expects
docker tag calico/cni:v3.25.0 docker.io/calico/cni:v3.25.0
docker tag calico/node:v3.25.0 docker.io/calico/node:v3.25.0
# Save the images to files
docker save -o ./calico-cni.tar calico/cni:v3.25.0 docker.io/calico/cni:v3.25.0
docker save -o ./calico-node.tar calico/node:v3.25.0 docker.io/calico/node:v3.25.0
Then import the images. Note that they must be imported into k8s.io, the containerd namespace K8S uses; otherwise it will not find them.
# Import; the -n parameter specifies the namespace
ctr -n k8s.io image import calico-cni.tar
ctr -n k8s.io image import calico-node.tar
# Confirm the import
ctr -n k8s.io image list | grep calico
# crictl is the CRI tool defined by the Kubernetes community; confirm there as well
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock image | grep calico
# Apply
wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
kubectl apply -f calico.yaml
At this point K8S can find the corresponding images locally (remember to confirm that imagePullPolicy is set to IfNotPresent or Never).
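A hedged way to check which pull policy will be used for the imported images, for the calico-node DaemonSet that the v3.25 manifest creates in kube-system:
kubectl -n kube-system get daemonset calico-node -o jsonpath='{.spec.template.spec.containers[*].imagePullPolicy}'; echo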
3. Some errors that may occur
Failed to start docker.service: Unit docker.service is masked
systemctl unmask docker.socket
systemctl unmask docker.service
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
modprobe bridge
modprobe br_netfilter
sysctl --system
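After loading the modules, the file the preflight check complained about should exist and read 1:
cat /proc/sys/net/bridge/bridge-nf-call-iptables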