Contents
Preparing the installation files
Host preparation
Host configuration
Set the hostnames (run on each of the three nodes)
Configure hosts (all nodes)
Disable the firewall, SELinux, swap, and dnsmasq (all nodes)
Install dependency packages (all nodes)
Kernel parameter settings (all nodes)
Time synchronization (all nodes)
Enable ipvs support (all nodes)
Install Docker (all nodes)
Remove old versions
Install Docker
Install dependencies
Install
Test start
Add systemd unit
Configure the cgroup driver
Kubernetes preparation and installation
Prepare the images (all nodes)
Retag the images (all nodes)
Install kubeadm, kubelet, and kubectl (all nodes)
Install the master (master node)
Install the Kubernetes nodes (node nodes)
Install the Calico network plugin (master node)
Using and testing Kubernetes
Install Kuboard
Install with the built-in user store
Access Kuboard v3.x
Install via Kubernetes
Access Kuboard
Uninstall
References and common errors (see References)
Preparing the installation files
Host preparation
Host configuration
172.171.16.147 crawler-k8s-master
172.171.16.148 crawler-k8s-node1
172.171.16.149 crawler-k8s-node2
Set the hostnames (run on each of the three nodes)
172.171.16.147
hostnamectl set-hostname crawler-k8s-master
172.171.16.148
hostnamectl set-hostname crawler-k8s-node1
172.171.16.149
hostnamectl set-hostname crawler-k8s-node2
Check the hostname:
hostnamectl  # show the hostname
Configure hosts (all nodes)
Edit the /etc/hosts file:
cat >> /etc/hosts << EOF
172.171.16.147 crawler-k8s-master
172.171.16.148 crawler-k8s-node1
172.171.16.149 crawler-k8s-node2
EOF
Disable the firewall, SELinux, swap, and dnsmasq (all nodes)
Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary
Disable swap (Kubernetes requires swap to be off for performance reasons):
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
swapoff -a  # temporary
Disable dnsmasq (otherwise Docker containers may fail to resolve domain names):
service dnsmasq stop
systemctl disable dnsmasq
Install dependency packages (all nodes)
yum -y update
yum install wget -y
yum install vim -y
yum -y install conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
Kernel parameter settings (all nodes)
Create a configuration file to set the bridge parameters:
mkdir /etc/sysctl.d
vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
Apply the file:
sysctl -p /etc/sysctl.d/kubernetes.conf
If this reports an error:
[root@crawler-k8s-master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
load the bridge netfilter module:
modprobe br_netfilter
and run the command again:
sysctl -p /etc/sysctl.d/kubernetes.conf
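The modprobe above loads br_netfilter only until the next reboot. A minimal sketch, assuming a systemd-based system (CentOS 7+ and Kylin honor /etc/modules-load.d at boot), to make it persistent:

```shell
# Have systemd load br_netfilter at every boot, so the
# net.bridge.* sysctl keys stay available after a restart.
cat > /etc/modules-load.d/k8s.conf << 'EOF'
br_netfilter
EOF
cat /etc/modules-load.d/k8s.conf
```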
Time synchronization (all nodes)
Install the time synchronization service:
yum -y install chrony
Start and enable the service:
systemctl start chronyd
systemctl enable chronyd
Enable ipvs support (all nodes)
Kubernetes Services can be proxied in two modes: iptables-based and ipvs-based. ipvs performs noticeably better, but using it requires the ipvs kernel modules to be loaded manually.
Write the modules to be loaded into a script file:
vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
Make the script executable:
chmod +x /etc/sysconfig/modules/ipvs.modules
Run the script:
/bin/bash /etc/sysconfig/modules/ipvs.modules
Note: if this reports an error, you may need to change modprobe -- nf_conntrack_ipv4 to modprobe -- nf_conntrack (the module was renamed on newer kernels).
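The fallback from the note can be built into the script itself; a hedged sketch (module-load failures are suppressed so the nf_conntrack line decides which variant sticks; the final echo is only a completion marker):

```shell
#!/bin/bash
# Variant of /etc/sysconfig/modules/ipvs.modules with the fallback:
# nf_conntrack_ipv4 on older kernels, nf_conntrack on 4.19+.
load() { modprobe -- "$1" 2>/dev/null; }
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do
  load "$mod" || true
done
load nf_conntrack_ipv4 || load nf_conntrack || true
echo "ipvs modules requested"
```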
Install Docker (all nodes)
Remove old versions
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
Install Docker
The Docker packages are in the docker directory of the prepared files; upload them to the server.
Install dependencies
Copy the dependencies to each node.
Install them:
rpm -ivh containerd.io-1.6.10-3.1.el7.x86_64.rpm --force --nodeps
rpm -ivh container-selinux-2.138.0-1.p01.ky10.noarch.rpm --force --nodeps
rpm -ivh docker-ce-20.10.21-3.el7.x86_64.rpm --force --nodeps
rpm -ivh docker-ce-cli-20.10.21-3.el7.x86_64.rpm --force --nodeps
rpm -ivh docker-compose-1.22.0-4.ky10.noarch.rpm --force --nodeps
rpm -ivh docker-scan-plugin-0.21.0-3.el7.x86_64.rpm --force --nodeps
rpm -ivh libsodium-1.0.16-7.ky10.x86_64.rpm --force --nodeps
rpm -ivh python3-bcrypt-3.1.4-8.ky10.x86_64.rpm --force --nodeps
rpm -ivh python3-cached_property-1.5.1-1.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-docker-4.0.2-1.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-dockerpty-0.4.1-1.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-docker-pycreds-0.4.0-1.1.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-docopt-0.6.2-11.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-ipaddress-1.0.23-1.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-jsonschema-2.6.0-6.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-paramiko-2.4.3-1.ky10.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-pyasn1-0.3.7-8.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-pyyaml-5.3.1-4.ky10.x86_64.rpm --force --nodeps
rpm -ivh python3-texttable-1.4.0-2.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-websocket-client-0.47.0-6.ky10.noarch.rpm --force --nodeps
rpm -ivh fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm
rpm -ivh slirp4netns-0.4.3-4.el7_8.x86_64.rpm
Install
tar xf docker-20.10.9.tgz
mv docker/* /usr/bin/
Test start
dockerd
Add systemd unit
Edit Docker's systemd service file:
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
Start Docker and enable it at boot:
systemctl start docker && systemctl enable docker
Configure the cgroup driver
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# enable at boot
systemctl start docker
systemctl enable docker
# restart docker
systemctl daemon-reload
systemctl restart docker
Kubernetes preparation and installation
The packages are in the docker-images directory of the prepared files; upload them to /home on the server.
Prepare the images (all nodes)
Extract:
cd /home/docker-images/
tar -zxvf kubeadm-images-1.18.0.tar.gz -C /home/docker-images/kubeadm-images-1.18.0
Create the image-loading script:
vim load-image.sh
#!/bin/bash
ls /home/docker-images/kubeadm-images-1.18.0 > /home/images-list.txt
cd /home/docker-images/kubeadm-images-1.18.0
docker load -i /home/docker-images/cni.tar
docker load -i /home/docker-images/node.tar
docker load -i /home/docker-images/kuboard.tar
for i in $(cat /home/images-list.txt)
do
  docker load -i $i
done
Then load the images:
chmod +x load-image.sh
./load-image.sh
Retag the images (all nodes)
Retag the bundled images to the versions required by Kubernetes 1.23.7.
Command format: docker tag <image ID or name> <image name>:<tag>
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.25.4 registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.7
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.25.4 registry.aliyuncs.com/google_containers/kube-proxy:v1.23.7
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.25.4 registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.7
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.25.4 registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.7
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.5-0 registry.aliyuncs.com/google_containers/etcd:3.5.5-0
docker tag registry.aliyuncs.com/google_containers/pause:3.8 registry.aliyuncs.com/google_containers/pause:3.8
docker tag registry.aliyuncs.com/google_containers/coredns:v1.9.3 registry.aliyuncs.com/google_containers/coredns:v1.9.3
All required images are now in place.
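The four control-plane retags above follow one pattern, so they can also be generated in a loop; a dry-run sketch (echo prints the commands — drop it to execute them):

```shell
# Print the docker tag commands mapping the bundled v1.25.4
# control-plane images onto the v1.23.7 names kubeadm expects.
repo=registry.aliyuncs.com/google_containers
for img in kube-apiserver kube-proxy kube-scheduler kube-controller-manager; do
  echo docker tag "$repo/$img:v1.25.4" "$repo/$img:v1.23.7"
done
```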
Install kubeadm, kubelet, and kubectl (all nodes)
The packages are in the k8s directory of the prepared files; upload them to /home on the server.
What the tools do:
- kubeadm: the command used to bootstrap the cluster
- kubelet: the agent that runs on every machine in the cluster and manages the lifecycle of pods and containers
- kubectl: the command-line tool for managing the cluster
Install:
cd /home/k8s
rpm -ivh *.rpm
Start kubelet and enable it at boot:
systemctl start kubelet && systemctl enable kubelet
Install the master (master node)
kubeadm init --apiserver-advertise-address=172.171.16.147 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.7 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16
The log output is as follows:
[init] Using Kubernetes version: v1.23.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [crawler-k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.171.16.147]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [crawler-k8s-master localhost] and IPs [172.171.16.147 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [crawler-k8s-master localhost] and IPs [172.171.16.147 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.507186 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node crawler-k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node crawler-k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: i4dp7i.7t1j8ezmgwkj1gio
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.171.16.147:6443 --token i4dp7i.7t1j8ezmgwkj1gio \
	--discovery-token-ca-cert-hash sha256:9fb74686ff3bea5769e5ed466dbb2c32ed3fc920374ff2175b39b8162ac27f8f
On the master, run the commands shown in the output above:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Install the Kubernetes nodes (node nodes)
Join the nodes to the cluster:
kubeadm join 172.171.16.147:6443 --token i4dp7i.7t1j8ezmgwkj1gio \
	--discovery-token-ca-cert-hash sha256:9fb74686ff3bea5769e5ed466dbb2c32ed3fc920374ff2175b39b8162ac27f8f
The log output is:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Install the Calico network plugin (master node)
The file is in the prepared files at k8s/calico.yaml; upload it to /home on the server.
Calico can also be downloaded from https://docs.projectcalico.org/manifests/calico.yaml
Strip the docker.io prefix from the image references in the file:
grep image calico.yaml
sed -i 's#docker.io/##g' calico.yaml
kubectl apply -f calico.yaml
Possible issues
(1) Set the CALICO_IPV4POOL_CIDR parameter to the --pod-network-cidr value (10.244.0.0/16) used in:
kubeadm init --apiserver-advertise-address=172.171.16.147 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.7 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16
(2) Set IP_AUTODETECTION_METHOD to the network interface name (if this parameter is absent, no change is needed).
Find the interface name with:
ip a
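For issue (1) the edit can be scripted with sed. A sketch, assuming the copy of calico.yaml carries the common commented-out default of 192.168.0.0/16 (the default varies by Calico version — check your file first); it is demonstrated here on a temporary snippet:

```shell
# Uncomment CALICO_IPV4POOL_CIDR and point it at the cluster's
# --pod-network-cidr (10.244.0.0/16), shown on a sample snippet.
cat > /tmp/calico-cidr.yaml << 'EOF'
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
EOF
sed -i \
  -e 's/# *- name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/' \
  -e 's|# *value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' \
  /tmp/calico-cidr.yaml
cat /tmp/calico-cidr.yaml
```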
Using and testing Kubernetes
kubectl create deployment nginx --image=nginx  # deploy nginx
kubectl expose deployment nginx --port=80 --type=NodePort  # expose the port
kubectl get pod,svc  # check the service status
Deployment complete.
Install Kuboard
The image is in the prepared files at docker-images/kuboard.tar and also in the kuboard directory; upload both directories to /home on the server.
cd /home/docker-images
docker load -i kuboard.tar
Install with the built-in user store
Official installation guide: "Install Kuboard v3 - built-in user store" on the Kuboard site
sudo docker run -d \
  --restart=unless-stopped \
  --name=kuboard \
  -p 80:80/tcp \
  -p 10081:10081/tcp \
  -e KUBOARD_ENDPOINT="http://172.171.16.147:80" \
  -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
  -v /root/kuboard-data:/data \
  eipwork/kuboard:v3
# The image swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3 can be used instead for faster downloads.
# Do not use 127.0.0.1 or localhost as the internal IP.
# Kuboard does not need to be on the same subnet as K8S; the Kuboard Agent can even reach the Kuboard Server through a proxy.
WARNING
- The KUBOARD_ENDPOINT parameter tells the kuboard-agent deployed into Kubernetes how to reach the Kuboard Server; a public IP can also be used here.
- Kuboard does not need to be on the same subnet as K8S; the Kuboard Agent can even reach the Kuboard Server through a proxy.
- Using a domain name in KUBOARD_ENDPOINT is recommended.
- If a domain name is used, it must resolve correctly through DNS; configuring it only in the host's /etc/hosts file will not work.
Parameter notes
- Save this command as a shell script, e.g. start-kuboard.sh, so the parameters used for the original installation are known when upgrading or restoring Kuboard later.
- Line 4 maps the Kuboard web port 80 to host port 80 (any other host port may be chosen).
- Line 5 maps the Kuboard Agent Server port 10081/tcp to host port 10081 (any other host port may be chosen).
- Line 6 sets KUBOARD_ENDPOINT to http://<internal IP>; if this parameter is changed later, the Kubernetes clusters already imported into Kuboard must be removed and re-imported.
- Line 7 sets the KUBOARD_AGENT_SERVER port to 10081; it must match the host port chosen in line 5. Changing it does not change the port 10081 the container listens on: if line 5 is -p 30081:10081/tcp, then line 7 should be -e KUBOARD_AGENT_SERVER_TCP_PORT="30081".
- Line 8 maps the persistent data directory /data to /root/kuboard-data on the host; adjust the host path as needed.
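Since the host port in line 5 and KUBOARD_AGENT_SERVER_TCP_PORT in line 7 must move together, they can be derived from one variable; a dry-run sketch (echo prints the command instead of running it; AGENT_PORT=30081 is an example value):

```shell
# Derive both the -p mapping and the env var from a single
# AGENT_PORT so the two values can never drift apart.
AGENT_PORT=30081
echo docker run -d --restart=unless-stopped --name=kuboard \
  -p 80:80/tcp \
  -p "${AGENT_PORT}:10081/tcp" \
  -e KUBOARD_ENDPOINT="http://172.171.16.147:80" \
  -e KUBOARD_AGENT_SERVER_TCP_PORT="${AGENT_PORT}" \
  -v /root/kuboard-data:/data \
  eipwork/kuboard:v3
```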
Other parameters
- Adding the environment variable KUBOARD_ADMIN_DERAULT_PASSWORD to the start command sets the initial default password of the admin user.
Access Kuboard v3.x
Open http://172.171.16.147:80 in a browser to reach the Kuboard v3.x interface. Log in with:
- Username: admin
- Password: Kuboard123
Install via Kubernetes
The manifests are in the kuboard directory of the prepared files; upload them to /home on the server.
Reference: "Install Kuboard v3 - kubernetes" on the Kuboard site
Install Kuboard v3 into K8S:
kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
# Alternatively, the following manifest uses Huawei Cloud's image registry instead of Docker Hub to distribute the Kuboard images:
# kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3-swr.yaml
Wait for Kuboard v3 to become ready:
Run watch kubectl get pods -n kuboard and wait until every Pod in the kuboard namespace is ready, as shown below.
If no kuboard-etcd-xxxxx pod appears in the output, see the "missing Master Role" entry under Common Errors.
[root@node1 ~]# kubectl get pods -n kuboard
NAME                               READY   STATUS    RESTARTS   AGE
kuboard-agent-2-65bc84c86c-r7tc4   1/1     Running   2          28s
kuboard-agent-78d594567-cgfp4      1/1     Running   2          28s
kuboard-etcd-fh9rp                 1/1     Running   0          67s
kuboard-etcd-nrtkr                 1/1     Running   0          67s
kuboard-etcd-ader3                 1/1     Running   0          67s
kuboard-v3-645bdffbf6-sbdxb        1/1     Running   0          67s
Access Kuboard
- Open http://your-node-ip-address:30080 in a browser.
- Log in with the initial credentials:
  - Username: admin
  - Password: Kuboard123
Browser compatibility
- Use Chrome / FireFox / Safari / Edge or a similar browser.
- IE and IE-based browsers are not supported.
Adding a new cluster
- Kuboard v3 supports managing multiple Kubernetes clusters; on the Kuboard v3 home page, click the Add Cluster button and follow the wizard.
- When adding a new Kubernetes cluster to Kuboard v3, make sure:
  - the new cluster can reach the internal IP of the current cluster's master node on ports 30080 TCP, 30081 TCP, and 30081 UDP;
  - if the new cluster is not on the same LAN as the current cluster, contact the Kuboard team for help.
Uninstall
- Uninstall Kuboard v3:
kubectl delete -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
- Clean up leftover data: on the master node and on every node labeled k8s.kuboard.cn/role=etcd, run:
rm -rf /usr/share/kuboard
References and common errors (see References)
Deploying a k8s cluster with kubeadm
Installing and trying out Kubernetes
kube-flannel.yml (image source already changed)
Linux advanced: building k8s with the Calico network plugin