Preface
- Sharing a k8s cluster traffic viewer: a very lightweight tool that makes monitoring easy
- This post covers: a brief introduction to Kubeshark; download, run, and monitoring demos on Windows and Linux; and an overview of Kubeshark's features
- If my understanding falls short anywhere, corrections are welcome
For each of us there is only one true vocation: to find ourselves, and then to hold to it in our hearts for a lifetime, wholeheartedly and without pause. Every other path is incomplete, a means of escape, a cowardly retreat to the ideals of the masses, a drifting with the current, a fear of one's own inner self. (Hermann Hesse, Demian)
A Brief Introduction
Kubeshark grew out of Mizu, a K8s API traffic viewer open-sourced by the company UP9 in 2021, and aims to become a full-coverage traffic monitoring tool for K8s.
Kubeshark is also called the API traffic viewer for Kubernetes: it provides deep visibility into, and monitoring of, all API traffic and payloads going in and out of the pods inside a Kubernetes cluster. Think of it as TCPDump and Wireshark reinvented for Kubernetes.
Kubeshark is a minimal-footprint distributed packet capture, built specifically to run on large-scale production clusters.
Kubeshark Architecture
Kubeshark consists of four different pieces of software that work together: the CLI, the Hub, the Workers, and a PCAP-based distributed storage.
CLI (command-line interface): the binary distribution of the Kubeshark client, written in Go. It talks to the cluster through the K8s API to deploy the Hub and Worker pods, and is used to start and stop Kubeshark.
Hub: coordinates the Worker deployment, receives the sniffed and dissected traffic from every Worker, and gathers it in one central place. It also serves a web UI that displays the collected traffic in the browser.
Worker: deployed into the cluster as a DaemonSet, so that every node in the cluster is covered by Kubeshark.
PCAP-based distributed storage: Kubeshark uses a distributed, PCAP-based storage in which each Worker stores the captured TCP streams on its node's root filesystem. Kubeshark's configuration includes a storage limit, 200MB per node by default, which can be changed via a CLI option.
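Once a capture is running, all of these pieces show up as ordinary cluster resources, and plain kubectl can inspect them (the kubeshark namespace is where the CLI deploys everything, as the proxy logs later in this post show):
# Hub pod, front pod, worker DaemonSet, and their services
kubectl get pods,services,daemonsets -n kubeshark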
Download, Install & Feature Demo
Windows download and installation
Install it as follows:
PS C:\Users\山河已無恙> curl -o kubeshark.exe https://github.com/kubeshark/kubeshark/releases/download/38.5/kubeshark.exe
Before running it, copy the cluster's kubeconfig file into place (see the sketch just below). Kubeshark can only be run from the command line, like so: .\kubeshark.exe tap -A
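As a minimal sketch, assuming the cluster's admin kubeconfig has been saved locally as admin.conf (a hypothetical filename), place it where the client looks by default, or point the standard KUBECONFIG environment variable at it:
# create the default kubeconfig location (PowerShell)
mkdir -Force $HOME\.kube
# admin.conf is a hypothetical source file; use your cluster's kubeconfig
Copy-Item .\admin.conf $HOME\.kube\config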
Monitor the traffic of all namespaces:
PS C:\Users\山河已無恙> .\kubeshark.exe tap -A
2023-03-03T12:08:20-05:00 INF versionCheck.go:23 > Checking for a newer version...
2023-03-03T12:08:20-05:00 INF tapRunner.go:45 > Using Docker: registry=docker.io/kubeshark/ tag=latest
2023-03-03T12:08:20-05:00 INF tapRunner.go:53 > Kubeshark will store the traffic up to a limit (per node). Oldest TCP streams will be removed once the limit is reached. limit=200MB
2023-03-03T12:08:20-05:00 INF common.go:69 > Using kubeconfig: path="C:\\Users\\山河已無恙\\.kube\\config"
2023-03-03T12:08:20-05:00 INF tapRunner.go:74 > Targeting pods in: namespaces=[""]
2023-03-03T12:08:20-05:00 INF tapRunner.go:129 > Targeted pod: cadvisor-5v7hl
2023-03-03T12:08:20-05:00 INF tapRunner.go:129 > Targeted pod: cadvisor-7dnmk
2023-03-03T12:08:20-05:00 INF tapRunner.go:129 > Targeted pod: cadvisor-7l4zf
2023-03-03T12:08:20-05:00 INF tapRunner.go:129 > Targeted pod: cadvisor-dj6dm
2023-03-03T12:08:20-05:00 INF tapRunner.go:129 > Targeted pod: cadvisor-sjpq8
2023-03-03T12:08:20-05:00 INF tapRunner.go:129 > Targeted pod: alertmanager-release-name-kube-promethe-alertmanager-0
2023-03-03T12:08:20-05:00 INF tapRunner.go:129 > Targeted pod: details-v1-5ffd6b64f7-wfbl2
...............................
Make sure all the pods start successfully.
If the pods are created normally, the monitoring page opens directly.
If startup fails, the pods are not created successfully, or the monitoring page does not open and ports 8899 and 8898 are reported as unreachable, try kubeshark.exe clean to clean up the environment and reinstall. You can also try specifying a storage size with --storagelimit 5000MB.
PS C:\Users\山河已無恙> .\kubeshark.exe clean
2023-03-03T12:59:58-05:00 INF versionCheck.go:23 > Checking for a newer version...
2023-03-03T12:59:58-05:00 INF common.go:69 > Using kubeconfig: path="C:\\Users\\山河已無恙\\.kube\\config"
2023-03-03T12:59:58-05:00 WRN cleanResources.go:16 > Removing Kubeshark resources...
PS C:\Users\山河已無恙> .\kubeshark.exe tap -A --storagelimit 5000MB
Restart:
PS C:\Users\山河已無恙> .\kubeshark.exe tap -A --storagelimit 5000MB
2023-03-03T12:30:36-05:00 INF versionCheck.go:23 > Checking for a newer version...
2023-03-03T12:30:36-05:00 INF tapRunner.go:45 > Using Docker: registry=docker.io/kubeshark/ tag=latest
2023-03-03T12:30:36-05:00 INF tapRunner.go:53 > Kubeshark will store the traffic up to a limit (per node). Oldest TCP streams will be removed once the limit is reached. limit=5000MB
2023-03-03T12:30:36-05:00 INF common.go:69 > Using kubeconfig: path="C:\\Users\\山河已無恙\\.kube\\config"
2023-03-03T12:30:37-05:00 INF tapRunner.go:74 > Targeting pods in: namespaces=[""]
2023-03-03T12:30:37-05:00 INF tapRunner.go:129 > Targeted pod: cadvisor-5v7hl
..........
2023-03-03T12:31:21-05:00 INF proxy.go:29 > Starting proxy... namespace=kubeshark service=kubeshark-hub src-port=8898
2023-03-03T12:31:21-05:00 INF workers.go:32 > Creating the worker DaemonSet...
2023-03-03T12:31:21-05:00 INF workers.go:51 > Successfully created the worker DaemonSet.
2023-03-03T12:31:23-05:00 INF tapRunner.go:402 > Hub is available at: url=http://localhost:8898
2023-03-03T12:31:23-05:00 INF proxy.go:29 > Starting proxy... namespace=kubeshark service=kubeshark-front src-port=8899
2023-03-03T12:31:23-05:00 INF tapRunner.go:418 > Kubeshark is available at: url=http://localhost:8899
Repeated tests suggest that startup failure is an intermittent problem, but one that happens fairly often: the image pull may time out, or the proxy may fail to be created, leaving the local ports unreachable.
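When that happens, ordinary kubectl inspection (nothing Kubeshark-specific here) usually shows which of the two it was; an ImagePullBackOff in the pod events points to the image pull:
# list the deployed pods; <pod-name> below is a placeholder
kubectl get pods -n kubeshark
kubectl describe pod <pod-name> -n kubeshark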
Features
The monitoring page shows each entry's protocol, request path, response status, request method, source and destination IPs, and which pod initiated it. Packets can be filtered with the filter bar.
Filter expressions have dedicated documentation. Demo:
Filter for HTTP requests whose response status code is 404:
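As a sketch of what such a filter looks like, based on my reading of the filter docs linked in the references (verify the exact syntax there), you would type something like this into the filter bar:
http and response.status == 404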
Here we can see that the cluster backup tool velero may have a problem; check the corresponding topology to confirm.
The top of the page can list all the pods.
Select a packet to view its details.
Request and response messages:
The full content of a message can be inspected.
With no packet filter applied, the topology view in the upper right corner gives an overall picture of everything currently being monitored.
Arrow thickness represents traffic volume; colors represent different protocols.
Linux download and installation
Kubeshark can also be installed on Linux. A few points to note:
- By default, kubeshark automatically creates a port proxy on the node it is deployed from, so there is no need to change the hub SVC it creates to NodePort or LB (LoadBalancer)
- If the current environment has no desktop, add --set headless=true and --proxy-host 0.0.0.0 to keep it from opening a browser and to allow access from external IPs (the full command is shown in the run below)
Download the binary to run:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$curl -s -Lo kubeshark_linux_amd64 https://github.com/kubeshark/kubeshark/releases/download/38.5/kubeshark_linux_amd64 && chmod 755 kubeshark_linux_amd64
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$ls
kubeshark_linux_amd64
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$
Move the file and rename it so it can be executed directly:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$mv kubeshark_linux_amd64 /usr/local/bin/kubeshark
┌──[root@vms100.liruilongs.github.io]-[/usr/local/bin]
└─$chmod +x kubeshark
kubeshark clean is used to clear out the current deployment environment:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$kubeshark clean
2023-03-03T09:44:32+08:00 INF versionCheck.go:23 > Checking for a newer version...
2023-03-03T09:44:32+08:00 INF common.go:69 > Using kubeconfig: path=/root/.kube/config
2023-03-03T09:44:32+08:00 WRN cleanResources.go:16 > Removing Kubeshark resources...
Run kubeshark, monitoring all namespaces:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$kubeshark tap -A --storagelimit 2000MB --proxy-host 0.0.0.0 --set headless=true
2023-03-04T01:53:47+08:00 INF tapRunner.go:45 > Using Docker: registry=docker.io/kubeshark/ tag=latest
2023-03-04T01:53:47+08:00 INF tapRunner.go:53 > Kubeshark will store the traffic up to a limit (per node). Oldest TCP streams will be removed once the limit is reached. limit=2000MB
2023-03-04T01:53:47+08:00 INF versionCheck.go:23 > Checking for a newer version...
.........
2023-03-04T01:53:48+08:00 INF tapRunner.go:240 > Added: pod=kubeshark-front
2023-03-04T01:53:48+08:00 INF tapRunner.go:160 > Added: pod=kubeshark-hub
2023-03-04T01:54:25+08:00 WRN watch.go:61 > K8s watch channel closed, restarting watcher...
2023-03-04T01:54:25+08:00 WRN watch.go:61 > K8s watch channel closed, restarting watcher...
2023-03-04T01:54:25+08:00 WRN watch.go:61 > K8s watch channel closed, restarting watcher...
2023-03-04T01:54:30+08:00 INF tapRunner.go:240 > Added: pod=kubeshark-front
2023-03-04T01:54:30+08:00 INF tapRunner.go:160 > Added: pod=kubeshark-hub
2023-03-04T01:54:58+08:00 INF proxy.go:29 > Starting proxy... namespace=kubeshark service=kubeshark-hub src-port=8898
2023-03-04T01:54:58+08:00 INF workers.go:32 > Creating the worker DaemonSet...
2023-03-04T01:54:59+08:00 INF workers.go:51 > Successfully created the worker DaemonSet.
2023-03-04T01:55:00+08:00 INF tapRunner.go:402 > Hub is available at: url=http://localhost:8898
2023-03-04T01:55:00+08:00 INF proxy.go:29 > Starting proxy... namespace=kubeshark service=kubeshark-front src-port=8899
2023-03-04T01:55:00+08:00 INF tapRunner.go:418 > Kubeshark is available at: url=http://localhost:8899
2023-03-04T01:55:56+08:00 WRN watch.go:61 > K8s watch channel closed, restarting watcher...
2023-03-04T01:55:56+08:00 WRN watch.go:61 > K8s watch channel closed, restarting watcher...
2023-03-04T01:55:56+08:00 WRN watch.go:61 > K8s watch channel closed, restarting watcher...
2023-03-04T01:56:01+08:00 INF tapRunner.go:240 > Added: pod=kubeshark-front
2023-03-04T01:56:01+08:00 INF tapRunner.go:160 > Added: pod=kubeshark-hub
Access it in the browser:
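Because the proxy was started with --proxy-host 0.0.0.0, the UI should be reachable from other machines as well, not only from localhost on the node itself; for example (substitute your host's real address for the placeholder):
# quick reachability check from another machine
curl -I http://<host-ip>:8899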
拓?fù)湫畔⒉榭?/p>
Problems encountered during deployment
If startup fails, you can run a check with the kubeshark check command, which verifies whether the pods and proxies were deployed successfully:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$kubeshark check
2023-03-03T22:33:22+08:00 INF checkRunner.go:21 > Checking the Kubeshark resources...
2023-03-03T22:33:22+08:00 INF versionCheck.go:23 > Checking for a newer version...
2023-03-03T22:33:22+08:00 INF kubernetesApi.go:11 > Checking: procedure=kubernetes-api
..........
2023-03-03T22:33:22+08:00 INF kubernetesPermissions.go:89 > Can create services
2023-03-03T22:33:23+08:00 INF kubernetesPermissions.go:89 > Can create daemonsets in api group 'apps'
2023-03-03T22:33:23+08:00 INF kubernetesPermissions.go:89 > Can patch daemonsets in api group 'apps'
2023-03-03T22:33:23+08:00 INF kubernetesPermissions.go:89 > Can list namespaces
..........
2023-03-03T22:33:23+08:00 INF kubernetesResources.go:116 > Resource exist. name=kubeshark-cluster-role-binding type="cluster role binding"
2023-03-03T22:33:23+08:00 INF kubernetesResources.go:116 > Resource exist. name=kubeshark-hub type=service
2023-03-03T22:33:23+08:00 INF kubernetesResources.go:64 > Pod is running. name=kubeshark-hub
2023-03-03T22:33:23+08:00 INF kubernetesResources.go:92 > All 8 pods are running. name=kubeshark-worker
2023-03-03T22:33:23+08:00 INF serverConnection.go:11 > Checking: procedure=server-connectivity
2023-03-03T22:33:23+08:00 INF serverConnection.go:33 > Connecting: url=http://localhost:8898
2023-03-03T22:33:26+08:00 ERR serverConnection.go:16 > Couldn't connect to Hub using proxy! error="Couldn't reach the URL: http://localhost:8898 after 3 retries!"
2023-03-03T22:33:26+08:00 INF serverConnection.go:33 > Connecting: url=http://localhost:8899
2023-03-03T22:33:29+08:00 ERR serverConnection.go:23 > Couldn't connect to Front using proxy! error="Couldn't reach the URL: http://localhost:8899 after 3 retries!"
2023-03-03T22:33:29+08:00 ERR checkRunner.go:50 > There are issues in your Kubeshark resources! Run these commands: command1="kubeshark clean" command2="kubeshark tap [POD REGEX]"
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$
If the proxy cannot be created, try the approach below (see the CHANGELOG):
https://github.com/kubeshark/kubeshark/wiki/CHANGELOG
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$jobs
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$coproc kubectl port-forward -n kubeshark service/kubeshark-hub 8898:80;
[1] 125248
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$coproc kubectl port-forward -n kubeshark service/kubeshark-front 8899:80;
-bash: 警告:execute_coproc: coproc [125248:COPROC] still exists
[2] 125784
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$jobs
[1]- 運(yùn)行中 coproc COPROC kubectl port-forward -n kubeshark service/kubeshark-hub 8898:80 &
[2]+ 運(yùn)行中 coproc COPROC kubectl port-forward -n kubeshark service/kubeshark-front 8899:80 &
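If the forwarded ports also need to be reachable from outside the machine, kubectl port-forward's own --address flag (a standard kubectl option, not Kubeshark-specific) binds them to all interfaces:
# bind the forwards to all interfaces and background them
kubectl port-forward -n kubeshark --address 0.0.0.0 service/kubeshark-hub 8898:80 &
kubectl port-forward -n kubeshark --address 0.0.0.0 service/kubeshark-front 8899:80 &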
Run the check again:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/Kubeshark]
└─$kubeshark check
2023-03-03T23:35:50+08:00 INF checkRunner.go:21 > Checking the Kubeshark resources...
2023-03-03T23:35:50+08:00 INF kubernetesApi.go:11 > Checking: procedure=kubernetes-api
2023-03-03T23:35:50+08:00 INF versionCheck.go:23 > Checking for a newer version...
2023-03-03T23:35:50+08:00 INF kubernetesApi.go:18 > Initialization of the client is passed.
2023-03-03T23:35:50+08:00 INF kubernetesApi.go:25 > Querying the Kubernetes API is passed.
2023-03-03T23:35:50+08:00 INF kubernetesVersion.go:13 > Checking: procedure=kubernetes-version
2023-03-03T23:35:50+08:00 INF kubernetesVersion.go:20 > Minimum required Kubernetes API version is passed. k8s-version=v1.25.1
2023-03-03T23:35:50+08:00 INF kubernetesPermissions.go:16 > Checking: procedure=kubernetes-permissions
2023-03-03T23:35:50+08:00 INF kubernetesPermissions.go:89 > Can list pods
.......
2023-03-03T23:35:51+08:00 INF kubernetesResources.go:64 > Pod is running. name=kubeshark-hub
2023-03-03T23:35:51+08:00 INF kubernetesResources.go:92 > All 8 pods are running. name=kubeshark-worker
2023-03-03T23:35:51+08:00 INF serverConnection.go:11 > Checking: procedure=server-connectivity
2023-03-03T23:35:51+08:00 INF serverConnection.go:33 > Connecting: url=http://localhost:8898
2023-03-03T23:35:52+08:00 INF serverConnection.go:19 > Connected successfully to Hub using proxy.
2023-03-03T23:35:52+08:00 INF serverConnection.go:33 > Connecting: url=http://localhost:8899
2023-03-03T23:35:52+08:00 INF serverConnection.go:26 > Connected successfully to Front using proxy.
2023-03-03T23:35:52+08:00 INF checkRunner.go:45 > All checks are passed.
References
The content of the linked references is copyrighted by their original authors; please let me know if anything infringes. Kubeshark is an open-source project; if you like it, don't be stingy with your stars 😃
https://github.com/kubeshark/kubeshark
https://medium.com/kernel-space/kubeshark-wireshark-for-kubernetes-4069a5f5aa3d
https://docs.kubeshark.co/en/config
https://github.com/kubeshark/kubeshark/wiki/CHANGELOG
© 2018-2023 liruilonger@gmail.com, All rights reserved. Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0)