K8S環(huán)境部署Prometheus

This post records deploying kube-prometheus release-0.5 on Kubernetes 1.18.

1. Download the kube-prometheus repository

git clone https://github.com/coreos/kube-prometheus.git
cd kube-prometheus

The Kubernetes version installed here is 1.18, so the matching kube-prometheus branch is release-0.5:

# Switch to the release-0.5 branch
git checkout remotes/origin/release-0.5 -b 0.5
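
To see which release branches are available before checking one out, list the remote branches:

# List all remote release branches of kube-prometheus
git branch -r | grep release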

Compatibility between Kubernetes and kube-prometheus releases:

kube-prometheus stack    Kubernetes 1.14    Kubernetes 1.15    Kubernetes 1.16    Kubernetes 1.17    Kubernetes 1.18
release-0.3              ✔                  ✔                  ✔                  ✔                  ✗
release-0.4              ✗                  ✗                  ✔                  ✔                  ✗
release-0.5              ✗                  ✗                  ✗                  ✗                  ✔
HEAD                     ✗                  ✗                  ✗                  ✗                  ✔

The latest compatibility matrix is in the official kube-prometheus repository (https://github.com/prometheus-operator/kube-prometheus); switch branches there to see each release's supported Kubernetes versions.
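
Before picking a branch, confirm the cluster's server version so it can be matched against the table above:

# The Server Version line is the one to match against the compatibility matrix
kubectl version --short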

2. Inspect the manifests

[root@k8s-master kube-prometheus]# cd manifests/
[root@k8s-master manifests]# ll
total 1684
-rw-r--r-- 1 root root     405 Jun 12 16:20 alertmanager-alertmanager.yaml
-rw-r--r-- 1 root root     973 Jun 12 16:20 alertmanager-secret.yaml
-rw-r--r-- 1 root root      96 Jun 12 16:20 alertmanager-serviceAccount.yaml
-rw-r--r-- 1 root root     254 Jun 12 16:20 alertmanager-serviceMonitor.yaml
-rw-r--r-- 1 root root     308 Jun 12 16:22 alertmanager-service.yaml
-rw-r--r-- 1 root root     550 Jun 12 16:20 grafana-dashboardDatasources.yaml
-rw-r--r-- 1 root root 1405645 Jun 12 16:20 grafana-dashboardDefinitions.yaml
-rw-r--r-- 1 root root     454 Jun 12 16:20 grafana-dashboardSources.yaml
-rw-r--r-- 1 root root    7539 Jun 12 16:20 grafana-deployment.yaml
-rw-r--r-- 1 root root      86 Jun 12 16:20 grafana-serviceAccount.yaml
-rw-r--r-- 1 root root     208 Jun 12 16:20 grafana-serviceMonitor.yaml
-rw-r--r-- 1 root root     238 Jun 12 16:22 grafana-service.yaml
-rw-r--r-- 1 root root     376 Jun 12 16:20 kube-state-metrics-clusterRoleBinding.yaml
-rw-r--r-- 1 root root    1651 Jun 12 16:20 kube-state-metrics-clusterRole.yaml
-rw-r--r-- 1 root root    1925 Jun 12 16:20 kube-state-metrics-deployment.yaml
-rw-r--r-- 1 root root     192 Jun 12 16:20 kube-state-metrics-serviceAccount.yaml
-rw-r--r-- 1 root root     829 Jun 12 16:20 kube-state-metrics-serviceMonitor.yaml
-rw-r--r-- 1 root root     403 Jun 12 16:20 kube-state-metrics-service.yaml
-rw-r--r-- 1 root root     266 Jun 12 16:20 node-exporter-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     283 Jun 12 16:20 node-exporter-clusterRole.yaml
-rw-r--r-- 1 root root    2775 Jun 12 16:20 node-exporter-daemonset.yaml
-rw-r--r-- 1 root root      92 Jun 12 16:20 node-exporter-serviceAccount.yaml
-rw-r--r-- 1 root root     711 Jun 12 16:20 node-exporter-serviceMonitor.yaml
-rw-r--r-- 1 root root     355 Jun 12 16:20 node-exporter-service.yaml
-rw-r--r-- 1 root root     292 Jun 12 16:20 prometheus-adapter-apiService.yaml
-rw-r--r-- 1 root root     396 Jun 12 16:20 prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml
-rw-r--r-- 1 root root     304 Jun 12 16:20 prometheus-adapter-clusterRoleBindingDelegator.yaml
-rw-r--r-- 1 root root     281 Jun 12 16:20 prometheus-adapter-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     188 Jun 12 16:20 prometheus-adapter-clusterRoleServerResources.yaml
-rw-r--r-- 1 root root     219 Jun 12 16:20 prometheus-adapter-clusterRole.yaml
-rw-r--r-- 1 root root    1378 Jun 12 16:20 prometheus-adapter-configMap.yaml
-rw-r--r-- 1 root root    1344 Jun 12 16:20 prometheus-adapter-deployment.yaml
-rw-r--r-- 1 root root     325 Jun 12 16:20 prometheus-adapter-roleBindingAuthReader.yaml
-rw-r--r-- 1 root root      97 Jun 12 16:20 prometheus-adapter-serviceAccount.yaml
-rw-r--r-- 1 root root     236 Jun 12 16:20 prometheus-adapter-service.yaml
-rw-r--r-- 1 root root     269 Jun 12 16:20 prometheus-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     216 Jun 12 16:20 prometheus-clusterRole.yaml
-rw-r--r-- 1 root root     621 Jun 12 16:20 prometheus-operator-serviceMonitor.yaml
-rw-r--r-- 1 root root     751 Jun 12 16:20 prometheus-prometheus.yaml
-rw-r--r-- 1 root root     293 Jun 12 16:20 prometheus-roleBindingConfig.yaml
-rw-r--r-- 1 root root     983 Jun 12 16:20 prometheus-roleBindingSpecificNamespaces.yaml
-rw-r--r-- 1 root root     188 Jun 12 16:20 prometheus-roleConfig.yaml
-rw-r--r-- 1 root root     820 Jun 12 16:20 prometheus-roleSpecificNamespaces.yaml
-rw-r--r-- 1 root root   86744 Jun 12 16:20 prometheus-rules.yaml
-rw-r--r-- 1 root root      93 Jun 12 16:20 prometheus-serviceAccount.yaml
-rw-r--r-- 1 root root    6829 Jun 12 16:20 prometheus-serviceMonitorApiserver.yaml
-rw-r--r-- 1 root root     395 Jun 12 16:20 prometheus-serviceMonitorCoreDNS.yaml
-rw-r--r-- 1 root root    6172 Jun 12 16:20 prometheus-serviceMonitorKubeControllerManager.yaml
-rw-r--r-- 1 root root    6778 Jun 12 16:20 prometheus-serviceMonitorKubelet.yaml
-rw-r--r-- 1 root root     347 Jun 12 16:20 prometheus-serviceMonitorKubeScheduler.yaml
-rw-r--r-- 1 root root     247 Jun 12 16:20 prometheus-serviceMonitor.yaml
-rw-r--r-- 1 root root     297 Jun 12 16:21 prometheus-service.yaml
drwxr-xr-x 2 root root    4096 Jun 12 16:20 setup

3. Switch the image registry

Point the prometheus-operator, prometheus, alertmanager, kube-state-metrics, node-exporter, and prometheus-adapter images at the USTC mirror instead of quay.io:

sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' setup/prometheus-operator-deployment.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' prometheus-prometheus.yaml 
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' alertmanager-alertmanager.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' kube-state-metrics-deployment.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' node-exporter-daemonset.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' prometheus-adapter-deployment.yaml
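
A quick sanity check, run from the manifests directory, confirms that no manifest still references quay.io:

# Prints nothing once every quay.io reference has been rewritten to the mirror
grep -rn 'quay.io' .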

4. Change the prometheus, alertmanager, and grafana Services to NodePort

To make Prometheus, Alertmanager, and Grafana reachable from outside the cluster, change their Service type to NodePort.

1. Modify the prometheus Service:

[root@k8s-master kube-prometheus]# cat manifests/prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort        # added for NodePort access
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 30090     # added for NodePort access
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
2. Modify the grafana Service:

[root@k8s-master kube-prometheus]# cat manifests/grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort      # added for NodePort access
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 32000   # added for NodePort access
  selector:
    app: grafana
3. Modify the alertmanager Service:

[root@k8s-master kube-prometheus]# cat manifests/alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort    # added for NodePort access
  ports:
  - name: web
    port: 9093
    targetPort: web
    nodePort: 30093 # added for NodePort access
  selector:
    alertmanager: main
    app: alertmanager
  sessionAffinity: ClientIP
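
Once the manifests are applied in step 5, a quick check confirms the three Services expose the configured node ports:

# Expect TYPE NodePort with ports 30090, 32000, and 30093 as set above
kubectl -n monitoring get svc prometheus-k8s grafana alertmanager-main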

5. Install kube-prometheus

Install the CRDs and prometheus-operator:

[root@k8s-master manifests]# kubectl apply -f setup/
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/monitoring configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com configured
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
[root@k8s-master manifests]# kubectl get pod -n monitoring
NAME                                   READY   STATUS              RESTARTS   AGE
prometheus-operator-5cd4d464cc-b9vqq   0/2     ContainerCreating   0          16s

Pulling the prometheus-operator image can take a few minutes; wait until the prometheus-operator pod is Running.
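
Instead of polling with kubectl get, you can block until the pod is Ready; this sketch assumes the app.kubernetes.io/name label that the release-0.5 manifests set on the operator pod (check the deployment's labels if the selector matches nothing):

# Wait up to 5 minutes for the prometheus-operator pod to become Ready
kubectl -n monitoring wait --for=condition=Ready pod -l app.kubernetes.io/name=prometheus-operator --timeout=5m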

Next, install prometheus, alertmanager, grafana, kube-state-metrics, node-exporter, and the remaining resources:

[root@k8s-master manifests]# kubectl apply -f .
alertmanager.monitoring.coreos.com/main created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-operator created
prometheus.monitoring.coreos.com/k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created

Wait until all pods in the monitoring namespace are Running:

[root@k8s-master ~]# kubectl get  pod -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          157m
alertmanager-main-1                    2/2     Running   0          157m
alertmanager-main-2                    2/2     Running   0          157m
grafana-5c55845445-gh8g7               1/1     Running   0          20h
kube-state-metrics-75f946484-lqrbf     3/3     Running   0          20h
node-exporter-5h5cs                    2/2     Running   0          20h
node-exporter-f28gj                    2/2     Running   0          20h
node-exporter-w9rhr                    2/2     Running   0          20h
prometheus-adapter-7d68d6f886-qwrfg    1/1     Running   0          20h
prometheus-k8s-0                       3/3     Running   0          20h
prometheus-k8s-1                       3/3     Running   0          20h
prometheus-operator-5cd4d464cc-b9vqq   2/2     Running   0          20h
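
Because prometheus-adapter registers the v1beta1.metrics.k8s.io APIService, the resource-metrics API should now be live; a quick way to confirm (it may take a minute or two after the pods become Ready):

# Both commands are served by prometheus-adapter through the metrics API
kubectl top nodes
kubectl top pods -n monitoring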

The following problems came up during this deployment; the fixes are recorded below.

1. The alertmanager-main pods failed to start and sat in CrashLoopBackOff; one container in each pod would not come up. The pod events showed failing probes:

Warning  Unhealthy  11m (x5 over 11m)    kubelet, k8s-node2  Liveness probe failed: Get http://10.244.2.8:9093/-/healthy: dial tcp 10.244.2.8:9093: connect: connection refused
Warning  Unhealthy  10m (x10 over 11m)   kubelet, k8s-node2  Readiness probe failed: Get http://10.244.2.8:9093/-/ready: dial tcp 10.244.2.8:9093: connect: connection refused
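
These events were taken from kubectl describe on one of the failing pods (the pod name is specific to this cluster):

# Show recent events and container state for a failing alertmanager pod
kubectl -n monitoring describe pod alertmanager-main-0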

The fix below follows https://github.com/prometheus-operator/kube-prometheus/issues/653.

# Pause operator reconciliation: edit the Alertmanager resource and add paused: true
kubectl -n monitoring edit alertmanagers.monitoring.coreos.com
...
spec:
  image: quay.io/prometheus/alertmanager:v0.23.0
  nodeSelector:
    kubernetes.io/os: linux
  paused: true
  podMetadata:
    labels:
...

# Dump the StatefulSet that the operator generated
[root@k8s-master ~]# kubectl -n monitoring get statefulset.apps/alertmanager-main -o yaml > dump.yaml

# Edit dump.yaml: add hostNetwork: true under spec.template.spec (around line 234 of the file)
[root@k8s-master manifests]# vi dump.yaml
...
    spec:
      hostNetwork: true   # added
      containers:
      - args:
        - --config.file=/etc/alertmanager/config/alertmanager.yaml
...

# Delete the livenessProbe and readinessProbe blocks shown below
[root@k8s-master manifests]# vi dump.yaml
...
        livenessProbe:
          failureThreshold: 10
          httpGet:
            path: /-/healthy
            port: web
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
...
        readinessProbe:
          failureThreshold: 10
          httpGet:
            path: /-/ready
            port: web
            scheme: HTTP
          initialDelaySeconds: 3
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
...

# Delete the original StatefulSet and recreate it from the edited dump
[root@k8s-master ~]# kubectl delete statefulset.apps/alertmanager-main -n monitoring
[root@k8s-master ~]# kubectl create -f dump.yaml
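
After recreating the StatefulSet, watch the alertmanager pods until all three reach Running; the alertmanager=main label used in the selector is the one the operator applies to these pods (verify with kubectl get pod --show-labels if it matches nothing):

# Watch the recreated pods; press Ctrl-C once all three show Running
kubectl -n monitoring get pod -l alertmanager=main -w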
2. One alertmanager pod stayed in Pending; its events showed it could not satisfy the node scheduling constraints.

The fix is to remove the master node's taint so the pod can be scheduled there:

# Check the current taints, then remove the master taint
kubectl describe node k8s-master | grep Taints
kubectl taint nodes k8s-master node-role.kubernetes.io/master-

6. Access prometheus, alertmanager, and grafana

1. Access prometheus

Open http://192.168.0.51:30090 in a browser; 192.168.0.51 is the master node's IP.

2. Access alertmanager

Open http://192.168.0.51:30093 in a browser.

3. Access grafana

Open http://192.168.0.51:32000 in a browser.

Username/password: admin/admin
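
The same endpoints can also be probed from a shell; Prometheus and Alertmanager both expose a /-/healthy endpoint (replace 192.168.0.51 with your own node IP):

# Each should return HTTP 200 with a short health message
curl -s http://192.168.0.51:30090/-/healthy
curl -s http://192.168.0.51:30093/-/healthy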
