1. Introduction
In this deployment all of the ELK services run inside the k8s cluster: filebeat, logstash, elasticsearch, and kibana. Elasticsearch is deployed as a cluster, and every service uses version 7.17.10.
2. Deployment
Deploying the elasticsearch cluster
Before deploying the elasticsearch cluster, tune the host kernel settings (every k8s node must be tuned; without this the deployment will fail):
vi /etc/sysctl.conf
vm.max_map_count=262144
Reload so the setting takes effect:
sysctl -p
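You can read the value back to confirm it took effect; elasticsearch requires at least 262144:
sysctl vm.max_map_count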
The following steps can be run on any master node of the k8s cluster.
Create a directory to hold the yaml files:
mkdir /opt/elk && cd /opt/elk
這里使用無頭服務部署es集群,需要用到pv存儲es集群數據,service服務提供訪問,setafuset服務部署es集群
創(chuàng)建svc的無頭服務和對外訪問的yaml配置文件
vi es-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: elk
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: db
  - port: 9300
    name: inter
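Because clusterIP is None, this headless service gives each StatefulSet pod a stable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local, which the discovery.seed_hosts setting below relies on. Once the es pods are running you can sanity-check the DNS records from a throwaway pod (assuming a busybox image is pullable in your cluster):
kubectl run dns-test -n elk --rm -it --restart=Never --image=busybox -- nslookup elasticsearch-0.elasticsearch.elk.svc.cluster.local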
vi es-service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-nodeport
  namespace: elk
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  type: NodePort
  ports:
  - port: 9200
    name: db
    nodePort: 30017
  - port: 9300
    name: inter
    nodePort: 30018
Create the PV yaml config file (NFS shared storage is used here):
vi es-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-pv1
spec:
  storageClassName: es-pv   # storage class name the es PVCs will request
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /volume2/k8s-data/es/es-pv1
    server: 10.1.13.99
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-pv2
spec:
  storageClassName: es-pv   # storage class name the es PVCs will request
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /volume2/k8s-data/es/es-pv2
    server: 10.1.13.99
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-pv3
spec:
  storageClassName: es-pv   # storage class name the es PVCs will request
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /volume2/k8s-data/es/es-pv3
    server: 10.1.13.99
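Note that the exported directories must already exist on the NFS server (10.1.13.99 here), otherwise the pods will fail to mount their volumes; on the NFS server that would be something like:
mkdir -p /volume2/k8s-data/es/es-pv1 /volume2/k8s-data/es/es-pv2 /volume2/k8s-data/es/es-pv3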
Create the StatefulSet yaml config file:
vi es-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: elk
  labels:
    app: elasticsearch
spec:
  podManagementPolicy: Parallel
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      tolerations:   # tolerates the control-plane taint so pods can also schedule onto master nodes; can be removed
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: NoSchedule
      containers:
      - image: elasticsearch:7.17.10
        name: elasticsearch
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: network.host
          value: "_site_"
        - name: node.name
          value: "${HOSTNAME}"
        - name: discovery.zen.minimum_master_nodes
          value: "2"
        - name: discovery.seed_hosts   # tells a node joining the cluster where to discover the other nodes; it should list hostnames or IPs of nodes already running in the cluster — here the headless-service addresses are used
          value: "elasticsearch-0.elasticsearch.elk.svc.cluster.local,elasticsearch-1.elasticsearch.elk.svc.cluster.local,elasticsearch-2.elasticsearch.elk.svc.cluster.local"
        - name: cluster.initial_master_nodes   # specifies the initial master candidates: when a brand-new cluster starts, one node from this list becomes the initial master and further elections proceed from there
          value: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
        - name: cluster.name
          value: "es-cluster"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - name: inter
          containerPort: 9300
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      storageClassName: "es-pv"
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 30Gi
Create the namespace for the elk services:
kubectl create namespace elk
Create the resources from the yaml files:
kubectl create -f es-pv.yaml
kubectl create -f es-service-nodeport.yaml
kubectl create -f es-service.yaml
kubectl create -f es-statefulset.yaml
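The volumeClaimTemplates section gives each replica a PVC named elasticsearch-data-elasticsearch-<N>, which binds to one of the es-pv PersistentVolumes via the storageClassName. Before checking the pods it is worth confirming that all three PVCs are Bound:
kubectl get pv
kubectl get pvc -n elk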
Check whether the es pods started normally:
kubectl get pod -n elk
Check whether the elasticsearch cluster is healthy:
http://10.1.60.119:30017/_cluster/state/master_node,nodes?pretty
The cluster correctly recognizes all three es nodes.
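The same check can be done with curl (assuming, as in the URL above, that 10.1.60.119 is the IP of one of your k8s nodes); a healthy three-node cluster reports status green and number_of_nodes 3:
curl http://10.1.60.119:30017/_cluster/health?pretty
curl http://10.1.60.119:30017/_cat/nodes?v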
The elasticsearch cluster deployment is complete.
Deploying the kibana service
Here a Deployment controller runs the kibana service and a Service exposes it externally.
Create the Deployment yaml config file:
vi kibana-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: elk
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: NoSchedule
      containers:
      - name: kibana
        image: kibana:7.17.10
        resources:
          limits:
            cpu: 1
            memory: 1G
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
          protocol: TCP
Create the Service yaml config file:
vi kibana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: elk
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: 5601
    nodePort: 30019
  type: NodePort
  selector:
    app: kibana
Create the resources from the yaml files:
kubectl create -f kibana-service.yaml
kubectl create -f kibana-deployment.yaml
Check whether kibana is running normally:
kubectl get pod -n elk
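Once the pod is Running, kibana should answer on its NodePort; open http://<node-ip>:30019 in a browser, or probe it with curl (node IP assumed as before; kibana can take a minute or two to finish starting):
curl -I http://10.1.60.119:30019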
Deploying the logstash service
The logstash service is also deployed with a Deployment controller; a ConfigMap stores the logstash configuration and a Service provides access to it.
Edit the ConfigMap yaml config file:
vi logstash-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: elk
  labels:
    app: logstash
data:
  logstash.conf: |
    input {
      beats {
        port => 5044        # port on which logs are received from filebeat
        # codec => "json"
      }
    }
    filter {
    }
    output {
      # The commented-out stdout block writes received events to logstash's own
      # log; it is mainly used for testing what the collected logs contain.
      # stdout {
      #   codec => rubydebug
      # }
      elasticsearch {
        hosts => "elasticsearch:9200"
        index => "nginx-%{+YYYY.MM.dd}"
      }
    }
Edit the Deployment yaml config file:
vi logstash-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: elk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: logstash:7.17.10
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5044
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/pipeline/
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.conf
            path: logstash.conf
Edit the Service yaml config file (logs are collected here from services running inside k8s, so no external access is exposed):
vi logstash-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: elk
spec:
  ports:
  - port: 5044
    targetPort: 5044
    protocol: TCP
  selector:
    app: logstash
  type: ClusterIP
Create the resources from the yaml files:
kubectl create -f logstash-configmap.yaml
kubectl create -f logstash-service.yaml
kubectl create -f logstash-deployment.yaml
Check whether the logstash service started normally:
kubectl get pod -n elk
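Beyond the pod status, the logstash log shows whether the pipeline started and is listening on port 5044; a quick way to check the most recent lines is:
kubectl logs -n elk deploy/logstash | tail -n 20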
Deploying the filebeat service
The filebeat service is deployed as a DaemonSet so that it runs on every k8s worker node and collects container logs. It also needs a ConfigMap to hold its configuration, plus RBAC authorization: the filebeat autodiscover feature is used to discover and collect logs across the k8s cluster, which requires access to the k8s API, hence the authorization.
Edit the rbac yaml config file:
vi filebeat-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: elk
  labels:
    app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    app: filebeat
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods", "nodes"]   # resources filebeat is allowed to access
  verbs: ["get", "list", "watch"]              # operations filebeat may perform on them
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: elk
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
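Once these objects are created (in the create step further below), the grants can be verified without starting filebeat, using kubectl's built-in authorization check:
kubectl auth can-i list pods --as=system:serviceaccount:elk:filebeat
kubectl auth can-i watch nodes --as=system:serviceaccount:elk:filebeat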
Edit the ConfigMap yaml config file:
vi filebeat-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: elk
data:
  filebeat.yml: |
    filebeat.autodiscover:    # use filebeat's autodiscover feature
      providers:
      - type: kubernetes      # kubernetes provider
        templates:            # templates describing which logs to collect
        - condition:
            and:
            - or:
              - equals:
                  kubernetes.labels:    # select pods whose logs to collect by label
                    app: foundation
              - equals:
                  kubernetes.labels:
                    app: api-gateway
            - equals:                   # select pods whose logs to collect by namespace
                kubernetes.namespace: java-service
          config:             # log path config, using the k8s log locations
          - type: container
            symlinks: true
            paths:            # the path is built from variables so that each matched service's own log files are collected
            - /var/log/containers/${data.kubernetes.pod.name}_${data.kubernetes.namespace}_${data.kubernetes.container.name}-*.log
    output.logstash:
      hosts: ['logstash:5044']
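To illustrate how the path template expands: for a hypothetical pod foundation-5d9f8c7b6-abcde running container foundation in namespace java-service, filebeat would resolve the glob to:
/var/log/containers/foundation-5d9f8c7b6-abcde_java-service_foundation-*.log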
For more on filebeat autodiscover of k8s services, see the Elastic documentation, which covers many more k8s parameters.
Reference: Autodiscover | Filebeat Reference [8.12] | Elastic
Edit the DaemonSet yaml config file:
vi filebeat-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: elk
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: elastic/filebeat:7.17.10
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: log              # three log paths are mounted because the files under the containers directory are symlinks into other directories
          mountPath: /var/log/containers
          readOnly: true
        - name: pod-log          # target of the symlinks under /var/log/containers; still not the real path, as this is another symlink
          mountPath: /var/log/pods
          readOnly: true
        - name: containers-log   # the real log files; unless all three are mounted, filebeat cannot read the log contents
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: log
        hostPath:
          path: /var/log/containers
      - name: pod-log
        hostPath:
          path: /var/log/pods
      - name: containers-log
        hostPath:
          path: /var/lib/docker/containers
Create the resources from the yaml files:
kubectl create -f filebeat-rbac.yaml
kubectl create -f filebeat-configmap.yaml
kubectl create -f filebeat-daemonset.yaml
Check whether the filebeat service started normally:
kubectl get pod -n elk
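Since this is a DaemonSet, there should be one filebeat pod per schedulable node, which the wide output makes easy to confirm. To verify logs are flowing end to end, check that the nginx-<date> index defined in the logstash output has appeared in elasticsearch (node IP assumed as before):
kubectl get pod -n elk -o wide | grep filebeat
curl http://10.1.60.119:30017/_cat/indices?v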
This completes the deployment of the ELK services inside the k8s cluster.