Table of Contents
1. Background
2. Design
3. Implementation
    Filebeat configuration
    K8S SideCar yaml
    Logstash configuration
1. Background
We need to collect the trace logs and application logs of services running in containers into Kafka. Note that the trace logs and app logs must be stored in two different topics of the same Kafka cluster, named APP_TOPIC and TRACE_TOPIC respectively.
2. Design
The flow chart is as follows:

Notes:
    APP_TOPIC: stores the services' application logs
    TRACE_TOPIC: stores the trace logs emitted by the programs, used to trace the call chain of a single request
In words:
    Filebeat collects the logs inside the container (this requires some conventions; our container log paths are shown below). Filebeat picks up logs from two different directories and ships each to its corresponding topic; the Kafka topics are then consumed and stored, and the data is finally visualized.
/home/service/
└── logs
    ├── app
    │   └── pass
    │       ├── 10.246.84.58-paas-biz-784c68f79f-cxczf.log
    │       ├── 1.log
    │       ├── 2.log
    │       ├── 3.log
    │       ├── 4.log
    │       └── 5.log
    └── trace
        ├── 1.log
        ├── 2.log
        ├── 3.log
        ├── 4.log
        ├── 5.log
        └── trace.log

4 directories, 13 files
3. Implementation
Now for the hands-on part~
Filebeat configuration
Configuration notes:
    I turned several Filebeat settings into variables; the Kubernetes YAML file below declares these variables and sets their values.
    Note in particular that I use tags: ["trace-log"] together with when.contains to match events, so that the logs from each input are routed to the corresponding Kafka topic.
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/service/logs/trace/*.log
  fields_under_root: true
  fields:
    topic: "${TRACE_TOPIC}"
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: message
  scan_frequency: 10s
  max_bytes: 10485760
  harvester_buffer_size: 1638400
  ignore_older: 24h
  close_inactive: 1h
  tags: ["trace-log"]
  processors:
    - decode_json_fields:
        fields: ["message"]
        process_array: false
        max_depth: 1
        target: ""
        overwrite_keys: true
- type: log
  enabled: true
  paths:
    - /home/service/logs/app/*/*.log
  fields:
    topic: "${APP_TOPIC}"
  scan_frequency: 10s
  max_bytes: 10485760
  harvester_buffer_size: 1638400
  close_inactive: 1h
  tags: ["app-log"]

output.kafka:
  enabled: true
  codec.json:
    pretty: true          # whether to pretty-print the JSON payload, default false
  compression: gzip
  hosts: "${KAFKA_HOST}"
  topics:
    - topic: "${TRACE_TOPIC}"
      bulk_max_duration: 2s
      bulk_max_size: 2048
      required_acks: 1
      max_message_bytes: 10485760
      when.contains:
        tags: "trace-log"
    - topic: "${APP_TOPIC}"
      bulk_flush_frequency: 0
      bulk_max_size: 2048
      compression: gzip
      compression_level: 4
      group_id: "k8s_filebeat"
      grouping_enabled: true
      max_message_bytes: 10485760
      partition.round_robin:
        reachable_only: true
      required_acks: 1
      workers: 2
      when.contains:
        tags: "app-log"
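The trace input relies on the decode_json_fields processor to parse each JSON log line and lift its keys to the event root. A rough sketch of that behavior (illustrative only, not Filebeat's actual implementation), assuming target: "" and overwrite_keys: true:

```python
import json

def decode_json_fields(event: dict, field: str = "message") -> dict:
    """Parse the given field as JSON and merge its keys into the event root,
    overwriting existing keys — roughly what the processor does with
    target: "" and overwrite_keys: true."""
    try:
        decoded = json.loads(event.get(field, ""))
    except (json.JSONDecodeError, TypeError):
        return event  # leave the event untouched if the line is not valid JSON
    if isinstance(decoded, dict):
        event.update(decoded)
    return event

event = {"message": '{"traceId": "abc123", "level": "INFO"}', "topic": "TRACE"}
print(decode_json_fields(event)["traceId"])  # abc123
```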
K8S SideCar yaml
Configuration notes:
    This YAML defines two containers: container 1 is nginx (used as an example) and container 2 is Filebeat. It also defines an emptyDir volume named logs, which is mounted into both containers at /home/service/logs.
    Three environment variables are then defined in the Filebeat container, so that Filebeat can be reconfigured flexibly just by editing the YAML:
        TRACE_TOPIC: topic for the trace logs
        APP_TOPIC: topic for the app logs
        KAFKA_HOST: the Kafka address list
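To see how these variables end up in the Filebeat config, recall that Filebeat expands ${VAR} placeholders from the container environment. A minimal illustration of that substitution (plain Python, not Filebeat code; the values are the examples used below):

```python
import os
from string import Template

# Example values — in the cluster these are injected by the Deployment's env section.
os.environ["TRACE_TOPIC"] = "pro_platform_monitor_log"
os.environ["APP_TOPIC"] = "platform_logs"
os.environ["KAFKA_HOST"] = '["10.0.0.1:9092","10.0.0.2:9092"]'

# string.Template uses the same ${NAME} syntax as the placeholders in filebeat.yml.
rendered = Template('topic: "${TRACE_TOPIC}"').substitute(os.environ)
print(rendered)  # topic: "pro_platform_monitor_log"
```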
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
        - name: uhub-registry
        - name: xxx-registry
      containers:
        - image: uhub.service.ucloud.cn/sre-paas/nginx:v1
          imagePullPolicy: IfNotPresent
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /home/service/logs
              name: logs
        - env:
            - name: TRACE_TOPIC
              value: pro_platform_monitor_log
            - name: APP_TOPIC
              value: platform_logs
            - name: KAFKA_HOST
              value: '["xxx.xxx.xxx.xxx:9092","xx.xxx.xxx.xxx:9092","xx.xxx.xxx.xxx:9092"]'
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          image: xxx.xxx.xxx.cn/sre-paas/filebeat-v2:8.11.2
          imagePullPolicy: Always
          name: filebeat
          resources:
            limits:
              cpu: 150m
              memory: 200Mi
            requests:
              cpu: 50m
              memory: 100Mi
          securityContext:
            privileged: true
            runAsUser: 0
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /home/service/logs
              name: logs
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - emptyDir: {}
          name: logs
Logstash configuration
input {
  kafka {
    type => "platform_logs"
    bootstrap_servers => "xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092"
    topics => ["platform_logs"]
    group_id => 'platform_logs'
    client_id => 'open-platform-logstash-logs'
  }
  kafka {
    type => "platform_pre_log"
    bootstrap_servers => "xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092"
    topics => ["pre_platform_logs"]
    group_id => 'pre_platform_logs'
    client_id => 'open-platform-logstash-pre'
  }
  kafka {
    type => "platform_nginx_log"
    bootstrap_servers => "xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092"
    topics => ["platform_nginx_log"]
    group_id => 'platform_nginx_log'
    client_id => 'open-platform-logstash-nginx'
  }
}
filter {
  if [type] == "platform_pre_log" {
    grok {
      match => { "message" => "\[%{IP}-(?<service>[a-zA-Z-]+)-%{DATA}\]" }
    }
  }
  if [type] == "platform_logs" {
    grok {
      match => { "message" => "\[%{IP}-(?<service>[a-zA-Z-]+)-%{DATA}\]" }
    }
  }
}
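The grok pattern above pulls the service name out of a "[<pod-ip>-<service>-<pod-suffix>]" prefix, which is then interpolated into the index name. An equivalent in plain Python regex, with a hypothetical log line modeled on the file names shown earlier (%{IP} and %{DATA} replaced by their rough regex equivalents):

```python
import re

# Rough equivalents of grok's %{IP} and %{DATA} patterns.
IP = r"(?:\d{1,3}\.){3}\d{1,3}"
PATTERN = re.compile(r"\[(?:%s)-(?P<service>[a-zA-Z-]+)-.*?\]" % IP)

# Hypothetical log line following the pod naming convention from the example tree.
line = "[10.246.84.58-paas-biz-784c68f79f-cxczf] request handled in 12ms"
m = PATTERN.search(line)
print(m.group("service"))  # paas-biz
```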
output {
  if [type] == "platform_logs" {
    elasticsearch {
      id => "platform_logs"
      hosts => ["http://xxx.xxx.xxx.xxx:9200","http://xxx.xxx.xxx.xxx:9200","http://xxx.xxx.xxx.xxx:9200"]
      index => "log-xxx-prod-%{service}-%{+yyyy.MM.dd}"
      user => "logstash_transformer"
      password => "xxxxxxx"
      template_name => "log-xxx-prod"
      manage_template => "true"
      template_overwrite => "true"
    }
  }
  if [type] == "platform_pre_log" {
    elasticsearch {
      id => "platform_pre_logs"
      hosts => ["http://xxx.xxx.xxx.xxx:9200","http://xxx.xxx.xxx.xxx:9200","http://xxx.xxx.xxx.xxx:9200"]
      index => "log-xxx-pre-%{service}-%{+yyyy.MM.dd}"
      user => "logstash_transformer"
      password => "xxxxxxx"
      template_name => "log-xxx-pre"
      manage_template => "true"
      template_overwrite => "true"
    }
  }
  if [type] == "platform_nginx_log" {
    elasticsearch {
      id => "platform_nginx_log"
      hosts => ["http://xxx.xxx.xxx.xxx:9200","http://xxx.xxx.xxx.xxx:9200","http://xxx.xxx.xxx.xxx:9200"]
      index => "log-platform-nginx-%{+yyyy.MM.dd}"
      user => "logstash_transformer"
      password => "xxxxxxx"
      template_name => "log-platform-nginx"
      manage_template => "true"
      template_overwrite => "true"
    }
  }
}
If this post helped you, please give it a like or bookmark it~ If you have any questions, feel free to message me or leave a comment; I'll reply as soon as I see it.