
簡(jiǎn)單的 docker 部署ELK

這是我的運(yùn)維同事部署ELK的文檔,我這里記錄轉(zhuǎn)載一下

服務(wù)規(guī)劃

架構(gòu): Filebeat->kafka->logstash->ES

  • For Kafka cluster deployment, refer to the separate Kafka cluster deployment guide.

    Service       | Program path / data directory | Port | Configuration file
    elasticsearch | /data/elasticsearch           | 9200 | /data/elasticsearch/config/elasticsearch.yml
    logstash      | /data/logstash                | -    | /data/logstash/config/logstash.yml
    kibana        | /data/kibana                  | 5601 | /data/kibana/config/kibana.yml
    filebeat      | /data/filebeat                | -    | /data/filebeat/config/filebeat.yml

索引服務(wù)-Elasticsearch

創(chuàng)建數(shù)據(jù)目錄

mkdir -pv /data/elasticsearch/{config,data,logs}
chown 1000 /data/elasticsearch/{data,logs}

修改主機(jī)配置

vim /etc/sysctl.conf
Add:
vm.max_map_count=655360

sysctl -p

vim /etc/security/limits.conf
Add:
* soft memlock unlimited
* hard memlock unlimited
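
A quick check that the kernel and ulimit changes are in effect on the host (a minimal sanity check; the memlock limit may require logging in again before it applies):

# Expect: vm.max_map_count = 655360
sysctl vm.max_map_count
# Expect: unlimited (for the session that will run docker/ES)
ulimit -l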

Configuration file

cat > /data/elasticsearch/config/elasticsearch.yml << 'EOF'
cluster.name: ccms-es-cluster
node.name: ccms-es1
network.host: 172.16.20.51
http.port: 9200
bootstrap.memory_lock: true

# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: "OPTIONS, HEAD, GET, POST, PUT, DELETE"
http.cors.allow-headers: "Authorization, X-Requested-With, Content-Type, Content-Length, X-User"

# Cluster
node.master: true
node.data: true
transport.tcp.port: 9300
discovery.seed_hosts: ["172.16.20.51","172.16.20.52","172.16.20.53"]
cluster.initial_master_nodes: ["ccms-es1","ccms-es2","ccms-es3"]
cluster.routing.allocation.same_shard.host: true
cluster.routing.allocation.node_initial_primaries_recoveries: 4
cluster.routing.allocation.node_concurrent_recoveries: 4

# X-Pack
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
EOF

chown 1000 /data/elasticsearch/config/*
# After the containers start, generate the certificate first, distribute it to the config directory on every node, then restart the ES containers.

Formula for discovery.zen.minimum_master_nodes: (number of master-eligible nodes / 2) + 1, using integer division. For this 3-node cluster that is 3/2 + 1 = 2.

# 設(shè)置ES密碼:
# 自動(dòng)設(shè)置密碼命令
elasticsearch-setup-passwords auto
# 或者
# 自定義密碼命令
elasticsearch-setup-passwords interactive# es-head登錄
http://172.16.20.52:9200/?auth_user=elastic&auth_password=elastic123456# 生成證書(證書不需要設(shè)置密碼):
cd /usr/share/elasticsearch/config/
elasticsearch-certutil ca -out config/elastic-certificates.p12 -pass ""
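
One way to carry out the "generate, distribute, restart" step from inside the running container (a sketch under the assumption that the compose file below is used and the config directory is bind-mounted; node addresses and paths are the ones used elsewhere in this document):

# Generate the certificate inside the running es container (first node only)
docker-compose exec es bin/elasticsearch-certutil ca -out config/elastic-certificates.p12 -pass ""
# Because /data/elasticsearch/config is bind-mounted, the file is now visible on the host
ls -l /data/elasticsearch/config/elastic-certificates.p12
# Copy it to the other nodes
scp /data/elasticsearch/config/elastic-certificates.p12 172.16.20.52:/data/elasticsearch/config/
scp /data/elasticsearch/config/elastic-certificates.p12 172.16.20.53:/data/elasticsearch/config/
# Make it readable by the container user, then restart ES on every node
chown 1000 /data/elasticsearch/config/elastic-certificates.p12
docker-compose restart es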

docker-compose file

mkdir -pv /data/docker-compose/elasticsearch/
cat > /data/docker-compose/elasticsearch/docker-compose.yml << EOF
version: "3"
services:
  es:
    container_name: es
    image: elasticsearch:7.11.1
    network_mode: host
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/elasticsearch/config:/usr/share/elasticsearch/config
      - /data/elasticsearch/data:/usr/share/elasticsearch/data
      - /data/elasticsearch/logs:/usr/share/elasticsearch/logs
    environment:
      TZ: Asia/Shanghai
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: "-Xmx8G -Xms8G"
      ELASTIC_PASSWORD: "G1T@es2022#ccms"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      resources:
        limits:
          memory: 10G
EOF
# 1. Fix the es-head cross-origin error (browser reports: Request header field Content-Type is not allowed by Access-Control-Allow-Headers)
# Add to the ES configuration file:
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: "OPTIONS, HEAD, GET, POST, PUT, DELETE"
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"

# 2. Fix the blank data-browser tab in es-head (browser reports: 406 Not Acceptable)
# Edit the es-head source file vendor.js, around line 6886:
contentType: "application/x-www-form-urlencoded" --> contentType: "application/json;charset=UTF-8"

Start

docker-compose up -d
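
Once the containers are up on all three nodes, a quick health check against the secured cluster (a minimal sketch; the password is the ELASTIC_PASSWORD value from the compose file):

# Cluster health; expect "status" : "green" once all three nodes have joined
curl -u elastic:'G1T@es2022#ccms' 'http://172.16.20.51:9200/_cluster/health?pretty'
# List the nodes that joined the cluster
curl -u elastic:'G1T@es2022#ccms' 'http://172.16.20.51:9200/_cat/nodes?v'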

Log Collection: Filebeat

創(chuàng)建數(shù)據(jù)目錄

mkdir -pv /data/filebeat/{config,data}

Configuration file

發(fā)送到kafka

cat > /data/filebeat/config/filebeat.yml << 'EOF'
###################### Filebeat Configuration Example #########################
filebeat.name: ccms-test-08
filebeat.idle_timeout: 5s
filebeat.spool_size: 2048

#---------------------------------- input from ccms servers --------------------------------#
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/ccms-auto-deploy/credit-business/*/*/target/logs/*.log
    - /opt/ccms-auto-deploy/credit-support/*/*/target/logs/*.log
  fields:
    kafka_topic: topic-ccms-dev
  fields_under_root: true
  # Multiline handling: lines that do not start with '[' belong to the previous event
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
  encoding: plain
  tail_files: false
  # How often to check the watched directories for new/updated files
  scan_frequency: 3s
  # Check for file changes every 1s; if the file stays unchanged for two checks in a row,
  # back off up to 5s between checks
  backoff: 1s
  max_backoff: 5s
  backoff_factor: 2

#---------------------------------- input from nginx access_log --------------------------------#
- type: log
  enabled: true
  paths:
    - /data/nginx/logs/ccms-access.log
  fields:
    kafka_topic: topic-nginx-access
  fields_under_root: true
  encoding: plain
  tail_files: false
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: false
  # How often to check the watched directories for new/updated files
  scan_frequency: 3s
  # Check for file changes every 1s; if the file stays unchanged for two checks in a row,
  # back off up to 5s between checks
  backoff: 1s
  max_backoff: 5s
  backoff_factor: 2

#---------------------------------- Kafka output --------------------------------#
output.kafka:
  enabled: true
  hosts: ['3.1.101.33:9092','3.1.101.34:9092','3.1.101.35:9092']
  topic: '%{[kafka_topic]}'
EOF

docker-compose file

mkdir -pv /data/docker-compose/filebeat
cat > /data/docker-compose/filebeat/docker-compose.yml << EOF
version: "3"
services:
  filebeat:
    container_name: filebeat
    image: elastic/filebeat:7.11.1
    user: root
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - /data/filebeat/data:/usr/share/filebeat/data/registry
      - /opt/ccms-auto-deploy:/opt/ccms-auto-deploy
      - /data/nginx/logs:/data/nginx/logs/
    deploy:
      resources:
        limits:
          memory: 4G
        reservations:
          memory: 1G
EOF

Start

docker-compose up -d
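
Before trusting the shipper, Filebeat's built-in test subcommands can validate the configuration and the connection to the Kafka brokers (a quick sanity check, run from the compose directory):

# Validate filebeat.yml
docker-compose exec filebeat filebeat test config
# Check connectivity to the configured Kafka output
docker-compose exec filebeat filebeat test output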

Install the Kibana dashboards

docker-compose exec filebeat filebeat setup --dashboards

過濾服務(wù)-Logstash

創(chuàng)建數(shù)據(jù)目錄

mkdir -pv /data/logstash/{config,data,pipeline,logs}
chown 1000:1000 /data/logstash/{config,data,pipeline,logs}

Configuration files

logstash.yml

cat > /data/logstash/config/logstash.yml << 'EOF'
node.name: logstash-node1
http.host: "0.0.0.0"
path.data: data
path.logs: /usr/share/logstash/logs
config.reload.automatic: true
config.reload.interval: 5s
config.test_and_exit: false
EOF

When using pipelines (pipelines.yml), do not set path.config in logstash.yml.

pipelines.yml

cat > /data/logstash/config/pipelines.yml << 'EOF'
- pipeline.id: ccms-credit-java
  path.config: "/usr/share/logstash/pipeline/ccms-credit-java.conf"
- pipeline.id: ccms-credit-nginx-access
  path.config: "/usr/share/logstash/pipeline/ccms-credit-nginx-access.conf"
- pipeline.id: ccms-credit-nginx-error
  path.config: "/usr/share/logstash/pipeline/ccms-credit-nginx-error.conf"
EOF

Pipeline configuration files

pipeline/ccms-credit-java.conf

cat > /data/logstash/pipeline/ccms-credit-java.conf << 'EOF'
input {
  kafka {
    topics_pattern => "topic-ccms-credit-sit-java"
    bootstrap_servers => "172.16.20.51:9092,172.16.20.52:9092,172.16.20.53:9092"
    consumer_threads => 4
    decorate_events => true
    group_id => "kafka-ccms-credit-sit-java"
    add_field => {"logstash-server" => "172.16.20.51"}
  }
}

filter {
  json {
    source => "message"
  }
  grok {
    match => { "message" => "\[%{TIMESTAMP_ISO8601:currentDateTime}\] \[%{LOGLEVEL:level}\] \[%{DATA:traceInfo}\] \[%{NOTSPACE:class}\] \[%{DATA:hostName}\] \[%{IP:hostIp}\] \[%{DATA:applicationName}\] \[%{DATA:location}\] \[%{DATA:messageInfo}\] ## %{QUOTEDSTRING:throwable}" }
  }
  mutate {
    enable_metric => "false"
    remove_field => ["ecs","tags","input","agent","@version","log","port","host","message"]
  }
  date {
    match => [ "currentDateTime", "ISO8601" ]
  }
}

output {
  elasticsearch {
    hosts => ["172.16.20.51:9200","172.16.20.52:9200","172.16.20.53:9200"]
    user => "elastic"
    password => "G1T@es2022#ccms"
    index => "index-ccms-credit-sit-java_%{+YYY-MM-dd}"
    sniffing => true
    template_overwrite => true
  }
}
EOF
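
For reference, a hypothetical Java log line in the layout this grok pattern expects (all field values are made up; the quoted segment after ## carries the throwable):

[2022-03-01T10:15:30.123] [ERROR] [traceId=1a2b3c] [com.ccms.credit.DemoService] [ccms-app-01] [172.16.20.61] [credit-business] [DemoService.java:88] [query credit record failed] ## "java.lang.NullPointerException: null"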

pipeline/ccms-credit-nginx-access.conf

cat > /data/logstash/pipeline/ccms-credit-nginx-access.conf << 'EOF'
input {
  kafka {
    topics_pattern => "topic-ccms-credit-sit-nginx-access"
    bootstrap_servers => "172.16.20.51:9092,172.16.20.52:9092,172.16.20.53:9092"
    codec => "json"
    consumer_threads => 4
    decorate_events => true
    group_id => "kafka-ccms-credit-sit-nginx-access"
    add_field => {"logstash-server" => "172.16.20.51"}
  }
}

filter {
  geoip {
    source => "client_ip"
    target => "geoip"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    remove_field => [ "[geoip][latitude]", "[geoip][longitude]", "[geoip][country_code2]", "[geoip][country_code3]", "[geoip][timezone]", "[geoip][continent_code]", "[geoip][dma_code]", "[geoip][region_code]" ]
  }
  mutate {
    convert => [ "size", "integer" ]
    convert => [ "status", "integer" ]
    convert => [ "responsetime", "float" ]
    convert => [ "upstreamtime", "float" ]
    convert => [ "[geoip][coordinates]", "float" ]
    # Drop filebeat fields that are not needed; keep any field the output below still relies on
    remove_field => [ "ecs","agent","host","cloud","@version","input","logs_type" ]
  }
  useragent {
    source => "http_user_agent"
    target => "ua"
    # Drop useragent fields that are not needed
    remove_field => [ "[ua][minor]","[ua][major]","[ua][build]","[ua][patch]","[ua][os_minor]","[ua][os_major]" ]
  }
}

output {
  elasticsearch {
    hosts => ["172.16.20.51:9200","172.16.20.52:9200","172.16.20.53:9200"]
    user => "elastic"
    password => "G1T@es2022#ccms"
    index => "logstash-ccms-credit-sit-nginx-access_%{+YYY-MM-dd}"
    sniffing => true
    template_overwrite => true
  }
}
EOF

pipeline/ccms-credit-nginx-error.conf

cat > /data/logstash/pipeline/ccms-credit-nginx-error.conf << 'EOF'
input {
  kafka {
    topics_pattern => "topic-ccms-credit-sit-nginx-error"
    bootstrap_servers => "172.16.20.51:9092,172.16.20.52:9092,172.16.20.53:9092"
    consumer_threads => 4
    decorate_events => true
    group_id => "kafka-ccms-credit-sit-nginx-error"
    add_field => {"logstash-server" => "172.16.20.51"}
    enable_metric => true
  }
}

filter {
  json {
    source => "message"
  }
  grok {
    match => [
      "message", "%{DATESTAMP:currentDateTime}\s{1,}\[%{LOGLEVEL:level}\]\s{1,}(%{NUMBER:pid:int}#%{NUMBER}:\s{1,}\*%{NUMBER})\s{1,}(%{GREEDYDATA:messageInfo})(?:,\s{1,}client:\s{1,}(?<client>%{IP}|%{HOSTNAME}))(?:,\s{1,}server:\s{1,}%{IPORHOST:server})(?:, request: %{QS:request})?(?:, upstream: \"%{URI:endpoint}\")?(?:, host: \"%{HOSTPORT:host}\")?(?:, referrer: \"%{URI:referrer}\")?",
      "message", "%{DATESTAMP:currentDateTime}\s{1,}\[%{DATA:level}\]\s{1,}%{GREEDYDATA:messageInfo}"
    ]
  }
  date {
    match => ["currentDateTime", "yy/MM/dd HH:mm:ss", "ISO8601"]
    timezone => "+08:00"
    target => "@timestamp"
  }
  mutate {
    enable_metric => "false"
    remove_field => [ "ecs","tags","input","agent","@version","log","port","host","message" ]
  }
}

output {
  elasticsearch {
    hosts => ["172.16.20.51:9200","172.16.20.52:9200","172.16.20.53:9200"]
    user => "elastic"
    password => "G1T@es2022#ccms"
    index => "logstash-ccms-credit-sit-nginx-error_%{+YYY-MM-dd}"
    sniffing => true
    template_overwrite => true
  }
}
EOF
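
The pipeline files can be syntax-checked before starting the service by running Logstash with --config.test_and_exit in a throwaway container (a sketch that reuses the image and bind mounts from the compose file below; repeat for the other two .conf files):

docker run --rm \
  -v /data/logstash/config:/usr/share/logstash/config \
  -v /data/logstash/pipeline:/usr/share/logstash/pipeline \
  172.16.20.50:8005/public/logstash:7.11.1 \
  logstash --config.test_and_exit -f /usr/share/logstash/pipeline/ccms-credit-java.conf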

docker-compose file

mkdir -pv /data/docker-compose/logstash
cat > /data/docker-compose/logstash/docker-compose.yml << EOF
version: "3"
services:
  logstash:
    container_name: logstash
    image: 172.16.20.50:8005/public/logstash:7.11.1
    user: root
    network_mode: host
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/logstash/config:/usr/share/logstash/config
      - /data/logstash/data:/usr/share/logstash/data
      - /data/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      TZ: Asia/Shanghai
      LS_JAVA_OPTS: "-Xmx8G -Xms8G"
    deploy:
      resources:
        limits:
          memory: 10G
EOF

Start

docker-compose up -d
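
With http.host set to 0.0.0.0 in logstash.yml, Logstash serves its monitoring API on the default port 9600, which shows whether the three pipelines are loaded and processing events (a minimal check):

# Pipelines loaded from pipelines.yml
curl -s 'http://localhost:9600/_node/pipelines?pretty'
# Per-pipeline event counts (in / filtered / out)
curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'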

展示服務(wù)-Kibana

創(chuàng)建數(shù)據(jù)目錄

mkdir -pv /data/kibana/{config,logs}
chown 1000 /data/kibana/{config,logs}

Configuration file

cat > /data/kibana/config/kibana.yml << 'EOF'
# Default Kibana configuration for docker target
server.name: ccms-kibana
server.port: 5601
server.host: "0"
elasticsearch.hosts: [ "http://172.16.20.51:9200","http://172.16.20.52:9200","http://172.16.20.53:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"
map.tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'
xpack.security.enabled: true
xpack.security.encryptionKey: "fhjskloppd678ehkdfdlliverpoolfcr"
elasticsearch.username: "elastic"
elasticsearch.password: "G1T@es2022#ccms"
EOF

docker-compose file

mkdir -pv /data/docker-compose/kibana/
cat > /data/docker-compose/kibana/docker-compose.yml << EOF
version: "3"
services:
  kibana:
    container_name: kibana
    image: kibana:7.11.1
    restart: always
    ports:
      - "5601:5601"
    volumes:
      - /data/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
EOF

Start

docker-compose up -d
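
After the container starts, Kibana's status API gives a quick confirmation that it can reach the secured Elasticsearch cluster (a minimal check; credentials are the ones configured in kibana.yml):

# Overall state should become "available" once Kibana has connected to ES
curl -s -u elastic:'G1T@es2022#ccms' 'http://localhost:5601/api/status'
# Then open http://<host>:5601 in a browser and log in as the elastic user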

相關(guān)文章:

  • 該怎么給做網(wǎng)站的提頁(yè)面需求深圳網(wǎng)絡(luò)推廣網(wǎng)絡(luò)
  • 廣東省廣州市白云區(qū)太和鎮(zhèn)名風(fēng)seo軟件
  • 中新生態(tài)城建設(shè)局門戶網(wǎng)站昆明seo案例
  • 佛山網(wǎng)站建設(shè)在哪手機(jī)優(yōu)化
  • 重慶網(wǎng)站建設(shè)公司銷售seo營(yíng)銷推廣多少錢
  • 珠海企業(yè)網(wǎng)站建設(shè)seo優(yōu)化seo外包
  • b2b網(wǎng)站怎么發(fā)布信息站長(zhǎng)之家論壇
  • 專業(yè)醫(yī)院網(wǎng)站建設(shè)百度網(wǎng)盤下載
  • 太原市做網(wǎng)站公司微信營(yíng)銷軟件手機(jī)版
  • 企業(yè)做網(wǎng)站大概多少錢湖南關(guān)鍵詞網(wǎng)絡(luò)科技有限公司
  • plone wordpressseo人工智能
  • 平易云 網(wǎng)站建設(shè)看廣告賺錢
  • 政府網(wǎng)站建設(shè)培訓(xùn)講話惠州優(yōu)化怎么做seo
  • 漂亮企業(yè)網(wǎng)站源碼關(guān)鍵詞優(yōu)化排名軟件
  • 網(wǎng)站設(shè)計(jì)概述500字關(guān)鍵詞批量調(diào)詞軟件
  • 學(xué)生做爰網(wǎng)站微信群推廣網(wǎng)站
  • 專業(yè)鄭州做網(wǎng)站的公司今日國(guó)家新聞
  • 動(dòng)態(tài)網(wǎng)站開發(fā)結(jié)束語(yǔ)東莞優(yōu)化怎么做seo
  • 彩票網(wǎng)站建設(shè)方案看網(wǎng)站時(shí)的關(guān)鍵詞
  • 百度競(jìng)價(jià) 十一 pc網(wǎng)站 手機(jī)網(wǎng)站seo技術(shù)團(tuán)隊(duì)
  • 岳陽(yáng)市委網(wǎng)站免費(fèi)seo網(wǎng)站推廣在線觀看
  • 湛江網(wǎng)站設(shè)計(jì)模板視頻500個(gè)游戲推廣群
  • 網(wǎng)站互動(dòng)營(yíng)銷成人編程培訓(xùn)機(jī)構(gòu)排名前十
  • 融資是什么意思株洲seo優(yōu)化報(bào)價(jià)
  • 馬云1688網(wǎng)站在濮陽(yáng)如何做圖片外鏈在線生成
  • 大型b2c網(wǎng)站開發(fā)百度推廣app下載官方
  • 怎么做領(lǐng)券網(wǎng)站上海知名seo公司
  • 電腦做網(wǎng)站電腦編程百度指數(shù)怎么提升
  • 手機(jī)視頻網(wǎng)站怎么做保定seo推廣公司
  • 尚云網(wǎng)站建設(shè)廣東網(wǎng)約車漲價(jià)