1. Environment planning:
Hostname | IP address | Role
node1 | 192.168.56.111 | Elasticsearch (master), ZooKeeper, Kafka
node2 | 192.168.56.112 | Elasticsearch (slave), Kibana, ZooKeeper, Kafka
node3 | 192.168.56.113 | Elasticsearch (slave), ZooKeeper, Kafka
node4 | 192.168.56.114 | Logstash, Filebeat
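The steps below address the nodes by IP. If you prefer to use the hostnames from the table, a minimal /etc/hosts fragment on every node would look like this (an assumption for illustration; adjust to your own network):

```
192.168.56.111 node1
192.168.56.112 node2
192.168.56.113 node3
192.168.56.114 node4
```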
2. The JDK is already installed on node4:
[root@node4 ~]# java -version
java version "1.8.0_202"
Java(TM) SE Runtime Environment (build 1.8.0_202-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.202-b08, mixed mode)
3. Install Logstash and Filebeat:
[root@node4 ~]# yum localinstall -y logstash-7.2.0.rpm
[root@node4 ~]# yum localinstall -y filebeat-7.2.0-x86_64.rpm
4. Configure Filebeat:
[root@node4 ~]# cd /etc/filebeat/
[root@node4 filebeat]# cp filebeat.yml{,.bak}
## Configure Filebeat: read the log files as input and send the output to Kafka
[root@node4 filebeat]# vim filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/httpd/access_log
  fields:
    type: httpd-access
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
- type: log
  enabled: true
  paths:
    - /var/log/httpd/error_log
  fields:
    type: httpd-error
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
- type: log
  enabled: true
  paths:
    - /var/log/mariadb/mariadb.log
  fields:
    type: mariadb
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
output.kafka:
  hosts: ["192.168.56.111:9092","192.168.56.112:9092","192.168.56.113:9092"]  ## Kafka cluster nodes
  topic: "%{[fields][type]}"  ## Kafka topic name
Explanation of the multiline options:
1. multiline.type: defaults to pattern.
2. multiline.pattern: the regular expression matched against log lines; it is usually anchored at the beginning of the line.
3. multiline.match and multiline.negate: multiline.match controls where non-matching lines are appended; multiline.negate controls whether the pattern match is negated.
With multiline.match set to after and multiline.negate set to true, every line that does not match the pattern is appended to the most recent matching line, until the next matching line starts a new event.
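That merge behaviour can be sketched with a small awk pipeline over a made-up log sample (the sample lines are hypothetical; a leading `[` starts a new event, mirroring multiline.pattern '^\[' with negate: true and match: after):

```shell
# Lines starting with '[' begin a new event; every other line is
# appended to the previous event, like Filebeat's multiline merge.
printf '[Mon] event one\ncontinuation line\n[Tue] event two\n' |
  awk '/^\[/ { if (buf) print buf; buf = $0; next }
             { buf = buf " " $0 }
       END   { if (buf) print buf }'
```

The first two input lines collapse into one event, and `[Tue] event two` starts a second event.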
5. Configure Logstash:
[root@node4 ~]# cd /etc/logstash/conf.d
## Configure Logstash: input from Kafka, output to Elasticsearch.
[root@node4 conf.d]# vim all.conf
input {
  kafka {
    bootstrap_servers => "192.168.56.111:9092,192.168.56.112:9092,192.168.56.113:9092"
    codec => json
    topics => ["httpd-access"]    ## the Kafka topic to consume
    consumer_threads => 1
    decorate_events => true
    type => "httpd-access"        ## used for conditionals in the output section
  }
  kafka {
    bootstrap_servers => "192.168.56.111:9092,192.168.56.112:9092,192.168.56.113:9092"
    codec => json
    topics => ["httpd-error"]     ## the Kafka topic to consume
    consumer_threads => 1
    decorate_events => true
    type => "httpd-error"         ## used for conditionals in the output section
  }
  kafka {
    bootstrap_servers => "192.168.56.111:9092,192.168.56.112:9092,192.168.56.113:9092"
    codec => json
    topics => ["mariadb"]         ## the Kafka topic to consume
    consumer_threads => 1
    decorate_events => true
    type => "mariadb"             ## used for conditionals in the output section
  }
}
## In the output section, events that satisfy the type conditional are written to Elasticsearch under the corresponding index.
output {
  if [type] == "httpd-access" {
    elasticsearch {
      hosts => ["192.168.56.111:9200","192.168.56.112:9200","192.168.56.113:9200"]
      index => "httpd-accesslog-%{+yyyy.MM.dd}"
    }
  }
  if [type] == "httpd-error" {
    elasticsearch {
      hosts => ["192.168.56.111:9200","192.168.56.112:9200","192.168.56.113:9200"]
      index => "httpd-errorlog-%{+yyyy.MM.dd}"
    }
  }
  if [type] == "mariadb" {
    elasticsearch {
      hosts => ["192.168.56.111:9200","192.168.56.112:9200","192.168.56.113:9200"]
      index => "mariadblog-%{+yyyy.MM.dd}"
    }
  }
}
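The %{+yyyy.MM.dd} sprintf pattern in the index names expands to the event's @timestamp date, so a new index is created per day. The shell equivalent of that date pattern is `date +%Y.%m.%d` (using the current date here, whereas Logstash uses each event's timestamp):

```shell
# Build today's index name the same way Logstash expands
# "httpd-accesslog-%{+yyyy.MM.dd}".
echo "httpd-accesslog-$(date +%Y.%m.%d)"
```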
6. Verify that Filebeat and Logstash can collect the logs.
[root@node4 ~]# cd /etc/filebeat
[root@node4 filebeat]# filebeat -e -c filebeat.yml
Start Logstash in another terminal:
[root@node4 ~]# cd /etc/logstash/conf.d
[root@node4 conf.d]# logstash -f all.conf
Access the Apache httpd server and log in to MariaDB, then check whether Elasticsearch has collected the logs under the expected indices.
To run Filebeat and Logstash as services instead, the Filebeat configuration file must be named filebeat.yml, and the Logstash configuration files must be placed in the /etc/logstash/conf.d directory with a ".conf" suffix.
[root@node4 ~]# systemctl start filebeat.service
[root@node4 ~]# systemctl start logstash.service