

ELK + Filebeat Cluster Deployment

ELK Overview


  1. Elasticsearch
    Elasticsearch is a real-time distributed search and analytics engine that lets you explore your data at a speed and scale never possible before. It is used for full-text search, structured search, analytics, and combinations of all three.

2. Logstash
Logstash is a powerful data-processing tool: it can transport data, parse and transform it, and format the output, and it has a rich plugin ecosystem. It is commonly used for log processing.

3. Kibana
Kibana is a free and open-source tool that provides a friendly web interface for the logs that Logstash collects and Elasticsearch stores, helping you aggregate, analyze, and search important log data.

Official download page: https://www.elastic.co/cn/downloads/

Note: adjust the IP addresses in the configuration files to match your own environment.

Environment: three Linux servers running the same OS.

elk-node1   192.168.243.162   data + master node (installs elasticsearch, logstash, kibana, filebeat)

elk-node2   192.168.243.163   data node (installs elasticsearch, filebeat)

elk-node3   192.168.243.164   data node (installs elasticsearch, filebeat)

Edit /etc/hosts; the file is identical on every host.

vim /etc/hosts
192.168.243.162         elk-node1
192.168.243.163         elk-node2
192.168.243.164         elk-node3

Install JDK 11 (binary install)

Skip this step if Java is already installed.


cd /home/tools &&
wget https://download.java.net/java/GA/jdk11/13/GPL/openjdk-11.0.1_linux-x64_bin.tar.gz

Extract to the target directory:

mkdir -p /usr/local/jdk
tar -xzvf openjdk-11.0.1_linux-x64_bin.tar.gz -C /usr/local/jdk

Configure the environment variables (append the following to /etc/profile):

JAVA_HOME=/usr/local/jdk/jdk-11.0.1
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH

Reload /etc/profile so the variables take effect:

source  /etc/profile

Alternatively, install from the yum repository:

yum -y install java
java -version   

Adjust system kernel parameters

Increase the maximum number of virtual memory map areas by appending the following line, then apply it with sysctl -p:

vim  /etc/sysctl.conf
vm.max_map_count=262144
sysctl -p

Raise the resource limits by appending the following at the end of the file:

vim /etc/security/limits.conf

        * soft nofile  1000000
        * hard nofile 1000000
        * soft nproc  1000000
        * hard nproc 1000000
        * soft memlock unlimited
        * hard memlock unlimited
cd /etc/security/limits.d
vi 20-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     4096
root       soft    nproc     unlimited

Change the * to the user that runs Elasticsearch, for example:

esyonghu   soft    nproc     4096
root       soft    nproc     unlimited
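After editing these files, log out and back in (limits.conf is applied per session) and run a quick sanity check; the expected values below are simply the ones configured above.

sysctl vm.max_map_count     # expect: vm.max_map_count = 262144
ulimit -n                   # open files, expect 1000000
ulimit -u                   # max user processes
ulimit -l                   # locked memory, expect unlimited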

Install dependency packages and configure the yum repository

yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools vim lrzsz tree screen lsof tcpdump wget ntpdate
vim /etc/yum.repos.d/elastic.repo       
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1   
autorefresh=1
type=rpm-md

[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum repolist
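Optionally, the Elastic GPG signing key referenced in the repo file can be imported manually beforehand; this avoids signature errors during the yum installs below:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch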

#Deploy the Elasticsearch cluster (run on all nodes)

yum -y install elasticsearch
grep "^[a-zA-Z]" /etc/elasticsearch/elasticsearch.yml
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk
node.name: elk-node1                       # set to the corresponding hostname on each node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
transport.tcp.compress: true
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300                   ## configure this only on the other nodes
discovery.seed_hosts: ["192.168.243.162", "192.168.243.163","192.168.243.164"]
cluster.initial_master_nodes: ["192.168.243.162", "192.168.243.163","192.168.243.164"]
discovery.zen.minimum_master_nodes: 2      # guards against split-brain; set to (number of master-eligible nodes / 2) + 1 (ignored by Elasticsearch 7.x, which handles this automatically)
node.master: true
node.data: true
xpack.security.enabled: true
http.cors.enabled: true                    ## enable cross-origin requests
http.cors.allow-origin: "*"                ## allow cross-origin access so the head plugin can reach ES
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
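The file is identical on every node apart from node.name (and the per-node notes above). As a small sketch, assuming each host's short hostname matches its node name (elk-node1/2/3), the per-node edit can be scripted:

# Set node.name to this machine's short hostname (assumes hostnames elk-node1/2/3)
sed -i "s/^node.name:.*/node.name: $(hostname -s)/" /etc/elasticsearch/elasticsearch.yml
grep "^node.name" /etc/elasticsearch/elasticsearch.yml   # confirm the change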

Elasticsearch is very resource-intensive in production; raise the initial JVM heap from the default 1 GB according to the memory actually available.

vim /etc/elasticsearch/jvm.options 
#change these two lines
    -Xms4g # minimum heap size: 4 GB
    -Xmx4g # maximum heap size: 4 GB
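A common rule of thumb (not specific to this guide) is to give the heap no more than half of the machine's RAM and to keep it below roughly 31 GB so compressed object pointers remain enabled. A quick check before choosing the value:

free -g                                            # total RAM in GB; set -Xms/-Xmx to at most half of it
grep -E "Xm[sx]" /etc/elasticsearch/jvm.options    # confirm the edited heap values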

Configure TLS and authentication -- this step secures the cluster; it can be skipped, but the xpack.security.* settings above must then be removed or disabled.

Configure TLS on the Elasticsearch master node:
cd /usr/share/elasticsearch/
./bin/elasticsearch-certutil ca              ## press Enter at each prompt to accept the defaults
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
 ll

-rw-------  1 root root   3443 Jun 28 16:46 elastic-certificates.p12
-rw-------  1 root root   2527 Jun 28 16:43 elastic-stack-ca.p12
#####give the generated files the elasticsearch group
chgrp elasticsearch /usr/share/elasticsearch/elastic-certificates.p12 /usr/share/elasticsearch/elastic-stack-ca.p12 
#####set both files to mode 640
chmod 640 /usr/share/elasticsearch/elastic-certificates.p12 /usr/share/elasticsearch/elastic-stack-ca.p12
######move both files into the Elasticsearch configuration directory
mv /usr/share/elasticsearch/elastic-* /etc/elasticsearch/

Copy the TLS certificate files into the configuration directory on each of the other nodes (repeat the same scp commands for 192.168.243.164):

scp /etc/elasticsearch/elastic-certificates.p12 root@192.168.243.163:/etc/elasticsearch/
scp /etc/elasticsearch/elastic-stack-ca.p12 root@192.168.243.163:/etc/elasticsearch/

Start the services and verify the cluster
Start Elasticsearch on the master node first, then on the other nodes.

systemctl start elasticsearch
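Optionally enable the service at boot and confirm it is up and listening before continuing (standard systemd and ss checks):

systemctl enable elasticsearch          # start automatically at boot
systemctl status elasticsearch          # should report active (running)
ss -ntlp | grep -E "9200|9300"          # HTTP (9200) and transport (9300) ports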

Set the passwords for the built-in users (123456 is used throughout this guide):

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

Verify the cluster -- open the following URL in a browser:
http://192.168.243.163:9200/_cluster/health?pretty
The response should look like this:

        {
          "cluster_name" : "my-elk",
          "status" : "green",
          "timed_out" : false,
          "number_of_nodes" : 3,##節(jié)點(diǎn)數(shù)
          "number_of_data_nodes" : 3, ##數(shù)據(jù)節(jié)點(diǎn)數(shù)
          "active_primary_shards" : 4,
          "active_shards" : 8,
          "relocating_shards" : 0,
          "initializing_shards" : 0,
          "unassigned_shards" : 0,
          "delayed_unassigned_shards" : 0,
          "number_of_pending_tasks" : 0,
          "number_of_in_flight_fetch" : 0,
          "task_max_waiting_in_queue_millis" : 0,
          "active_shards_percent_as_number" : 100.0
        }
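Because X-Pack security is enabled, the browser will prompt for credentials. The same check can be run from the command line with curl, using the elastic password set above:

curl -u elastic:123456 "http://192.168.243.162:9200/_cluster/health?pretty"
curl -u elastic:123456 "http://192.168.243.162:9200/_cat/nodes?v"     # should list all three nodes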

#Deploy Kibana

Install from the yum repository (on any one of the nodes):

yum -y install kibana

Edit the Kibana configuration file:

vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
server.name: "elk-node2"                  # the hostname of the node running Kibana
elasticsearch.hosts: ["http://192.168.243.162:9200","http://192.168.243.163:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "123456"
i18n.locale: "en"

Start the service:

systemctl start kibana

Open http://192.168.243.162:5601/ in a browser.
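As with Elasticsearch, the service can be enabled at boot and checked before logging in (log in with the elastic user and the password set earlier):

systemctl enable kibana
systemctl status kibana                 # should report active (running)
ss -ntlp | grep 5601                    # Kibana web port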

Install Logstash

Deploy it on the master node.

yum -y install logstash  ## install from the yum repository
## or binary install:
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.4.1.tar.gz
tar -zvxf logstash-7.4.1.tar.gz -C /home/elk
mkdir -p /data/logstash/{logs,data} 

Edit the configuration files:

vim /etc/logstash/conf.d/logstash_debug.conf
egrep -v "#|^$" /etc/logstash/conf.d/logstash_debug.conf    # optional: show the effective configuration
input {
    beats {
            port => 5044
    }
}

filter {
    grok {
        match => {
            "message" => "(?<temMsg>(?<=logBegin ).*?(?=logEnd))"
        }
        overwrite => ["temMsg"]
    }
    grok {
        match => {
            "temMsg" => "(?<reqId>(?<=reqId:).*?(?=,operatName))"
        }
        overwrite => ["reqId"]
    }
    grok {
        match => {
            "temMsg" => "(?<operatName>(?<=operatName:).*?(?=,operatUser))"
        }
        overwrite => ["operatName"]
    }
    grok {
        match => {
            "temMsg" => "(?<operatUser>(?<=operatUser:).*?(?=,userType))"
        }
        overwrite => ["operatUser"]
    }
    grok {
        match => {
            "temMsg" => "(?<userType>(?<=userType:).*?(?=,requestTime))"
        }
        overwrite => ["userType"]
    }
    grok {
        match => {
            "temMsg" => "(?<requestTime>(?<=requestTime:).*?(?=,method))"
        }
        overwrite => ["requestTime"]
    }
    grok {
        match => {
            "temMsg" => "(?<method>(?<=method:).*?(?=,params))"
        }
        overwrite => ["method"]
    }
    grok {
        match => {
            "temMsg" => "(?<params>(?<=params:).*?(?=,operatIp))"
        }
        overwrite => ["params"]
    }
    grok {
        match => {
            "temMsg" => "(?<operatIp>(?<=operatIp:).*?(?=,executionTime))"
        }
        overwrite => ["operatIp"]
    }
    grok {
        match => {
            "temMsg" => "(?<executionTime>(?<=executionTime:).*?(?=,operatDesc))"
        }
        overwrite => ["executionTime"]
    }
    grok {
        match => {
            "temMsg" => "(?<operatDesc>(?<=operatDesc:).*?(?=result))"
        }
        overwrite => ["operatDesc"]
    }
    grok {
        match => {
            "temMsg" => "(?<result>(?<=result:).*?(?=,siteCode))"
        }
        overwrite => ["result"]
    }
    grok {
        match => {
            "temMsg" => "(?<siteCode>(?<=siteCode:).*?(?=,module))"
        }
        overwrite => ["siteCode"]
    }
    grok {
        match => {
            "temMsg" => "(?<module>(?<=module:).*?(?= ))"
        }
        overwrite => ["module"]
    }
    grok {
        match => [
            "message", "%{NOTSPACE:temMsg}"
        ]
    }
    json {
        source => "temMsg"
#       field_split => ","
#       value_split => ":"
#       remove_field => [ "@timestamp","message","path","@version","path","host" ]
    }
    urldecode {
        all_fields => true
    }
    mutate {
        rename => {"temMsg" => "message"}
        remove_field => [ "message" ]
    }
}
output {
    elasticsearch {
            hosts => ["192.168.243.162:9200","192.168.243.163:9200","192.168.243.164:9200"] 
            user => "elastic"
            password => "123456"
            index => "logstash-%{+YYYY.MM.dd}"
    }
}
vim /etc/logstash/logstash.yml
http.host: "elk-node1"
path.data: /data/logstash/data
path.logs: /data/logstash/logs
xpack.monitoring.enabled: true   # enable Logstash monitoring in Kibana's monitoring UI
xpack.monitoring.elasticsearch.hosts: ["192.168.243.162:9200","192.168.243.163:9200","192.168.243.164:9200"]

Start the Logstash service:

systemctl start logstash

Or start it directly from the binary:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_debug.conf
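Before starting, the pipeline syntax can be validated without sending any data by using Logstash's --config.test_and_exit flag:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_debug.conf --config.test_and_exit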

Deploy Filebeat

yum -y install filebeat 
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /srv/docker/produce/*/*/cloud*.log
  include_lines: [".*logBegin.*",".*logEnd.*"]
  #  multiline.pattern: ^\[
  #  multiline.negate: true
  #  multiline.match: after
filebeat.config.modules: 
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "192.168.243.162:5601"
output.logstash:
  hosts: ["192.168.243.162:5044"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~   
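Filebeat ships with built-in checks that validate the configuration file and the connection to the Logstash output before the service is started:

filebeat test config -c /etc/filebeat/filebeat.yml    # syntax check
filebeat test output -c /etc/filebeat/filebeat.yml    # verifies the connection to 192.168.243.162:5044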

Start Filebeat:

systemctl start filebeat
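Once log lines start flowing, the daily logstash-* index should appear in Elasticsearch; a quick check with curl (same credentials as before), after which an index pattern can be created in Kibana:

curl -u elastic:123456 "http://192.168.243.162:9200/_cat/indices?v" | grep logstash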
