1. Application scenario
In some cases, monitoring MongoDB's port and status through Zabbix alone is not enough; monitoring the MongoDB logs also matters. For example, a mongos may log a SocketException when connecting to a backend shard.
2. Analyzing MongoDB logs with Logstash
To log slow queries, first enable the slow query profiling feature:
use jd05
db.setProfilingLevel(1, 50)
{ "was" : 1, "slowms" : 50, "ok" : 1 }
The level 1 means only slow operations are profiled; any operation slower than 50 milliseconds will be logged.
Level 2 records every operation. That is not recommended in production, but it can be useful in a development environment:
db.setProfilingLevel(2)
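Either way, the active setting can be verified from the mongo shell; the values returned should echo what was set above:

db.getProfilingStatus()
{ "was" : 1, "slowms" : 50 }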
MongoDB will then write operation entries like the following to its log file:
Mon Apr 27 16:45:01.853 [conn282854698] command jd01.$cmd command: { count: "player", query: { request_time: { $gte: 1430123701 } } } ntoreturn:1 keyUpdates:0 numYields: 7 locks(micros) r:640822 reslen:48 340ms
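In this entry, conn282854698 identifies the client connection, jd01 and player are the database and collection, locks(micros) r: is the time spent holding the read lock in microseconds, reslen is the response size in bytes, and the trailing 340ms is the total execution time, which the grok pattern below captures as mongoElapsedTime.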
The logstash shipper configuration file shipper_mongodb.conf looks like this:
input {
  file {
    path => "/data/app_data/mongodb/log/*.log"
    type => "mongodb"
    sincedb_path => "/dev/null"
  }
}

filter {
  if [type] == "mongodb" {
    # Parse the overall shape of a profiling line: connection id, command,
    # namespace, statement, and the elapsed time in milliseconds.
    grok {
      match => ["message", "(?m)%{GREEDYDATA} \[conn%{NUMBER:mongoConnection}\] %{WORD:mongoCommand} %{WORD:mongoDatabase}.%{NOTSPACE:mongoCollection} %{WORD}: \{ %{GREEDYDATA:mongoStatement} \} %{GREEDYDATA} %{NUMBER:mongoElapsedTime:int}ms"]
      add_tag => "mongodb"
    }
    # Extract the individual profiling counters; each successful match tags
    # the event mongo_profiling_data.
    grok {
      match => ["message", " cursorid:%{NUMBER:mongoCursorId}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " ntoreturn:%{NUMBER:mongoNumberToReturn:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " ntoskip:%{NUMBER:mongoNumberToSkip:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " nscanned:%{NUMBER:mongoNumberScanned:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " scanAndOrder:%{NUMBER:mongoScanAndOrder:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " idhack:%{NUMBER:mongoIdHack:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " nmoved:%{NUMBER:mongoNumberMoved:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " nupdated:%{NUMBER:mongoNumberUpdated:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " keyUpdates:%{NUMBER:mongoKeyUpdates:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " numYields: %{NUMBER:mongoNumYields:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " locks\(micros\) r:%{NUMBER:mongoReadLocks:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " locks\(micros\) w:%{NUMBER:mongoWriteLocks:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " nreturned:%{NUMBER:mongoNumberReturned:int}"]
      add_tag => "mongo_profiling_data"
    }
    grok {
      match => ["message", " reslen:%{NUMBER:mongoResultLength:int}"]
      add_tag => "mongo_profiling_data"
    }
    # A profiling line never matches every counter pattern above, so drop
    # the failure tag on events we know are profiling data.
    if "mongo_profiling_data" in [tags] {
      mutate {
        remove_tag => "_grokparsefailure"
      }
    }
    # Everything else: look for error keywords and mark the event so the
    # zabbix output will ship it as an alert.
    if "_grokparsefailure" in [tags] {
      grep {
        match => ["message", "(Failed|error|SOCKET)"]
        add_tag => ["zabbix-sender"]
        add_field => [
          "zabbix_host", "%{host}",
          "zabbix_item", "mongo.error"
          # "send_field", "%{message}"
        ]
      }
      mutate {
        remove_tag => "_grokparsefailure"
      }
    }
  }
}

output {
  stdout {
    codec => "rubydebug"
  }
  zabbix {
    tags => "zabbix-sender"
    host => "zabbixserver"
    port => "10051"
    zabbix_sender => "/usr/local/zabbix/bin/zabbix_sender"
  }
  redis {
    host => "10.4.29.162"
    data_type => "list"
    key => "logstash"
  }
}
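A quick way to try the config is to run the shipper in the foreground, which makes the rubydebug output visible. This is a minimal sketch assuming a logstash 1.4.x install unpacked under /usr/local/logstash (the install path is an assumption; adjust it to your layout):

cd /usr/local/logstash
# validate the config syntax first, then run the agent in the foreground
bin/logstash agent -f shipper_mongodb.conf --configtest
bin/logstash agent -f shipper_mongodb.conf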
The configuration breaks down into a few steps:
Use logstash's file input plugin to read the MongoDB log files under /data/app_data/mongodb/log/, then parse the log contents.
If a line contains profiling keywords such as cursorid or nreturned, extract those fields and tag the event mongo_profiling_data so the data can be aggregated later.
For every other line, match against error keywords; if a line contains an error, send an alert through Zabbix.
Note that when sending alerts with the zabbix output plugin, you must filter on keywords first, and the event needs the three fields zabbix_host, zabbix_item, and zabbix_field. The value of zabbix_item must match the item key configured on the Zabbix monitoring page; if zabbix_field is not specified, the message field is sent by default.
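Before wiring this up end to end, it helps to confirm the Zabbix side accepts the data: the item with key mongo.error must be a "Zabbix trapper" item on the host in question, since that is what zabbix_sender feeds. A manual test, assuming the hostname below matches the host name configured in Zabbix:

/usr/local/zabbix/bin/zabbix_sender -z zabbixserver -p 10051 \
  -s "$(hostname)" -k mongo.error -o "test error message"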
Then add the corresponding template in Zabbix.
In the same way, Zabbix alerts can be set up for PHP-FPM, Nginx, Redis, MySQL, and other services.
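The grep-and-tag pattern carries over directly. As a minimal sketch for an Nginx error log (the type name and the nginx.error item key are assumptions; a matching file input and a Zabbix trapper item would also be needed):

filter {
  if [type] == "nginx" {
    # flag error-level lines for the zabbix output
    grep {
      match => ["message", "(error|crit|alert|emerg)"]
      add_tag => ["zabbix-sender"]
      add_field => [
        "zabbix_host", "%{host}",
        "zabbix_item", "nginx.error"
      ]
    }
  }
}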
然后要做的就是根據(jù)不同的字段定義不同的圖表
References:
http://techblog.holidaycheck.com/profiling-mongodb-with-logstash-and-kibana/
http://tech.rhealitycheck.com/visualizing-mongodb-profiling-data-using-logstash-and-kibana/
http://www.logstash.net/docs/1.4.2/outputs/zabbix