For MongoDB basics, see: https://blog.51cto.com/kaliarch/2044423
For MongoDB replica sets, see: https://blog.51cto.com/kaliarch/2044618
1. Overview
1.1 Background
In a replica set, every secondary holds a full copy of the database, so under high concurrency and large data volumes the secondaries come under heavy pressure. To address this, and to give a MongoDB cluster room to scale as data volumes grow, MongoDB introduced the sharding mechanism.
1.2 Sharding Concepts
Sharding is the process of splitting a database and spreading it across multiple machines, so that more data can be stored and heavier loads handled without a single powerful server. Collections are cut into small chunks, and the chunks are distributed across several shards, each shard holding only part of the total data. A routing process, mongos, knows which chunks live on which shards and directs operations accordingly.
1.3 Core Components
Four components are involved: mongos, config server, shard, replica set.
mongos: the entry point for all requests to the cluster. Every request is coordinated by a mongos, so the application does not have to do its own routing. A mongos is a request dispatcher that forwards external requests to the appropriate shard. Because it is the single entry point, mongos is usually deployed with HA to avoid a single point of failure.
config server: stores all of the cluster metadata (sharding and routing configuration). A mongos does not persist shard or routing information itself; it only caches it in memory. On first start, or on restart, a mongos loads its configuration from the config servers, and when the configuration changes the config servers notify every mongos to refresh its state so requests keep routing correctly. Production deployments run multiple config servers so the configuration is not lost with a single node.
shard: with truly large data sets, storing, say, 1 TB on a single server puts it under enormous pressure: disk, network I/O, CPU, and memory all become bottlenecks. Splitting that 1 TB across several machines leaves each one with a manageable share. Once the sharding rules are in place, operations issued through mongos are automatically forwarded to the correct backend shard.
replica set: if a shard were a single machine and it went down, part of the cluster's data would become unavailable, which is unacceptable. Each shard is therefore itself a replica set to guarantee data reliability; in production this is typically two data-bearing members plus one arbiter.
1.4 Architecture Diagram
2. Installation and Deployment
2.1 Base Environment
To economize on servers, each machine runs multiple instances: three mongos, three config servers, and the shards in different roles on each server (so that data distributes evenly later, the three shards each take a different role on each machine). Within each shard a replica set provides high availability. The hosts and ports are as follows:
| Hostname | IP address | mongos | config server | shard |
| --- | --- | --- | --- | --- |
| mongodb-1 | 172.20.6.10 | port 20000 | port 21000 | primary: 22001 / secondary: 22002 / arbiter: 22003 |
| mongodb-2 | 172.20.6.11 | port 20000 | port 21000 | arbiter: 22001 / primary: 22002 / secondary: 22003 |
| mongodb-3 | 172.20.6.12 | port 20000 | port 21000 | secondary: 22001 / arbiter: 22002 / primary: 22003 |
2.2 Installation and Deployment
2.2.1 Download and Install the Software
wget -c https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel62-3.4.10.tgz
tar -zxvf mongodb-linux-x86_64-rhel62-3.4.10.tgz
ln -sv mongodb-linux-x86_64-rhel62-3.4.10 mongodb    # run from /usr/local so the symlink matches the PATH entry below
echo "PATH=$PATH:/usr/local/mongodb/bin" > /etc/profile.d/mongodb.sh
source /etc/profile.d/mongodb.sh
2.2.2 Create Directories
Create the directories and log files on each of mongodb-1, mongodb-2, and mongodb-3:
mkdir -p /data/mongodb/mongos/{log,conf}
mkdir -p /data/mongodb/mongoconf/{data,log,conf}
mkdir -p /data/mongodb/shard1/{data,log,conf}
mkdir -p /data/mongodb/shard2/{data,log,conf}
mkdir -p /data/mongodb/shard3/{data,log,conf}
touch /data/mongodb/mongos/log/mongos.log
touch /data/mongodb/mongoconf/log/mongoconf.log
touch /data/mongodb/shard1/log/shard1.log
touch /data/mongodb/shard2/log/shard2.log
touch /data/mongodb/shard3/log/shard3.log
2.2.3 Configure the config server Replica Set
Since MongoDB 3.4, the config servers must themselves be deployed as a replica set; here the replica set is named replconf.
On all three servers, write the config server configuration file and start the service:
cat >/data/mongodb/mongoconf/conf/mongoconf.conf <
Log in to any one of the servers to initialize the config server replica set:
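The body of the mongoconf.conf heredoc above did not survive extraction. The following is a hypothetical reconstruction, not the author's original file: the ports and paths come from this document, while the remaining option values are assumptions typical of a MongoDB 3.4 config server, followed by the start command.

```shell
# Hypothetical mongoconf.conf; ports/paths from the table above, other values assumed.
cat >/data/mongodb/mongoconf/conf/mongoconf.conf <<'EOF'
systemLog:
  destination: file
  path: /data/mongodb/mongoconf/log/mongoconf.log
  logAppend: true
storage:
  dbPath: /data/mongodb/mongoconf/data
net:
  port: 21000
  bindIp: 0.0.0.0
replication:
  replSetName: replconf
sharding:
  clusterRole: configsvr
processManagement:
  fork: true
EOF
mongod -f /data/mongodb/mongoconf/conf/mongoconf.conf   # start the config server
```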
use admin
config = {_id:"replconf",members:[
  {_id:0,host:"172.20.6.10:21000"},
  {_id:1,host:"172.20.6.11:21000"},
  {_id:2,host:"172.20.6.12:21000"},]
}
rs.initiate(config);
Check the replica set status:
replconf:OTHER> rs.status() { "set" : "replconf", "date" : ISODate("2017-12-04T07:42:09.054Z"), "myState" : 1, "term" : NumberLong(1), "configsvr" : true, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1512373328, 1), "t" : NumberLong(1) }, "readConcernMajorityOpTime" : { "ts" : Timestamp(1512373328, 1), "t" : NumberLong(1) }, "appliedOpTime" : { "ts" : Timestamp(1512373328, 1), "t" : NumberLong(1) }, "durableOpTime" : { "ts" : Timestamp(1512373328, 1), "t" : NumberLong(1) } }, "members" : [ { "_id" : 0, "name" : "172.20.6.10:21000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 221, "optime" : { "ts" : Timestamp(1512373328, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2017-12-04T07:42:08Z"), "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1512373296, 1), "electionDate" : ISODate("2017-12-04T07:41:36Z"), "configVersion" : 1, "self" : true }, { "_id" : 1, "name" : "172.20.6.11:21000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 42, "optime" : { "ts" : Timestamp(1512373318, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1512373318, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2017-12-04T07:41:58Z"), "optimeDurableDate" : ISODate("2017-12-04T07:41:58Z"), "lastHeartbeat" : ISODate("2017-12-04T07:42:08.637Z"), "lastHeartbeatRecv" : ISODate("2017-12-04T07:42:07.648Z"), "pingMs" : NumberLong(0), "syncingTo" : "172.20.6.10:21000", "configVersion" : 1 }, { "_id" : 2, "name" : "172.20.6.12:21000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 42, "optime" : { "ts" : Timestamp(1512373318, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1512373318, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2017-12-04T07:41:58Z"), "optimeDurableDate" : ISODate("2017-12-04T07:41:58Z"), "lastHeartbeat" : ISODate("2017-12-04T07:42:08.637Z"), "lastHeartbeatRecv" : ISODate("2017-12-04T07:42:07.642Z"), 
"pingMs" : NumberLong(0), "syncingTo" : "172.20.6.10:21000", "configVersion" : 1 } ], "ok" : 1 }
The config server replica set is now configured: mongodb-1 is the primary, and mongodb-2 and mongodb-3 are secondaries.
2.2.4 Configure the shard Replica Sets
Configure the shards on all three servers.
shard1 configuration:
cat >/data/mongodb/shard1/conf/shard.conf <
Verify that the service started and that shard1 is listening on port 22001, then log in to mongodb-1 to initialize the shard1 replica set:
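The heredoc body of shard.conf was also lost above. A hypothetical reconstruction for shard1, assuming MongoDB 3.4 defaults (only the ports, paths, and replica set name are taken from this document); shard2 and shard3 would differ only in the replSetName, port (22002/22003), and paths:

```shell
# Hypothetical shard1 config; shard2/shard3 are analogous with their own name/port/paths.
cat >/data/mongodb/shard1/conf/shard.conf <<'EOF'
systemLog:
  destination: file
  path: /data/mongodb/shard1/log/shard1.log
  logAppend: true
storage:
  dbPath: /data/mongodb/shard1/data
net:
  port: 22001
  bindIp: 0.0.0.0
replication:
  replSetName: shard1
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
mongod -f /data/mongodb/shard1/conf/shard.conf   # start the shard1 member
```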
mongo 172.20.6.10:22001
use admin
config = {_id:"shard1",members:[
  {_id:0,host:"172.20.6.10:22001"},
  {_id:1,host:"172.20.6.11:22001",arbiterOnly:true},
  {_id:2,host:"172.20.6.12:22001"},]
}
rs.initiate(config);
Check the replica set status (only part of the output is shown):
{ "_id" : 0, "name" : "172.20.6.10:22001", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", # mongodb-1 is primary "uptime" : 276, "optime" : { "ts" : Timestamp(1512373911, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2017-12-04T07:51:51Z"), "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1512373879, 1), "electionDate" : ISODate("2017-12-04T07:51:19Z"), "configVersion" : 1, "self" : true }, { "_id" : 1, "name" : "172.20.6.11:22001", "health" : 1, "state" : 7, "stateStr" : "ARBITER", # mongodb-2 is the arbiter "uptime" : 45, "lastHeartbeat" : ISODate("2017-12-04T07:51:53.597Z"), "lastHeartbeatRecv" : ISODate("2017-12-04T07:51:51.243Z"), "pingMs" : NumberLong(0), "configVersion" : 1 }, { "_id" : 2, "name" : "172.20.6.12:22001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", # mongodb-3 is a secondary "uptime" : 45, "optime" : { "ts" : Timestamp(1512373911, 1), "t" : NumberLong(1) },
The shard1 replica set is now configured: mongodb-1 is the primary, mongodb-2 is the arbiter, and mongodb-3 is a secondary.
Repeat the same steps for shard2 and shard3.
Note: initialize the shard2 replica set on mongodb-2, and the shard3 replica set on mongodb-3.
shard2 configuration file:
cat >/data/mongodb/shard2/conf/shard.conf <
shard3 configuration file:
cat >/data/mongodb/shard3/conf/shard.conf <
Initialize the shard2 replica set on mongodb-2:
mongo 172.20.6.11:22002    # log in to mongodb-2
use admin
config = {_id:"shard2",members:[
  {_id:0,host:"172.20.6.10:22002"},
  {_id:1,host:"172.20.6.11:22002"},
  {_id:2,host:"172.20.6.12:22002",arbiterOnly:true},]
}
rs.initiate(config);
Check the shard2 replica set status:
{ "_id" : 0, "name" : "172.20.6.10:22002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", # mongodb-1 is a secondary "uptime" : 15, "optime" : { "ts" : Timestamp(1512374668, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1512374668, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2017-12-04T08:04:28Z"), "optimeDurableDate" : ISODate("2017-12-04T08:04:28Z"), "lastHeartbeat" : ISODate("2017-12-04T08:04:30.527Z"), "lastHeartbeatRecv" : ISODate("2017-12-04T08:04:28.492Z"), "pingMs" : NumberLong(0), "syncingTo" : "172.20.6.11:22002", "configVersion" : 1 }, { "_id" : 1, "name" : "172.20.6.11:22002", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", # mongodb-2 is the primary "uptime" : 211, "optime" : { "ts" : Timestamp(1512374668, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2017-12-04T08:04:28Z"), "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1512374666, 1), "electionDate" : ISODate("2017-12-04T08:04:26Z"), "configVersion" : 1, "self" : true }, { "_id" : 2, "name" : "172.20.6.12:22002", # mongodb-3 is the arbiter "health" : 1, "state" : 7, "stateStr" : "ARBITER", "uptime" : 15, "lastHeartbeat" : ISODate("2017-12-04T08:04:30.527Z"), "lastHeartbeatRecv" : ISODate("2017-12-04T08:04:28.384Z"), "pingMs" : NumberLong(0), "configVersion" : 1 }
Log in to mongodb-3 to initialize the shard3 replica set:
mongo 172.20.6.12:22003    # log in to mongodb-3
use admin
config = {_id:"shard3",members:[
  {_id:0,host:"172.20.6.10:22003",arbiterOnly:true},
  {_id:1,host:"172.20.6.11:22003"},
  {_id:2,host:"172.20.6.12:22003"},]
}
rs.initiate(config);
Check the shard3 replica set status:
{ "_id" : 0, "name" : "172.20.6.10:22003", "health" : 1, "state" : 7, "stateStr" : "ARBITER", # mongodb-1 is the arbiter "uptime" : 18, "lastHeartbeat" : ISODate("2017-12-04T08:07:37.488Z"), "lastHeartbeatRecv" : ISODate("2017-12-04T08:07:36.224Z"), "pingMs" : NumberLong(0), "configVersion" : 1 }, { "_id" : 1, "name" : "172.20.6.11:22003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", # mongodb-2 is a secondary "uptime" : 18, "optime" : { "ts" : Timestamp(1512374851, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1512374851, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2017-12-04T08:07:31Z"), "optimeDurableDate" : ISODate("2017-12-04T08:07:31Z"), "lastHeartbeat" : ISODate("2017-12-04T08:07:37.488Z"), "lastHeartbeatRecv" : ISODate("2017-12-04T08:07:36.297Z"), "pingMs" : NumberLong(0), "syncingTo" : "172.20.6.12:22003", "configVersion" : 1 }, { "_id" : 2, "name" : "172.20.6.12:22003", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", # mongodb-3 is the primary "uptime" : 380, "optime" : { "ts" : Timestamp(1512374851, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2017-12-04T08:07:31Z"), "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1512374849, 1), "electionDate" : ISODate("2017-12-04T08:07:29Z"), "configVersion" : 1, "self" : true }
All three shard replica sets are now configured.
2.2.5 Configure the mongos Routers
The config servers and shard servers are now running on all three machines; next, configure the three mongos routers.
Because a mongos loads its configuration from the config servers into memory, it keeps no data directory of its own; its configdb option connects it to the config server replica set.
cat >/data/mongodb/mongos/conf/mongos.conf <
The config server replica set, the shard replica sets, and the mongos services are now all running, but sharding has not yet been configured, so the sharding features cannot be used. Log in to a mongos to enable sharding.
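The mongos.conf heredoc body was lost above as well. A hypothetical reconstruction (the port, log path, and replica set name come from this document; the rest is an assumed sketch): note there is no storage section, since mongos holds no data, and that the process is started with mongos rather than mongod.

```shell
# Hypothetical mongos.conf; configDB points at the replconf config server replica set.
cat >/data/mongodb/mongos/conf/mongos.conf <<'EOF'
systemLog:
  destination: file
  path: /data/mongodb/mongos/log/mongos.log
  logAppend: true
net:
  port: 20000
  bindIp: 0.0.0.0
sharding:
  configDB: replconf/172.20.6.10:21000,172.20.6.11:21000,172.20.6.12:21000
processManagement:
  fork: true
EOF
mongos -f /data/mongodb/mongos/conf/mongos.conf   # mongos, not mongod
```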
Log in to any one mongos:
mongo 172.20.6.10:20000
use admin
db.runCommand({addshard:"shard1/172.20.6.10:22001,172.20.6.11:22001,172.20.6.12:22001"})
db.runCommand({addshard:"shard2/172.20.6.10:22002,172.20.6.11:22002,172.20.6.12:22002"})
db.runCommand({addshard:"shard3/172.20.6.10:22003,172.20.6.11:22003,172.20.6.12:22003"})
Check the cluster:
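The cluster-check output was not captured here. One way to confirm the shards were added, assuming a mongos listening on port 20000 as in this setup, is to run listShards non-interactively:

```shell
# Ask the mongos for its registered shards; the response's "shards" array
# should contain shard1, shard2 and shard3 after the addshard commands above.
mongo 172.20.6.10:20000/admin --eval 'printjson(db.runCommand({listShards: 1}))'
```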
3. Testing
The config service, routing service, sharding service, and replica sets are now all connected. For inserted data to be sharded automatically, connect to a mongos and enable sharding for the target database and collection.
Note: sharding is configured from the admin database.
use admin
db.runCommand( { enablesharding : "kaliarchdb"});    # enable sharding on the kaliarchdb database
db.runCommand( { shardcollection : "kaliarchdb.table1", key : {_id:"hashed"} } )    # shard the table1 collection on a hashed _id key
This marks the table1 collection of kaliarchdb for sharding, distributing its documents across shard1, shard2, and shard3 by the hash of _id.
Check the sharding information:
Insert test data:
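The sharding-information output did not survive in this copy. From any mongos, the sh.status() shell helper prints the shard members, the sharded databases, and the chunk distribution (the host/port below follow this document's topology):

```shell
# Overall sharding status: shards, databases with sharding enabled,
# shard keys, and chunks per shard.
mongo 172.20.6.10:20000/admin --eval 'sh.status()'
```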
use kaliarchdb;
for (var i = 1; i <= 100000; i++) db.table1.save({_id:i,"test1":"testval1"});
Check the distribution (part of the output is omitted):
db.table1.stats() { "sharded" : true, "capped" : false, "ns" : "kaliarchdb.table1", "count" : 100000, # total count "size" : 3800000, "storageSize" : 1335296, "totalIndexSize" : 4329472, "indexSizes" : { "_id_" : 1327104, "_id_hashed" : 3002368 }, "avgObjSize" : 38, "nindexes" : 2, "nchunks" : 6, "shards" : { "shard1" : { "ns" : "kaliarchdb.table1", "size" : 1282690, "count" : 33755, # count on shard1 "avgObjSize" : 38, "storageSize" : 450560, "capped" : false, ...... "shard2" : { "ns" : "kaliarchdb.table1", "size" : 1259434, "count" : 33143, # count on shard2 "avgObjSize" : 38, "storageSize" : 442368, "capped" : false, ....... "shard3" : { "ns" : "kaliarchdb.table1", "size" : 1257876, "count" : 33102, # count on shard3 "avgObjSize" : 38, "storageSize" : 442368, "capped" : false,
The mongos, config server, and shard tiers of the architecture are now fully deployed. In a real production environment, the frontend mongos instances should also be made highly available to improve the availability of the whole system.
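Besides db.table1.stats(), the shell helper getShardDistribution() gives a compact per-shard summary (data size, document count, and estimated percentage per shard), which makes it easy to see that the hashed _id key spread the 100,000 documents roughly evenly:

```shell
# Per-shard data/doc distribution for the sharded collection kaliarchdb.table1.
mongo 172.20.6.10:20000/kaliarchdb --eval 'db.table1.getShardDistribution()'
```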
Title: Building a Highly Available MongoDB Cluster (Sharding)