DRBD (Distributed Replicated Block Device) is open-source software that synchronizes and mirrors data between servers at the block-device level, much like RAID 1 mirroring. It is usually combined with HA software such as Keepalived or Heartbeat to provide high availability.
DRBD is a block device designed for use in high-availability (HA) clusters. It works like network RAID 1: when data is written to the local filesystem, it is also sent over the network to another host and recorded there in identical form.
The local node (master) and the remote node (backup) are kept synchronized in real time, so if the local system fails, the remote host still holds an identical copy of the data and service can continue from it. In an HA cluster, DRBD can therefore replace a shared disk array: the data exists on both hosts simultaneously, and on failover the remote host simply uses its own copy.
創(chuàng)新互聯(lián)專注于企業(yè)成都全網(wǎng)營(yíng)銷推廣、網(wǎng)站重做改版、魚峰網(wǎng)站定制設(shè)計(jì)、自適應(yīng)品牌網(wǎng)站建設(shè)、H5頁(yè)面制作、購(gòu)物商城網(wǎng)站建設(shè)、集團(tuán)公司官網(wǎng)建設(shè)、外貿(mào)網(wǎng)站建設(shè)、高端網(wǎng)站制作、響應(yīng)式網(wǎng)頁(yè)設(shè)計(jì)等建站業(yè)務(wù),價(jià)格優(yōu)惠性價(jià)比高,為魚峰等各大城市提供網(wǎng)站開發(fā)制作服務(wù)。
一、實(shí)施環(huán)境
系統(tǒng)版本:CentOS 6.5
DRBD版本: drbd-8.3.15
Keepalived:keepalived-1.1.15
Master:192.168.10.128
Backup:192.168.10.130
2. Initial configuration
1) Add the following entries to /etc/hosts on both servers (128 and 130):
192.168.10.128 node1
192.168.10.130 node2
2) 優(yōu)化系統(tǒng)kernel參數(shù),直接上sysctl.conf配置如下:
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 10000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65530
net.ipv4.icmp_echo_ignore_all = 1
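These settings only take effect after they are loaded with `sysctl -p`. As a small sketch (the scratch-file path and the regex are illustrative, not part of the original setup), you can lint the file's `key = value` shape first, so a typo is caught before it reaches the live kernel:

```shell
# Copy a subset of the tuning lines into a scratch file (illustrative).
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_keepalive_time = 30
net.core.somaxconn = 262144
EOF

# Count lines that do NOT look like "key = value"; 0 means the file is clean.
bad=$(grep -Ecv '^[A-Za-z0-9._]+ = [0-9 ]+$' "$tmpconf")
echo "malformed lines: $bad"

# On the real host you would then load the settings with:
#   sysctl -p /etc/sysctl.conf
rm -f "$tmpconf"
```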
3) Add a dedicated disk to each server for DRBD storage; here it is /dev/sdb, a 20 GB disk.
Wipe the start of the device so no stale filesystem signature remains (the filesystem will be created later on /dev/drbd0, not on /dev/sdb directly):
dd if=/dev/zero of=/dev/sdb bs=1M count=1 ; sync
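The wipe can be rehearsed safely against a scratch file before pointing dd at the real /dev/sdb. This sketch (the scratch-file path is illustrative) zeroes 1 MiB exactly as above and then verifies every byte really is zero:

```shell
img=$(mktemp)

# Same invocation as above, aimed at a scratch file instead of /dev/sdb.
dd if=/dev/zero of="$img" bs=1M count=1 2>/dev/null
sync

# The file must be exactly 1 MiB and contain no non-zero bytes.
size=$(wc -c < "$img")
nonzero=$(tr -d '\0' < "$img" | wc -c)
echo "size=$size non-zero=$nonzero"

rm -f "$img"
```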
3. DRBD installation and configuration
Install via yum:
rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
yum -y install drbd83* kmod-drbd83 ; modprobe drbd
Install from source:
wget http://oss.linbit.com/drbd/8.4/drbd-8.4.4.tar.gz
tar -xzf drbd-8.4.4.tar.gz && cd drbd-8.4.4
./configure --prefix=/usr/local/drbd --with-km
make KDIR=/usr/src/kernels/$(uname -r)/
make install
cp drbd/drbd.ko /lib/modules/`uname -r`/kernel/lib/
After either the yum or the source install, run modprobe drbd to load the DRBD kernel module.
Once the module is loaded, edit /etc/drbd.conf so it reads:
global {
    usage-count yes;
}
common {
    syncer { rate 100M; }
}
resource r0 {
    protocol C;
    startup {
    }
    disk {
        on-io-error detach;
        #size 1G;
    }
    net {
    }
    on node1 {
        device /dev/drbd0;
        disk /dev/sdb;
        address 192.168.10.128:7898;
        meta-disk internal;
    }
    on node2 {
        device /dev/drbd0;
        disk /dev/sdb;
        address 192.168.10.130:7898;
        meta-disk internal;
    }
}
After saving the configuration, initialize the resource metadata and start DRBD:
drbdadm create-md r0 ;/etc/init.d/drbd restart ;/etc/init.d/drbd status
Run all of the steps above on both servers. Once both are configured, run /etc/init.d/drbd status on node2: both nodes should report Secondary, meaning neither side is primary yet. Promote node1 to master, create the filesystem, and mount it with the following commands:
drbdadm -- --overwrite-data-of-peer primary all
mkfs.ext4 /dev/drbd0
mkdir /app ;mount /dev/drbd0 /app
DRBD is now fully configured. Anything written to /app is mirrored to the peer; if the master crashes or fails in some other way, you can fail over to the backup by hand without losing any data. The two servers effectively form a network RAID 1.
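This no-data-loss claim can be checked from /proc/drbd, where cs: is the connection state, ro: the two roles, and oos: the amount of out-of-sync data in KB (0 means the mirrors are identical). Below is a sketch of pulling those fields out; the two sample lines are hard-coded stand-ins for real /proc/drbd output, so the parsing can be shown without a live cluster:

```shell
# Illustrative /proc/drbd lines (not captured from this cluster).
status='0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'
counters='ns:2048 nr:0 dw:2048 dr:1024 al:5 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0'

cs=$(echo "$status"    | grep -o 'cs:[A-Za-z]*'           | cut -d: -f2)
ro=$(echo "$status"    | grep -o 'ro:[A-Za-z]*/[A-Za-z]*' | cut -d: -f2)
oos=$(echo "$counters" | grep -o 'oos:[0-9]*'             | cut -d: -f2)

echo "state=$cs roles=$ro out-of-sync=${oos}KB"
```

On a live node, the same expressions run against `cat /proc/drbd` instead of the sample strings.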
4. Keepalived configuration
wget http://www.keepalived.org/software/keepalived-1.1.15.tar.gz
tar -xzvf keepalived-1.1.15.tar.gz
cd keepalived-1.1.15
./configure ; make ; make install
DIR=/usr/local
cp $DIR/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
cp $DIR/etc/sysconfig/keepalived /etc/sysconfig/
mkdir -p /etc/keepalived
cp $DIR/sbin/keepalived /usr/sbin/
兩臺(tái)服務(wù)器均安裝keepalived,并進(jìn)行配置,首先在node1(master)上配置,keepalived.conf內(nèi)容如下:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_mysql {
    script "/data/sh/check_mysql.sh"
    interval 5
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.100
    }
    track_script {
        check_mysql
    }
}
然后創(chuàng)建check_mysql.sh檢測(cè)腳本,內(nèi)容如下:
#!/bin/sh
A=`ps -C mysqld --no-header | wc -l`
if [ $A -eq 0 ];then
    /bin/umount /app/
    drbdadm secondary r0
    killall keepalived
fi

Next, configure node2 (the backup); its keepalived.conf is:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_sync_group VI {
    group {
        VI_1
    }
    notify_master /data/sh/master.sh
    notify_backup /data/sh/backup.sh
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.100
    }
}
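The whole decision in check_mysql.sh is "zero mysqld processes means surrender the DRBD resource". As a sketch, that decision can be isolated in a function and exercised without touching mysqld, the mount, or DRBD; the side-effect commands are only echoed here:

```shell
# Succeeds (exit 0) when the node should demote itself,
# i.e. when no mysqld process is left running.
demote_needed() {
    # $1 = mysqld process count, e.g. from: ps -C mysqld --no-header | wc -l
    [ "$1" -eq 0 ]
}

for count in 0 3; do
    if demote_needed "$count"; then
        echo "count=$count -> demote: umount /app, drbdadm secondary r0, kill keepalived"
    else
        echo "count=$count -> mysqld alive, keep the VIP"
    fi
done
```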
創(chuàng)建master.sh檢測(cè)腳本,內(nèi)容如下:
#!/bin/bash
drbdadm primary r0
/bin/mount /dev/drbd0 /app/
/etc/init.d/mysqld start
創(chuàng)建backup.sh檢測(cè)腳本,內(nèi)容如下:
#!/bin/bash
/etc/init.d/mysqld stop
/bin/umount /dev/drbd0
drbdadm secondary r0
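master.sh and backup.sh must mirror each other in strict reverse order: promote, mount, start mysqld on the way up; stop mysqld, umount, demote on the way down, so the filesystem is never unmounted under a running database. A sketch that models both transitions with the real commands replaced by echoes, so the ordering itself can be checked:

```shell
# Promotion path (what master.sh does), as echoes.
to_master() {
    echo "drbdadm primary r0"
    echo "mount /dev/drbd0 /app"
    echo "service mysqld start"
}

# Demotion path (what backup.sh does): the exact reverse.
to_backup() {
    echo "service mysqld stop"
    echo "umount /dev/drbd0"
    echo "drbdadm secondary r0"
}

to_master
to_backup
```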
發(fā)生腦裂恢復(fù)步驟如下:
Master執(zhí)行命令:
drbdadm secondary r0
drbdadm -- --discard-my-data connect r0
drbdadm -- --overwrite-data-of-peer primary all
Backup上執(zhí)行命令:
drbdadm secondary r0
drbdadm connect r0
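DRBD reports a split-brain in the kernel log before dropping the connection, which is usually how you discover that the recovery procedure above is needed. Here is a sketch of spotting the message; the log line is a hard-coded sample mimicking DRBD's wording, not output captured from this cluster:

```shell
# Illustrative kernel-log line, in the style of DRBD's split-brain message.
logline='block drbd0: Split-Brain detected but unresolved, dropping connection!'

if echo "$logline" | grep -qi 'split-brain detected'; then
    echo "split-brain reported: manual recovery required"
else
    echo "no split-brain in this line"
fi
```

On a live node you would grep dmesg or /var/log/messages instead of the sample variable.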