HA (High Availability)
HA relies on distributed management of the edit log.
1. The problem
If the single NameNode fails, the whole cluster becomes unusable.
The fix: configure two NameNodes, one active and one standby.
2. How it works
1. The two NameNodes keep their in-memory metadata synchronized; on startup, a NameNode reads the fsimage file.
2. Edit-log safety
The edit log is stored on an odd number (2n+1) of JournalNodes; a log write succeeds once a majority (n+1) of them have written it.
ZooKeeper monitoring
ZooKeeper monitors both NameNodes; if one fails, it triggers automatic failover.
ZooKeeper is sensitive to clock skew between nodes, so keep the clocks synchronized with NTP.
3. How does a client know which NameNode to contact?
Clients go through a failover proxy.
Fencing
sshfence is used, which requires passwordless SSH between the two NameNodes.
Installation and configuration
1. Base environment
| Host  | IP             | Processes |
|-------|----------------|-----------|
| node1 | 192.168.103.26 | NameNode, DFSZKFailoverController |
| node2 | 192.168.103.27 | NameNode, DFSZKFailoverController |
| node3 | 192.168.103.28 | DataNode, JournalNode, QuorumPeerMain |
| node4 | 192.168.103.29 | DataNode, JournalNode, QuorumPeerMain |
| node5 | 192.168.103.30 | DataNode, JournalNode, QuorumPeerMain |
Map hostnames to IP addresses:
vim /etc/hosts
192.168.103.26 node1
192.168.103.27 node2
192.168.103.28 node3
192.168.103.29 node4
192.168.103.30 node5
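The hosts file must be identical on every node. The loop below is only an illustration: it prints the scp commands that would distribute the file (root SSH access is assumed); drop the leading `echo` to actually run them.

```shell
# Dry run: print the scp commands that push /etc/hosts to the other nodes;
# remove the leading 'echo' to actually copy the file.
for h in node2 node3 node4 node5; do
  echo "scp /etc/hosts root@$h:/etc/hosts"
done
```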
Set up passwordless SSH logins between the nodes.
node1:
ssh-keygen -t rsa -P '' — generates id_rsa and id_rsa.pub under ~/.ssh/
ssh-copy-id -i ~/.ssh/id_rsa.pub <node> (for node1, node2, node3, node4, node5)
Repeat the same steps on node2.
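Copying the key to five hosts one by one is tedious. The loop below is a sketch that only prints the ssh-copy-id invocations (the root account is an assumption); drop the leading `echo` to execute them.

```shell
# Dry run: print one ssh-copy-id command per cluster node;
# remove the leading 'echo' to actually push the public key.
nodes="node1 node2 node3 node4 node5"
for h in $nodes; do
  echo "ssh-copy-id -i ~/.ssh/id_rsa.pub root@$h"
done
```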
Configure time synchronization, using node1 as the NTP server.
1. yum install ntp -y (all nodes)
2. node1
vim /etc/ntp.conf
server 210.72.145.44 # National Time Service Center of China
server 127.127.1.0 # serve LAN clients from the local clock
fudge 127.127.1.0 stratum 10
systemctl start ntpd
3. On node2, node3, node4, node5:
ntpdate node1
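A one-shot ntpdate drifts again over time; a cron entry like the following sketch (file path and interval are assumptions) keeps node2–node5 re-syncing against node1:

```
# /etc/cron.d/ntpdate-node1 (sketch): re-sync against node1 every 10 minutes
*/10 * * * * root /usr/sbin/ntpdate node1 >/dev/null 2>&1
```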
2. Install Hadoop (plus the JDK and ZooKeeper)
1. tar -zxvf jdk-8u171-linux-x64.tar.gz -C /
mv jdk1.8.0_171/ jdk
tar -zxvf hadoop-2.7.7.tar.gz -C /
mv hadoop-2.7.7/ Hadoop
tar -zxvf zookeeper-3.4.10.tar.gz -C /
mv zookeeper-3.4.10 zookeeper
3. vim /etc/profile
export JAVA_HOME=/jdk
export HADOOP_HOME=/Hadoop
export ZOOKEEPER_HOME=/zookeeper
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin
source /etc/profile
scp /etc/profile node2:/etc/ (likewise for node3, node4, node5)
scp -r /jdk node2:/ (likewise for node3, node4, node5)
4. Configure ZooKeeper
In the zookeeper directory, create a zkdata directory.
Create a myid file inside it; its value is 1 on node3, 2 on node4 and 3 on node5.
scp -r /zookeeper node2:/ (likewise for node3, node4, node5)
Then edit the myid file on each ZooKeeper node accordingly.
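The guide does not show zoo.cfg itself. A minimal sketch consistent with the steps above (the dataDir location and timing values are assumptions; the server IDs must match the myid values 1–3 on node3–node5):

```
# /zookeeper/conf/zoo.cfg (sketch)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/zkdata
clientPort=2181
server.1=node3:2888:3888
server.2=node4:2888:3888
server.3=node5:2888:3888
```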
5. Configure Hadoop (the crucial part!)
1. hadoop-env.sh
export JAVA_HOME=/jdk
2. core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node3:2181,node4:2181,node5:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/tmp</value>
  </property>
</configuration>
3. hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>node1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>node2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>node2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node3:8485;node4:8485;node5:8485/ns1</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/journalnode</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
vim slaves
node3
node4
node5
4. Start the cluster
On node3, node4, node5:
hadoop-daemon.sh start journalnode
zkServer.sh start
On node1:
hdfs namenode -format
scp -r /hadoop/tmp node2:/hadoop/
hdfs zkfc -formatZK
start-dfs.sh
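After start-dfs.sh it is worth confirming that every daemon came up. The loop below just prints a checklist of the verification commands (nn1/nn2 are the NameNode IDs from hdfs-site.xml); drop the `echo` and run them on node1 against the live cluster.

```shell
# Dry run: print the commands used to verify the HA cluster after startup;
# remove the 'echo' to execute them on a cluster node.
for cmd in "jps" \
           "hdfs haadmin -getServiceState nn1" \
           "hdfs haadmin -getServiceState nn2"; do
  echo "$cmd"
done
```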
5. Verify HDFS HA
In a browser, check the NameNode status pages on node1 and node2 (port 50070); one should show active and the other standby.
![](/upload/otherpic55/146158.jpg)
![](/upload/otherpic55/146160.jpg)
Stop the active NameNode to test automatic failover; the standby should take over as active:
hadoop-daemon.sh stop namenode
![](/upload/otherpic55/146161.jpg)
Title: HDFS高可用環境搭建 (Setting up an HDFS high-availability environment)
URL: http://weahome.cn/article/gjighp.html