This article explains how to upgrade Hadoop in a running Hadoop cluster; the procedure below is offered as a reference for anyone facing the same task.
The cluster was originally installed with Hadoop 2.6 and is now being upgraded to 2.7.
Note that HBase runs on this cluster, so HBase has to be stopped before the upgrade and restarted afterwards.
The upgrade steps are as follows.
Cluster IP list:
Namenode: 192.168.143.46, 192.168.143.103
Journalnode: 192.168.143.101, 192.168.143.102, 192.168.143.103
Datanode & HBase regionserver: 192.168.143.196, 192.168.143.231, 192.168.143.182, 192.168.143.235, 192.168.143.41, 192.168.143.127
HBase master: 192.168.143.103, 192.168.143.101
Zookeeper: 192.168.143.101, 192.168.143.102, 192.168.143.103
1. First confirm the path Hadoop runs from, distribute the new release to that path on every node, and unpack it.
# ll /usr/local/hadoop/
total 493244
drwxrwxr-x 9 root root      4096 Mar 21  2017 hadoop-release -> hadoop-2.6.0-EDH-0u1-SNAPSHOT-HA-SECURITY
drwxr-xr-x 9 root root      4096 Oct 11 11:06 hadoop-2.7.1
-rw-r--r-- 1 root root 194690531 Oct  9 10:55 hadoop-2.7.1.tar.gz
drwxrwxr-x 7 root root      4096 May 21  2016 hbase-1.1.3
-rw-r--r-- 1 root root 128975247 Apr 10  2017 hbase-1.1.3.tar.gz
lrwxrwxrwx 1 root root        29 Apr 10  2017 hbase-release -> /usr/local/hadoop/hbase-1.1.3
Because this is an upgrade, the configuration does not change at all: copy the etc/hadoop directory from the old hadoop-2.6.0 tree into hadoop-2.7.1, replacing the shipped defaults.
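A minimal sketch of this preparation step (distribute, unpack, carry the configuration over), assuming root ssh access from the jump host; the three hosts shown stand in for the full list of ten nodes:

for h in 192.168.143.46 192.168.143.103 192.168.143.101; do
  scp /usr/local/hadoop/hadoop-2.7.1.tar.gz "$h":/usr/local/hadoop/
  ssh -q "$h" "cd /usr/local/hadoop && tar -xzf hadoop-2.7.1.tar.gz"
  # hadoop-release still points at the 2.6 tree here, so this copies the live config.
  ssh -q "$h" "cp -r /usr/local/hadoop/hadoop-release/etc/hadoop/* /usr/local/hadoop/hadoop-2.7.1/etc/hadoop/"
done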
With that, the pre-upgrade preparation is complete.
The upgrade procedure itself follows. Everything is run from a single jump host through shell scripts, which saves logging in to each node over ssh again and again.
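For reference, a helper of the kind such a script might contain; the host variables and the run_as function are illustrative, not taken from the original scripts:

# Host lists from the table above.
NAMENODES="192.168.143.46 192.168.143.103"
JOURNALNODES="192.168.143.101 192.168.143.102 192.168.143.103"
DATANODES="192.168.143.196 192.168.143.231 192.168.143.182 192.168.143.235 192.168.143.41 192.168.143.127"
# run_as USER CMD HOST...: run CMD as USER on each host in turn.
run_as() {
  local user="$1" cmd="$2"; shift 2
  local h
  for h in "$@"; do
    # Single-quote the command so the remote shell hands it to su -c as one argument.
    ssh -t -q "$h" sudo su -l "$user" -c "'$cmd'"
  done
}
# Example: run_as hdfs "jps" $DATANODES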
## Stop HBase (run as the hbase user)
2. Stop the HBase masters (run as the hbase user).
Check the status page to confirm which master is active, and stop the standby master first:
http://192.168.143.101:16010/master-status
master:
ssh -t -q 192.168.143.103 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ master"
ssh -t -q 192.168.143.103 sudo su -l hbase -c "jps"
ssh -t -q 192.168.143.101 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ master"
ssh -t -q 192.168.143.101 sudo su -l hbase -c "jps"
3. Stop the HBase regionservers (run as the hbase user).
ssh -t -q 192.168.143.196 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ regionserver"
ssh -t -q 192.168.143.231 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ regionserver"
ssh -t -q 192.168.143.182 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ regionserver"
ssh -t -q 192.168.143.235 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ regionserver"
ssh -t -q 192.168.143.41 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ regionserver"
ssh -t -q 192.168.143.127 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ stop\ regionserver"
Check that the processes are gone:
ssh -t -q 192.168.143.196 sudo su -l hbase -c "jps"
ssh -t -q 192.168.143.231 sudo su -l hbase -c "jps"
ssh -t -q 192.168.143.182 sudo su -l hbase -c "jps"
ssh -t -q 192.168.143.235 sudo su -l hbase -c "jps"
ssh -t -q 192.168.143.41 sudo su -l hbase -c "jps"
ssh -t -q 192.168.143.127 sudo su -l hbase -c "jps"
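A scripted version of this check, reusing the hypothetical DATANODES list from the sketch above; no output means every regionserver is down:

for h in $DATANODES; do
  # jps lists Java processes; flag any node still running an HRegionServer.
  ssh -t -q "$h" sudo su -l hbase -c "jps" | grep -q HRegionServer && echo "$h: regionserver still running"
done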
## Stop HDFS services
4. First confirm via the web UI which namenode is active; later, that namenode must be the one started first:
https://192.168.143.46:50470/dfshealth.html#tab-overview
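The same check can be scripted with hdfs haadmin. The service IDs nn1 and nn2 below are assumptions; use whatever dfs.ha.namenodes.<nameservice> defines in hdfs-site.xml:

ssh -t -q 192.168.143.46 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/bin/hdfs\ haadmin\ -getServiceState\ nn1"
ssh -t -q 192.168.143.46 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/bin/hdfs\ haadmin\ -getServiceState\ nn2"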
5. Stop the NameNodes (run as the hdfs user).
NN: stop the standby namenode first.
ssh -t -q 192.168.143.103 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ namenode"
ssh -t -q 192.168.143.46 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ namenode"
Check status:
ssh -t -q 192.168.143.103 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.46 sudo su -l hdfs -c "jps"
6. Stop the DataNodes (run as the hdfs user).
ssh -t -q 192.168.143.196 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ datanode"
ssh -t -q 192.168.143.231 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ datanode"
ssh -t -q 192.168.143.182 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ datanode"
ssh -t -q 192.168.143.235 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ datanode"
ssh -t -q 192.168.143.41 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ datanode"
ssh -t -q 192.168.143.127 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ datanode"
7. Stop the ZKFCs (run as the hdfs user).
ssh -t -q 192.168.143.46 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ zkfc"
ssh -t -q 192.168.143.103 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ zkfc"
8. Stop the JournalNodes (run as the hdfs user).
JN:
ssh -t -q 192.168.143.101 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ journalnode"
ssh -t -q 192.168.143.102 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ journalnode"
ssh -t -q 192.168.143.103 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ stop\ journalnode"
### Back up the NameNode data. This is a production environment, so the existing data must be backed up in case the upgrade fails and has to be rolled back.
9. Back up namenode1.
ssh -t -q 192.168.143.46 "cp -r /data1/dfs/name /data1/dfs/name.bak.20171011-2; ls -al /data1/dfs/; du -sm /data1/dfs/*"
ssh -t -q 192.168.143.46 "cp -r /data2/dfs/name /data2/dfs/name.bak.20171011-2; ls -al /data2/dfs/; du -sm /data2/dfs/*"
10. Back up namenode2.
ssh -t -q 192.168.143.103 "cp -r /data1/dfs/name /data1/dfs/name.bak.20171011-2;ls -al /data1/dfs/;du -sm /data1/dfs/*"
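Before moving on, the copies are worth a sanity check. A small sketch using diff -rq; this is safe because HDFS is already stopped, so the directories are static, and an empty diff means the backup matches:

ssh -t -q 192.168.143.46 "diff -rq /data1/dfs/name /data1/dfs/name.bak.20171011-2 && echo nn1 /data1 ok"
ssh -t -q 192.168.143.46 "diff -rq /data2/dfs/name /data2/dfs/name.bak.20171011-2 && echo nn1 /data2 ok"
ssh -t -q 192.168.143.103 "diff -rq /data1/dfs/name /data1/dfs/name.bak.20171011-2 && echo nn2 /data1 ok"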
11. Back up the journalnode data.
ssh -t -q 192.168.143.101 "cp -r /data1/journalnode /data1/journalnode.bak.20171011; ls -al /data1/; du -sm /data1/*"
ssh -t -q 192.168.143.102 "cp -r /data1/journalnode /data1/journalnode.bak.20171011; ls -al /data1/; du -sm /data1/*"
ssh -t -q 192.168.143.103 "cp -r /data1/journalnode /data1/journalnode.bak.20171011; ls -al /data1/; du -sm /data1/*"
The journal path can be found in hdfs-site.xml:
dfs.journalnode.edits.dir: /data1/journalnode
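If there is any doubt, the value can also be read from the live configuration with hdfs getconf instead of opening the file, for example on one of the journalnodes:

ssh -t -q 192.168.143.101 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/bin/hdfs\ getconf\ -confKey\ dfs.journalnode.edits.dir"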
### Upgrade
12. Copy the files (already handled in advance; see step 1).
Switch the symlink to the 2.7.1 release; per-host template:
ssh -t -q $h "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
13. Switch the symlink on every node (run as root).
ssh -t -q 192.168.143.46 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.103 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.101 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.102 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.196 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.231 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.182 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.235 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.41 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
ssh -t -q 192.168.143.127 "cd /usr/local/hadoop; rm hadoop-release; ln -s hadoop-2.7.1 hadoop-release"
Confirm the result:
ssh -t -q 192.168.143.46 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.103 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.101 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.102 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.196 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.231 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.182 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.235 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.41 "cd /usr/local/hadoop; ls -al"
ssh -t -q 192.168.143.127 "cd /usr/local/hadoop; ls -al"
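Rather than eyeballing ten ls -al listings, a readlink loop makes a wrong link stand out; every line of output should end in hadoop-2.7.1:

for h in 192.168.143.46 192.168.143.103 192.168.143.101 192.168.143.102 192.168.143.196 192.168.143.231 192.168.143.182 192.168.143.235 192.168.143.41 192.168.143.127; do
  # readlink prints the symlink target for each node.
  echo -n "$h: "; ssh -q "$h" "readlink /usr/local/hadoop/hadoop-release"
done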
### Start HDFS (run as the hdfs user)
14. Start the JournalNodes.
JN:
ssh -t -q 192.168.143.101 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ journalnode"
ssh -t -q 192.168.143.102 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ journalnode"
ssh -t -q 192.168.143.103 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ journalnode"
ssh -t -q 192.168.143.101 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.102 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.103 sudo su -l hdfs -c "jps"
15. Start the first NameNode, with the -upgrade flag.
ssh 192.168.143.46
su - hdfs
/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh start namenode -upgrade
16. Confirm its status; only after it is fully healthy may the second namenode be started:
https://192.168.143.46:50470/dfshealth.html#tab-overview
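Besides the web UI, upgrade progress and errors show up in the namenode log. A sketch; the path assumes the default logs/ directory under the release tree and will differ if HADOOP_LOG_DIR is customized:

ssh -t -q 192.168.143.46 sudo su -l hdfs -c "tail\ -n\ 50\ /usr/local/hadoop/hadoop-release/logs/hadoop-hdfs-namenode-*.log"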
17. Start the first ZKFC.
ssh 192.168.143.46
su - hdfs
/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh start zkfc
18. Start the second NameNode.
ssh 192.168.143.103
su - hdfs
/usr/local/hadoop/hadoop-release/bin/hdfs namenode -bootstrapStandby
/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh start namenode
19. Start the second ZKFC.
ssh 192.168.143.103
su - hdfs
/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh start zkfc
20. Start the DataNodes.
ssh -t -q 192.168.143.196 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ datanode"
ssh -t -q 192.168.143.231 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ datanode"
ssh -t -q 192.168.143.182 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ datanode"
ssh -t -q 192.168.143.235 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ datanode"
ssh -t -q 192.168.143.41 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ datanode"
ssh -t -q 192.168.143.127 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/sbin/hadoop-daemon.sh\ start\ datanode"
Confirm status:
ssh -t -q 192.168.143.196 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.231 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.182 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.235 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.41 sudo su -l hdfs -c "jps"
ssh -t -q 192.168.143.127 sudo su -l hdfs -c "jps"
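For a cluster-wide confirmation that all six datanodes re-registered, hdfs dfsadmin -report can be run against the namenode; the summary should show "Live datanodes (6)":

ssh -t -q 192.168.143.46 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/bin/hdfs\ dfsadmin\ -report"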
21. Once everything is healthy, start HBase (run as the hbase user).
Start the HBase masters; it is best to start the previously active master first:
ssh -t -q 192.168.143.101 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ master"
ssh -t -q 192.168.143.103 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ master"
Start the HBase regionservers:
ssh -t -q 192.168.143.196 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ regionserver"
ssh -t -q 192.168.143.231 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ regionserver"
ssh -t -q 192.168.143.182 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ regionserver"
ssh -t -q 192.168.143.235 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ regionserver"
ssh -t -q 192.168.143.41 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ regionserver"
ssh -t -q 192.168.143.127 sudo su -l hbase -c "/usr/local/hadoop/hbase-release/bin/hbase-daemon.sh\ start\ regionserver"
22. HBase region balancing has to be switched on and off manually.
Log in to the HBase shell and run the following commands.
Enable:
balance_switch true
Disable:
balance_switch false
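Since everything else in this procedure is scripted, the toggle can also be done non-interactively; a sketch, assuming it is run on one of the HBase nodes as the hbase user (the shell reads the command from stdin and exits):

echo "balance_switch true" | /usr/local/hadoop/hbase-release/bin/hbase shell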
23. Do not finalize in this run. Let the system operate for a week; only after it has proven stable should the upgrade be finalized.
Note: during this period, disk usage may grow quickly. Some space is released once finalize has been run.
Finalize upgrade: hdfs dfsadmin -finalizeUpgrade
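When the week is up and the cluster is confirmed stable, the finalize can be issued from the jump host in the same pattern as the other hdfs commands:

ssh -t -q 192.168.143.46 sudo su -l hdfs -c "/usr/local/hadoop/hadoop-release/bin/hdfs\ dfsadmin\ -finalizeUpgrade"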