This article walks through how to set up a Hadoop environment on SUSE. I hope you find something useful in it; let's dive in.
【Environment】:
I often run into problems caused by mismatched dependency versions, and this time I was careless: assuming Java would not be an issue, I went ahead with the OpenJDK 1.6 that had been installed via YaST. Predictably, many things broke. After repeated debugging and searching, a friend's suggestion prompted me to switch JDK versions, which solved the problem. So here is the environment I ended up with:
Java environment: java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
OS: openSUSE 11.2 (x86_64)
Hadoop version: Hadoop-1.1.2.tar.gz
【Step 1:】Create the hadoop user and group
Group: hadoop
User: hadoop -> /home/hadoop
Grant sudo rights: vi /etc/sudoers and add the line: hadoop ALL=(ALL:ALL) ALL
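The steps above can be sketched as a short root shell session. This is a minimal sketch, not verbatim from the article: the group, user name, and home directory follow Step 1, while the shell choice and the visudo note are my assumptions; `passwd` is left commented out because it prompts interactively.

```shell
# Minimal sketch of Step 1; must be run as root (the guard makes it a
# no-op otherwise).
SUDOERS_LINE='hadoop ALL=(ALL:ALL) ALL'
if [ "$(id -u)" -eq 0 ]; then
    groupadd hadoop
    useradd -m -g hadoop -d /home/hadoop -s /bin/bash hadoop
    # passwd hadoop                        # set a login password interactively
    echo "$SUDOERS_LINE" >> /etc/sudoers   # safer: add the line via visudo
fi
```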
【Step 2:】Install Hadoop
After extracting with tar xf, my directory layout looked like this (for reference):
/home/hadoop/hadoop-home/[bin|conf]
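For reference, the extraction itself might look like this. The tarball name comes from the Environment section; the rename to hadoop-home is my assumption to match the layout above, and the snippet is guarded so it is a no-op if the tarball is not present.

```shell
# Run from the hadoop user's home directory where the tarball was saved.
if [ -f hadoop-1.1.2.tar.gz ]; then
    tar xzf hadoop-1.1.2.tar.gz     # unpacks to hadoop-1.1.2/
    mv hadoop-1.1.2 hadoop-home     # matches the layout used in this article
fi
```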
【Step 3:】Configure SSH (so starting Hadoop does not prompt for a password)
Installing ssh itself is omitted here.
ssh-keygen -t rsa -P "" [press Enter through all the prompts]
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Try: ssh localhost [check that it no longer asks for a password]
【Step 4:】Install Java
See the 【Environment】section for the version.
【Step 5:】Configure conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_17xxx #[your JDK directory]
export HADOOP_INSTALL=/home/hadoop/hadoop-home
export PATH=$PATH:$HADOOP_INSTALL/bin #[the directory containing the hadoop scripts]
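A quick way to confirm the variables took effect after sourcing the file. The JAVA_HOME value here is an assumption based on the JDK version in the Environment section; adjust it to your actual install path.

```shell
export JAVA_HOME=/usr/java/jdk1.7.0_51          # assumed install dir; adjust
export HADOOP_INSTALL=/home/hadoop/hadoop-home
export PATH=$PATH:$HADOOP_INSTALL/bin
# sanity check: the hadoop scripts directory should now be on PATH
echo "$PATH" | grep -q "$HADOOP_INSTALL/bin" && echo "PATH ok"
```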
【Step 6:】Standalone mode
hadoop version
mkdir input
man find > input/test.txt
hadoop jar hadoop-examples-1.1.2.jar wordcount input output
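This is not Hadoop itself, but a plain-shell approximation of what the wordcount example computes; it is handy for sanity-checking the job's output against a small input (the /tmp/wc-demo path is just for illustration).

```shell
mkdir -p /tmp/wc-demo
printf 'hello world\nhello hadoop\n' > /tmp/wc-demo/test.txt
# split on whitespace, then count each distinct word (what wordcount does)
tr -s '[:space:]' '\n' < /tmp/wc-demo/test.txt | sort | uniq -c | awk '{print $2"\t"$1}'
# -> hadoop 1, hello 2, world 1
```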
【Step 7:】Pseudo-distributed mode (namenode, datanode, jobtracker, tasktracker, etc. all on one machine)
Edit conf/core-site.xml, conf/hdfs-site.xml, and conf/mapred-site.xml:
core-site.xml
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
</configuration>
hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/usr/local/hadoop/datalog1,/usr/local/hadoop/datalog2</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/usr/local/hadoop/data1,/usr/local/hadoop/data2</value>
    </property>
</configuration>
mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>
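The configuration files above reference several local directories that should exist (with the right ownership) before formatting HDFS. A guarded sketch to create them; the HADOOP_DATA_BASE override is my addition so the snippet can be relocated.

```shell
BASE=${HADOOP_DATA_BASE:-/usr/local/hadoop}
if mkdir -p "$BASE/tmp" "$BASE/datalog1" "$BASE/datalog2" \
            "$BASE/data1" "$BASE/data2" 2>/dev/null; then
    echo "data directories created under $BASE"
else
    echo "could not create $BASE: check permissions (run as root, then chown to hadoop)" >&2
fi
```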
【Step 8:】Start Hadoop
Format the namenode: hadoop namenode -format
cd bin
sh start-all.sh
hadoop@linux-peterguo:~/hadoop-home/bin> sh start-all.sh
starting namenode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-namenode-linux-peterguo.out
localhost: starting datanode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-datanode-linux-peterguo.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-secondarynamenode-linux-peterguo.out
starting jobtracker, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-jobtracker-linux-peterguo.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-tasktracker-linux-peterguo.out
Use jps to check that all five Java processes started: jobtracker / tasktracker / namenode / datanode / secondarynamenode
You can also verify the services through Hadoop's built-in web interfaces for monitoring cluster health:
http://localhost:50030/ - Hadoop admin interface (JobTracker)
http://localhost:50060/ - Hadoop TaskTracker status
http://localhost:50070/ - Hadoop DFS status
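The jps check can be scripted. This helper is not part of Hadoop; it is a small sketch that greps jps output for the five daemon class names of the pseudo-distributed setup and flags any that are missing.

```shell
check_daemons() {
    for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
        # jps prints "<pid> <ClassName>"; anchor on end-of-line so that
        # "NameNode" does not also match "SecondaryNameNode"
        if jps 2>/dev/null | grep -q " $d$"; then
            echo "$d: running"
        else
            echo "$d: NOT running"
        fi
    done
}
check_daemons
```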
【Step 9:】Work with files on DFS
hadoop dfs -mkdir input
hadoop dfs -copyFromLocal input/test.txt input
hadoop dfs -ls input
【Step 10:】Run MapReduce over the DFS data
hadoop jar hadoop-examples-1.1.2.jar wordcount input output
hadoop dfs -cat output/*
【Step 11:】Shut down
stop-all.sh
Having read this far, you should now have a working idea of how to set up a Hadoop environment on SUSE. Thanks for reading!