This article walks through compiling Hadoop 2.5.0 on 64-bit CentOS 7 and installing it as a distributed cluster. The steps are straightforward and practical; follow along on your own machines.
Environment: CentOS 7.0 x64

192.168.1.7  master
192.168.1.8  slave1
192.168.1.9  slave2
192.168.1.10 slave3
# systemctl status firewalld.service    -- check the firewall status
# systemctl stop firewalld.service      -- stop the firewall
# systemctl disable firewalld.service   -- disable the firewall permanently
# systemctl status sshd.service         -- check the sshd status
# yum install openssh-server openssh-clients
# yum -y install vim
# vim /etc/sysconfig/network-scripts/ifcfg-eno16777736
BOOTPROTO="static"
ONBOOT="yes"
IPADDR0="192.168.1.7"
PREFIX0="24"
GATEWAY0="192.168.1.1"
DNS1="61.147.37.1"
DNS2="101.226.4.6"
# vim /etc/sysconfig/network
HOSTNAME=master
# vim /etc/hosts
192.168.1.7  master
192.168.1.8  slave1
192.168.1.9  slave2
192.168.1.10 slave3
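These four entries follow a simple pattern; a small POSIX-sh sketch that generates them (the `gen_hosts` helper is hypothetical and assumes the consecutive 192.168.1.7-10 addressing used in this article), which is handy if the cluster grows:

```shell
# Hypothetical helper: print the /etc/hosts entries for the cluster above.
gen_hosts() {
    i=7
    for name in master slave1 slave2 slave3; do
        printf '192.168.1.%d %s\n' "$i" "$name"
        i=$((i + 1))
    done
}
gen_hosts
```

Redirect the output with `gen_hosts >> /etc/hosts` (as root) once it prints what you expect.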
# hostnamectl set-hostname master    -- (on CentOS 7 the old way of changing the hostname no longer works)
# useradd hadoop    -- create a user named hadoop
# passwd hadoop     -- set a password for user hadoop
----------- The following is done on master
# su hadoop                                    -- switch to user hadoop
$ cd ~                                         -- go to the home directory
$ ssh-keygen -t rsa -P ''                      -- generate a key pair: /home/hadoop/.ssh/id_rsa and /home/hadoop/.ssh/id_rsa.pub
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    -- append id_rsa.pub to the authorized keys
$ chmod 600 ~/.ssh/authorized_keys             -- fix the permissions
$ su                                           -- switch to root
# vim /etc/ssh/sshd_config                     -- edit the ssh configuration
RSAAuthentication yes                          # enable RSA authentication
PubkeyAuthentication yes                       # enable public-key authentication
AuthorizedKeysFile .ssh/authorized_keys        # path of the public key file
# su hadoop                                    -- switch back to user hadoop
$ scp ~/.ssh/id_rsa.pub hadoop@192.168.1.8:~/  -- copy the public key to every slave machine
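The final scp step has to be repeated once per slave. A sketch that prints the full command list as a dry run, so nothing is copied until you are ready (the `SLAVES` variable is an assumption matching the addresses used in this article):

```shell
# Dry run: print the scp command for each slave instead of executing it.
SLAVES="192.168.1.8 192.168.1.9 192.168.1.10"
for ip in $SLAVES; do
    echo "scp ~/.ssh/id_rsa.pub hadoop@${ip}:~/"
done
```

Once the list looks right, drop the `echo` (or pipe the output to `sh`) to actually copy the key.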
---------- The following is done on slave1
# su hadoop                                    -- switch to user hadoop
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ cat ~/id_rsa.pub >> ~/.ssh/authorized_keys   -- append to the "authorized_keys" file
$ chmod 600 ~/.ssh/authorized_keys             -- fix the permissions
$ su                                           -- switch back to root
# vim /etc/ssh/sshd_config                     -- edit the ssh configuration
RSAAuthentication yes                          # enable RSA authentication
PubkeyAuthentication yes                       # enable public-key authentication
AuthorizedKeysFile .ssh/authorized_keys        # path of the public key file
# rpm -ivh jdk-7u67-linux-x64.rpm
Preparing...                ##################################### [100%]
   1:jdk                    ##################################### [100%]
Unpacking JAR files...
        rt.jar...
        jsse.jar...
        charsets.jar...
        tools.jar...
        localedata.jar...
# vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
# source /etc/profile    -- make the change take effect
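Before baking the path into /etc/profile, it is worth checking that it really contains a java binary. A sketch (`check_java_home` is a hypothetical helper, not part of the JDK):

```shell
# Verify that a candidate JAVA_HOME contains an executable java binary.
check_java_home() {
    [ -x "$1/bin/java" ]
}

if check_java_home /usr/java/jdk1.7.0_67; then
    echo "JAVA_HOME ok"
else
    echo "no executable java under that path" >&2
fi
```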
# yum install maven svn ncurses-devel gcc* lzo-devel zlib-devel autoconf automake libtool cmake openssl-devel
# tar zxvf apache-ant-1.9.4-bin.tar.gz
# vim /etc/profile
export ANT_HOME=/usr/local/apache-ant-1.9.4
export PATH=$PATH:$ANT_HOME/bin
# tar zxvf findbugs-3.0.0.tar.gz
# vim /etc/profile
export FINDBUGS_HOME=/usr/local/findbugs-3.0.0
export PATH=$PATH:$FINDBUGS_HOME/bin
# tar zxvf protobuf-2.5.0.tar.gz    -- (it must be version 2.5.0, otherwise the Hadoop build fails)
# cd protobuf-2.5.0
# ./configure --prefix=/usr/local
# make && make install
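Since the wrong protoc version only surfaces partway through the long Maven build, it pays to fail fast. A sketch (`check_protoc_version` is a hypothetical helper):

```shell
# Fail fast if the installed protoc is not the 2.5.0 that Hadoop 2.5.0 requires.
check_protoc_version() {
    [ "$1" = "2.5.0" ]
}

have=$(protoc --version 2>/dev/null | awk '{print $2}')
if check_protoc_version "$have"; then
    echo "protoc $have ok"
else
    echo "need protoc 2.5.0, found '${have:-none}'" >&2
fi
```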
# tar zxvf hadoop-2.5.0-src.tar.gz
# cd hadoop-2.5.0-src
# mvn package -Pdist,native,docs -DskipTests -Dtar
# vim /usr/share/maven/conf/settings.xml

<mirror>
    <id>nexus-osc</id>
    <mirrorOf>*</mirrorOf>
    <name>Nexus osc</name>
    <url>http://maven.oschina.net/content/groups/public/</url>
</mirror>

<profile>
    <id>jdk17</id>
    <activation>
        <activeByDefault>true</activeByDefault>
        <jdk>1.7</jdk>
    </activation>
    <properties>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
        <maven.compiler.compilerVersion>1.7</maven.compiler.compilerVersion>
    </properties>
    <repositories>
        <repository>
            <id>nexus</id>
            <name>local private nexus</name>
            <url>http://maven.oschina.net/content/groups/public/</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>nexus</id>
            <name>local private nexus</name>
            <url>http://maven.oschina.net/content/groups/public/</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </pluginRepository>
    </pluginRepositories>
</profile>
# ./bin/hadoop version
Hadoop 2.5.0
Subversion Unknown -r Unknown
Compiled by root on 2014-09-12T00:47Z
Compiled with protoc 2.5.0
From source with checksum 423dcd5a752eddd8e45ead6fd5ff9a24
This command was run using /usr/hadoop-2.5.0-src/hadoop-dist/target/hadoop-2.5.0/share/hadoop/common/hadoop-common-2.5.0.jar

# file lib//native/*
lib//native/libhadoop.a:        current ar archive
lib//native/libhadooppipes.a:   current ar archive
lib//native/libhadoop.so:       symbolic link to `libhadoop.so.1.0.0'
lib//native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=0x972b31264a1ce87a12cfbcc331c8355e32d0e774, not stripped
lib//native/libhadooputils.a:   current ar archive
lib//native/libhdfs.a:          current ar archive
lib//native/libhdfs.so:         symbolic link to `libhdfs.so.0.0.0'
lib//native/libhdfs.so.0.0.0:   ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=0x200ccf97f44d838239db3347ad5ade435b472cfa, not stripped
# cp -r /usr/hadoop-2.5.0-src/hadoop-dist/target/hadoop-2.5.0 /opt/hadoop-2.5.0
# chown -R hadoop:hadoop /opt/hadoop-2.5.0
# vi /etc/profile
export HADOOP_HOME=/opt/hadoop-2.5.0
export PATH=$PATH:$HADOOP_HOME/bin
# su hadoop
$ cd /opt/hadoop-2.5.0
$ mkdir -p dfs/name
$ mkdir -p dfs/data
$ mkdir -p tmp
$ cd etc/hadoop
$ vim slaves
slave1
slave2
slave3
$ vim hadoop-env.sh    -- and the same line in yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_67
$ vim core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/hadoop-2.5.0/tmp</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
</configuration>
$ vim hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/hadoop-2.5.0/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/hadoop-2.5.0/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
# cp mapred-site.xml.template mapred-site.xml
$ vim mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
$ vim yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>
</configuration>
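One caveat on yarn.nodemanager.resource.memory-mb: 768 MB is below YARN's default minimum container allocation (yarn.scheduler.minimum-allocation-mb defaults to 1024 MB), so that property would also need lowering before any container can be scheduled. A quick back-of-envelope check (the 256 MB per-container size is an assumption, not from the configuration above):

```shell
# How many containers one NodeManager can host with the memory-mb set above.
NM_MEM_MB=768
CONTAINER_MB=256
echo "containers per NodeManager: $((NM_MEM_MB / CONTAINER_MB))"
```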
$ ./bin/hdfs namenode -format
$ ./sbin/start-dfs.sh
$ ./sbin/start-yarn.sh
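After the two start scripts, `jps` on the master should show the NameNode, SecondaryNameNode, and ResourceManager daemons (DataNode and NodeManager run on the slaves). A sketch that checks this (`check_daemon` is a hypothetical helper; the expected list assumes the single-master layout configured above):

```shell
# Report whether each expected daemon name appears in a jps-style listing.
check_daemon() {
    echo "$2" | grep -q "^$1$"
}

running=$(jps 2>/dev/null | awk '{print $2}')
for d in NameNode SecondaryNameNode ResourceManager; do
    if check_daemon "$d" "$running"; then
        echo "$d is running"
    else
        echo "$d is NOT running" >&2
    fi
done
```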
http://192.168.1.7:8088     -- YARN ResourceManager web UI
http://192.168.1.7:50070    -- HDFS NameNode web UI
That covers compiling Hadoop 2.5.0 on 64-bit CentOS 7 and installing it as a distributed cluster; the best way to consolidate the steps is to work through them on a cluster of your own.