In a previous post we covered how to set up a pseudo-distributed Hadoop environment on a single machine. In practice, however, Hadoop almost always runs as a distributed cluster across multiple machines, so this article walks through setting up a distributed Hadoop environment on several machines.
I have prepared three machines; their IP addresses appear in the /etc/hosts entries below. First, on all three machines, edit the /etc/hosts configuration file: set each machine's hostname and add entries for the other machines' hostnames.
[root@localhost ~]# vim /etc/hosts # do this on all three machines
192.168.77.128 hadoop000
192.168.77.130 hadoop001
192.168.77.134 hadoop002
[root@localhost ~]# reboot
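Note that /etc/hosts only maps hostnames to IPs; each machine's own hostname also has to be set. A minimal sketch, assuming CentOS 7 or another systemd-based distribution (use hadoop001 and hadoop002 on the other two machines), plus an optional way to push the finished hosts file out instead of editing it three times (password authentication still works at this stage):
[root@localhost ~]# hostnamectl set-hostname hadoop000
[root@localhost ~]# scp /etc/hosts root@192.168.77.130:/etc/hosts
[root@localhost ~]# scp /etc/hosts root@192.168.77.134:/etc/hosts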
The roles the three machines will play in the cluster:
hadoop000: NameNode, SecondaryNameNode, DataNode, ResourceManager, NodeManager
hadoop001: DataNode, NodeManager
hadoop002: DataNode, NodeManager
The machines in the cluster need to communicate with each other, so we first have to configure passwordless SSH login. Run the following command on each of the three machines to generate a key pair:
[root@hadoop000 ~]# ssh-keygen -t rsa # run this on all three machines to generate a key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
0d:00:bd:a3:69:b7:03:d5:89:dc:a8:a2:ca:28:d6:06 root@hadoop000
The key's randomart image is:
+--[ RSA 2048]----+
| .o. |
| .. |
| . *.. |
| B +o |
| = .S . |
| E. * . |
| .oo o . |
|=. o o |
|*.. . |
+-----------------+
[root@hadoop000 ~]# ls .ssh/
authorized_keys id_rsa id_rsa.pub known_hosts
[root@hadoop000 ~]#
Starting with hadoop000, run the following commands to copy its public key to each machine:
[root@hadoop000 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop000
[root@hadoop000 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop001
[root@hadoop000 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop002
Note: the other two machines also need to run these same three commands.
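Since every machine pushes its key to every other machine, a small loop saves some typing. A sketch to run on each of the three machines after ssh-keygen (you will be prompted for each root password once):
for host in hadoop000 hadoop001 hadoop002; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
done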
Once the keys have been copied, test that passwordless login works:
[root@hadoop000 ~]# ssh hadoop000
Last login: Mon Apr 2 17:20:02 2018 from localhost
[root@hadoop000 ~]# ssh hadoop001
Last login: Tue Apr 3 00:49:59 2018 from 192.168.77.1
[root@hadoop001 ~]# logout
Connection to hadoop001 closed.
[root@hadoop000 ~]# ssh hadoop002
Last login: Tue Apr 3 00:50:03 2018 from 192.168.77.1
[root@hadoop002 ~]# logout
Connection to hadoop002 closed.
[root@hadoop000 ~]# logout
Connection to hadoop000 closed.
[root@hadoop000 ~]#
As shown above, hadoop000 can now log in to the other two machines without a password, so the configuration works.
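To check all three logins in one batch without opening interactive shells, something like the following works; BatchMode makes ssh fail instead of prompting if a key is missing, so this is just a quick sanity-check sketch:
for host in hadoop000 hadoop001 hadoop002; do
    ssh -o BatchMode=yes "$host" hostname
done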
Get the JDK download link from the Oracle website. I am using JDK 1.8 here, at the following address:
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Use the wget command to download the JDK into the /usr/local/src/ directory. I have already downloaded it here:
[root@hadoop000 ~]# cd /usr/local/src/
[root@hadoop000 /usr/local/src]# ls
jdk-8u151-linux-x64.tar.gz
[root@hadoop000 /usr/local/src]#
Unpack the downloaded archive and move the extracted directory under /usr/local/:
[root@hadoop000 /usr/local/src]# tar -zxvf jdk-8u151-linux-x64.tar.gz
[root@hadoop000 /usr/local/src]# mv ./jdk1.8.0_151 /usr/local/jdk1.8
Edit the /etc/profile file to configure the environment variables:
[root@hadoop000 ~]# vim /etc/profile # add the following lines
JAVA_HOME=/usr/local/jdk1.8/
JAVA_BIN=/usr/local/jdk1.8/bin
JRE_HOME=/usr/local/jdk1.8/jre
PATH=$PATH:/usr/local/jdk1.8/bin:/usr/local/jdk1.8/jre/bin
CLASSPATH=/usr/local/jdk1.8/jre/lib:/usr/local/jdk1.8/lib:/usr/local/jdk1.8/jre/lib/charsets.jar
export JAVA_HOME JAVA_BIN JRE_HOME PATH CLASSPATH
Load the file with the source command so it takes effect; afterwards, running java -version shows the JDK version:
[root@hadoop000 ~]# source /etc/profile
[root@hadoop000 ~]# java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
[root@hadoop000 ~]#
After installing the JDK on hadoop000, use rsync to sync the JDK and the profile to the other machines:
[root@hadoop000 ~]# rsync -av /usr/local/jdk1.8 hadoop001:/usr/local
[root@hadoop000 ~]# rsync -av /usr/local/jdk1.8 hadoop002:/usr/local
[root@hadoop000 ~]# rsync -av /etc/profile hadoop001:/etc/profile
[root@hadoop000 ~]# rsync -av /etc/profile hadoop002:/etc/profile
After the sync completes, run source on each of the two machines so the environment variables take effect, then run the java -version command to verify that the JDK is installed correctly.
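This verification can also be done from hadoop000 over SSH; a sketch:
for host in hadoop001 hadoop002; do
    ssh "$host" "source /etc/profile && java -version"
done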
Download the Hadoop 2.6.0-cdh5.7.0 tarball and unpack it:
[root@hadoop000 ~]# cd /usr/local/src/
[root@hadoop000 /usr/local/src]# wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0.tar.gz
[root@hadoop000 /usr/local/src]# tar -zxvf hadoop-2.6.0-cdh5.7.0.tar.gz -C /usr/local/
Note: if the download is slow on Linux, you can download the file on Windows with a download manager such as Thunder using the same link, then upload it to the Linux machine; that is usually faster.
After unpacking, enter the extracted directory; Hadoop's directory layout looks like this:
[root@hadoop000 /usr/local/src]# cd /usr/local/hadoop-2.6.0-cdh5.7.0/
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0]# ls
bin cloudera examples include libexec NOTICE.txt sbin src
bin-mapreduce1 etc examples-mapreduce1 lib LICENSE.txt README.txt share
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0]#
A brief note on what a few of these directories hold: bin contains the Hadoop client commands, etc the configuration files, sbin the scripts for starting and stopping the daemons, and share the Hadoop jars, dependencies, and bundled examples.
With that, Hadoop itself is installed. The next step is to edit the configuration files, starting with JAVA_HOME:
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0]# cd etc/
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc]# cd hadoop
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8/ # adjust to match your environment
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]#
Then add the Hadoop installation directory to the environment variables so that its commands can be used conveniently later:
[root@hadoop000 ~]# vim ~/.bash_profile # add the following lines
export HADOOP_HOME=/usr/local/hadoop-2.6.0-cdh5.7.0/
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
[root@hadoop000 ~]# source !$
source ~/.bash_profile
[root@hadoop000 ~]#
Next, edit the core-site.xml and hdfs-site.xml configuration files:
[root@hadoop000 ~]# cd $HADOOP_HOME
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0]# cd etc/hadoop
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim core-site.xml # add the following
<configuration>
    <property>
        <name>fs.default.name</name>
        <!-- the default filesystem URI: the NameNode's address and port -->
        <value>hdfs://hadoop000:8020</value>
    </property>
</configuration>
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim hdfs-site.xml # add the following
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <!-- directory where the NameNode stores its data -->
        <value>/data/hadoop/app/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <!-- directory where the DataNode stores its data -->
        <value>/data/hadoop/app/tmp/dfs/data</value>
    </property>
</configuration>
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# mkdir -p /data/hadoop/app/tmp/dfs/name
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# mkdir -p /data/hadoop/app/tmp/dfs/data
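A related hdfs-site.xml property is dfs.replication, which we leave at its default of 3; that happens to match our three DataNodes, and you can see the replication factor of 3 in the hdfs dfs -ls output later. If you ever run with fewer DataNodes, it can be set explicitly, for example:
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>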
We also need to edit the yarn-site.xml configuration file:
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim yarn-site.xml # add the following
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop000</value>
    </property>
</configuration>
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]#
Copy and edit the MapReduce configuration file:
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim !$ # add the following
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]#
Finally, configure the hostnames of the slave nodes (use IPs if hostnames are not configured):
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim slaves
hadoop000
hadoop001
hadoop002
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]#
At this point the Hadoop environment on our master node, hadoop000, is fully set up, but the other two machines that will act as slave nodes do not have Hadoop yet. So next we distribute the Hadoop installation directory and the environment variable file from hadoop000 to the other two machines by running the following commands:
[root@hadoop000 ~]# rsync -av /usr/local/hadoop-2.6.0-cdh5.7.0/ hadoop001:/usr/local/hadoop-2.6.0-cdh5.7.0/
[root@hadoop000 ~]# rsync -av /usr/local/hadoop-2.6.0-cdh5.7.0/ hadoop002:/usr/local/hadoop-2.6.0-cdh5.7.0/
[root@hadoop000 ~]# rsync -av ~/.bash_profile hadoop001:~/.bash_profile
[root@hadoop000 ~]# rsync -av ~/.bash_profile hadoop002:~/.bash_profile
After distribution completes, run source on each of the two machines and create the temporary directories:
[root@hadoop001 ~]# source .bash_profile
[root@hadoop001 ~]# mkdir -p /data/hadoop/app/tmp/dfs/name
[root@hadoop001 ~]# mkdir -p /data/hadoop/app/tmp/dfs/data
[root@hadoop002 ~]# source .bash_profile
[root@hadoop002 ~]# mkdir -p /data/hadoop/app/tmp/dfs/name
[root@hadoop002 ~]# mkdir -p /data/hadoop/app/tmp/dfs/data
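Equivalently, the directories can be created from hadoop000 over SSH; a sketch:
for host in hadoop001 hadoop002; do
    ssh "$host" "mkdir -p /data/hadoop/app/tmp/dfs/name /data/hadoop/app/tmp/dfs/data"
done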
Format the NameNode; this only needs to be done on hadoop000:
[root@hadoop000 ~]# hdfs namenode -format
Once formatting is complete, the Hadoop cluster can be started:
[root@hadoop000 ~]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
18/04/02 20:10:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop000]
hadoop000: starting namenode, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-namenode-hadoop000.out
hadoop000: starting datanode, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-datanode-hadoop000.out
hadoop001: starting datanode, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-datanode-hadoop001.out
hadoop002: starting datanode, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-datanode-hadoop002.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is 4d:5a:9d:31:65:75:30:47:a3:9c:f5:56:63:c4:0f:6a.
Are you sure you want to continue connecting (yes/no)? yes # type yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-secondarynamenode-hadoop000.out
18/04/02 20:11:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/yarn-root-resourcemanager-hadoop000.out
hadoop001: starting nodemanager, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/yarn-root-nodemanager-hadoop001.out
hadoop002: starting nodemanager, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/yarn-root-nodemanager-hadoop002.out
hadoop000: starting nodemanager, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/yarn-root-nodemanager-hadoop000.out
[root@hadoop000 ~]# jps # check that the following processes are running
6256 Jps
5538 DataNode
5843 ResourceManager
5413 NameNode
5702 SecondaryNameNode
5945 NodeManager
[root@hadoop000 ~]#
Check the processes on the other two machines:
hadoop001:
[root@hadoop001 ~]# jps
3425 DataNode
3538 NodeManager
3833 Jps
[root@hadoop001 ~]#
hadoop002:
[root@hadoop002 ~]# jps
3171 DataNode
3273 NodeManager
3405 Jps
[root@hadoop002 ~]#
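A quick way to check every node from hadoop000 in one go (a sketch; the explicit path to jps avoids PATH issues in non-interactive SSH sessions):
for host in hadoop000 hadoop001 hadoop002; do
    echo "== $host =="
    ssh "$host" /usr/local/jdk1.8/bin/jps
done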
Once the processes on every machine check out, open the master node's port 50070 in a browser, e.g. 192.168.77.128:50070. You will see the following page:
Click "Live Nodes" to see the live nodes:
As shown above, being able to reach port 50070 means HDFS in the cluster is working.
Next, visit port 8088 on the master node, which is YARN's web port, e.g. 192.168.77.128:8088. It looks like this:
Click "Active Nodes" to see the live nodes:
And that's it: our distributed Hadoop cluster is up and running, and it really is that simple. So how do we shut the cluster down once it has been started? Also simple: run the following command on the master node:
[root@hadoop000 ~]# stop-all.sh
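As the startup output noted, start-all.sh and stop-all.sh are deprecated; the non-deprecated equivalent is to manage HDFS and YARN separately:
[root@hadoop000 ~]# stop-yarn.sh
[root@hadoop000 ~]# stop-dfs.sh
(and correspondingly start-dfs.sh followed by start-yarn.sh to start the cluster)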
In practice, HDFS and YARN are used in a fully distributed environment exactly as in pseudo-distributed mode; the HDFS shell commands, for example, work just the same:
[root@hadoop000 ~]# hdfs dfs -ls /
[root@hadoop000 ~]# hdfs dfs -mkdir /data
[root@hadoop000 ~]# hdfs dfs -put ./test.sh /data
[root@hadoop000 ~]# hdfs dfs -ls /
Found 1 items
drwxr-xr-x - root supergroup 0 2018-04-02 20:29 /data
[root@hadoop000 ~]# hdfs dfs -ls /data
Found 1 items
-rw-r--r-- 3 root supergroup 68 2018-04-02 20:29 /data/test.sh
[root@hadoop000 ~]#
HDFS can also be accessed from the other nodes in the cluster; HDFS is shared across the cluster, so every node sees the same data. For example, on the hadoop001 node I upload a directory:
[root@hadoop001 ~]# hdfs dfs -ls /
Found 1 items
drwxr-xr-x - root supergroup 0 2018-04-02 20:29 /data
[root@hadoop001 ~]# hdfs dfs -put ./logs /
[root@hadoop001 ~]# hdfs dfs -ls /
Found 2 items
drwxr-xr-x - root supergroup 0 2018-04-02 20:29 /data
drwxr-xr-x - root supergroup 0 2018-04-02 20:31 /logs
[root@hadoop001 ~]#
Then check from hadoop002:
[root@hadoop002 ~]# hdfs dfs -ls /
Found 2 items
drwxr-xr-x - root supergroup 0 2018-04-02 20:29 /data
drwxr-xr-x - root supergroup 0 2018-04-02 20:31 /logs
[root@hadoop002 ~]#
As you can see, different nodes see the same data. Since the operations are the same as in pseudo-distributed mode, I won't demonstrate any further here.
After this quick demonstration of HDFS operations, let's run one of the examples bundled with Hadoop and check whether YARN shows the job's execution information. Run the following commands on any node:
[root@hadoop002 ~]# cd /usr/local/hadoop-2.6.0-cdh5.7.0/share/hadoop/mapreduce
[root@hadoop002 /usr/local/hadoop-2.6.0-cdh5.7.0/share/hadoop/mapreduce]# hadoop jar ./hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar pi 3 4
[root@hadoop002 ~]#
Requesting resources:
Running the job:
However, this run of mine unluckily failed (allow me a quick howl of frustration):
Nothing to do but troubleshoot. The error messages in the terminal were as follows:
Note: System times on machines may be out of sync. Check system time and time zones.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:159)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:379)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
18/04/03 04:32:17 INFO mapreduce.Job: Task Id : attempt_1522671083370_0001_m_000002_0, Status : FAILED
Container launch failed for container_1522671083370_0001_01_000004 : org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
This token is expired. current time is 1522701136752 found 1522673393827
Note: System times on machines may be out of sync. Check system time and time zones.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:159)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:379)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
18/04/03 04:32:18 INFO mapreduce.Job: Task Id : attempt_1522671083370_0001_m_000001_1, Status : FAILED
Container launch failed for container_1522671083370_0001_01_000005 : org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
This token is expired. current time is 1522701157769 found 1522673395895
Note: System times on machines may be out of sync. Check system time and time zones.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:159)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:379)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
18/04/03 04:32:20 INFO mapreduce.Job: Task Id : attempt_1522671083370_0001_m_000001_2, Status : FAILED
Container launch failed for container_1522671083370_0001_01_000007 : org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
This token is expired. current time is 1522701159832 found 1522673397934
Note: System times on machines may be out of sync. Check system time and time zones.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:159)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:379)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
18/04/03 04:32:23 INFO mapreduce.Job: map 33% reduce 100%
18/04/03 04:32:24 INFO mapreduce.Job: map 100% reduce 100%
18/04/03 04:32:24 INFO mapreduce.Job: Job job_1522671083370_0001 failed with state FAILED due to: Task failed task_1522671083370_0001_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
18/04/03 04:32:24 INFO mapreduce.Job: Counters: 12
Job Counters
Killed map tasks=2
Launched map tasks=2
Other local map tasks=4
Data-local map tasks=3
Total time spent by all maps in occupied slots (ms)=10890
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=10890
Total vcore-seconds taken by all map tasks=10890
Total megabyte-seconds taken by all map tasks=11151360
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Job Finished in 23.112 seconds
java.io.FileNotFoundException: File does not exist: hdfs://hadoop000:8020/user/root/QuasiMonteCarlo_1522701120069_2085123424/out/reduce-out
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1219)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1750)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1774)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Although a long string of errors was printed, the first line of the message reads System times on machines may be out of sync. Check system time and time zones. That is, the system clocks on the machines may be out of sync, and we should check the system time and time zones. I checked the time on all the machines in the cluster, and they were indeed out of sync. How to synchronize them? With the ntpdate command: install the ntp package on every machine and run the time-sync command, as follows:
[root@hadoop000 ~]# yum install -y ntp
[root@hadoop000 ~]# ntpdate -u ntp.api.bz
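Since the whole point is for all of the machines to agree on the time, the install and sync can be run on every node from hadoop000; a sketch (ntp.api.bz is simply the NTP server used above; any reachable NTP server will do):
for host in hadoop000 hadoop001 hadoop002; do
    ssh "$host" "yum install -y ntp && ntpdate -u ntp.api.bz"
done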
Once that was done, I ran the earlier command again, and this time the job succeeded:
Some time ago I wrote a small log-analysis project with Hadoop. Now that the cluster is up, it would be a shame not to run it here. First, upload the log file and the jar to the server:
[root@hadoop000 ~]# ls
10000_access.log hadoop-train-1.0-jar-with-dependencies.jar
[root@hadoop000 ~]#
Put the log file into the HDFS file system:
[root@hadoop000 ~]# hdfs dfs -put ./10000_access.log /
[root@hadoop000 ~]# hdfs dfs -ls /
Found 5 items
-rw-r--r-- 3 root supergroup 2769741 2018-04-02 21:13 /10000_access.log
drwxr-xr-x - root supergroup 0 2018-04-02 20:29 /data
drwxr-xr-x - root supergroup 0 2018-04-02 20:31 /logs
drwx------ - root supergroup 0 2018-04-02 20:39 /tmp
drwxr-xr-x - root supergroup 0 2018-04-02 20:39 /user
[root@hadoop000 ~]#
Run the following command to run the project on the Hadoop cluster:
[root@hadoop000 ~]# hadoop jar ./hadoop-train-1.0-jar-with-dependencies.jar org.zero01.hadoop.project.LogApp /10000_access.log /browserout
Check the job's execution information on YARN:
Requesting resources:
Running the job:
The job completed successfully:
Check the contents of the output file:
[root@hadoop000 ~]# hdfs dfs -ls /browserout
Found 2 items
-rw-r--r-- 3 root supergroup 0 2018-04-02 21:22 /browserout/_SUCCESS
-rw-r--r-- 3 root supergroup 56 2018-04-02 21:22 /browserout/part-r-00000
[root@hadoop000 ~]# hdfs dfs -text /browserout/part-r-00000
Chrome 2775
Firefox 327
MSIE 78
Safari 115
Unknown 6705
[root@hadoop000 ~]#
The results look correct. With that, the test is complete, and we can happily let the Hadoop cluster process data for us (you still have to write the code yourself, of course).
Looking back over the whole process of building and then using a distributed Hadoop cluster, you can see that apart from some differences in the setup, using it is essentially identical to the pseudo-distributed case. So for learning purposes I suggest sticking with a pseudo-distributed environment: a real cluster is more complex and prone to inter-node communication problems, and getting stuck on those instead of learning is not worth the trouble.