

Spark 2.2.0 High Availability Setup

I. Overview


1. The environment builds on the Hadoop HA cluster set up earlier.

2. The ZooKeeper ensemble required for Spark HA was configured in the earlier article and is not repeated here (a quick health check follows the host plan below).

3. Required packages: scala-2.12.3.tgz and spark-2.2.0-bin-hadoop2.7.tgz

4. Host plan

bd1    Worker
bd2    Worker
bd3    Worker
bd4    Master, Worker
bd5    Master, Worker
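
Before going further, it is worth confirming that the ZooKeeper ensemble from the earlier article is healthy. A minimal check, assuming zkServer.sh is on the PATH of each ZooKeeper node (one node should report Mode: leader and the rest Mode: follower):

zkServer.sh status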

II. Configure Scala

1. Extract and copy

[root@bd1 ~]# tar -zxf scala-2.12.3.tgz
[root@bd1 ~]# cp -r scala-2.12.3 /usr/local/scala

2. Configure environment variables

[root@bd1 ~]# vim /etc/profile
export SCALA_HOME=/usr/local/scala
export PATH=$SCALA_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile

3. Verify

[root@bd1 ~]# scala -version
Scala code runner version 2.12.3 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.

III. Configure Spark

1. Extract and copy

[root@bd1 ~]# tar -zxf spark-2.2.0-bin-hadoop2.7.tgz
[root@bd1 ~]# cp -r spark-2.2.0-bin-hadoop2.7 /usr/local/spark

2. Configure environment variables

[root@bd1 ~]# vim /etc/profile
export SPARK_HOME=/usr/local/spark
export PATH=$SPARK_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile

3. Edit spark-env.sh    # the file does not exist by default; copy it from the template
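
Spark ships a template for this file in its conf directory; copy it into place first:

[root@bd1 ~]# cd /usr/local/spark/conf
[root@bd1 conf]# cp spark-env.sh.template spark-env.sh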

[root@bd1 conf]# vim spark-env.sh
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SCALA_HOME=/usr/local/scala
# Store recovery state in ZooKeeper so a standby Master can take over on failure
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=bd4:2181,bd5:2181 -Dspark.deploy.zookeeper.dir=/spark"
# Resources each Worker can offer to executors
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1

4. Edit spark-defaults.conf    # the file does not exist by default; copy it from the template
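
As with spark-env.sh, create the file from the bundled template:

[root@bd1 conf]# cp spark-defaults.conf.template spark-defaults.conf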

[root@bd1 conf]# vim spark-defaults.conf
# List both masters so drivers and workers can fail over between them
spark.master                     spark://bd4:7077,bd5:7077
spark.eventLog.enabled           true
# "master" here should resolve to the HDFS entry point of the Hadoop HA cluster
spark.eventLog.dir               hdfs://master/user/spark/history
spark.serializer                 org.apache.spark.serializer.KryoSerializer

5. Create the event log directory on HDFS

hdfs dfs -mkdir -p /user/spark/history
hdfs dfs -chmod 777 /user/spark/history
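
A quick listing confirms the directory and its permissions:

hdfs dfs -ls /user/spark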

6. Edit slaves    # each host listed here runs a Worker

[root@bd1 conf]# vim slaves
bd1
bd2
bd3
bd4
bd5

IV. Sync to the Other Hosts

1. Use scp to sync Scala to bd2 through bd5

scp -r /usr/local/scala root@bd2:/usr/local/
scp -r /usr/local/scala root@bd3:/usr/local/
scp -r /usr/local/scala root@bd4:/usr/local/
scp -r /usr/local/scala root@bd5:/usr/local/

2. Sync Spark to bd2 through bd5

scp -r /usr/local/spark root@bd2:/usr/local/
scp -r /usr/local/spark root@bd3:/usr/local/
scp -r /usr/local/spark root@bd4:/usr/local/
scp -r /usr/local/spark root@bd5:/usr/local/
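
Note that the /etc/profile changes made earlier exist only on bd1. A minimal sketch to push them out, reusing the passwordless root SSH that the scp commands above rely on (new login shells on each node then pick up SCALA_HOME and SPARK_HOME):

for h in bd2 bd3 bd4 bd5; do scp /etc/profile root@$h:/etc/profile; done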

V. Start the Cluster and Test HA

1. Start-up order: ZooKeeper --> Hadoop --> Spark (a command sketch follows)
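
A rough sketch of that order, assuming ZooKeeper is installed under /usr/local/zookeeper (the Hadoop and Spark paths match the variables configured above):

# on each ZooKeeper node
/usr/local/zookeeper/bin/zkServer.sh start
# on the Hadoop master node
/usr/local/hadoop/sbin/start-dfs.sh
/usr/local/hadoop/sbin/start-yarn.sh
# on bd4, as shown below
/usr/local/spark/sbin/start-all.sh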

2. Start Spark

bd4:

[root@bd4 sbin]# cd /usr/local/spark/sbin/
[root@bd4 sbin]# ./start-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd4.out
bd4: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd4.out
bd2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd2.out
bd3: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd3.out
bd5: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd5.out
bd1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd1.out

[root@bd4 sbin]# jps
3153 DataNode
7235 Jps
3046 JournalNode
7017 Master
3290 NodeManager
7116 Worker
2958 QuorumPeerMain

bd5:

[root@bd5 sbin]# ./start-master.sh 
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd5.out

[root@bd5 sbin]# jps
3584 NodeManager
5602 RunJar
3251 QuorumPeerMain
8564 Master
3447 DataNode
8649 Jps
8474 Worker
3340 JournalNode
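
Each Master also serves a web UI, on port 8080 by default; at this point http://bd4:8080 should report status ALIVE and http://bd5:8080 should report STANDBY.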


3. Kill the Master process on bd4

[root@bd4 sbin]# kill -9 7017
[root@bd4 sbin]# jps
3153 DataNode
7282 Jps
3046 JournalNode
3290 NodeManager
7116 Worker
2958 QuorumPeerMain
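
The standby Master on bd5 should now be elected leader. Besides refreshing http://bd5:8080, which should switch to ALIVE, the election shows up in bd5's Master log (the file name appears in the start-up output above); it should contain a line like "I have been elected leader! New state: ALIVE":

[root@bd5 sbin]# grep "elected leader" /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd5.out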


VI. Summary

At first I intended to put the Masters on bd1 and bd2, but after starting Spark both nodes came up as Standby. Once the configuration was changed to move the Masters to bd4 and bd5, the cluster ran normally. In other words, in this setup the Spark HA Masters only ran properly on the nodes hosting the ZooKeeper ensemble, i.e., the nodes running the QuorumPeerMain process (the JournalNode process seen in jps belongs to HDFS, not ZooKeeper).

