Background
A Spark Standalone cluster is a master-slaves deployment and, like most master-slaves clusters, suffers from a Master single point of failure. Spark offers two ways to address it:
Single-Node Recovery with Local File System (a minimal sketch follows this list)
Standby Masters with ZooKeeper
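For reference, the first option only needs a recovery directory on the Master's local disk. A minimal sketch using the two properties named in the Spark docs (the directory path here is just an example, not from the original setup):

spark.deploy.recoveryMode      FILESYSTEM
spark.deploy.recoveryDirectory /opt/spark-2.4.0/recovery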
ZooKeeper provides a leader-election mechanism which guarantees that, even though the cluster contains multiple Masters, only one is active while the rest are standby. When the active Master fails, one of the standby Masters is elected to take over. Because the cluster state, including Worker, Driver, and Application information, has been persisted (to ZooKeeper in this mode), the failover only affects the submission of new jobs; jobs that are already running are not affected.
[Figure: overall cluster architecture after adding ZooKeeper]
Prerequisite: the ZooKeeper cluster is up and running.
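A quick way to verify this, assuming zkServer.sh is on the PATH of each ZooKeeper node:

zkServer.sh status
# expected output ends with "Mode: leader" on exactly one node
# and "Mode: follower" on the others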
wget http://mirrors.shu.edu.cn/apache/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
tar -zxvf spark-2.4.0-bin-hadoop2.7.tgz -C /opt
mv /opt/spark-2.4.0-bin-hadoop2.7 /opt/spark-2.4.0
Append the following to /etc/profile:
export JAVA_HOME=/usr/lib/jdk1.8.0_172
export CLASSPATH=${JAVA_HOME}/jre/lib:${JAVA_HOME}/lib
export HADOOP_HOME=/opt/hadoop-2.7.6
export SPARK_HOME=/opt/spark-2.4.0
export PATH=${JAVA_HOME}/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin:$PATH
Set the hostname (repeat on each node with its own name):
hostnamectl set-hostname res-spark-0001
Run the following command so the environment variables take effect:
source /etc/profile
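To confirm that the variables resolve, a quick check using tools shipped with the JDK and the Spark tarball:

java -version
spark-submit --version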
Create working copies of the bundled configuration templates:
cd /opt/spark-2.4.0/conf
cp log4j.properties.template log4j.properties
cp slaves.template slaves
cp spark-env.sh.template spark-env.sh
cp spark-defaults.conf.template spark-defaults.conf
4.1 slaves
res-spark-0003
res-spark-0004
res-spark-0005
4.2 spark-defaults.conf
spark.deploy.recoveryMode ZOOKEEPER
spark.deploy.zookeeper.url res-spark-0001:2181,res-spark-0002:2181,res-spark-0003:2181
spark.master spark://res-spark-0001:7077
spark.eventLog.enabled true
spark.eventLog.dir hdfs://cluster1/spark/eventLog
spark.shuffle.service.enabled true
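The HDFS event log directory must exist before applications start writing to it, or event logging fails at submission time. A sketch using the hdfs CLI and the cluster1 nameservice from the entries above:

hdfs dfs -mkdir -p hdfs://cluster1/spark/eventLog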
4.3 spark-env.sh
export JAVA_HOME=/usr/lib/jdk1.8.0_172
export HADOOP_HOME=/opt/hadoop-2.7.6
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HOME=/opt/spark-2.4.0
export SPARK_WORKER_CORES=6
export SPARK_WORKER_MEMORY=24g
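For reference, the official docs pass the recovery properties to the daemons via SPARK_DAEMON_JAVA_OPTS in spark-env.sh. An equivalent to the spark-defaults.conf entries in 4.2 would look like this (the ZooKeeper directory /spark is Spark's default):

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=res-spark-0001:2181,res-spark-0002:2181,res-spark-0003:2181 -Dspark.deploy.zookeeper.dir=/spark"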
4.4 log4j.properties
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=WARN
# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=WARN
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR
# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
Distribute the Spark installation, including the configuration files, to the other nodes:
scp -r /opt/spark-2.4.0 res-spark-0002:/opt
scp -r /opt/spark-2.4.0 res-spark-0003:/opt
scp -r /opt/spark-2.4.0 res-spark-0004:/opt
scp -r /opt/spark-2.4.0 res-spark-0005:/opt
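Equivalently, as a small shell loop over the same hostnames:

for host in res-spark-0002 res-spark-0003 res-spark-0004 res-spark-0005; do
  scp -r /opt/spark-2.4.0 ${host}:/opt
done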
Modify the configuration on node res-spark-0002.
6.1 spark-defaults.conf
spark.master spark://res-spark-0002:7077
(Note: Spark also accepts a multi-Master URL such as spark://res-spark-0001:7077,res-spark-0002:7077, which lets clients locate whichever Master is currently active.)
On res-spark-0001, start the Master and all Workers listed in slaves:
cd /opt/spark-2.4.0/sbin
./start-all.sh
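A quick sanity check with jps, which should list a Master process on res-spark-0001 and a Worker process on res-spark-0003 through res-spark-0005:

jps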
On node res-spark-0002, start the standby Master:
cd /opt/spark-2.4.0/sbin
./start-master.sh
To test the failover, stop the active Master on res-spark-0001:
./stop-master.sh
Result: the Master on res-spark-0002 switches from STANDBY to ALIVE, which can be confirmed on its web UI (port 8080 by default). Next, submit a test application:
spark-submit \
  --master spark://res-spark-0001:7077 \
  --driver-cores 4 \
  --driver-memory 6g \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --class com.cloud.RuleEngine \
  rule-engine-1.0-SNAPSHOT-jar-with-dependencies.jar
The submission fails with the following error (dynamic allocation depends on the external shuffle service running on every worker):
18/12/30 08:47:41 ERROR TaskSchedulerImpl: Lost executor 3 on 172.16.0.24: Unable to create executor due to Unable to register with external shuffle server due to : Failed to connect to /172.16.0.24:7337
The official documentation says:
In standalone mode, simply start your workers with spark.shuffle.service.enabled set to true.
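In other words, port 7337 (the default spark.shuffle.service.port) is only open once each Worker runs with the shuffle service enabled. A sketch of the fix, assuming the spark.shuffle.service.enabled true entry from 4.2 has been distributed to all nodes: restart the workers so they pick it up.

cd /opt/spark-2.4.0/sbin
./stop-all.sh
./start-all.sh

Alternatively, the flag can be forced per worker in spark-env.sh:

export SPARK_WORKER_OPTS="-Dspark.shuffle.service.enabled=true"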