Kubernetes, abbreviated K8s (the "8" stands in for the eight letters "ubernete"), is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.
Host list:
Hostname | CentOS version | IP | Docker version | Flannel version | Keepalived version | Host specs | Notes |
---|---|---|---|---|---|---|---|
master01 | 7.6.1810 | 172.27.34.3 | 18.09.9 | v0.11.0 | v1.3.5 | 4C4G | control plane |
master02 | 7.6.1810 | 172.27.34.4 | 18.09.9 | v0.11.0 | v1.3.5 | 4C4G | control plane |
master03 | 7.6.1810 | 172.27.34.5 | 18.09.9 | v0.11.0 | v1.3.5 | 4C4G | control plane |
work01 | 7.6.1810 | 172.27.34.93 | 18.09.9 | / | / | 4C4G | worker nodes |
work02 | 7.6.1810 | 172.27.34.94 | 18.09.9 | / | / | 4C4G | worker nodes |
work03 | 7.6.1810 | 172.27.34.95 | 18.09.9 | / | / | 4C4G | worker nodes |
VIP | 7.6.1810 | 172.27.34.130 | 18.09.9 | v0.11.0 | v1.3.5 | 4C4G | floats among the control plane nodes |
client | 7.6.1810 | 172.27.34.234 | / | / | / | 4C4G | client |
There are 7 servers in total: 3 control plane nodes, 3 worker nodes, and 1 client.
Kubernetes versions:
Hostname | kubelet version | kubeadm version | kubectl version | Notes |
---|---|---|---|---|
master01 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
master02 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
master03 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
work01 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
work02 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
work03 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
client | / | / | v1.16.4 | client |
This article builds a highly available Kubernetes cluster with kubeadm. HA for a k8s cluster really means HA for each of its core components; here we use an active-standby model, with the following architecture:
Notes on the active-standby HA architecture:
Core component | HA mode | HA implementation |
---|---|---|
apiserver | active-standby | keepalived |
controller-manager | active-standby | leader election |
scheduler | active-standby | leader election |
etcd | cluster | kubeadm |
- apiserver achieves HA through keepalived; when a node fails, keepalived moves the VIP to a healthy node;
- controller-manager: Kubernetes elects a leader internally (controlled by the --leader-elect flag, default true); only one controller-manager instance is active in the cluster at any given moment;
- scheduler: Kubernetes likewise elects a leader internally (--leader-elect, default true); only one scheduler instance is active at any given moment;
- etcd achieves HA through the cluster that kubeadm creates automatically; deploy an odd number of nodes — a 3-node cluster tolerates at most one machine going down (the quorum arithmetic is sketched below).
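As a concrete illustration (my addition, not part of the original setup), the quorum arithmetic behind that fault-tolerance claim: etcd needs a majority of n/2+1 members up to accept writes, so an n-node cluster tolerates the remainder failing.
for n in 1 3 5 7; do
  quorum=$(( n / 2 + 1 ))                       # majority needed for writes
  echo "nodes=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
For n=3 this prints tolerated_failures=1, matching the statement above.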
Perform the steps in this section on both the control plane and worker nodes.
For CentOS 7.6 installation details, see: CentOS 7.6 OS installation and optimization walkthrough
The firewall and SELinux were already disabled and the Aliyun yum mirror configured during CentOS installation.
[root@centos7 ~]# hostnamectl set-hostname master01
[root@centos7 ~]# more /etc/hostname
master01
Log out and log back in to see the new hostname master01.
[root@master01 ~]# cat >> /etc/hosts << EOF
172.27.34.3 master01
172.27.34.4 master02
172.27.34.5 master03
172.27.34.93 work01
172.27.34.94 work02
172.27.34.95 work03
EOF
[root@master01 ~]# cat /sys/class/net/ens160/address
[root@master01 ~]# cat /sys/class/dmi/id/product_uuid
Make sure the MAC address and product_uuid are unique on every node.
[root@master01 ~]# swapoff -a
To keep swap disabled after a reboot, also comment out the swap entry in /etc/fstab:
[root@master01 ~]# sed -i.bak '/swap/s/^/#/' /etc/fstab
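A quick verification (my addition): after swapoff and the fstab edit, swap should show zero and the fstab entry should be commented out.
[root@master01 ~]# free -m | grep -i swap      # the Swap row should read 0 everywhere
[root@master01 ~]# grep swap /etc/fstab        # the swap line should now start with '#'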
This article uses flannel for the k8s network, which requires the kernel parameter bridge-nf-call-iptables=1; setting that parameter requires the br_netfilter module.
Check for the br_netfilter module:
[root@master01 ~]# lsmod |grep br_netfilter
If the module is not present, run the commands below to load it; otherwise skip this step.
Load br_netfilter temporarily:
[root@master01 ~]# modprobe br_netfilter
Loading it this way does not survive a reboot.
Load br_netfilter permanently:
[root@master01 ~]# cat > /etc/rc.sysinit << EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
  [ -x $file ] && $file
done
EOF
[root@master01 ~]# cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1
[root@master01 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- []: the string in square brackets is the repository id; it must be unique and identifies the repo
- name: the repository name, free-form
- baseurl: the repository URL
- enabled: whether the repo is enabled; 1 (the default) means enabled
- gpgcheck: whether to verify the signatures of packages obtained from this repo; 1 means verify
- repo_gpgcheck: whether to verify the repo metadata (i.e. the package list); 1 means verify
- gpgkey=URL: location of the public key file used for signature checking; required when gpgcheck=1, unnecessary when gpgcheck=0
[root@master01 ~]# yum clean all
[root@master01 ~]# yum -y makecache
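An optional check (my addition): confirm the new repo is visible and enabled before installing anything from it.
[root@master01 ~]# yum repolist enabled | grep -i kubernetes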
Configure passwordless SSH from master01 to master02 and master03. Run this step on master01 only.
[root@master01 ~]# ssh-keygen -t rsa
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.34.4
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.34.5
[root@master01 ~]# ssh 172.27.34.4
[root@master01 ~]# ssh master03
master01 can now log into master02 and master03 directly, without entering a password.
Perform the steps in this section on both the control plane and worker nodes.
[root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@master01 ~]# yum list docker-ce --showduplicates | sort -r
[root@master01 ~]# yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y
This pins the installed Docker version to 18.09.9.
[root@master01 ~]# systemctl start docker
[root@master01 ~]# systemctl enable docker
[root@master01 ~]# yum -y install bash-completion
[root@master01 ~]# source /etc/profile.d/bash_completion.sh
Docker Hub's servers are overseas, so image pulls can be slow; configuring a registry mirror helps. The main options are Docker's official China registry mirror, the Aliyun accelerator, and the DaoCloud accelerator; this article uses the Aliyun accelerator as the example.
Log in at https://cr.console.aliyun.com (register an Aliyun account first if you don't have one).
Configure the daemon.json file:
[root@master01 ~]# mkdir -p /etc/docker
[root@master01 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"]
}
EOF
Restart the service:
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
The accelerator is now configured.
[root@master01 ~]# docker --version
[root@master01 ~]# docker run hello-world
Verify the installation by checking the Docker version and running the hello-world container.
Modify daemon.json to add "exec-opts": ["native.cgroupdriver=systemd"]:
[root@master01 ~]# more /etc/docker/daemon.json
{
"registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
Changing the cgroup driver eliminates this warning:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
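To confirm the change took effect, you can check which driver Docker reports (my addition):
[root@master01 ~]# docker info 2>/dev/null | grep -i 'cgroup driver'    # should report: Cgroup Driver: systemd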
Perform the steps in this section on the control plane nodes only.
[root@master01 ~]# yum -y install keepalived
keepalived configuration on master01:
[root@master01 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master01
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.27.34.130
    }
}
keepalived configuration on master02:
[root@master02 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master02
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.27.34.130
    }
}
keepalived configuration on master03:
[root@master03 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master03
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.27.34.130
    }
}
Start the keepalived service on all control plane nodes and enable it at boot:
[root@master01 ~]# service keepalived start
[root@master01 ~]# systemctl enable keepalived
[root@master01 ~]# ip a
The VIP is on master01.
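A quick sanity check (my addition): the VIP should be reachable from any node and bound on exactly one master at a time.
[root@master01 ~]# ping -c 2 172.27.34.130
[root@master01 ~]# ip a show ens160 | grep 172.27.34.130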
Perform the steps in this section on both the control plane and worker nodes.
[root@master01 ~]# yum list kubelet --showduplicates | sort -r
The kubelet version installed here is 1.16.4, which supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09.
[root@master01 ~]# yum install -y kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4
- kubelet: runs on every node in the cluster and is responsible for starting Pods and containers
- kubeadm: the command-line tool that initializes and bootstraps the cluster
- kubectl: the command line for talking to the cluster; use it to deploy and manage applications, inspect resources, and create, delete, and update components
Start kubelet and enable it at boot:
[root@master01 ~]# systemctl enable kubelet && systemctl start kubelet
[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile
Almost all of the Kubernetes components and Docker images are hosted on Google's own servers, which may be unreachable directly, so the workaround here is to pull the images from an Aliyun mirror repository and retag them back to the default names. This article pulls the images by running the image.sh script.
[root@master01 ~]# more image.sh
#!/bin/bash
# Pull the Kubernetes images from an Aliyun mirror and retag them as k8s.gcr.io
url=registry.cn-hangzhou.aliyuncs.com/loong576
version=v1.16.4
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
  docker pull $url/$imagename
  docker tag $url/$imagename k8s.gcr.io/$imagename
  docker rmi -f $url/$imagename
done
url is the Aliyun image repository address and version is the Kubernetes version being installed.
Run image.sh to download the images for the chosen version:
[root@master01 ~]# ./image.sh
[root@master01 ~]# docker images
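To cross-check the pull (my addition): every image kubeadm expects for v1.16.4 should now exist locally under its k8s.gcr.io name.
[root@master01 ~]# kubeadm config images list --kubernetes-version=v1.16.4
[root@master01 ~]# docker images | grep k8s.gcr.io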
Perform the steps in this section on master01 only.
[root@master01 ~]# more kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.4
apiServer:
  certSANs:    # list the hostnames, IPs, and VIP of all kube-apiserver nodes
  - master01
  - master02
  - master03
  - work01
  - work02
  - work03
  - 172.27.34.3
  - 172.27.34.4
  - 172.27.34.5
  - 172.27.34.93
  - 172.27.34.94
  - 172.27.34.95
  - 172.27.34.130
controlPlaneEndpoint: "172.27.34.130:6443"
networking:
  podSubnet: "10.244.0.0/16"
kubeadm-config.yaml is the configuration file used for initialization.
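Optionally (my addition, using the kubeadm phase subcommands available since v1.13), you can run just the preflight checks against this config before committing to a full init:
[root@master01 ~]# kubeadm init phase preflight --config=kubeadm-config.yaml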
[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml
Record the kubeadm join output; these commands are needed later to join the worker nodes and the remaining control plane nodes to the cluster.
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
初始化失?。?/strong>
如果初始化失敗,可執(zhí)行kubeadm reset后重新初始化
[root@master01 ~]# kubeadm reset
[root@master01 ~]# rm -rf $HOME/.kube/config
[root@master01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile
All operations in this article are performed as root; for a non-root user, run the following instead:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Create the flannel network on master01:
[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
Due to network issues the apply may fail; you can download the kube-flannel.yml file linked at the end of this article and apply that instead.
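Once applied, you can watch the flannel DaemonSet pods come up on every node (my addition; the manifest labels its pods app=flannel):
[root@master01 ~]# kubectl get pods -n kube-system -l app=flannel -o wide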
Distribute certificates from master01:
Run the cert-main-master.sh script on master01 to copy the certificates to master02 and master03.
[root@master01 ~]# ll|grep cert-main-master.sh
-rwxr--r-- 1 root root 638 Jan 2 15:23 cert-main-master.sh
[root@master01 ~]# more cert-main-master.sh
USER=root # customizable
CONTROL_PLANE_IPS="172.27.34.4 172.27.34.5"
for host in ${CONTROL_PLANE_IPS}; do
  scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
  scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
  scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
  scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
  scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
  # Quote this line if you are using external etcd
  scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
Move the certificates into place on master02:
Run cert-other-master.sh on master02 to move the certificates into the expected directories.
[root@master02 ~]# pwd
/root
[root@master02 ~]# ll|grep cert-other-master.sh
-rwxr--r-- 1 root root 484 Jan 2 15:29 cert-other-master.sh
[root@master02 ~]# more cert-other-master.sh
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Quote this line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
[root@master02 ~]# ./cert-other-master.sh
Move the certificates into place on master03:
Run cert-other-master.sh on master03 as well.
[root@master03 ~]# pwd
/root
[root@master03 ~]# ll|grep cert-other-master.sh
-rwxr--r-- 1 root root 484 Jan 2 15:31 cert-other-master.sh
[root@master03 ~]# ./cert-other-master.sh
On master02 and master03, run the control plane join command that was generated during master initialization:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
--control-plane
Load the environment variables on master02 and master03:
[root@master02 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master02 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master02 ~]# source .bash_profile
[root@master03 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master03 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master03 ~]# source .bash_profile
This step makes it possible to run kubectl commands on master02 and master03 as well.
[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get po -o wide -n kube-system
All control plane nodes are in Ready status and all system components are running normally.
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
The above is the worker join command generated during master initialization; run it on each worker node.
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   44m     v1.16.4
master02   Ready    master   33m     v1.16.4
master03   Ready    master   23m     v1.16.4
work01     Ready    <none>   11m     v1.16.4
work02     Ready    <none>   7m50s   v1.16.4
work03     Ready    <none>   3m4s    v1.16.4
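The <none> in the workers' ROLES column is cosmetic; if you want it populated, you can add the role label yourself (my addition):
[root@master01 ~]# kubectl label node work01 node-role.kubernetes.io/worker=
[root@master01 ~]# kubectl label node work02 node-role.kubernetes.io/worker=
[root@master01 ~]# kubectl label node work03 node-role.kubernetes.io/worker=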
[root@client ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@client ~]# yum clean all
[root@client ~]# yum -y makecache
[root@client ~]# yum install -y kubectl-1.16.4
Keep the installed version the same as the cluster version.
[root@client ~]# yum -y install bash-completion
[root@client ~]# source /etc/profile.d/bash_completion.sh
[root@client ~]# mkdir -p /etc/kubernetes
[root@client ~]# scp 172.27.34.3:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@client ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@client ~]# source .bash_profile
[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile
[root@client ~]# kubectl get nodes
[root@client ~]# kubectl get cs
[root@client ~]# kubectl get po -o wide -n kube-system
Everything in this section is done on the client.
[root@client ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
If the connection times out, just retry a few times. recommended.yaml has been uploaded and can also be downloaded at the end of this article.
[root@client ~]# sed -i 's/kubernetesui/registry.cn-hangzhou.aliyuncs.com\/loong576/g' recommended.yaml
The default image registry is not reachable from here, so switch to the Aliyun mirror.
[root@client ~]# sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml
This configures a NodePort; the Dashboard is then reachable externally at https://NodeIp:NodePort, here on port 30001.
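Since in-place sed edits like this are easy to get wrong, it's worth eyeballing the result (my addition):
[root@client ~]# grep -B1 -A2 'targetPort: 8443' recommended.yaml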
[root@client ~]# cat >> recommended.yaml << EOF
---
# ------------------- dashboard-admin ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF
This appends a super-admin account used to log into the Dashboard.
[root@client ~]# kubectl apply -f recommended.yaml
[root@client ~]# kubectl get all -n kubernetes-dashboard
[root@client ~]# kubectl describe secrets -n kubernetes-dashboard dashboard-admin
The token is:
eyJhbGciOiJSUzI1NiIsImtpZCI6Ikd0NHZ5X3RHZW5pNDR6WEdldmlQUWlFM3IxbGM3aEIwWW1IRUdZU1ZKdWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNms1ZjYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjk1NDE0ODEtMTUyZS00YWUxLTg2OGUtN2JmMWU5NTg3MzNjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.LAe7N8Q6XR3d0W8w-r3ylOKOQHyMg5UDfGOdUkko_tqzUKUtxWQHRBQkowGYg9wDn-nU9E-rkdV9coPnsnEGjRSekWLIDkSVBPcjvEd0CVRxLcRxP6AaysRescHz689rfoujyVhB4JUfw1RFp085g7yiLbaoLP6kWZjpxtUhFu-MKh2NOp7w4rT66oFKFR-_5UbU3FoetAFBmHuZ935i5afs8WbNzIkM6u9YDIztMY3RYLm9Zs4KxgpAmqUmBSlXFZNW2qg6hxBqDijW_1bc0V7qJNt_GXzPs2Jm1trZR6UU1C2NAJVmYBu9dcHYtTCgxxkWKwR0Qd2bApEUIJ5Wug
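Instead of scrolling through the describe output, you can print just the token (my sketch; assumes the ServiceAccount created above is named dashboard-admin):
[root@client ~]# kubectl -n kubernetes-dashboard get secret \
  "$(kubectl -n kubernetes-dashboard get sa dashboard-admin -o jsonpath='{.secrets[0].name}')" \
  -o jsonpath='{.data.token}' | base64 -d; echo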
Visit https://VIP:30001 in Firefox.
Accept the certificate risk warning.
Log in with the token.
The Dashboard provides cluster management, workloads, service discovery and load balancing, storage, ConfigMaps, log viewing, and more.
Everything in this section is done on the client.
Find the node hosting the apiserver via the VIP, and the nodes running scheduler and controller-manager via their leader-elect annotations:
[root@master01 ~]# ip a|grep 130
inet 172.27.34.130/32 scope global ens160
[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master01_6caf8003-052f-451d-8dce-4516825213ad","leaseDurationSeconds":15,"acquireTime":"2020-01-02T09:36:23Z","renewTime":"2020-01-03T07:57:55Z","leaderTransitions":2}'
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master01_720d65f9-e425-4058-95d7-e5478ac951f7","leaseDurationSeconds":15,"acquireTime":"2020-01-02T09:36:20Z","renewTime":"2020-01-03T07:58:03Z","leaderTransitions":2}'
Component | Node |
---|---|
apiserver | master01 |
controller-manager | master01 |
scheduler | master01 |
[root@master01 ~]# init 0
The VIP floated to master02:
[root@master02 ~]# ip a|grep 130
inet 172.27.34.130/32 scope global ens160
controller-manager and scheduler migrated as well:
[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master02_b3353e8f-a02f-4322-bf17-2f596cd25ba5","leaseDurationSeconds":15,"acquireTime":"2020-01-03T08:04:42Z","renewTime":"2020-01-03T08:06:36Z","leaderTransitions":3}'
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master03_e0a2ec66-c415-44ae-871c-18c73258dc8f","leaseDurationSeconds":15,"acquireTime":"2020-01-03T08:04:56Z","renewTime":"2020-01-03T08:06:45Z","leaderTransitions":3}'
Component | Node |
---|---|
apiserver | master02 |
controller-manager | master02 |
scheduler | master03 |
Query the nodes:
[root@client ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
master01   NotReady   master   22h   v1.16.4
master02   Ready      master   22h   v1.16.4
master03   Ready      master   22h   v1.16.4
work01     Ready      <none>   22h   v1.16.4
work02     Ready      <none>   22h   v1.16.4
work03     Ready      <none>   22h   v1.16.4
master01 is in NotReady status.
Create pods:
[root@client ~]# more nginx-master.yaml
apiVersion: apps/v1            # the Kubernetes API version this manifest follows
kind: Deployment               # resource type to create: Deployment
metadata:                      # metadata for this resource
  name: nginx-master           # name of the Deployment
spec:                          # Deployment spec
  selector:
    matchLabels:
      app: nginx
  replicas: 3                  # run 3 replicas
  template:                    # Pod template
    metadata:                  # Pod metadata
      labels:                  # labels
        app: nginx             # label key/value: app=nginx
    spec:                      # Pod spec
      containers:
      - name: nginx            # container name
        image: nginx:latest    # image used to create the container
[root@client ~]# kubectl apply -f nginx-master.yaml
deployment.apps/nginx-master created
[root@client ~]# kubectl get po -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-master-75b7bfdb6b-lnsfh   1/1     Running   0          4m44s   10.244.5.6   work03   <none>           <none>
nginx-master-75b7bfdb6b-vxfg7   1/1     Running   0          4m44s   10.244.3.3   work01   <none>           <none>
nginx-master-75b7bfdb6b-wt9kc   1/1     Running   0          4m44s   10.244.4.5   work02   <none>           <none>
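As an extra smoke test (my addition), you can expose the deployment as a NodePort service and fetch the nginx welcome page through any node; <NodePort> below is a placeholder for the port the cluster assigns:
[root@client ~]# kubectl expose deployment nginx-master --port=80 --type=NodePort
[root@client ~]# kubectl get svc nginx-master    # note the assigned NodePort
[root@client ~]# curl -s http://172.27.34.93:<NodePort> | head -n 4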
When one control plane node goes down, the VIP drifts to another node and cluster functionality is unaffected.
Now shut down master02 as well (with master01 still down) and test whether the cluster can still serve requests.
[root@master02 ~]# init 0
[root@master03 ~]# ip a|grep 130
inet 172.27.34.130/32 scope global ens160
The VIP drifted to the only remaining control plane node, master03.
[root@client ~]# kubectl get nodes
Error from server: etcdserver: request timed out
[root@client ~]# kubectl get nodes
The connection to the server 172.27.34.130:6443 was refused - did you specify the right host or port?
The etcd cluster has lost quorum, so the whole k8s cluster can no longer serve requests.
For single-node k8s cluster deployment, see: Deploying a k8s (v1.14.2) cluster on CentOS 7.6
For HA k8s cluster deployment, see: Deploying a k8s v1.16.4 HA cluster with lvs+keepalived