This article walks through building a highly available Kubernetes cluster from binary packages. The steps are described in detail and should serve as a practical reference.
1. System Overview
Operating system: CentOS 7.5
Kubernetes version: 1.12
Requirements: disable swap, SELinux, and iptables
Host details:
Topology:
Binary package download links:
etcd:
https://github.com/coreos/etcd/releases/tag/v3.2.12
flannel:
https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
k8s:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md
2. Self-Signed Etcd SSL Certificates
On master01:
# cat cfssl.sh
#!/bin/bash
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Generate the self-signed Etcd SSL certificates:
# cat cert-etcd.sh
cat > ca-config.json <

3. Deploying the Etcd Cluster
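The body of cert-etcd.sh is not shown in full above. Before deploying etcd, the CA and server certificates it produces are needed, so here is a sketch of a typical cfssl setup for that step. The JSON contents, the `www` profile name, and the host list (the three master addresses used throughout this article) are assumptions for illustration, not the author's exact files:

```shell
# Sketch of a typical cert-etcd.sh; JSON bodies and host IPs are assumed.
cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": ["192.168.247.161", "192.168.247.162", "192.168.247.163"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

# Generate the CA, then the server certificate signed by it
# (requires the cfssl tools installed in the previous step).
if command -v cfssl >/dev/null 2>&1; then
  cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
    -profile=www server-csr.json | cfssljson -bare server
fi
```

The resulting ca.pem, ca-key.pem, server.pem, and server-key.pem match the files listed in the directory output below.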
On master01, master02, and master03:
# mkdir -pv /opt/etcd/{bin,cfg,ssl}
# tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
# mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

On master01:
# cd cert-etcd/
[root@master01 cert-etcd]# ll
total 40
-rw-r--r-- 1 root root  287 Jan 11 15:50 ca-config.json
-rw-r--r-- 1 root root  956 Jan 11 15:50 ca.csr
-rw-r--r-- 1 root root  209 Jan 11 15:50 ca-csr.json
-rw------- 1 root root 1675 Jan 11 15:50 ca-key.pem
-rw-r--r-- 1 root root 1265 Jan 11 15:50 ca.pem
-rw-r--r-- 1 root root 1013 Jan 11 15:50 server.csr
-rw-r--r-- 1 root root  296 Jan 11 15:50 server-csr.json
-rw------- 1 root root 1679 Jan 11 15:50 server-key.pem
-rw-r--r-- 1 root root 1338 Jan 11 15:50 server.pem
-rwxr-xr-x 1 root root 1076 Jan 11 15:50 ssl-etcd.sh
[root@master01 cert-etcd]# cp *.pem /opt/etcd/ssl/
# scp -r /opt/etcd master02:/opt/
# scp -r /opt/etcd master03:/opt/

On master01, master02, and master03 respectively:
# cat etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd
cat <

If the service fails to start, check /var/log/messages for errors.
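The heredoc in etcd.sh is likewise cut short above. As a hedged reconstruction, a script of this shape typically renders an etcd environment file from its three arguments; every value below is an assumption consistent with the directory layout and certificate paths created earlier, not the author's exact script:

```shell
#!/bin/bash
# Hedged sketch of the configuration etcd.sh typically generates.
# WORK_DIR defaults to a local demo directory here; on a real master it
# would be /opt/etcd, matching the layout created earlier.
ETCD_NAME=${1:-etcd01}
ETCD_IP=${2:-192.168.247.161}
ETCD_CLUSTER=${3:-etcd02=https://192.168.247.162:2380,etcd03=https://192.168.247.163:2380}
WORK_DIR=${WORK_DIR:-./etcd-demo}

mkdir -p "$WORK_DIR/cfg"
cat > "$WORK_DIR/cfg/etcd" <<EOF
#[Member]
ETCD_NAME="$ETCD_NAME"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$ETCD_IP:2380"
ETCD_LISTEN_CLIENT_URLS="https://$ETCD_IP:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$ETCD_IP:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$ETCD_IP:2379"
ETCD_INITIAL_CLUSTER="$ETCD_NAME=https://$ETCD_IP:2380,$ETCD_CLUSTER"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
# A matching systemd unit would load this file via EnvironmentFile= and
# start /opt/etcd/bin/etcd with TLS flags pointing at /opt/etcd/ssl.
```

Run once per master with that node's name and IP, e.g. `./etcd.sh etcd01 192.168.247.161 etcd02=https://192.168.247.162:2380,etcd03=https://192.168.247.163:2380`.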
4. Installing Docker on the Nodes
These steps can be wrapped in a script:
# cat docker.sh
yum remove -y docker docker-common docker-selinux docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum install -y docker-ce
systemctl enable docker
systemctl start docker
docker version

If image pulls are slow, configure a Docker registry mirror such as the accelerator provided by DaoCloud.
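One common way to configure such an accelerator (an assumption for illustration; the original does not show the file, and the mirror URL below is a placeholder to replace with the address your accelerator assigns) is a registry-mirrors entry in Docker's daemon.json:

```shell
# Hedged sketch: point dockerd at a registry mirror via daemon.json.
# DOCKER_ETC defaults to a demo directory so the snippet runs anywhere;
# on a real node set it to /etc/docker. The URL is a placeholder.
DOCKER_ETC=${DOCKER_ETC:-./docker-demo}
mkdir -p "$DOCKER_ETC"
cat > "$DOCKER_ETC/daemon.json" <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
# Apply on a real node with:
#   systemctl daemon-reload && systemctl restart docker
```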
5. Deploying the Flannel Network
On master01:
# pwd
/opt/etcd/ssl
# /opt/etcd/bin/etcdctl \
  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379" \
  set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

On node01:
# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# mkdir -pv /opt/kubernetes/{bin,cfg,ssl}
# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

# cat /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

Copy /opt/etcd/ssl/* from a master node to the node:
[root@master01 ~]# scp -r /opt/etcd/ssl node01:/opt/etcd/

# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Restart flanneld and Docker:
# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl restart docker
# systemctl enable docker

# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.12.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.12.1/24 --ip-masq=false --mtu=1450"

# ip a
5: docker0:

Copy the binaries and configuration files to node02:
# scp -r /opt/kubernetes node02:/opt/
# cd /usr/lib/systemd/system/
# scp flanneld.service docker.service node02:/usr/lib/systemd/system/
# scp -r /opt/etcd/ssl/ node02:/opt/etcd/

On node02:
# mkdir /opt/etcd
# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl restart docker

# ip a
5: docker0:

Network test:
[root@node02 opt]# ping 172.17.12.1
PING 172.17.12.1 (172.17.12.1) 56(84) bytes of data.
64 bytes from 172.17.12.1: icmp_seq=1 ttl=64 time=1.07 ms
64 bytes from 172.17.12.1: icmp_seq=2 ttl=64 time=0.300 ms

[root@node01 system]# ping 172.17.16.1
PING 172.17.16.1 (172.17.16.1) 56(84) bytes of data.
64 bytes from 172.17.16.1: icmp_seq=1 ttl=64 time=1.13 ms

6. Self-Signed API Server SSL Certificates
On master01:
# cat cert-k8s.sh
# Create the CA certificate
cat > ca-config.json <

7. Deploying the Master Components
On master01, master02, and master03:
# mkdir -pv /opt/kubernetes/{bin,cfg,ssl}
# tar zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin/
# pwd
/root/cert-k8s
# cp *.pem /opt/kubernetes/ssl/
# head -c 16 /dev/urandom |od -An -t x |tr -d ' '
1c96cf8a12d4555a52e89bf3925a5c87
# cat /opt/kubernetes/cfg/token.csv
1c96cf8a12d4555a52e89bf3925a5c87,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

1) kube-apiserver:
# cat api-server.sh
#!/bin/bash
# example: ./api-server.sh 192.168.247.161 https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379
MASTER_IP=$1
ETCD_SERVERS=$2
cat <

2) kube-scheduler:
# cat scheduler.sh
cat <

Add environment variables:
K8S_HOME=/opt/kubernetes
PATH=$K8S_HOME/bin:$PATH

[root@master01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
[root@master02 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
[root@master03 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

8. Generating the Node kubeconfig Files
[root@master01 ~]# scp kubernetes/server/bin/{kubelet,kube-proxy} node01:/opt/kubernetes/bin/
[root@master01 ~]# scp kubernetes/server/bin/{kubelet,kube-proxy} node02:/opt/kubernetes/bin/

On master01:
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Still on master01:
cat kubeconfig.sh
# Create the kubelet bootstrapping kubeconfig
APISERVER=$1
SSL_DIR=$2
export BOOTSTRAP_TOKEN=`cat /opt/kubernetes/cfg/token.csv |awk -F',' '{print $1}'`
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# ./kubeconfig.sh 192.168.247.160 /opt/kubernetes/ssl
# ll
total 16
-rw------- 1 root root 2169 Jan 12 08:09 bootstrap.kubeconfig
-rwxr-xr-x 1 root root 1419 Jan 12 08:07 kubeconfig.sh
-rw------- 1 root root 6271 Jan 12 08:09 kube-proxy.kubeconfig
# scp bootstrap.kubeconfig kube-proxy.kubeconfig node01:/opt/kubernetes/cfg/
# scp bootstrap.kubeconfig kube-proxy.kubeconfig node02:/opt/kubernetes/cfg/

9. Deploying the Node Components
On node01 and node02:
1) Deploy the kubelet:
cat kubelet.sh
#!/bin/bash
NODE_IP=$1
cat <

2) Deploy kube-proxy:
cat kube-proxy.sh
#!/bin/bash
NODE_IP=$1
cat <

10. Installing Nginx
Nginx provides Layer 4 (TCP) load balancing in front of the API servers.
# cat nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

# yum install nginx

1) Configuration on LB01 and LB02:
Add the following to the Nginx configuration file:
# cat /etc/nginx/nginx.conf
stream {
    log_format main "$remote_addr $upstream_addr $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.247.161:6443;
        server 192.168.247.162:6443;
        server 192.168.247.163:6443;
    }
    server {
        listen 0.0.0.0:6443;
        proxy_pass k8s-apiserver;
    }
}

11. Installing Keepalived
# yum install keepalived
# yum install libnl3-devel ipset-devel

# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
# chmod 755 check_nginx.sh

LB01 configuration:
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.247.160/24
    }
    track_script {
        check_nginx
    }
}

LB02 configuration:
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.247.160/24
    }
    track_script {
        check_nginx
    }
}

# systemctl enable nginx
# systemctl start nginx
# systemctl enable keepalived
# systemctl start keepalived

12. Node Discovery
# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-gvRm9pzQJCj4cp_hGYp5qwW93LLdAbVPtz7AaztlGv8   17m   kubelet-bootstrap   Pending
node-csr-luowueA4U43ca96d-Ff64X7o8p9BW6MGIxWfASUPukE   20m   kubelet-bootstrap   Pending
# kubectl certificate approve node-csr-gvRm9pzQJCj4cp_hGYp5qwW93LLdAbVPtz7AaztlGv8
certificatesigningrequest.certificates.k8s.io/node-csr-gvRm9pzQJCj4cp_hGYp5qwW93LLdAbVPtz7AaztlGv8 approved
# kubectl certificate approve node-csr-luowueA4U43ca96d-Ff64X7o8p9BW6MGIxWfASUPukE
certificatesigningrequest.certificates.k8s.io/node-csr-luowueA4U43ca96d-Ff64X7o8p9BW6MGIxWfASUPukE approved
# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.247.171   Ready

13. Running a Test Workload
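A side note on the node discovery step just shown: it approves each CSR by name, which gets tedious on larger clusters. The sketch below batches the approvals; the `pending_csrs` helper is a hypothetical addition, not part of the original procedure:

```shell
# Hedged sketch: approve every Pending CSR in one pass instead of one
# at a time. pending_csrs is a hypothetical helper that pulls the NAME
# column from `kubectl get csr` lines whose CONDITION is Pending.
pending_csrs() {
  awk '/Pending/ {print $1}'
}

# Guarded so the snippet is a no-op where kubectl is unavailable.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get csr --no-headers | pending_csrs | while read -r csr; do
    kubectl certificate approve "$csr"
  done
fi
```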
# kubectl run nginx --image=nginx --replicas=3
# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE
nginx-dbddb74b8-dkhcw   1/1     Running   0          38m   172.17.35.2   192.168.247.172

Access from a browser:
http://192.168.247.171:48363
http://192.168.247.172:48363
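The URLs above reach the pods through a NodePort, which implies the deployment was exposed as a Service, a step not shown in the original. A hedged sketch of how that is typically done (note that 48363 lies outside the default 30000-32767 NodePort range, so the cluster presumably runs the apiserver with a custom --service-node-port-range):

```shell
# Hedged sketch: expose the test deployment as a NodePort service and
# read back the assigned port. Guarded so it is a no-op without kubectl.
if command -v kubectl >/dev/null 2>&1; then
  kubectl expose deployment nginx --port=80 --type=NodePort
  kubectl get svc nginx   # the PORT(S) column shows 80:<nodeport>/TCP
fi
```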
That covers building a highly available Kubernetes cluster from binary packages. Thanks for reading, and I hope the walkthrough proves useful.