下文給大家?guī)?a title="負(fù)載均衡" target="_blank" >負(fù)載均衡調(diào)度器部署及實(shí)驗(yàn)環(huán)境分享,希望能夠給大家在實(shí)際運(yùn)用中帶來一定的幫助,負(fù)載均衡涉及的東西比較多,理論也不多,網(wǎng)上有很多書籍,今天我們就用創(chuàng)新互聯(lián)在行業(yè)內(nèi)累計(jì)的經(jīng)驗(yàn)來做一個(gè)解答。
Contents:
1. Lab environment
2. Load balancer deployment

I. Lab environment:
Building on the multi-master cluster architecture deployed earlier, we add two scheduler cloud servers (nginx in this case) to provide load balancing:
Kubernetes binary cluster deployment (1) — etcd storage component and flannel network component deployment:
Kubernetes binary cluster deployment (2) — single-master cluster deployment + multi-master cluster deployment:
Server information

| Role | IP address |
| --- | --- |
| master01 | 192.168.109.138 |
| master02 | 192.168.109.230 |
| Scheduler 1 (nginx01) | 192.168.109.131 |
| Scheduler 2 (nginx02) | 192.168.109.132 |
| node01 | 192.168.109.133 |
| node02 | 192.168.109.137 |
| Virtual IP | 192.168.109.100 |
Two files need to be prepared in advance:

The first: keepalived.conf

```
! Configuration File for keepalived

global_defs {
   # notification email recipients
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender address for notifications
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/usr/local/nginx/sbin/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51    # VRRP route ID for this instance; must be unique per instance
    priority 100            # priority; set to 90 on the backup server
    advert_int 1            # VRRP advertisement (heartbeat) interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.188/24
    }
    track_script {
        check_nginx
    }
}

mkdir /usr/local/nginx/sbin/ -p
vim /usr/local/nginx/sbin/check_nginx.sh

count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    /etc/init.d/keepalived stop
fi

chmod +x /usr/local/nginx/sbin/check_nginx.sh
```

The second: nginx

```
cat > /etc/yum.repos.d/nginx.repo << EOF
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
EOF

stream {

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 10.0.0.3:6443;
        server 10.0.0.8:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
```
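The health-check script is the piece that ties nginx and keepalived together: if no nginx process is found, keepalived is stopped so the VIP can move to the other scheduler. A minimal sketch for trying the same count by hand (the echo lines are illustrative only, not part of the real script):

```bash
#!/usr/bin/env bash
# Reproduce the process count used by check_nginx.sh.
# "$$" is the PID of this shell, so egrep -v "grep|$$" drops both the grep
# process and this script's own ps entry from the count.
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
echo "nginx processes seen: ${count}"

if [ "$count" -eq 0 ]; then
    # In the real check script this is where keepalived is stopped,
    # releasing the VIP so the backup scheduler can take it over.
    echo "nginx appears to be down - keepalived would be stopped here"
fi
```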
II. Load balancer deployment
```
//First, turn off the firewall (do this on both schedulers):
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0

//Place the prepared files in the home directory:
[root@localhost ~]# ls
anaconda-ks.cfg  initial-setup-ks.cfg  keepalived.conf  nginx.sh  公共  模板  視頻  圖片  文檔  下載  音樂  桌面

//Set up the local yum repository:
[root@localhost ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0

[root@localhost ~]# yum list
[root@localhost ~]# yum install nginx -y      //install nginx

//Next, add the layer-4 (stream) forwarding:
[root@localhost ~]# vim /etc/nginx/nginx.conf
Add the following block:
stream {

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.109.138:6443;      //IP address of master01
        server 192.168.109.230:6443;      //IP address of master02
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

[root@localhost ~]# systemctl start nginx     //start the service

//Next, deploy the keepalived service:
[root@localhost ~]# yum install keepalived -y

//Modify the configuration file (nginx01 is the master):
[root@localhost ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite "/etc/keepalived/keepalived.conf"? yes
[root@localhost ~]# vim /etc/keepalived/keepalived.conf
//Edit it so that it reads:
! Configuration File for keepalived

global_defs {
   # notification email recipients
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender address for notifications
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    ##path to the check script, created below
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100                          ##priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.109.100/24                ##virtual IP address
    }
    track_script {
        check_nginx
    }
}

//On nginx02 (the backup), the configuration is:
! Configuration File for keepalived

global_defs {
   # notification email recipients
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender address for notifications
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    ##path to the check script, created below
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90                           ##lower priority than the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.109.100/24                ##virtual IP address
    }
    track_script {
        check_nginx
    }
}

//Create the check script:
[root@localhost ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

[root@localhost ~]# chmod +x /etc/nginx/check_nginx.sh    //make it executable
[root@localhost ~]# systemctl start keepalived.service    //start the service
[root@localhost ~]# ip a                                  //check the IP addresses
```
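Before moving on to the failover tests, a few quick checks confirm that the stream proxy and the VIP are in place. A hedged sketch using standard CentOS 7 tools; adjust the interface name and IPs to your environment:

```bash
# On the scheduler that is currently MASTER:
ss -lntp | grep 6443                       # nginx should be listening on 6443 (stream block)
ip addr show ens33 | grep 192.168.109.100  # the VIP should be bound to ens33

# From any machine in the lab: talk to the apiserver through the VIP.
# -k skips certificate verification; any HTTP response (even 401/403)
# shows that nginx is forwarding port 6443 to one of the masters.
curl -k https://192.168.109.100:6443/version
```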
2. Verifying the results

Verification 1: does the virtual (floating) IP fail over? (Is high availability working?)
1. The virtual IP currently sits on nginx01. To verify failover, run pkill nginx on lb01 (nginx01) to stop the nginx service, then run ip a on lb02 (nginx02) to check whether the address has drifted over.
2. Recovery: on nginx01, start the nginx service first and then the keepalived service, then check with ip a. The address drifts back to nginx01, and nginx02 no longer holds the virtual IP.
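The two steps above can also be run from a terminal; a rough sketch (the comments say which lab machine each block is meant to run on, and the interface/IP values are the ones from the table above):

```bash
# --- On nginx01 (current MASTER): simulate an nginx failure ---
pkill nginx                                   # check_nginx.sh sees 0 nginx processes and stops keepalived
ip addr show ens33 | grep 192.168.109.100     # should print nothing: the VIP has left this host

# --- On nginx02 (BACKUP): the VIP should have drifted over ---
ip addr show ens33 | grep 192.168.109.100

# --- Back on nginx01: recover, nginx first, then keepalived ---
systemctl start nginx
systemctl start keepalived
ip addr show ens33 | grep 192.168.109.100     # the higher priority (100) pulls the VIP back
```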
Verification 2: is load balancing working? (At this point the VIP is on lb02.)
1. Modify the home page of nginx01 (master):

```
[root@localhost ~]# vim /usr/share/nginx/html/index.html
Welcome to master nginx!
```

2. Modify the home page of nginx02 (backup):

```
[root@localhost ~]# vim /usr/share/nginx/html/index.html
Welcome to backup nginx!
```
3. Visit http://192.168.109.100/ in a browser.
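The same check can be done from the command line; a small sketch (the page body simply identifies which scheduler is answering):

```bash
# Whichever scheduler currently holds 192.168.109.100 serves its own index page,
# so the body ("master" vs "backup") tells you where the VIP is right now.
curl -s http://192.168.109.100/
```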
At this point, both load balancing and high availability are fully working.

3. Node deployment:
//Modify the node configuration files so that they all use the unified VIP (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig). In each file, change the server line to: server: https://192.168.109.100:6443, so everything points at the VIP (a scripted alternative is sketched at the end of this section).

```
[root@localhost cfg]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@localhost cfg]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@localhost cfg]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig

//Restart the services:
[root@localhost cfg]# systemctl restart kubelet.service
[root@localhost cfg]# systemctl restart kube-proxy.service

//Check the changes:
[root@localhost cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.109.100:6443
kubelet.kubeconfig:    server: https://192.168.109.100:6443
kube-proxy.kubeconfig:    server: https://192.168.109.100:6443
```

//Next, on scheduler 1, check nginx's k8s access log:

```
[root@localhost ~]# tail /var/log/nginx/k8s-access.log
192.168.109.131 192.168.109.138:6443 - [09/Feb/2020:13:14:45 +0800] 200 1122
192.168.109.131 192.168.109.230:6443 - [09/Feb/2020:13:14:45 +0800] 200 1121
192.168.109.132 192.168.109.138:6443 - [09/Feb/2020:13:18:14 +0800] 200 1120
192.168.109.132 192.168.109.230:6443 - [09/Feb/2020:13:18:14 +0800] 200 1121
```

You can see that requests are distributed to the two masters in round-robin fashion.

Next, test creating a Pod. On master01:

```
[root@localhost kubeconfig]# kubectl run nginx --image=nginx

//Check the status:
[root@localhost kubeconfig]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-zbhhr   1/1     Running   0          47s
```

The Pod has been created and is running.

*** Note on viewing logs ***:

```
[root@localhost kubeconfig]# kubectl logs nginx-dbddb74b8-zbhhr
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-zbhhr)
```

Viewing the logs fails at this point because of a permissions problem. The fix is to grant the permission:

```
[root@localhost kubeconfig]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
```

After this, viewing the logs no longer returns an error.

//Check the Pod network:

```
[root@localhost kubeconfig]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE              NOMINATED NODE
nginx-dbddb74b8-zbhhr   1/1     Running   0          7m11s   172.17.93.2   192.168.109.131
```

You can see that the Pod created on master01 has been scheduled onto node01. We can access it directly from the node that hosts that Pod network. On node01:

```
[root@localhost cfg]# curl 172.17.93.2
```
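For reference, the kubeconfig edits at the start of this section can also be scripted instead of done in vim. A minimal sketch, assuming the node keeps its kubeconfigs under /opt/kubernetes/cfg as in this lab:

```bash
cd /opt/kubernetes/cfg

# Repoint every kubeconfig from a single master to the VIP.
# The regex matches whatever apiserver address is currently configured.
sed -i 's#server: https://.*:6443#server: https://192.168.109.100:6443#' \
    bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig

# Restart the node components so they reconnect through the VIP
systemctl restart kubelet.service
systemctl restart kube-proxy.service

# Confirm all three files now reference the VIP
grep 'server:' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
```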
At this point, thanks to the flannel network component, the address 172.17.93.2 can be reached from node01 and node02 alike (for example from a browser on either node).
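Why this works: flannel installs a route on each node for the other nodes' Pod subnets. A quick, hedged check on node02 (the exact subnet and interface depend on your flannel configuration):

```bash
# On node02: flannel provides a route toward node01's Pod subnet (172.17.93.0/24 here)
ip route | grep 172.17

# So the Pod running on node01 is reachable directly by its Pod IP
curl -s 172.17.93.2 | head -n 4
```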
Since we just accessed the page, the corresponding log information can also be viewed on master01.
That concludes this walk-through of the load balancer deployment and its lab environment.