I. Lab environment
II. Single-master cluster deployment
III. Multi-master cluster deployment
I. Lab environment:
This builds on the environment deployed in the previous post: https://blog.51cto.com/14475876/2470049
II. Single-master cluster deployment
Single-master cluster architecture diagram:
The self-signed SSL certificates generated for this deployment (by the k8s-cert.sh script below) are:
ca.pem / ca-key.pem: the cluster CA, trusted by every component
server.pem / server-key.pem: presented by kube-apiserver
admin.pem / admin-key.pem: used by kubectl (the admin client)
kube-proxy.pem / kube-proxy-key.pem: used by kube-proxy on the nodes
1. First, understand the three core components that must be deployed on the Master:
kube-apiserver: the unified entry point of the cluster and the coordinator of all components; every create/delete/update/query and watch operation on object resources goes through the APIServer, which then persists the data to etcd.
kube-controller-manager: handles the routine background tasks of the cluster. Each resource has a corresponding controller, and controller-manager is responsible for managing those controllers.
kube-scheduler: selects a Node for each newly created Pod according to the scheduling algorithm. It can be deployed anywhere, on the same node as the other components or on a different one.
Workflow: configuration file -----> manage the component with systemd -----> start the service
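Each master component follows that same pattern: an options file under /opt/kubernetes/cfg, a systemd unit that loads it, then a start. The scripts in master.zip generate these files for the real components; the following is only a minimal hand-written sketch of the pattern, with placeholder flags rather than the actual apiserver options:

# Minimal sketch of the "config file -> systemd unit -> start" pattern.
# The real flags come from the apiserver.sh script in master.zip; these are placeholders.
cat > /opt/kubernetes/cfg/kube-apiserver <<'EOF'
KUBE_APISERVER_OPTS="--logtostderr=true --v=4"   # placeholder flags only
EOF

cat > /usr/lib/systemd/system/kube-apiserver.service <<'EOF'
[Unit]
Description=Kubernetes API Server

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now kube-apiserver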
—— Deployment begins:
Next, on the master, generate the apiserver certificates:
Upload the master.zip package downloaded on the host machine to /root/k8s/ and unpack it:

[root@localhost k8s]# unzip master.zip
[root@localhost k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@localhost k8s]# mkdir k8s-cert      // directory for the apiserver self-signed certificates
[root@localhost k8s]# cd k8s-cert/
[root@localhost k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF
...                                       # CA signing profile (omitted)
EOF

cat > ca-csr.json <<EOF
...                                       # CA certificate signing request (omitted)
EOF

cat > server-csr.json <<EOF
{
    ...                                   # beginning of the apiserver CSR (omitted)
      "192.168.109.138",   // server address (master)
      "192.168.109.137",   // second scheduler address (backup)
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
...                                       # admin (kubectl) CSR (omitted)
EOF

cat > kube-proxy-csr.json <<EOF
...                                       # kube-proxy CSR (omitted)
EOF
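The JSON bodies of ca-config.json, ca-csr.json and the admin/kube-proxy CSRs are not reproduced above. Purely as a reference, a typical cfssl CA profile and CA CSR for this kind of deployment look roughly like the sketch below; the expiry, profile name and name fields are assumptions, not necessarily the values used in the original k8s-cert.sh:

# Reference sketch only -- not the original k8s-cert.sh contents.
cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [ "signing", "key encipherment", "server auth", "client auth" ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System" }
  ]
}
EOF

# Generate the self-signed CA first; the other CSRs are then signed against it.
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Each remaining CSR (admin, kube-proxy) is signed with the same cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes pattern shown for server-csr.json above.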
2. Deployment on the nodes:
First, these are the three core components on a node:
kubelet: the master's agent on the node. It manages the lifecycle of containers on that machine, for example creating containers, mounting volumes into Pods, downloading secrets, and reporting container and node status; the kubelet turns each Pod into a set of containers.
kube-proxy: implements the Pod network proxy on the node, maintaining network rules and layer-4 load balancing (a quick verification sketch follows this list).
docker: the container runtime (already installed earlier).
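Since kube-proxy will run in ipvs mode in this deployment, one way to sanity-check a node once both agents are up, assuming ipvsadm is installed and the ip_vs kernel modules are loaded, is:

lsmod | grep ip_vs                               # confirm the ipvs kernel modules are loaded
ipvsadm -Ln                                      # list the virtual servers kube-proxy has programmed
systemctl status kubelet kube-proxy --no-pager   # both node agents should be active (running)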
—— Deployment begins:
// First, on the master, copy kubelet and kube-proxy over to the nodes:
[root@localhost ~]# cd k8s/kubernetes/server/bin/
[root@localhost bin]# ls
apiextensions-apiserver              kube-apiserver.docker_tag           kube-proxy
cloud-controller-manager             kube-apiserver.tar                  kube-proxy.docker_tag
cloud-controller-manager.docker_tag  kube-controller-manager             kube-proxy.tar
cloud-controller-manager.tar         kube-controller-manager.docker_tag  kube-scheduler
hyperkube                            kube-controller-manager.tar         kube-scheduler.docker_tag
kubeadm                              kubectl                             kube-scheduler.tar
kube-apiserver                       kubelet                             mounter
[root@localhost bin]# scp kubelet kube-proxy root@192.168.109.131:/opt/kubernetes/bin/
[root@localhost bin]# scp kubelet kube-proxy root@192.168.109.132:/opt/kubernetes/bin/

// On node01, upload node.zip from the host machine to /root and unpack it:
[root@localhost ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  node.zip   公共  視頻  文檔  音樂
flannel.sh       initial-setup-ks.cfg                README.md  模板  圖片  下載  桌面
[root@localhost ~]# unzip node.zip    // unpack to obtain kubelet.sh and proxy.sh
Archive:  node.zip
  inflating: proxy.sh
  inflating: kubelet.sh

———— Next, on the master:
[root@localhost k8s]# mkdir kubeconfig
[root@localhost k8s]# cd kubeconfig/
[root@localhost kubeconfig]# cat /opt/kubernetes/cfg/token.csv    // fetch the bootstrap token
1232eb0133309f6ccde54802cc0b3ebe,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@localhost kubeconfig]# vim kubeconfig
APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=1232eb0133309f6ccde54802cc0b3ebe \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

// Set the PATH environment variable (this can also be written into /etc/profile):
[root@localhost kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin/

// Check component health:
[root@localhost kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
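The token shown in token.csv above was created earlier, when kube-apiserver was set up (that step is not reproduced in this post). For reference only, a bootstrap token file in this format is usually generated along these lines; the exact command is an assumption, only the file path and column layout match what is used above:

# Assumed sketch of how a bootstrap token.csv is typically produced.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF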
// Generate the config files:
[root@localhost kubeconfig]# bash kubeconfig 192.168.109.138 /root/k8s/k8s-cert/
[root@localhost kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig

// Copy the config files to the nodes:
[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.109.131:/opt/kubernetes/cfg/
[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.109.132:/opt/kubernetes/cfg/

// Create the bootstrap role binding that grants permission to request certificate signing from the apiserver (critical):
[root@localhost kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

———— Next, on node01:
[root@localhost ~]# bash kubelet.sh 192.168.109.131

// Check that the kubelet service has started:
[root@localhost ~]# ps aux | grep kube

———— On the master:
// Check for the signing request from node01:
[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-M9Iv_3cKuOZaiKSvoQGIarJHOaK1S9FnRs6SGIXP9nk   5s    kubelet-bootstrap   Pending
(Pending means the request is waiting for the cluster to issue a certificate to this node)

// Approve the request and issue the certificate:
[root@localhost kubeconfig]# kubectl certificate approve node-csr-M9Iv_3cKuOZaiKSvoQGIarJHOaK1S9FnRs6SGIXP9nk
[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-M9Iv_3cKuOZaiKSvoQGIarJHOaK1S9FnRs6SGIXP9nk   7m7s   kubelet-bootstrap   Approved,Issued
(Approved,Issued means the node has been allowed to join the cluster)

// Check the cluster nodes; node01 has joined successfully:
[root@localhost kubeconfig]# kubectl get node
NAME              STATUS   ROLES    AGE    VERSION
192.168.109.131   Ready    <none>   3m8s   v1.12.3

———— On node01, start the proxy service:
[root@localhost ~]# bash proxy.sh 192.168.109.131
[root@localhost ~]# systemctl status kube-proxy.service    // confirm the service is running normally

———— Deploying node02:
To save time, copy node01's existing /opt/kubernetes directory to the other node and modify it there:
[root@localhost ~]# scp -r /opt/kubernetes/ root@192.168.109.132:/opt/

// Also copy the kubelet and kube-proxy service files to node02:
[root@localhost ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.109.132:/usr/lib/systemd/system/

—— The following steps are done on node02:
// First delete the copied certificates, because node02 will later request its own:
[root@localhost ~]# cd /opt/kubernetes/ssl/
[root@localhost ssl]# rm -rf *

// Modify the three configuration files: kubelet, kubelet.config and kube-proxy
[root@localhost ssl]# cd /opt/kubernetes/cfg/
[root@localhost cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.109.132 \   ## change to this node's own IP address
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@localhost cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.109.132   ## change to this node's own IP address
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

[root@localhost cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.109.132 \   ## change to this node's own IP address
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

// Start the services:
[root@localhost cfg]# systemctl start kubelet.service
[root@localhost cfg]# systemctl start kube-proxy.service

// As before, check the pending request on the master:
[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-M9Iv_3cKuOZaiKSvoQGIarJHOaK1S9FnRs6SGIXP9nk   29m     kubelet-bootstrap   Approved,Issued
node-csr-vOfkpLYSYqFtD__GgZZZiV7NU_WaqECDvBbFuGyckRc   2m21s   kubelet-bootstrap   Pending

// Approve it and issue the certificate, just as before:
[root@localhost kubeconfig]# kubectl certificate approve node-csr-vOfkpLYSYqFtD__GgZZZiV7NU_WaqECDvBbFuGyckRc

// Check the nodes in the cluster:
[root@localhost kubeconfig]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.109.131   Ready    <none>   34s   v1.12.3
192.168.109.132   Ready    <none>   25m   v1.12.3

At this point the single-master deployment is complete. Next comes the multi-master deployment.
III. Multi-master deployment:
Multi-master cluster architecture diagram:
On top of the single-master environment built above, we only need to deploy one more master, master02.
Role       IP address
master02   192.168.109.230

—— Deployment begins:
// First, stop the firewall on master02:
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0

// On master01, copy the kubernetes directory straight to master02:
[root@localhost kubeconfig]# scp -r /opt/kubernetes/ root@192.168.109.230:/opt

// Also copy master01's three component unit files: kube-apiserver.service, kube-controller-manager.service and kube-scheduler.service
[root@localhost kubeconfig]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.109.230:/usr/lib/systemd/system/

// Next, on master02, change the IP addresses in the kube-apiserver config file:
[root@localhost cfg]# pwd
/opt/kubernetes/cfg
[root@localhost cfg]# vim kube-apiserver
.
(some lines omitted)
.
--etcd-servers=https://192.168.109.138:2379,https://192.168.109.131:2379,https://192.168.109.132:2379 \
--bind-address=192.168.109.230 \        ## change to this host's own IP address
--secure-port=6443 \
--advertise-address=192.168.109.230 \   ## change to this host's own IP address
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
.
(some lines omitted)
.

// Copy master01's existing etcd certificates for master02 to use:
[root@localhost kubeconfig]# scp -r /opt/etcd/ root@192.168.109.230:/opt/

// Now start the three components on master02:
[root@localhost cfg]# systemctl start kube-apiserver.service
[root@localhost cfg]# systemctl start kube-controller-manager.service
[root@localhost cfg]# systemctl start kube-scheduler.service

// Add the environment variable:
[root@localhost cfg]# vim /etc/profile
Append at the end:
export PATH=$PATH:/opt/kubernetes/bin/
[root@localhost cfg]# source /etc/profile    // make the environment variable take effect

// Check the nodes from master02 (identical to what master01 reports):
[root@localhost cfg]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.109.131   Ready    <none>   44m   v1.12.3
192.168.109.132   Ready    <none>   70m   v1.12.3
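Note that the nodes still point their kubeconfigs at master01 only. The server certificate generated earlier already lists 192.168.109.137 as a second scheduler (backup) address, so the usual follow-up is to put a layer-4 load balancer in front of the two apiservers and have the nodes connect through it. A rough sketch, assuming nginx with the stream module on the load-balancer host and the addresses used in this post (this is not one of the steps above):

# Hypothetical sketch: nginx (stream module) balancing TCP 6443 across both masters.
cat >> /etc/nginx/nginx.conf <<'EOF'
stream {
    upstream k8s-apiserver {
        server 192.168.109.138:6443;   # master01
        server 192.168.109.230:6443;   # master02
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
EOF
nginx -t && systemctl restart nginx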