Building a K8s Cluster: Kubernetes 1.11.3
1.1 Lab architecture:
Kubernetes roles
node1: master 10.192.44.129
node2: node 10.192.44.127
node3: node 10.192.44.126
etcd cluster
node1: etcd 10.192.44.129
node2: etcd 10.192.44.127
node3: etcd 10.192.44.126
Harbor registry
redhat128.example.com
10.192.44.128
2. Installation
2.1 Configure system parameters (on every node):
2.1.1 Disable SELinux (setenforce 0 takes effect immediately; the sed persists the change across reboots)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
setenforce 0
2.1.2 Turn off swap (swapoff -a is immediate; to make it permanent, comment out the swap line in /etc/fstab)
swapoff -a
2.1.3 Enable forwarding
iptables -P FORWARD ACCEPT
2.1.3 配置轉發(fā)相關參數(shù),否則可能會出錯
cat <
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system
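To confirm the values actually took effect (if the net.bridge keys are reported as missing, the bridge module is not loaded yet; modprobe br_netfilter fixes that):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables vm.swappiness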
2.1.5 Configure /etc/hosts
10.192.44.126 node3
10.192.44.127 node2
10.192.44.128 redhat128
10.192.44.129 node1
2.1.6 Install Docker; see my earlier blog post.
2.1.7 Time synchronization
yum install ntpdate -y && ntpdate 0.asia.pool.ntp.org
3.創(chuàng)建TLS證書和秘鑰(master節(jié)點)
3.1 生成的證書文件如下:
ca-key.pem         # CA private key
ca.pem             # CA (root) certificate
kubernetes-key.pem # cluster private key
kubernetes.pem     # cluster certificate
kube-proxy.pem     # proxy certificate - used by nodes for authentication
kube-proxy-key.pem # proxy private key - used by nodes for authentication
admin.pem          # admin certificate - mainly for kubectl authentication
admin-key.pem      # admin private key - mainly for kubectl authentication
Background:
TLS: TLS encrypts the traffic and prevents eavesdropping. Moreover, a client whose certificate is not trusted cannot even establish a connection to the apiserver, let alone request anything from it.
RBAC: RBAC defines which APIs a user or group (the subject) is allowed to call. Combined with TLS client certificates, the apiserver reads the certificate's CN field as the user name and the O field as the group.
Summary: to talk to the apiserver you must present a certificate signed by the apiserver's CA; that establishes trust and allows the TLS connection. In addition, the certificate's CN and O fields supply the user and group that RBAC authorizes against.
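As a concrete illustration of the CN/O mapping, once the admin certificate (generated in section 3.9 below) exists, you can print its subject and see exactly what the apiserver will extract (the expected output assumes the CSR values used in this guide):
openssl x509 -noout -subject -in admin.pem
# subject= /C=CN/ST=GuangDong/L=ShenZhen/O=system:masters/OU=System/CN=admin
# CN=admin becomes the user and O=system:masters the group; the built-in
# cluster-admin ClusterRoleBinding already grants that group full access.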
3.2 Download and install CFSSL, an HTTP API toolkit for signing, verifying and bundling TLS certificates (master node)
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssl-certinfo /usr/local/bin/cfssljson
3.3創(chuàng)建CA(Certificate Authority)(master節(jié)點)
mkdir /root/ssl
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
# 根據(jù)config.json文件的格式創(chuàng)建如下的ca-config.json文件
# 過期時間設置成了 87600h
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
Notes:
ca-config.json: multiple profiles can be defined with different expiry times and usages; a specific profile is selected later when signing a certificate.
signing: the certificate may be used to sign other certificates; the generated ca.pem will carry CA=TRUE.
server auth: a client may use this CA to verify certificates presented by servers.
client auth: a server may use this CA to verify certificates presented by clients.
3.4 Create the CA signing request
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
EOF
Notes:
"CN": Common Name; kube-apiserver extracts this field from a client certificate as the request's user name (User Name).
"O": Organization; kube-apiserver extracts this field as the group (Group) the user belongs to.
3.5 Generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
3.6 Create the kubernetes certificate signing request
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.192.44.129",
    "10.192.44.128",
    "10.192.44.126",
    "10.192.44.127",
    "10.254.0.1",
    "*.kubernetes.master",
    "localhost",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Notes:
This certificate is currently dedicated to the apiserver. The *.kubernetes.master entry was added for an internal private DNS zone and can be removed. Many people ask whether the kubernetes* names can be dropped: they cannot. Once the cluster is up, a Service named kubernetes is created in the default namespace, and some components talk to the API through that Service; if the certificate does not cover those names the connection may fail. The other kubernetes.* entries serve the same purpose.
hosts defines the authorization scope: a node or service outside this list that uses the certificate will get a certificate-mismatch error.
10.254.0.1 is the first IP of the service-cluster-ip-range configured on kube-apiserver.
3.7 Generate the kubernetes certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
3.8 Create the admin certificate request
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
3.9 Generate the admin certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
Notes:
The admin certificate will later be used to generate the administrator's kubeconfig file. RBAC is now the recommended way to control access to Kubernetes; Kubernetes takes the certificate's CN field as the User and the O field as the Group.
3.10 Create the kube-proxy certificate request
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
3.11 Generate the kube-proxy client certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
3.12 Inspect a certificate
openssl x509 -noout -text -in kubernetes.pem
3.13 Distribute the certificates
Copy the generated certificates and keys (the .pem files) to /etc/kubernetes/ssl on every machine:
mkdir -p /etc/kubernetes/ssl    # run on every node first
scp *.pem node2:/etc/kubernetes/ssl
scp *.pem node3:/etc/kubernetes/ssl
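Before distributing the files, an optional sanity check that every leaf certificate actually chains back to the CA just created:
openssl verify -CAfile ca.pem kubernetes.pem admin.pem kube-proxy.pem
# kubernetes.pem: OK
# admin.pem: OK
# kube-proxy.pem: OK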
4. Create kubeconfig files (master node)
4.1 Generate the token file
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
export KUBE_APISERVER="https://10.192.44.129:6443"
echo $BOOTSTRAP_TOKEN
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:bootstrappers"
EOF
cp token.csv /etc/kubernetes/
Note: no, the system:bootstrappers group is not a typo; see the official documentation: https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/
4.2 Create the kubelet bootstrap kubeconfig file
A kubeconfig is essentially a permissions configuration file: it is cluster-level access control for Kubernetes. If you do not pass --kubeconfig=xx.kubeconfig, the default configuration is read from ~/.kube/config. You can see the same convention with kubeadm, which asks you to copy the generated kubeconfig to ~/.kube/config.
cd /etc/kubernetes/ssl
4.2.1 Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
4.2.2 Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
4.2.3 Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
4.2.4 Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
4.3 Create the kube-proxy kubeconfig file
4.3.1 Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
4.3.2 Set client credentials
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
4.3.3 Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
4.3.4 Use the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
4.4 Distribute the kubeconfig files
scp bootstrap.kubeconfig kube-proxy.kubeconfig node2:/etc/kubernetes/
scp bootstrap.kubeconfig kube-proxy.kubeconfig node3:/etc/kubernetes/
4.5 Create the admin kubeconfig file
4.5.1 Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=admin.conf
4.5.2 Set client credentials
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem \
  --kubeconfig=admin.conf
4.5.3 Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=admin.conf
4.5.4 Use the default context
kubectl config use-context default --kubeconfig=admin.conf
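A quick sanity check on any of the generated files: kubectl can dump a kubeconfig, showing the embedded CA data and credentials as redacted placeholders, which confirms --embed-certs worked:
kubectl config view --kubeconfig=bootstrap.kubeconfig
# the certificate-authority-data and the bootstrap token appear redacted
# rather than in full; the cluster server URL should be https://10.192.44.129:6443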
4.6 Create the audit policy file
cat > audit-policy.yaml <<EOF
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF
cp audit-policy.yaml /etc/kubernetes/    # the apiserver flag in 6.2.5 expects it there
4.7 Copy files:
#cp ~/.kube/config /etc/kubernetes/kubelet.kubeconfig (# I only needed this step because adding a node failed at the time; if you hit no such problem, skip it, and likewise the kubelet.kubeconfig below)
scp /etc/kubernetes/{kubelet.kubeconfig,bootstrap.kubeconfig,kube-proxy.kubeconfig} node2:/etc/kubernetes/
scp /etc/kubernetes/{kubelet.kubeconfig,bootstrap.kubeconfig,kube-proxy.kubeconfig} node3:/etc/kubernetes/
5. Create the etcd cluster
5.1 Create the etcd systemd service (every node)
cat > /usr/lib/systemd/system/etcd.service << 'EOF'
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster infra1=https://10.192.44.129:2380,infra2=https://10.192.44.127:2380,infra3=https://10.192.44.126:2380 \
  --initial-cluster-state new \
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Notes:
systemd is what configures, manages and drives the service.
The "-" in "EnvironmentFile=-" suppresses errors: if the file is missing, the unit still runs the rest of its commands.
The heredoc delimiter is quoted ('EOF') so the ${ETCD_*} references survive into the unit file for systemd to expand, instead of being expanded (to nothing) by the shell.
5.2 Edit the configuration file (etcd1 shown here; replace the IP addresses for etcd2 and etcd3)
mkdir -p /etc/etcd
cat > /etc/etcd/etcd.conf << EOF
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://10.192.44.129:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.192.44.129:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.192.44.129:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://10.192.44.129:2379"
EOF
5.3 Start the etcd servers; remember to create /var/lib/etcd first.
mkdir /var/lib/etcd
systemctl enable etcd && systemctl start etcd
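Before building the master on top of it, it is worth confirming that all three members joined (same TLS flags as the other etcdctl calls in this guide):
etcdctl --endpoints=https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health
# expect three "member ... is healthy" lines followed by "cluster is healthy"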
6. Deploy the master node (the release tarball may need to be fetched and unpacked manually)
6.1 Download the Kubernetes binaries (v1.11.3)
wget https://github.com/kubernetes/kubernetes/releases/download/v1.11.3/kubernetes.tar.gz
tar -xzvf kubernetes.tar.gz
cd kubernetes
./cluster/get-kube-binaries.sh    # if this fails, download and unpack manually
cd server/
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/kube-apiserver /usr/local/bin/kube-apiserver
cp kubernetes/server/bin/kube-controller-manager /usr/local/bin/kube-controller-manager
cp kubernetes/server/bin/kube-scheduler /usr/local/bin/kube-scheduler
chmod +x /usr/local/bin/{kube-apiserver,kube-controller-manager,kube-scheduler}
6.2 Configure systemd services for kube-apiserver, kube-controller-manager and kube-scheduler
6.2.1 Create kube-apiserver.service
cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
6.2.2 Create kube-controller-manager.service
cat > /usr/lib/systemd/system/kube-controller-manager.service << 'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
6.2.3 Create kube-scheduler.service
cat > /usr/lib/systemd/system/kube-scheduler.service << 'EOF'
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
6.2.4 Edit /etc/kubernetes/config
cat > /etc/kubernetes/config << EOF
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://test-001.jimmysong.io:8080"
KUBE_MASTER="--master=http://10.192.44.129:8080"
EOF
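If you want to double-check how a unit and its environment files fit together before starting anything, systemd can print the installed unit file; each $VARIABLE in ExecStart is filled in at start time from /etc/kubernetes/config plus the per-component file:
systemctl cat kube-apiserver.service
# shows the [Service] section with both EnvironmentFile= lines and the ExecStart variables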
6.2.5 Edit the apiserver configuration file
cat > /etc/kubernetes/apiserver << EOF
###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=test-001.jimmysong.io"
KUBE_API_ADDRESS="--advertise-address=10.192.44.129 --bind-address=10.192.44.129 --insecure-bind-address=10.192.44.129"
#
## The port on the local server to listen on.
KUBE_API_PORT="--secure-port=6443"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction"
#
## Add your own!
KUBE_API_ARGS="--anonymous-auth=false \
    --authorization-mode=Node,RBAC \
    --kubelet-https=true \
    --kubelet-timeout=3s \
    --enable-bootstrap-token-auth \
    --enable-garbage-collector \
    --enable-logs-handler \
    --token-auth-file=/etc/kubernetes/token.csv \
    --service-node-port-range=30000-32767 \
    --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    --client-ca-file=/etc/kubernetes/ssl/ca.pem \
    --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
    --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
    --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
    --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
    --etcd-compaction-interval=5m0s \
    --etcd-count-metric-poll-period=1m0s \
    --enable-swagger-ui=true \
    --apiserver-count=3 \
    --log-flush-frequency=5s \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=/var/lib/audit.log \
    --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
    --storage-backend=etcd3 \
    --event-ttl=1h"
EOF
6.2.6 Edit the controller-manager configuration file
cat > /etc/kubernetes/controller-manager << EOF
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"
EOF
6.2.7 Edit the scheduler configuration file
cat > /etc/kubernetes/scheduler << EOF
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1 --algorithm-provider=DefaultProvider"
EOF
6.2.8 Start the services
systemctl daemon-reload
systemctl enable kube-apiserver kube-controller-manager kube-scheduler
systemctl start kube-apiserver kube-controller-manager kube-scheduler
6.2.9 Verify the master components
kubectl get componentstatuses
Expected output:
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
6.2.10 kubectl command completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
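Beyond componentstatuses, the apiserver health endpoints can be probed directly; a small sketch using the admin client certificate generated earlier (the insecure 8080 port answers without credentials because it bypasses authentication):
curl -s http://10.192.44.129:8080/healthz
# ok
curl -s --cacert /etc/kubernetes/ssl/ca.pem \
  --cert /etc/kubernetes/ssl/admin.pem \
  --key /etc/kubernetes/ssl/admin-key.pem \
  https://10.192.44.129:6443/healthz
# ok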
7. Install the flannel network plugin
7.1 Install flannel via yum (every node)
yum install -y flannel
7.2 Configure the service file (every node)
cat > /usr/lib/systemd/system/flanneld.service << 'EOF'
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
Note: flanneld writes its lease to /run/flannel/subnet.env; mk-docker-opts.sh turns that into docker options, written here to /run/flannel/docker. Docker will consume that file later.
7.3 Create the flanneld configuration file (every node)
cat > /etc/sysconfig/flanneld << EOF
# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
EOF
7.4 Create the network configuration in etcd (host-gw mode; this only needs to be written once, since the data is shared by all nodes)
etcdctl --endpoints=https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network
etcdctl --endpoints=https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
7.5 Start flannel (every node)
systemctl daemon-reload
systemctl enable flanneld && systemctl start flanneld
7.6 Inspect the etcd content (any single node is enough, since the data is replicated)
etcdctl --endpoints=https://10.192.44.129:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  ls /kube-centos/network/subnets
etcdctl --endpoints=https://10.192.44.129:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/config
7.7 Add the environment file generated by flannel to docker's systemd unit (every node)
vim /usr/lib/systemd/system/docker.service
# add under [Service]:
EnvironmentFile=-/run/flannel/docker
7.8 Change the dockerd start options so it picks up the flannel bip/mtu settings (every node)
vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd \
  $DOCKER_OPT_BIP \
  $DOCKER_OPT_IPMASQ \
  $DOCKER_OPT_MTU \
  --log-driver=json-file
systemctl daemon-reload && systemctl restart docker
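To verify the flannel/docker wiring on a node: flanneld records the node's lease in /run/flannel/subnet.env, docker0 should take an address inside that subnet, and with the host-gw backend each peer node appears as a plain route (the subnet values and interface name below are illustrative; every node gets its own /24):
cat /run/flannel/subnet.env
# FLANNEL_NETWORK=172.30.0.0/16
# FLANNEL_SUBNET=172.30.14.1/24
ip route | grep 172.30
# 172.30.46.0/24 via 10.192.44.127 dev ens33   <- route to another node's subnet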
8. Deploy the worker nodes
8.1 TLS bootstrapping configuration (master node)
cd /etc/kubernetes
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
kubectl create clusterrolebinding kubelet-nodes \
  --clusterrole=system:node \
  --group=system:nodes
Notes:
When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper cluster role; only then does the kubelet have permission to create certificate signing requests.
After the kubelet passes authentication, it sends a register-node request to kube-apiserver, so the system:node cluster role must also be bound to the system:nodes group before node registration can succeed.
8.2 Download the kubelet and kube-proxy binaries (every node)
wget https://github.com/kubernetes/kubernetes/releases/download/v1.11.3/kubernetes.tar.gz
tar -xzvf kubernetes.tar.gz
cd kubernetes
./cluster/get-kube-binaries.sh    # if this fails, download and unpack manually
cd server/
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/kubelet /usr/local/bin/kubelet
cp kubernetes/server/bin/kube-proxy /usr/local/bin/kube-proxy
chmod +x /usr/local/bin/{kubelet,kube-proxy}
8.3 Configure systemd services for kubelet and kube-proxy
8.3.1 Create kubelet.service
mkdir -p /var/lib/kubelet    # the unit's WorkingDirectory must exist
cat > /usr/lib/systemd/system/kubelet.service << 'EOF'
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
8.3.2 Create kube-proxy.service
cat > /usr/lib/systemd/system/kube-proxy.service << 'EOF'
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
8.3.3 Create the shared config file (every node)
cd /etc/kubernetes
cat > /etc/kubernetes/config << EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=2"
EOF
8.3.4 Create the kubelet config file (master)
cat > /etc/kubernetes/kubelet << EOF
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=10.192.44.129"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=master"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"
#
## Add your own!
KUBELET_ARGS="--cluster-dns=10.254.0.2 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local. --allow-privileged=true --serialize-image-pulls=false --fail-swap-on=false --log-dir=/var/log/kubernetes/kubelet"
EOF
8.3.5 Create the kubelet config file (node2); identical to 8.3.4 except for these two lines:
KUBELET_ADDRESS="--address=10.192.44.127"
KUBELET_HOSTNAME="--hostname-override=node2"
8.3.6 Create the kubelet config file (node3); identical to 8.3.4 except for these two lines:
KUBELET_ADDRESS="--address=10.192.44.126"
KUBELET_HOSTNAME="--hostname-override=node3"
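Since the three kubelet files differ only in the bind address and hostname-override, a small sketch can push variants of the master's file (from 8.3.4) out to the other nodes; this assumes passwordless ssh as root and the name:IP pairs used in this guide:
for pair in node2:10.192.44.127 node3:10.192.44.126; do
  name=${pair%%:*}; ip=${pair##*:}
  # rewrite the address and hostname-override, then stream the result to the node
  sed -e "s/10.192.44.129/${ip}/" \
      -e "s/hostname-override=master/hostname-override=${name}/" \
      /etc/kubernetes/kubelet | ssh "${name}" 'cat > /etc/kubernetes/kubelet'
done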
8.3.7 Create the kube-proxy config file (master)
cat > /etc/kubernetes/proxy << EOF
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=10.192.44.129 --hostname-override=master --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16 --log-dir=/var/log/kubernetes/proxy"
EOF
8.3.8 Create the kube-proxy config files (node2 and node3); identical to 8.3.7 except for the bind address and hostname:
node2: --bind-address=10.192.44.127 --hostname-override=node2
node3: --bind-address=10.192.44.126 --hostname-override=node3
8.3.9 Start kubelet
systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
8.3.10 Start kube-proxy
systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy
8.3.11 List the certificate signing requests (each node requests one from the apiserver automatically)
kubectl get csr
8.3.12 Approve the requests on the master, then check their status
kubectl certificate approve node-csr-Yiiv675wUCvQl3HH11jDr0cC9p3kbrXWrxvG3EjWGoE
kubectl describe csr node-csr-Yiiv675wUCvQl3HH11jDr0cC9p3kbrXWrxvG3EjWGoE
An approved request looks like this:
kubectl describe csr node-csr-hsBS9OyhOa8rK_Q48ee81giH17t6Nk4FL9IRWRt4ygw
Name:               node-csr-hsBS9OyhOa8rK_Q48ee81giH17t6Nk4FL9IRWRt4ygw
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Thu, 22 Nov 2018 20:19:09 +0800
Requesting User:    kubelet-bootstrap
Status:             Approved,Issued
Subject:
  Common Name:    system:node:node3
  Serial Number:
  Organization:   system:nodes
8.3.13 Check node status
kubectl get nodes
8.3.14 Create a test deployment
vim deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
kubectl create -f deploy.yaml
kubectl scale deployment nginx-deployment --replicas=4
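After creating and scaling the deployment, the pods should be spread across the registered nodes with addresses from the flannel range:
kubectl get deployment nginx-deployment
kubectl get pods -o wide
# the IP column should show addresses from 172.30.0.0/16 and the NODE column the approved node names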
9. Deploy cluster DNS (CoreDNS)
9.1 Download the CoreDNS manifest template, coredns.yaml.sed, shown below:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes CLUSTER_DOMAIN SERVICE_CIDR {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.1.1
        imagePullPolicy: IfNotPresent
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: CLUSTER_DNS_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
9.2 Write the deploy script
cat > deploy.sh << 'EOF'
#!/bin/bash
# Deploys CoreDNS to a cluster currently running Kube-DNS.
SERVICE_CIDR=${1:-10.254.0.0/16}
POD_CIDR=${2:-172.30.0.0/16}
CLUSTER_DNS_IP=${3:-10.254.0.2}
CLUSTER_DOMAIN=${4:-cluster.local}
YAML_TEMPLATE=${5:-`pwd`/coredns.yaml.sed}

sed -e s/CLUSTER_DNS_IP/$CLUSTER_DNS_IP/g -e s/CLUSTER_DOMAIN/$CLUSTER_DOMAIN/g -e s?SERVICE_CIDR?$SERVICE_CIDR?g -e s?POD_CIDR?$POD_CIDR?g $YAML_TEMPLATE > coredns.yaml
EOF
Note: the heredoc delimiter is quoted ('EOF') so the ${1:-...} defaults survive into the script. Adjust the CIDRs, DNS IP and cluster domain to match your own node and service networks.
9.3 Deploy CoreDNS
chmod +x deploy.sh
./deploy.sh
kubectl create -f coredns.yaml
9.4 Verify the DNS service
9.4.1 Create a test deployment
cat > busyboxdeploy.yaml << EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: busybox-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        ports:
        - containerPort: 80
        args: ["/bin/sh", "-c", "sleep 1000"]
EOF
9.4.2 Enter a pod and ping a Service by name
kubectl exec busybox-deployment-6679c4bb96-86kfg -it -- /bin/sh
# ping kubernetes
# ...
# The ping gets no replies because of how the service network works, but the name resolves, which is what matters here.
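nslookup from the same busybox pod makes the check more explicit than ping; the answer should be the first service IP (the pod name below is the one from 9.4.2, and the output format is approximate):
kubectl exec busybox-deployment-6679c4bb96-86kfg -- nslookup kubernetes
# Server:    10.254.0.2
# Name:      kubernetes
# Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local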
10. Deploy Heapster
10.1 Download the yaml files
mkdir heapster
cd heapster
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
10.2 Change the container image sources in the yaml files (they default to Google's registry, which is unreachable from here, so swap in copies that others have pushed to Docker Hub)
10.2.1 In grafana.yaml replace
k8s.gcr.io/heapster-grafana-amd64:v5.0.4
with
mirrorgooglecontainers/heapster-grafana-amd64:v5.0.4
10.2.2 In heapster.yaml replace
k8s.gcr.io/heapster-amd64:v1.5.4
with
cnych/heapster-amd64:v1.5.4
10.2.3 In influxdb.yaml replace
k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
with
fishchen/heapster-influxdb-amd64:v1.5.2
Then apply the manifests from the heapster directory:
kubectl create -f .
10.3 Check the Heapster services
kubectl get svc -n kube-system
10.4 Run a proxy on the master to allow external access
kubectl proxy --port=8096 --address="10.192.44.129" --accept-hosts='^*$'
11. Deploy the dashboard
11.1 Download the dashboard yaml file
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml -O kubernetes-dashboard.yaml
11.2 Modify it as follows (the official manifest is used, but with a different image and a nodePort added):
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret