

A Guide to Upgrading a Kubernetes Cluster

Cloud computing · Kubernetes version compatibility

Before upgrading, you need to understand how the versions relate to each other:

Kubernetes version numbers take the form X.Y.Z, where X is the major version, Y the minor version, and Z the patch version.
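
The X.Y.Z naming and the skew rules described next can be captured in a small shell sketch. Both `parse` and `skew_ok` are made-up helpers for illustration, and `skew_ok` only compares minor versions, assuming the major versions already match:

```shell
# Split a version string of the form X.Y.Z into its components.
parse() { IFS=. read -r MAJOR MINOR PATCH <<EOF
$1
EOF
}

# Hypothetical check: a component may not be newer than kube-apiserver,
# and may lag it by at most one minor version (same major version assumed).
skew_ok() {
  api_minor=$(echo "$1" | cut -d. -f2)
  comp_minor=$(echo "$2" | cut -d. -f2)
  [ "$comp_minor" -le "$api_minor" ] && [ $((api_minor - comp_minor)) -le 1 ]
}

parse 1.16.0
echo "major=$MAJOR minor=$MINOR patch=$PATCH"

skew_ok 1.16.0 1.15.3 && echo "a 1.15.x kubelet is allowed with a 1.16 apiserver"
skew_ok 1.16.0 1.14.0 || echo "1.14.x is two minor versions behind: not allowed"
```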
For example, with kube-apiserver at 1.16.0, none of the other components (kube-controller-manager, kube-scheduler, kubelet) may run a version higher than kube-apiserver. They may lag it by at most one minor version: if kube-apiserver is 1.16.0, the other components may run 1.16.x or 1.15.x. In an HA cluster, the kube-apiserver instances may differ from each other by at most one minor version, e.g. 1.16 and 1.15. Ideally, every component runs exactly the same version as kube-apiserver. kubectl may be at most one minor version higher or lower than kube-apiserver. Consequently, when upgrading a Kubernetes cluster, kube-apiserver is the first core component to upgrade, and it can only move up by one minor version at a time.

High-level upgrade flow:

1. Upgrade the primary control plane node.
2. Upgrade the other control plane nodes.
3. Upgrade the worker (Node) nodes.

Detailed upgrade steps:

1. Upgrade kubeadm first.
2. Upgrade the Master components on the first control plane node.
3. Upgrade kubelet and kubectl on the first control plane node.
4. Upgrade the other control plane nodes.
5. Upgrade the worker nodes.
6. Verify the cluster.

Upgrade notes:

- Determine the kubeadm cluster version before upgrading.
- kubeadm upgrade does not touch workloads, only Kubernetes' internal components, but backing up the etcd database first is best practice.
- After the upgrade, all containers are restarted, because their hashes have changed.
- Because of the compatibility rules, you can only upgrade from one minor version to the next; skipping minor versions is not supported.
- The cluster control plane should use static Pods and etcd Pods, or an external etcd.

kubeadm upgrade: the cluster upgrade command in detail

Start with the command-line help:

$ kubeadm upgrade -h

Upgrade your cluster smoothly to a newer version with this command.

Usage:
  kubeadm upgrade [flags]
  kubeadm upgrade [command]

Available Commands:
  apply       Upgrade your Kubernetes cluster to the specified version.
  diff        Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
  node        Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.
  plan        Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter.

What the subcommands do:

apply: upgrade the Kubernetes cluster to the specified version.
diff: show the differences between the static Pod manifests that would be applied and the ones currently running.
node: upgrade a node in the cluster; as of v1.16 this only upgrades the kubelet configuration file (/var/lib/kubelet/config.yaml), not the kubelet itself.
plan: check whether the current cluster can be upgraded, and which versions it can be upgraded to.

The node subcommand in turn supports the following subcommands and flags:

$ kubeadm upgrade node  -h
Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.

Usage:
  kubeadm upgrade node [flags]
  kubeadm upgrade node [command]

Available Commands:
  config                     Downloads the kubelet configuration from the cluster ConfigMap kubelet-config-1.X, where X is the minor version of the kubelet.
  experimental-control-plane Upgrades the control plane instance deployed on this node. IMPORTANT. This command should be executed after executing `kubeadm upgrade apply` on another control plane instance

Flags:
  -h, --help   help for node

Global Flags:
      --log-file string   If non-empty, use this log file
      --rootfs string     [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers      If true, avoid header prefixes in the log messages
  -v, --v Level           number for the log level verbosity

What the subcommands do:

config: download the kubelet configuration file kubelet-config-1.x from the cluster ConfigMap, where x is the kubelet's minor version.
experimental-control-plane: upgrade the control plane components deployed on this node; run this after "kubeadm upgrade apply" has been executed on the first control plane instance.

Environment:

OS: Ubuntu 16.04
k8s: one Master, one Node

Upgrading Kubernetes from 1.13.x to 1.14.x

The cluster in this environment was created with kubeadm at version 1.13.1, so this walkthrough upgrades it to 1.14.0.
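
Since backing up etcd before the upgrade is best practice, take a snapshot first. This is a sketch under assumptions: it presumes a kubeadm-managed stacked etcd with certificates in the default locations, and the snapshot command only runs where etcdctl is installed.

```shell
# Timestamped target file for the snapshot.
backup_file="etcd-snapshot-$(date +%Y-%m-%d-%H-%M-%S).db"

# Take the snapshot on the control plane node (skipped if etcdctl is absent).
if command -v etcdctl >/dev/null 2>&1; then
  ETCDCTL_API=3 etcdctl snapshot save "$backup_file" \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key
fi
echo "snapshot target: $backup_file"
```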

Performing the upgrade

Upgrading the first control plane node

First, work on the first control plane node, i.e. the primary control plane:

1. Determine the cluster version before the upgrade:

root@k8s-master:~# kubectl version
Client Version: version.Info{Major:1, Minor:13, GitVersion:v1.13.1, GitCommit:eec55b9ba98609a46fee712359c7b5b365bdd920, GitTreeState:clean, BuildDate:2018-12-13T10:39:04Z, GoVersion:go1.11.2, Compiler:gc, Platform:linux/amd64}
Server Version: version.Info{Major:1, Minor:13, GitVersion:v1.13.1, GitCommit:eec55b9ba98609a46fee712359c7b5b365bdd920, GitTreeState:clean, BuildDate:2018-12-13T10:31:33Z, GoVersion:go1.11.2, Compiler:gc, Platform:linux/amd64}

2. Find the versions available to upgrade to:

apt update
apt-cache policy kubeadm
# find the latest 1.14 version in the list
# it should look like 1.14.x-00, where x is the latest patch
1.14.0-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
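
To script the "find the latest 1.14 patch" step, the package list can be filtered and sorted. `latest_114` is an illustrative helper, and the sample input below is made up to stand in for `apt-cache madison kubeadm` output:

```shell
# Pick the newest 1.14 patch package out of apt-cache style output.
latest_114() {
  grep -o '1\.14\.[0-9]*-00' | sort -t. -k3 -n | tail -1
}

# Sample input standing in for real repository contents.
cat <<'EOF' | latest_114
kubeadm | 1.14.2-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.14.0-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.13.1-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
EOF
```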

3. First, upgrade kubeadm to 1.14.0

# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 && \
apt-mark hold kubeadm

When upgrading kubeadm to 1.14 as above, Ubuntu may automatically pull kubelet up to the newest available version (1.16.0 at the time of writing), so upgrade kubelet to the matching version at the same time:

apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00

If this does happen and kubeadm and kubelet end up at mismatched versions, the cluster upgrade later on will fail. In that case you can remove kubeadm and kubelet.

Remove them:

apt-get remove kubelet kubeadm

Then install the expected versions again:

apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00

Confirm that kubeadm is now at the expected version:

root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:1, Minor:14, GitVersion:v1.14.0, GitCommit:641856db18352033a0d96dbc99153fa3b27298e5, GitTreeState:clean, BuildDate:2019-03-25T15:51:21Z, GoVersion:go1.12.1, Compiler:gc, Platform:linux/amd64}
root@k8s-master:~# 

4. Run the upgrade plan command to check whether the cluster can be upgraded and which versions it can be upgraded to:

kubeadm upgrade plan

The output looks like this:

root@k8s-master:~# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0

Awesome, you're up-to-date! Enjoy!

This tells you the cluster can be upgraded.

5. Upgrade the control plane components, including etcd.

root@k8s-master:~# kubeadm upgrade apply v1.14.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to v1.14.0
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0
//輸出 y 確認之后,開始進行升級。
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
... (similar prepull status lines for etcd, kube-apiserver and kube-controller-manager omitted)
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version v1.14.0...
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to /etc/kubernetes/tmp/kubeadm-upgraded-manifests696355120
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-apiserver.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-apiserver.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
... (the same hash repeats while kubeadm waits for the kubelet to restart the component)
Static pod: kube-apiserver-k8s-master hash: bb799a8d323c1577bf9e10ede7914b30
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component kube-apiserver upgraded successfully!
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-controller-manager.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-controller-manager.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-controller-manager-k8s-master hash: 54146492ed90bfa147f56609eee8005a
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component kube-controller-manager upgraded successfully!
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-scheduler.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-scheduler.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
Static pod: kube-scheduler-k8s-master hash: 58272442e226c838b193bbba4c44091e
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component kube-scheduler upgraded successfully!
[upload-config] storing the configuration used in ConfigMap kubeadm-config in the kube-system Namespace
[kubelet] Creating a ConfigMap kubelet-config-1.14 in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the kubelet-config-1.14 ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[certs] Generating apiserver certificate and key
[certs] apiserver serving cert is signed for dns names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.3.1.20]
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to v1.14.0. Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@k8s-master:~# 

在最后兩行中,可以看到,集群升級成功。

kubeadm upgrade apply performs the following operations:

- Checks that the cluster can be upgraded: the API Server is reachable, all Nodes are Ready, and the control plane is healthy.
- Enforces the version skew policies.
- Makes sure the control plane images are available and pulled to the machine.
- Upgrades the control plane components by updating the manifest files under /etc/kubernetes/manifests, and restores the old manifests if the upgrade fails.
- Applies the new kube-dns and kube-proxy manifests and creates the related RBAC rules.
- Creates new certificates and keys for the API Server, backing up the old ones if they would expire within 180 days.
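
The container restarts noted earlier happen because a static Pod's hash changes when its manifest is rewritten. The kubelet computes its own hash over the Pod spec; as a rough illustration of the idea only, `manifest_hash` below is a made-up helper using a plain content hash, not kubeadm's actual mechanism:

```shell
# Content hash of a static Pod manifest; any edit changes the hash,
# which is what triggers the kubelet to recreate the Pod.
manifest_hash() { sha256sum "$1" | cut -d' ' -f1; }

printf 'apiVersion: v1\nkind: Pod\n' > /tmp/kube-apiserver.yaml
before=$(manifest_hash /tmp/kube-apiserver.yaml)

printf 'apiVersion: v1\nkind: Pod\n# image bumped\n' > /tmp/kube-apiserver.yaml
after=$(manifest_hash /tmp/kube-apiserver.yaml)

[ "$before" != "$after" ] && echo "manifest changed: Pod will be restarted"
```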

As of v1.16, kubeadm upgrade apply must be run on the primary control plane node.

6. 運行完之后,驗證集群版本:

root@k8s-master:~# kubectl version 
Client Version: version.Info{Major:1, Minor:13, GitVersion:v1.13.1, GitCommit:eec55b9ba98609a46fee712359c7b5b365bdd920, GitTreeState:clean, BuildDate:2018-12-13T10:39:04Z, GoVersion:go1.11.2, Compiler:gc, Platform:linux/amd64}
Server Version: version.Info{Major:1, Minor:14, GitVersion:v1.14.0, GitCommit:641856db18352033a0d96dbc99153fa3b27298e5, GitTreeState:clean, BuildDate:2019-03-25T15:45:25Z, GoVersion:go1.12.1, Compiler:gc, Platform:linux/amd64}

Note that while kubectl is still at 1.13.1, the server-side control plane has been upgraded to 1.14.0.

The Master components are running normally:

root@k8s-master:~# kubectl get componentstatuses 
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {health:true}   

At this point the Master components on the first control plane node are upgraded. A control plane node usually also runs kubelet and kubectl, so those need upgrading too.

7. Upgrade the CNI plugin.

This step is optional: check whether an upgrade is available for your CNI plugin.

8. Upgrade kubelet and kubectl on this control plane node

kubelet can now be upgraded; running workload Pods are not affected during this process.

8.1. Upgrade kubelet and kubectl:

# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.14.0-00 kubectl=1.14.0-00 && \
apt-mark hold kubelet kubectl

8.2. Restart kubelet:

sudo systemctl restart kubelet
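
After the restart it is worth confirming that kubelet actually came back up. `wait_active` is a hypothetical helper (systemd assumed) that polls the unit state:

```shell
# Poll a systemd unit until it reports active, for up to ~60 seconds.
wait_active() {
  unit=$1
  for i in $(seq 1 30); do
    if systemctl is-active --quiet "$unit"; then
      echo "$unit is active"
      return 0
    fi
    sleep 2
  done
  echo "$unit did not become active in time" >&2
  return 1
}

# Usage on the node: wait_active kubelet
```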

9. Check the kubectl version; it now matches the expected version.

root@k8s-master:~# kubectl version 
Client Version: version.Info{Major:1, Minor:14, GitVersion:v1.14.0, GitCommit:641856db18352033a0d96dbc99153fa3b27298e5, GitTreeState:clean, BuildDate:2019-03-25T15:53:57Z, GoVersion:go1.12.1, Compiler:gc, Platform:linux/amd64}
Server Version: version.Info{Major:1, Minor:14, GitVersion:v1.14.0, GitCommit:641856db18352033a0d96dbc99153fa3b27298e5, GitTreeState:clean, BuildDate:2019-03-25T15:45:25Z, GoVersion:go1.12.1, Compiler:gc, Platform:linux/amd64}
root@k8s-master:~# 

The first control plane node is now fully upgraded.

Upgrading the other control plane nodes

10. Upgrade the other control plane nodes.

On each of the other control plane nodes, follow the same procedure as on the first one, but use:

sudo kubeadm upgrade node experimental-control-plane

instead of:

sudo kubeadm upgrade apply

There is no need to run sudo kubeadm upgrade plan again.

kubeadm upgrade node experimental-control-plane performs the following operations:

- Fetches the kubeadm ClusterConfiguration from the cluster.
- Optionally backs up the kube-apiserver certificate.
- Upgrades the static Pod manifests of the three core control plane components.

Upgrading the worker nodes

Now upgrade the components on each Node: kubeadm, kubelet, and kube-proxy.

To keep the cluster reachable throughout, upgrade one node at a time.

1. Put the Node into maintenance mode.

The Node is still on the original 1.13:

root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   292d   v1.14.0
k8s-node01   Ready    node     292d   v1.13.1

Before upgrading, mark the Node as unschedulable and evict all its Pods:

kubectl drain $NODE --ignore-daemonsets

2. Upgrade kubeadm and kubelet

Install kubeadm and kubelet on each Node just as before, since kubeadm is used to upgrade the kubelet configuration.

# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00 && \
apt-mark hold kubeadm kubelet

3. Upgrade the kubelet configuration file

$ kubeadm upgrade node config --kubelet-version v1.14.0
[kubelet-start] Downloading configuration for the kubelet from the kubelet-config-1.14 ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
root@k8s-master:~# 

4. Restart kubelet

$ sudo systemctl restart kubelet

5. 最后將節(jié)點標記為可調度來使其重新加入集群

kubectl uncordon $NODE

The Node upgrade is now complete; kubelet and kube-proxy report the expected version v1.14.0.
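
The per-node steps above can be tied together in one wrapper. This is a sketch under assumptions: `upgrade_node` is a made-up helper, it must run from a machine with kubectl access and root ssh to each node, and the nodes use the apt packages shown earlier:

```shell
# Drain, upgrade, and uncordon one worker node.
upgrade_node() {
  node=$1                 # e.g. k8s-node01
  pkg=$2                  # apt package version, e.g. 1.14.0-00
  kver="v${pkg%-00}"      # kubelet version flag, e.g. v1.14.0

  kubectl drain "$node" --ignore-daemonsets
  ssh "root@$node" "apt-get update && \
    apt-get install -y kubeadm=$pkg kubelet=$pkg && \
    kubeadm upgrade node config --kubelet-version $kver && \
    systemctl restart kubelet"
  kubectl uncordon "$node"
}

# Usage: upgrade_node k8s-node01 1.14.0-00
```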

Verify the cluster version
root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   292d   v1.14.0
k8s-node01   Ready    node     292d   v1.14.0

The STATUS column should show Ready for every node, and the version numbers are updated.

That completes the entire 1.13 to 1.14 upgrade.

Recovering from a failed state

If kubeadm upgrade fails and cannot roll back (for example because of an unexpected shutdown during execution), you can run kubeadm upgrade again. The command is idempotent and ensures the actual state eventually matches the state you declared.

To recover from a bad state without changing the version the cluster is running, run:

kubeadm upgrade apply --force

For more details, see the official upgrade documentation.

Upgrading Kubernetes from 1.14.x to 1.15.x

Upgrading from 1.14.0 to 1.15.0 follows largely the same process; only a few commands differ.

Upgrading the primary control plane node

The process is the same as upgrading from 1.13 to 1.14.0.

1. Find the available versions and install kubeadm at the expected version v1.15.0

apt-cache policy kubeadm
apt-mark unhold kubeadm kubelet
apt-get install -y kubeadm=1.15.0-00

kubeadm is now at the expected version:

root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:1, Minor:15, GitVersion:v1.15.0, GitCommit:e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529, GitTreeState:clean, BuildDate:2019-06-19T16:37:41Z, GoVersion:go1.12.5, Compiler:gc, Platform:linux/amd64}

2. Run the upgrade plan

Starting with v1.15, kubeadm renews all of the certificates it manages on a node during a control plane upgrade, so expiring certificates are refreshed automatically. If you do not want the certificates renewed, pass --certificate-renewal=false.
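
To see how close the managed certificates are to expiry before relying on the automatic renewal, the end date can be read with openssl. `days_left` is an illustrative helper (GNU date assumed), the certificate path is the kubeadm default, and the check is skipped when the file is absent:

```shell
cert=/etc/kubernetes/pki/apiserver.crt

# Days until a certificate expiry date (as printed by `openssl x509 -enddate`).
days_left() {
  exp=$(date -d "$1" +%s)   # GNU date
  now=$(date +%s)
  echo $(( (exp - now) / 86400 ))
}

if [ -f "$cert" ]; then
  end=$(openssl x509 -noout -enddate -in "$cert" | cut -d= -f2)
  echo "apiserver certificate days left: $(days_left "$end")"
fi
```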

Run the upgrade plan:

kubeadm upgrade plan

The output looks like this:

root@k8s-master:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
I1005 20:45:04.474363   38108 version.go:248] remote version is much newer: v1.16.1; falling back to: stable-1.15
[upgrade/versions] Latest stable version: v1.15.4
[upgrade/versions] Latest version in the v1.14 series: v1.14.7

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.14.0   v1.14.7
            1 x v1.15.0   v1.14.7

Upgrade to the latest version in the v1.14 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.14.7
Controller Manager   v1.14.0   v1.14.7
Scheduler            v1.14.0   v1.14.7
Kube Proxy           v1.14.0   v1.14.7
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.14.7

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.14.0   v1.15.4
            1 x v1.15.0   v1.15.4

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.15.4
Controller Manager   v1.14.0   v1.15.4
Scheduler            v1.14.0   v1.15.4
Kube Proxy           v1.14.0   v1.15.4
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.15.4

Note: Before you can perform this upgrade, you have to update kubeadm to v1.15.4.

_____________________________________________________________________

3. Upgrade the control plane

Following the plan's instructions, upgrade the control plane:

kubeadm upgrade apply v1.15.0

Since this kubeadm is v1.15.0, the cluster can only be upgraded to v1.15.0.

The output is as follows:

root@k8s-master:~# kubeadm upgrade apply v1.15.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to v1.15.0
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
...
## Pulling the images
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
...
## Images for all components have been pulled
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
...
...
## All certificates are renewed automatically, as shown below
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to /etc/kubernetes/tmp/kubeadm-upgraded-manifests353124264
[upgrade/staticpods] Preparing for kube-apiserver upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
...
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to v1.15.0. Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

4. The upgrade succeeded; verify it.

The upgrade succeeded. Query the cluster's core component versions again:

root@k8s-master:~# kubectl version
Client Version: version.Info{Major:1, Minor:14, GitVersion:v1.14.0, GitCommit:641856db18352033a0d96dbc99153fa3b27298e5, GitTreeState:clean, BuildDate:2019-03-25T15:53:57Z, GoVersion:go1.12.1, Compiler:gc, Platform:linux/amd64}
Server Version: version.Info{Major:1, Minor:15, GitVersion:v1.15.0, GitCommit:e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529, GitTreeState:clean, BuildDate:2019-06-19T16:32:14Z, GoVersion:go1.12.5, Compiler:gc, Platform:linux/amd64}

Check the node versions:

NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   295d   v1.14.0
k8s-node01   Ready    node     295d   v1.14.0

5. Upgrade kubelet and kubectl on this control plane node

The control plane core components are now at v1.15.0; next, upgrade kubelet and kubectl on this node. Running workload Pods are not affected during this process.

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.0-00 kubectl=1.15.0-00 && \
apt-mark hold kubelet kubectl

6. Restart kubelet:

sudo systemctl restart kubelet

7. Verify that kubelet and kubectl are at the expected versions.

root@k8s-master:~# kubectl version
Client Version: version.Info{Major:1, Minor:15, GitVersion:v1.15.0, GitCommit:e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529, GitTreeState:clean, BuildDate:2019-06-19T16:40:16Z, GoVersion:go1.12.5, Compiler:gc, Platform:linux/amd64}
Server Version: version.Info{Major:1, Minor:15, GitVersion:v1.15.0, GitCommit:e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529, GitTreeState:clean, BuildDate:2019-06-19T16:32:14Z, GoVersion:go1.12.5, Compiler:gc, Platform:linux/amd64}

Check the node versions:

root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   295d   v1.15.0
k8s-node01   Ready    node     295d   v1.14.0
Upgrading the other control plane nodes

The command to upgrade the three core components on the other control plane nodes is slightly different.

1. Upgrade the other control plane components, but with the following command:

$ sudo kubeadm upgrade node

2. 然后,再升級kubelet和kubectl。

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl

3. Restart kubelet

$ sudo systemctl restart kubelet
Upgrading the worker nodes

Upgrading the Nodes works as before; the steps are abbreviated here.

Run these steps on every Node.

1. Upgrade kubeadm:

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.15.x-00 && \
apt-mark hold kubeadm

Check the kubeadm version:

root@k8s-node01:~# kubeadm version
kubeadm version: &version.Info{Major:1, Minor:15, GitVersion:v1.15.0, GitCommit:e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529, GitTreeState:clean, BuildDate:2019-06-19T16:37:41Z, GoVersion:go1.12.5, Compiler:gc, Platform:linux/amd64}

2. Put the node into maintenance mode:

kubectl cordon $NODE

3. Update the kubelet configuration file

$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the kubelet-config-1.15 ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

4. Upgrade kubelet and kubectl.

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl

5. Restart kubelet

sudo systemctl restart kubelet

kube-proxy is also upgraded and restarted automatically at this point.
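
One way to confirm the automatic kube-proxy upgrade is to read the image tag off its DaemonSet. The jsonpath query is standard kubectl; `image_tag` is a made-up helper, shown here against an example value rather than live cluster output:

```shell
# Strip the tag off an image reference like k8s.gcr.io/kube-proxy:v1.15.0.
image_tag() { echo "${1##*:}"; }

# On the cluster you would fetch the live value with:
#   img=$(kubectl -n kube-system get ds kube-proxy \
#       -o jsonpath='{.spec.template.spec.containers[0].image}')
img="k8s.gcr.io/kube-proxy:v1.15.0"   # example value for illustration

[ "$(image_tag "$img")" = "v1.15.0" ] && echo "kube-proxy is at the expected version"
```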

6. Leave maintenance mode

kubectl uncordon $NODE

The Node upgrade is complete.

Verify the cluster version
root@k8s-master:~# kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   295d   v1.15.0
k8s-node01   NotReady   node     295d   v1.15.0
kubeadm upgrade node in detail

In this upgrade, both the other control plane nodes and the worker Nodes were upgraded with kubeadm upgrade node.

When run on another control plane node, kubeadm upgrade node:

- Fetches the kubeadm ClusterConfiguration from the cluster.
- Optionally backs up the kube-apiserver certificate.
- Upgrades the static Pod manifests of the three core control plane components.
- Upgrades the kubelet configuration on that control plane node.

When run on a worker Node, kubeadm upgrade node:

- Fetches the kubeadm ClusterConfiguration from the cluster.
- Upgrades the kubelet configuration on that Node.

Upgrading Kubernetes from 1.15.x to 1.16.x

Upgrading from 1.15.x to 1.16.x uses exactly the same commands as upgrading from 1.14.x to 1.15.x, so it is not repeated here.

