Today I'd like to talk about how to install Tiller with Helm on Kubernetes. Many people aren't too familiar with this, so I've written up the following walkthrough in the hope that you'll get something out of it.
創(chuàng)新互聯(lián)公司長期為上千余家客戶提供的網(wǎng)站建設(shè)服務(wù),團(tuán)隊(duì)從業(yè)經(jīng)驗(yàn)10年,關(guān)注不同地域、不同群體,并針對不同對象提供差異化的產(chǎn)品和服務(wù);打造開放共贏平臺,與合作伙伴共同營造健康的互聯(lián)網(wǎng)生態(tài)環(huán)境。為江陵企業(yè)提供專業(yè)的網(wǎng)站設(shè)計(jì)、網(wǎng)站建設(shè),江陵網(wǎng)站改版等技術(shù)服務(wù)。擁有10余年豐富建站經(jīng)驗(yàn)和眾多成功案例,為您定制開發(fā)。
Helm consists of the `helm` command-line client and the server-side component Tiller, and it is very simple to install. Download the helm CLI into /usr/local/bin on the master node, node1; here we download version 2.11.0:
```shell
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
tar -zxvf helm-v2.11.0-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
```
為了安裝服務(wù)端tiller,還需要在這臺機(jī)器上配置好kubectl工具和kubeconfig文件,確保kubectl工具可以在這臺機(jī)器上訪問apiserver且正常使用。 這里的node1節(jié)點(diǎn)以及配置好了kubectl。
因?yàn)镵ubernetes APIServer開啟了RBAC訪問控制,所以需要創(chuàng)建tiller使用的service account: tiller并分配合適的角色給它。 詳細(xì)內(nèi)容可以查看helm文檔中的Role-based Access Control。 這里簡單起見直接分配cluster-admin這個集群內(nèi)置的ClusterRole給它。創(chuàng)建rbac-config.yaml文件:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```
```shell
kubectl create -f rbac-config.yaml
```

```
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
```
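To double-check that both RBAC objects actually exist before installing Tiller, you can query them back out:

```shell
# Both objects were created in the step above
kubectl -n kube-system get serviceaccount tiller
kubectl get clusterrolebinding tiller
```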
Install Tiller:

```shell
helm init --service-account tiller --skip-refresh
```
到這一步就出現(xiàn)問題了,跟之前參考的博主寫的不一樣了。因?yàn)槲沂褂玫氖菄鴥?nèi)的docker源,所以gcr.io/kubernetes-helm/tiller這個鏡像訪問不到,所以查看pod的時候
```shell
kubectl get pods -n kube-system
```

it showed:

```
NAME                             READY   STATUS             RESTARTS   AGE
tiller-deploy-6f6fd74b68-rkk5w   0/1     ImagePullBackOff   0          14h
```
pod的狀態(tài)不對啊。作為剛?cè)腴T的小白,開始摸索解決
```shell
kubectl describe pod tiller-deploy-6f6fd74b68-rkk5w -n kube-system
```

which shows:

```
Events:
  Type     Reason   Age                    From            Message
  ----     ------   ----                   ----            -------
  Warning  Failed   52m (x3472 over 14h)   kubelet, test1  Error: ImagePullBackOff
  Normal   BackOff  2m6s (x3686 over 14h)  kubelet, test1  Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.11.0"
```
So pulling the gcr.io/kubernetes-helm/tiller:v2.11.0 image is clearly what failed. Let's search for an alternative:
```shell
docker search kubernetes-helm/tiller
```

```
NAME                                   DESCRIPTION                                    STARS  AUTOMATED
cockpit/kubernetes                     This container provides a version of cockpit…  41     [OK]
fluent/fluentd-kubernetes-daemonset    Fluentd Daemonset for Kubernetes               24     [OK]
lachlanevenson/k8s-helm                Helm client (https://github.com/kubernetes/h…  17
dtzar/helm-kubectl                     helm and kubectl running on top of alpline w…  16     [OK]
jessestuart/tiller                     Nightly multi-architecture (amd64, arm64, ar…  4      [OK]
hypnoglow/kubernetes-helm              Image providing kubernetes kubectl and helm …  3      [OK]
linkyard/docker-helm                   Docker image containing kubernetes helm and …  3      [OK]
jimmysong/kubernetes-helm-tiller                                                      2
ibmcom/tiller                          Docker Image for IBM Cloud private-CE (Commu…  1
zhaosijun/kubernetes-helm-tiller       mirror from gcr.io/kubernetes-helm/tiller:v2…  1      [OK]
zlabjp/kubernetes-resource             A Concourse resource for controlling the Kub…  1
thebeefcake/concourse-helm-resource    concourse resource for managing helm deploym…  1      [OK]
timotto/rpi-tiller                     k8s.io/tiller for Raspberry Pi                 1
fishead/gcr.io.kubernetes-helm.tiller  mirror of gcr.io/kubernetes-helm/tiller        1      [OK]
victoru/concourse-helm-resource        concourse resource for managing helm deploym…  0      [OK]
bitnami/helm-crd-controller            Kubernetes controller for HelmRelease CRD      0      [OK]
z772458549/kubernetes-helm-tiller      kubernetes-helm-tiller                         0      [OK]
mnsplatform/concourse-helm-resource    Concourse resource for helm deployments        0
croesus/kubernetes-helm-tiller         kubernetes-helm-tiller                         0      [OK]
```
Out of all these images, one description caught my eye: fishead/gcr.io.kubernetes-helm.tiller, described as "mirror of gcr.io/kubernetes-helm/tiller". In other words, this image is built as a mirror of gcr.io/kubernetes-helm/tiller. Next, let's confirm that on Docker Hub:
[Screenshot: the image's page on Docker Hub]
It is indeed the image we need. Then check the available tags:
[Screenshot: the image's tag list on Docker Hub]
Pull the image and retag it as the gcr.io name the deployment expects:

```shell
docker pull fishead/gcr.io.kubernetes-helm.tiller:v2.11.0
docker tag fishead/gcr.io.kubernetes-helm.tiller:v2.11.0 gcr.io/kubernetes-helm/tiller:v2.11.0
```
Check the local images:
[Screenshot: docker images output showing both tags]
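The lost screenshot essentially showed both tags present locally, pointing at the same image. Assuming the pull and retag above succeeded, the same check is:

```shell
# Both the mirror tag and the retagged gcr.io name should appear,
# sharing the same IMAGE ID
docker images | grep tiller
```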
As a newbie I struggled with this step for quite a while. Following suggestions found online, I tried the following.
Remove Tiller:

```shell
helm reset -f
```
Initialize again and redeploy Tiller, pointing it at the image we now have locally:

```shell
helm init --service-account tiller --tiller-image gcr.io/kubernetes-helm/tiller:v2.11.0 --skip-refresh
```
Check the pod, and it's still in a failed state:

```shell
kubectl get pods -n kube-system
```

```
NAME                             READY   STATUS             RESTARTS   AGE
tiller-deploy-6f6fd74b68-qvlzx   0/1     ImagePullBackOff   0          8m43s
```
Aaaargh, I was losing my mind. Why is it still reporting an image pull failure? (;′⌒`)
After calming down, I wondered: does something in the config force it to always pull the image from the registry?
Edit the deployment config:

```shell
kubectl edit deployment tiller-deploy -n kube-system
```
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: 2018-11-16T08:03:53Z
  generation: 2
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
  resourceVersion: "133136"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/tiller-deploy
  uid: 291c2a71-e976-11e8-b6eb-8cec4b591b6a
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: helm
      name: tiller
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.11.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /liveness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
          protocol: TCP
        - containerPort: 44135
```
And there it is, the image pull policy: imagePullPolicy: IfNotPresent
看看官網(wǎng)怎么說的
https://kubernetes.io/docs/concepts/containers/images/

"By default, the kubelet will try to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively)."

In other words, by default the kubelet pulls the image from the registry given in the config. IfNotPresent means a local image is used first if one exists; Never means it won't pull at all and will only use the local image, raising an error if none exists locally.
In theory my configuration should have been fine, so why didn't it look for the local image first? Maybe because I only pulled the image after the deployment was created. Either way, I just changed the policy for now: imagePullPolicy: Never
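As an aside, the same change can be made non-interactively instead of through `kubectl edit`. A sketch using a JSON patch against the first (and only) container in the pod template:

```shell
# Set imagePullPolicy to Never on the tiller container without opening an editor
kubectl -n kube-system patch deployment tiller-deploy --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Never"}]'
```

This is handy when scripting the fix, since it avoids the interactive editor entirely.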
Save the edit, then check the pod status:

```
tiller-deploy-f844bd879-p6m8x   1/1     Running   0          62s
```
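Finally, it's worth confirming that the helm client can actually talk to the freshly running Tiller. Both versions should report v2.11.0 (output abbreviated):

```shell
helm version
# Client: &version.Version{SemVer:"v2.11.0", ...}
# Server: &version.Version{SemVer:"v2.11.0", ...}
```

If the server line appears, the client has successfully connected to Tiller inside the cluster.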
看完上述內(nèi)容,你們對kunernets中怎么使用helm安裝tiller有進(jìn)一步的了解嗎?如果還想了解更多知識或者相關(guān)內(nèi)容,請關(guān)注創(chuàng)新互聯(lián)行業(yè)資訊頻道,感謝大家的支持。