Using Ceph RBD as Backend Storage for K8S

I. Preparation

Ceph version: v13.2.5 (mimic stable release)

1. Prepare a storage pool on Ceph

[root@ceph-node1 ceph]# ceph osd pool create k8s 128 128
pool 'k8s' created
[root@ceph-node1 ceph]# ceph osd pool ls
k8s
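
Since Luminous, Ceph also expects new pools to be tagged with the application that will use them, so it is worth tagging the pool for RBD. A small optional step, assuming the pool name k8s from above:

#tag the pool for RBD use and initialize it
ceph osd pool application enable k8s rbd
rbd pool init k8s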

2. Prepare a K8S client account on Ceph

This environment simply uses Ceph's admin account; in production, of course, separate client accounts should be created for different roles:
ceph auth get-or-create client.k8s mon 'allow r' osd 'allow rwx pool=k8s' -o ceph.client.k8s.keyring
Get the account's key:

[root@ceph-node1 ceph]# ceph auth get-key client.admin | base64
QVFDMmIrWmNEL3JTS2hBQWwwdmR3eGJGMmVYNUM3SjdDUGZZbkE9PQ==
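
If you created the dedicated client.k8s account above instead, its key is encoded the same way:

ceph auth get-key client.k8s | base64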

3. Provide the rbd command to controller-manager

When a PV is created dynamically through a StorageClass, controller-manager creates the corresponding image on Ceph automatically, so the rbd command must be available to it.
(1) If the cluster was deployed with kubeadm, the official controller-manager image does not ship the rbd command, so we deploy an external rbd-provisioner instead:

kind: ClusterRole 
apiVersion: rbac.authorization.k8s.io/v1 
metadata: 
  name: rbd-provisioner 
rules: 
  - apiGroups: [""] 
    resources: ["persistentvolumes"] 
    verbs: ["get", "list", "watch", "create", "delete"] 
  - apiGroups: [""] 
    resources: ["persistentvolumeclaims"] 
    verbs: ["get", "list", "watch", "update"] 
  - apiGroups: ["storage.k8s.io"] 
    resources: ["storageclasses"] 
    verbs: ["get", "list", "watch"] 
  - apiGroups: [""] 
    resources: ["events"] 
    verbs: ["create", "update", "patch"] 
  - apiGroups: [""] 
    resources: ["services"] 
    resourceNames: ["kube-dns","coredns"] 
    verbs: ["list", "get"] 
--- 
kind: ClusterRoleBinding 
apiVersion: rbac.authorization.k8s.io/v1 
metadata: 
  name: rbd-provisioner 
subjects: 
  - kind: ServiceAccount 
    name: rbd-provisioner 
    namespace: default 
roleRef: 
  kind: ClusterRole 
  name: rbd-provisioner 
  apiGroup: rbac.authorization.k8s.io 
--- 
apiVersion: rbac.authorization.k8s.io/v1 
kind: Role 
metadata: 
  name: rbd-provisioner 
rules: 
- apiGroups: [""] 
  resources: ["secrets"] 
  verbs: ["get"] 
- apiGroups: [""] 
  resources: ["endpoints"] 
  verbs: ["get", "list", "watch", "create", "update", "patch"] 
--- 
apiVersion: rbac.authorization.k8s.io/v1 
kind: RoleBinding 
metadata: 
  name: rbd-provisioner 
roleRef: 
  apiGroup: rbac.authorization.k8s.io 
  kind: Role 
  name: rbd-provisioner 
subjects: 
  - kind: ServiceAccount 
    name: rbd-provisioner 
    namespace: default 
--- 
apiVersion: apps/v1 
kind: Deployment 
metadata: 
  name: rbd-provisioner 
spec: 
  replicas: 1 
  selector: 
    matchLabels: 
      app: rbd-provisioner 
  strategy: 
    type: Recreate 
  template: 
    metadata: 
      labels: 
        app: rbd-provisioner 
    spec: 
      containers: 
      - name: rbd-provisioner 
        image: quay.io/external_storage/rbd-provisioner:latest 
        env: 
        - name: PROVISIONER_NAME 
          value: ceph.com/rbd 
      serviceAccountName: rbd-provisioner 
--- 
apiVersion: v1 
kind: ServiceAccount 
metadata: 
  name: rbd-provisioner

kubectl apply -f rbd-provisioner.yaml
Note: the rbd-provisioner image must be compatible with your Ceph version. Here the latest tag is used, which according to the project's documentation supports Ceph mimic.
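
Before moving on, it is worth confirming the provisioner pod is Running; one way, using the app label from the Deployment above:

kubectl get pods -l app=rbd-provisioner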
(2) If the cluster was deployed from binaries, simply install ceph-common on the master nodes.
YUM repository:

[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

#Install the client
yum -y install ceph-common-13.2.5
#Copy the keyring file
Copy Ceph's ceph.client.admin.keyring file into the /etc/ceph directory on the master.
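
Assuming ceph.conf was also copied to /etc/ceph, a quick check that the client can reach the cluster from the master:

ceph -s
rbd ls -p k8s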

4. Provide the rbd command to kubelet

When a pod is created, kubelet uses the rbd command to detect and mount the Ceph image backing the PV, so the Ceph client ceph-common-13.2.5 must be installed on every worker node, as sketched below.
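
A minimal sketch of preparing a worker node, assuming the same YUM repository from step 3 is configured there and that ceph-node1 holds the cluster configuration:

#install the client on every worker node
yum -y install ceph-common-13.2.5
#copy the cluster config and keyring from a Ceph node
scp ceph-node1:/etc/ceph/ceph.conf /etc/ceph/
scp ceph-node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/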

II. Trying Out Ceph RBD Storage on K8S

1. Create the StorageClass

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-sc
  annotations: 
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: ceph.com/rbd
reclaimPolicy: Retain
parameters:
  monitors: 172.16.1.31:6789,172.16.1.32:6789,172.16.1.33:6789
  adminId: admin
  adminSecretName: storage-secret
  adminSecretNamespace: default
  pool: k8s
  fsType: xfs
  userId: admin
  userSecretName: storage-secret
  imageFormat: "2"
  imageFeatures: "layering"

kubectl apply -f storage_class.yaml
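
A quick check that the class was registered and points at the expected provisioner:

kubectl get sc ceph-sc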

2. Provide a Secret for the StorageClass

apiVersion: v1
kind: Secret
metadata:
  name: storage-secret
  namespace: default
data:
  key: QVFDMmIrWmNEL3JTS2hBQWwwdmR3eGJGMmVYNUM3SjdDUGZZbkE9PQ==
type: kubernetes.io/rbd

kubectl apply -f storage_secret.yaml
Note: the provisioner value in the StorageClass must match the PROVISIONER_NAME configured for rbd-provisioner (ceph.com/rbd here).
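
Equivalently, the Secret can be created directly from the Ceph key without encoding it by hand (kubectl base64-encodes data values itself); a sketch, run wherever the ceph CLI is available:

kubectl create secret generic storage-secret \
  --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.admin)"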

3. Create a PVC

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-pvc
  namespace: default
spec:
  storageClassName: ceph-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

kubectl apply -f storage_pvc.yaml
#After the PVC is created, a PV is provisioned automatically:

[root@k8s-master03 ceph]# kubectl get pv       
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
pvc-315991e9-7d4b-11e9-b6cc-0050569ba238   1Gi        RWO            Retain           Bound    default/ceph-pvc       ceph-sc                 13h

#Normally the PVC is in the Bound state as well

[root@k8s-master03 ceph]# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-pvc       Bound    pvc-315991e9-7d4b-11e9-b6cc-0050569ba238   1Gi        RWO            ceph-sc        17s
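
On the Ceph side, dynamic provisioning should have created a matching image in the k8s pool, named kubernetes-dynamic-pvc-<uuid>; it can be listed with:

rbd ls -p k8s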

4. Create a test application

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  nodeName: k8s-node02
  containers:
  - name: nginx
    image: nginx:1.14
    volumeMounts:
    - name: ceph-rdb-vol1
      mountPath: /usr/share/nginx/html
      readOnly: false
  volumes:
  - name: ceph-rdb-vol1
    persistentVolumeClaim:
      claimName: ceph-pvc

kubectl apply -f storage_pod.yaml
#Check the pod status

[root@k8s-master03 ceph]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
ceph-pod1                        1/1     Running   0          3d23h   10.244.4.75   k8s-node02   <none>           <none>

#Enter the container and check the mounts; the rbd device is mounted at /usr/share/nginx/html.

[root@k8s-master03 ceph]# kubectl exec -it ceph-pod1 -- /bin/bash
root@ceph-pod1:/# df -hT
/dev/rbd0            xfs     1014M   33M  982M   4% /usr/share/nginx/html
#Create a test file under the mounted directory
root@ceph-pod1:/# echo 'hello ceph!' > /usr/share/nginx/html/index.html
root@ceph-pod1:/# cat /usr/share/nginx/html/index.html
hello ceph!

#On Ceph, check which node has the image mapped; it is currently 172.16.1.22, i.e. k8s-node02.

[root@ceph-node1 ~]# rbd status k8s/kubernetes-dynamic-pvc-2410765c-7dec-11e9-aa80-26a98c3bc9e4
Watchers:
        watcher=172.16.1.22:0/264870305 client.24553 cookie=18446462598732840961

#Then delete this pod

[root@k8s-master03 ceph]# kubectl delete -f  storage_pod.yaml   
pod "ceph-pod1" deleted

#Edit the manifest storage_pod.yaml to schedule the pod onto k8s-node01, then apply it again.
#A moment later, check the pod status: the pod is now running on k8s-node01.

[root@k8s-master01 ~]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP            NODE         NOMINATED NODE   READINESS GATES
ceph-pod1                        1/1     Running   0          34s    10.244.3.28   k8s-node01   <none>           <none>

#Check the image's watcher on Ceph again; it is now 172.16.1.21, i.e. k8s-node01

[root@ceph-node1 ~]# rbd status k8s/kubernetes-dynamic-pvc-2410765c-7dec-11e9-aa80-26a98c3bc9e4
Watchers:
        watcher=172.16.1.21:0/1812501701 client.114340 cookie=18446462598732840963

#Enter the container and verify the file is still there, which shows the pod reattached to the original image after switching nodes.

[root@k8s-master03 ceph]# kubectl exec -it ceph-pod1 -- /bin/bash
root@ceph-pod1:/# cat /usr/share/nginx/html/index.html
hello ceph!
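
When the test is finished, the pod and PVC can be deleted; note that with reclaimPolicy: Retain the PV and its backing Ceph image are kept, so they must be removed by hand if no longer needed. A sketch:

kubectl delete -f storage_pod.yaml
kubectl delete -f storage_pvc.yaml
#the PV is now Released; delete it manually, then remove the rbd image if desired
kubectl delete pv pvc-315991e9-7d4b-11e9-b6cc-0050569ba238
rbd rm k8s/kubernetes-dynamic-pvc-2410765c-7dec-11e9-aa80-26a98c3bc9e4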
