When a cluster is running out of capacity or compute resources, it needs to be expanded. There are two ways to do this:
1. Scale up (vertical): add disks to existing nodes; capacity grows, but the cluster's compute performance stays the same.
2. Scale out (horizontal): add new nodes, which bring extra disks, memory and CPU, so both capacity and performance increase.
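Before deciding which way to expand, it helps to look at how full the cluster actually is. The commands below are standard Ceph CLI calls shown purely as an illustration (they are not part of the original procedure); run them from any node with an admin keyring:
[root@node140 ~]# ceph df         # raw capacity and per-pool usage
[root@node140 ~]# ceph osd df     # per-OSD utilization and PG counts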
In production you normally do not want backfill to start the moment a new node joins the cluster, because the rebalancing traffic hurts client performance. To postpone it, set the following flags before adding the node:
[root@node140 ~]# ceph osd set noin
[root@node140 ~]# ceph osd set nobackfill
During off-peak hours, unset these flags and the cluster will start its rebalancing work:
[root@node140 ~]# ceph osd unset noin
[root@node140 ~]# ceph osd unset nobackfill
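To confirm which flags are currently set (a quick check, not one of the original steps), the OSD map can be inspected:
[root@node140 ~]# ceph osd stat                 # prints up/in counts and any flags
[root@node140 ~]# ceph osd dump | grep flags    # alternatively, grep the flags line from the OSD map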
First, install the Ceph packages on the new node, node143:
[root@node143 ~]# yum -y install ceph ceph-radosgw
[root@node143 ~]# rpm -qa | egrep -i "ceph|rados|rbd"
[root@node143 ~]# ceph -v    # every node runs the same Nautilus release
ceph version 14.2.2 (4f8fa0a0024755aae7d95567c63f11d6862d55be) nautilus (stable)
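To double-check that all daemons in the existing cluster are on the same release before expanding, the cluster-wide version report can be consulted. This is shown only as an illustration and must be run from a node that already has the admin keyring:
[root@node140 ~]# ceph versions    # summarizes the running version of every mon, mgr, osd and mds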
Ceph scales out seamlessly and supports adding OSD and monitor nodes online. The state of the cluster before the expansion:
[root@node140 ~]# ceph -s
  cluster:
    id:     58a12719-a5ed-4f95-b312-6efd6e34e558
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum node140,node142 (age 8d)
    mgr: admin(active, since 8d), standbys: node140
    mds: cephfs:1 {0=node140=up:active} 1 up:standby
    osd: 16 osds: 16 up (since 5m), 16 in (since 2w)

  data:
    pools:   5 pools, 768 pgs
    objects: 2.65k objects, 9.9 GiB
    usage:   47 GiB used, 8.7 TiB / 8.7 TiB avail
    pgs:     768 active+clean
[root@node140 ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 8.71826 root default
-2 3.26935 host node140
0 hdd 0.54489 osd.0 up 1.00000 1.00000
1 hdd 0.54489 osd.1 up 1.00000 1.00000
2 hdd 0.54489 osd.2 up 1.00000 1.00000
3 hdd 0.54489 osd.3 up 1.00000 1.00000
4 hdd 0.54489 osd.4 up 1.00000 1.00000
5 hdd 0.54489 osd.5 up 1.00000 1.00000
-3 3.26935 host node141
12 hdd 0.54489 osd.12 up 1.00000 1.00000
13 hdd 0.54489 osd.13 up 1.00000 1.00000
14 hdd 0.54489 osd.14 up 1.00000 1.00000
15 hdd 0.54489 osd.15 up 1.00000 1.00000
16 hdd 0.54489 osd.16 up 1.00000 1.00000
17 hdd 0.54489 osd.17 up 1.00000 1.00000
-4 2.17957 host node142
6 hdd 0.54489 osd.6 up 1.00000 1.00000
9 hdd 0.54489 osd.9 up 1.00000 1.00000
10 hdd 0.54489 osd.10 up 1.00000 1.00000
11 hdd 0.54489 osd.11 up 1.00000 1.00000
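The new node also needs the cluster configuration and the admin keyring; these are not created by the package install. Copy them from an existing node; the command below is a sketch using the default /etc/ceph paths, not a step quoted from the original:
[root@node140 ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@node143:/etc/ceph/
After the copy, node143 can reach the cluster: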
[root@node143 ceph]# ls
ceph.client.admin.keyring ceph.conf
[root@node143 ceph]# ceph -s
  cluster:
    id:     58a12719-a5ed-4f95-b312-6efd6e34e558
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum node140,node142 (age 8d)
    mgr: admin(active, since 8d), standbys: node140
    mds: cephfs:1 {0=node140=up:active} 1 up:standby
    osd: 16 osds: 16 up (since 25m), 16 in (since 2w)

  data:
    pools:   5 pools, 768 pgs
    objects: 2.65k objects, 9.9 GiB
    usage:   47 GiB used, 8.7 TiB / 8.7 TiB avail
    pgs:     768 active+clean
[root@node143 ceph]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 557.9G 0 disk
├─sda1 8:1 0 200M 0 part /boot
└─sda2 8:2 0 519.4G 0 part
└─centos-root 253:0 0 519.4G 0 lvm /
sdb 8:16 0 558.9G 0 disk
sdc 8:32 0 558.9G 0 disk
sdd 8:48 0 558.9G 0 disk
sde 8:64 0 558.9G 0 disk
sdf 8:80 0 558.9G 0 disk
sdg 8:96 0 558.9G 0 disk
Prepare the six new disks (sdb through sdg): give each one a GPT label and an XFS filesystem.
[root@node143 ]# parted /dev/sdc mklabel GPT
[root@node143 ]# parted /dev/sdd mklabel GPT
[root@node143 ]# parted /dev/sdf mklabel GPT
[root@node143 ]# parted /dev/sdg mklabel GPT
[root@node143 ]# parted /dev/sdb mklabel GPT
[root@node143 ]# parted /dev/sde mklabel GPT
[root@node143 ]# mkfs.xfs -f /dev/sdc
[root@node143 ]# mkfs.xfs -f /dev/sdd
[root@node143 ]# mkfs.xfs -f /dev/sdb
[root@node143 ]# mkfs.xfs -f /dev/sdf
[root@node143 ]# mkfs.xfs -f /dev/sdg
[root@node143 ]# mkfs.xfs -f /dev/sde
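If any of these disks had been used before (old partitions, LVM metadata, or a previous OSD), ceph-volume may refuse to consume them. In that case they can be wiped first; this is a precaution, not a step from the original procedure:
[root@node143 ~]# ceph-volume lvm zap /dev/sdb --destroy    # wipes any existing LVM/partition data from the disk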
Now create one OSD per disk with ceph-volume; each invocation creates the LVM volume, prepares the OSD (BlueStore by default) and activates it:
[root@node143 ~]# ceph-volume lvm create --data /dev/sdb
--> ceph-volume lvm activate successful for osd ID: 0
--> ceph-volume lvm create successful for: /dev/sdb
[root@node143 ~]# ceph-volume lvm create --data /dev/sdc
[root@node143 ~]# ceph-volume lvm create --data /dev/sdd
[root@node143 ~]# ceph-volume lvm create --data /dev/sdf
[root@node143 ~]# ceph-volume lvm create --data /dev/sdg
[root@node143 ~]# ceph-volume lvm create --data /dev/sde
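Equivalently, the six invocations can be collapsed into a small shell loop (a convenience sketch, not part of the original write-up):
[root@node143 ~]# for dev in sdb sdc sdd sde sdf sdg; do ceph-volume lvm create --data /dev/$dev; done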
The new devices now show up as LVM physical volumes:
[root@node143 ~]# blkid
/dev/mapper/centos-root: UUID="7616a088-d812-456b-8ae8-38d600eb9f8b" TYPE="xfs"
/dev/sda2: UUID="6V8bFT-ylA6-bifK-gmob-ah4I-zZ4G-N7EYwD" TYPE="LVM2_member"
/dev/sda1: UUID="eee4c9af-9f12-44d9-a386-535bde734678" TYPE="xfs"
/dev/sdb: UUID="TcjeCg-YsBQ-RHbm-UNYT-UoQv-iLFs-f1st2X" TYPE="LVM2_member"
/dev/sdd: UUID="aSLPmt-ohdJ-kG7W-JOB1-dzOD-D0zp-krWW5m" TYPE="LVM2_member"
/dev/sdc: UUID="7ARhbT-S9sC-OdZw-kUCq-yp97-gSpY-hfoPFa" TYPE="LVM2_member"
/dev/sdg: UUID="9MDhh2-bXIX-DwVf-RkIt-IUVm-fPEH-KSbsDd" TYPE="LVM2_member"
/dev/sde: UUID="oc2gSZ-j3WO-pOUs-qJk6-ZZS0-R8V7-1vYaZv" TYPE="LVM2_member"
/dev/sdf: UUID="jxQjNS-8xpV-Hc4p-d2Vd-1Q8O-U5Yp-j1Dn22" TYPE="LVM2_member"
Verify the new OSDs from the node itself and from the cluster:
[root@node143 ~]# ceph-volume lvm list
[root@node143 ~]# lsblk
[root@node143 ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 11.98761 root default
-2 3.26935 host node140
0 hdd 0.54489 osd.0 up 1.00000 1.00000
1 hdd 0.54489 osd.1 up 1.00000 1.00000
2 hdd 0.54489 osd.2 up 1.00000 1.00000
3 hdd 0.54489 osd.3 up 1.00000 1.00000
4 hdd 0.54489 osd.4 up 1.00000 1.00000
5 hdd 0.54489 osd.5 up 1.00000 1.00000
-3 3.26935 host node141
12 hdd 0.54489 osd.12 up 1.00000 1.00000
13 hdd 0.54489 osd.13 up 1.00000 1.00000
14 hdd 0.54489 osd.14 up 1.00000 1.00000
15 hdd 0.54489 osd.15 up 1.00000 1.00000
16 hdd 0.54489 osd.16 up 1.00000 1.00000
17 hdd 0.54489 osd.17 up 1.00000 1.00000
-4 2.17957 host node142
6 hdd 0.54489 osd.6 up 1.00000 1.00000
9 hdd 0.54489 osd.9 up 1.00000 1.00000
10 hdd 0.54489 osd.10 up 1.00000 1.00000
11 hdd 0.54489 osd.11 up 1.00000 1.00000
-9 3.26935 host node143
7 hdd 0.54489 osd.7 up 1.00000 1.00000
8 hdd 0.54489 osd.8 up 1.00000 1.00000
18 hdd 0.54489 osd.18 up 0 1.00000
19 hdd 0.54489 osd.19 up 0 1.00000
20 hdd 0.54489 osd.20 up 0 1.00000
21 hdd 0.54489 osd.21 up 0 1.00000
The output of ceph-volume lvm list groups each volume under a header of the form:
====== osd.0 =======
Note the osd.<num> in each header; that number is used below when enabling the systemd units.
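To pull just those IDs out of the listing, a simple filter works (illustrative only):
[root@node143 ~]# ceph-volume lvm list | grep '====== osd'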
Enable the systemd unit for each new OSD so the daemons come back after a reboot (the unit name takes the osd number from above):
[root@node143 ~]# systemctl enable ceph-osd@7
[root@node143 ~]# systemctl enable ceph-osd@8
[root@node143 ~]# systemctl enable ceph-osd@18
[root@node143 ~]# systemctl enable ceph-osd@19
[root@node143 ~]# systemctl enable ceph-osd@20
[root@node143 ~]# systemctl enable ceph-osd@21
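The same can be done in one line; --now additionally starts any unit that is not already running (a sketch assuming the OSD IDs above):
[root@node143 ~]# for id in 7 8 18 19 20 21; do systemctl enable --now ceph-osd@$id; done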
[root@node143 ~]# ceph -s
  cluster:
    id:     58a12719-a5ed-4f95-b312-6efd6e34e558
    health: HEALTH_WARN
            noin,nobackfill flag(s) set

  services:
    mon: 2 daemons, quorum node140,node142 (age 8d)
    mgr: admin(active, since 8d), standbys: node140
    mds: cephfs:1 {0=node140=up:active} 1 up:standby
    osd: 22 osds: 22 up (since 4m), 18 in (since 9m); 2 remapped pgs
         flags noin,nobackfill

  data:
    pools:   5 pools, 768 pgs
    objects: 2.65k objects, 9.9 GiB
    usage:   54 GiB used, 12 TiB / 12 TiB avail
    pgs:     766 active+clean
             1   active+remapped+backfilling
             1   active+remapped+backfill_wait
During off-peak hours, unset the flags; the cluster will then start backfilling and rebalancing data onto the new OSDs.
[root@node140 ~]# ceph osd unset noin
[root@node140 ~]# ceph osd unset nobackfill
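Recovery progress can then be followed until every placement group is back to active+clean (illustrative commands, not from the original):
[root@node140 ~]# ceph -s       # re-run periodically, or use ceph -w to stream cluster events
[root@node140 ~]# ceph osd df   # watch data spread across the new OSDs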