A block is a sequence of bytes (for example, a 512-byte chunk of data). Block-based storage interfaces are the most common way to store data on rotating media such as hard disks, CDs, floppy disks, and even traditional 9-track tape. The ubiquity of the block device interface makes a virtual block device an ideal candidate for interacting with a mass data storage system like Ceph.
Ceph block devices are thin-provisioned, resizable, and store their data striped across multiple OSDs in the Ceph cluster. They leverage RADOS capabilities such as snapshots, replication, and consistency. Ceph's RADOS Block Device (RBD) interacts with OSDs using either a kernel module or the librbd library.
Ceph block devices deliver high performance and virtually unlimited scalability to kernel clients, KVMs such as QEMU, and cloud-based computing systems such as OpenStack and CloudStack. You can run the Ceph RADOS Gateway, the Ceph file system, and Ceph block devices on the same cluster at the same time.
Create a pool and a block device
[root@ceph-node1 ~]# ceph osd pool create block 6
pool 'block' created
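On recent Ceph releases (Luminous and later), a freshly created pool should also be initialized for RBD use before images are created in it; otherwise `ceph health` may warn that the pool has no application associated. A minimal sketch:

```shell
# Tag the new pool for use by RBD (Luminous+); "rbd pool init"
# enables the "rbd" application on the pool.
rbd pool init block
# Equivalent lower-level form:
# ceph osd pool application enable block rbd
```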
Create a user for the client, and scp the keyring file to the client
[root@ceph-node1 ~]# ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=block'| tee ./ceph.client.rbd.keyring
[client.rbd]
key = AQA04PpdtJpbGxAAd+lCJFQnDfRlWL5cFUShoQ==
[root@ceph-node1 ~]#scp ceph.client.rbd.keyring root@ceph-client:/etc/ceph
On the client, create a 2 GB block device
[root@ceph-client /]# rbd create block/rbd0 --size 2048 --name client.rbd
Map the block device on the client
[root@ceph-client /]# rbd map --image block/rbd0 --name client.rbd
/dev/rbd0
[root@ceph-client /]# rbd showmapped --name client.rbd
id pool image snap device
0 block rbd0 - /dev/rbd0
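At this point `rbd info` is a quick way to check the image's size, object layout, and enabled features, which is also relevant to the mapping error discussed below:

```shell
# Show size, striping parameters, and the feature set of the image.
rbd info block/rbd0 --name client.rbd
```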
Note: the following error may occur here
[root@ceph-client /]# rbd map --image block/rbd0 --name client.rbd
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (2) No such file or directory
There are three ways to fix this; see my blog post on resolving "rbd: sysfs write failed".
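The most common cause is that the kernel RBD client is older than the image's default feature set. One fix is to disable the unsupported features before mapping; a sketch (exact feature names depend on your Ceph release):

```shell
# Keep only "layering", which old kernels support, then retry the map.
rbd feature disable block/rbd0 object-map fast-diff deep-flatten --name client.rbd
rbd map --image block/rbd0 --name client.rbd
```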
Create a filesystem and mount the block device
[root@ceph-client /]# fdisk -l /dev/rbd0
Disk /dev/rbd0: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
[root@ceph-client /]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=524288, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@ceph-client /]# mount /dev/rbd0 /ceph-rbd0
[root@ceph-client /]# df -Th /ceph-rbd0
Filesystem Type Size Used Avail Use% Mounted on
/dev/rbd0 xfs 2.0G 33M 2.0G 2% /ceph-rbd0
Write some data as a test
[root@ceph-client /]# dd if=/dev/zero of=/ceph-rbd0/file count=100 bs=1M
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0674301 s, 1.6 GB/s
[root@ceph-client /]# ls -lh /ceph-rbd0/file
-rw-r--r-- 1 root root 100M Dec 19 10:50 /ceph-rbd0/file
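Because RBD images are thin-provisioned, growing one online is also straightforward. Assuming the image is mapped and mounted as above, a hedged sketch:

```shell
# Grow the image from 2 GB to 4 GB, then grow the XFS filesystem to
# fill the new space; xfs_growfs works on a mounted filesystem.
rbd resize block/rbd0 --size 4096 --name client.rbd
xfs_growfs /ceph-rbd0
```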
Set it up as a system service
[root@ceph-client /]#cat /usr/local/bin/rbd-mount
#!/bin/bash
# Pool name where block device image is stored
export poolname=block
# Disk image name
export rbdimage0=rbd0
# Mounted Directory
export mountpoint0=/ceph-rbd0
# Image mount/unmount and pool are passed from the systemd service as arguments
# Are we are mounting or unmounting
if [ "$1" == "m" ]; then
modprobe rbd
rbd feature disable $poolname/$rbdimage0 object-map fast-diff deep-flatten
rbd map $poolname/$rbdimage0 --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
mkdir -p $mountpoint0
mount /dev/rbd/$poolname/$rbdimage0 $mountpoint0
fi
if [ "$1" == "u" ]; then
umount $mountpoint0
rbd unmap /dev/rbd/$poolname/$rbdimage0
fi
[root@ceph-client ~]# cat /etc/systemd/system/rbd-mount.service
[Unit]
Description=RADOS block device mapping for rbd0 in pool block
Conflicts=shutdown.target
Wants=network-online.target
After=NetworkManager-wait-online.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/rbd-mount m
ExecStop=/usr/local/bin/rbd-mount u
[Install]
WantedBy=multi-user.target
Mount automatically at boot
[root@ceph-client ~]#systemctl daemon-reload
[root@ceph-client ~]#systemctl enable rbd-mount.service
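Rather than waiting for the next reboot, you can start the service once now and confirm that the image comes up mapped and mounted:

```shell
# Start the oneshot service, then verify the mapping and the mount.
systemctl start rbd-mount.service
rbd showmapped --name client.rbd
df -Th /ceph-rbd0
```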