Ceph usage tips

This article shares a handful of practical tips for using ceph. Many readers may not be familiar with these techniques yet, so they are collected here for reference; hopefully you will find something useful in them.

1. Setting cephx keys

If cephx authentication is enabled in ceph, you can grant different capabilities to different users.

# Create a key for the dummy client
$ ceph auth get-or-create client.dummy mon 'allow r' osd  'allow rwx pool=dummy'

[client.dummy]
    key = AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
$ ceph auth list
installed auth entries:
...
client.dummy
    key: AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
    caps: [mon] allow r
    caps: [osd] allow rwx pool=dummy
...

# Reassign the dummy client's capabilities
$ ceph auth caps client.dummy mon 'allow rwx' osd 'allow rwx pool=dummy'
updated caps for client.dummy
$ ceph auth list
installed auth entries:
client.dummy
    key: AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
    caps: [mon] allow rwx
    caps: [osd] allow rwx pool=dummy
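
To actually use the new key from a client, it helps to save it to a keyring file first. A minimal sketch, assuming the conventional /etc/ceph keyring path (adjust to your setup):

# Export the dummy key to a keyring file (the path is an assumption)
$ ceph auth get client.dummy -o /etc/ceph/ceph.client.dummy.keyring
# Act as client.dummy against the dummy pool to verify the caps work
$ rados --id dummy --keyring /etc/ceph/ceph.client.dummy.keyring -p dummy ls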

2. Finding where an rbd image is mapped

rbd showmapped only lists rbd devices mapped on the local machine, so if you have many machines and have forgotten where you mapped an image, you would otherwise have to check them one by one. The listwatchers command solves this.

For an image with format 1:

$ rbd info boot
rbd image 'boot':
    size 10240 MB in 2560 objects
    order 22 (4096 kB objects)
    block_name_prefix: rb.0.89ee.2ae8944a
    format: 1
$ rados -p rbd listwatchers boot.rbd
watcher=192.168.251.102:0/2550823152 client.35321 cookie=1

For a format 2 image it is slightly different:

[root@osd2 ceph]# rbd info myrbd/rbd1
rbd image 'rbd1':
	size 8192 kB in 2 objects
	order 22 (4096 kB objects)
	block_name_prefix: rbd_data.13436b8b4567
	format: 2
	features: layering
[root@osd2 ceph]# rados -p myrbd listwatchers rbd_header.13436b8b4567
watcher=192.168.108.3:0/2292307264 client.5130 cookie=1

You need to append the image ID reported by rbd info (the suffix of block_name_prefix) to rbd_header.
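
This check is easy to script across a whole pool. A hedged sketch that prints the watchers of every format 2 image in a pool (the pool name myrbd is just the example used above; format 1 images would need the boot.rbd-style object name instead):

#!/bin/bash
# Print the watcher(s) -- i.e. the clients that have each image mapped --
# for every format 2 image in the given pool.
pool=myrbd
for img in $(rbd -p "$pool" ls); do
    # The image ID is the suffix of block_name_prefix in "rbd info"
    id=$(rbd -p "$pool" info "$img" | awk -F. '/block_name_prefix/ {print $NF}')
    echo "== $img =="
    rados -p "$pool" listwatchers "rbd_header.$id"
done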

3. How to delete a huge rbd image

Some blog posts mention that deleting a huge rbd image directly with rbd rm can take a very long time (a long, long night). I tried it on ceph 0.87 to see how things stand now; the details are as follows:

# Create a roughly 1 PB image (rbd sizes are given in MB here)
[root@osd2 ceph]# time rbd create myrbd/huge-image -s 1024000000

real	0m0.353s
user	0m0.016s
sys	0m0.009s
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
	size 976 TB in 256000000 objects
	order 22 (4096 kB objects)
	block_name_prefix: rb.0.1489.6b8b4567
	format: 1
[root@osd2 ceph]# time rbd rm myrbd/huge-image
Removing image: 2% complete...^\Quit (core dumped)

real	10m24.406s
user	18m58.335s
sys	11m39.507s

The image created above is roughly 1 PB. Perhaps it is simply too large: a direct rbd rm was still very slow (killed at 2% after more than ten minutes), so the following method was used instead:

[root@osd2 ceph]# rados -p myrbd rm huge-image.rbd
[root@osd2 ceph]# time rbd rm myrbd/huge-image
2014-11-06 09:44:42.916826 7fdb4fd5a7e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete...done.

real	0m0.192s
user	0m0.012s
sys	0m0.013s

Now try a 1 TB image:

[root@osd2 ceph]# rbd create myrbd/huge-image -s 1024000
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
	size 1000 GB in 256000 objects
	order 22 (4096 kB objects)
	block_name_prefix: rb.0.149c.6b8b4567
	format: 1
[root@osd2 ceph]# time rbd rm myrbd/huge-image
Removing image: 100% complete...done.

real	0m29.418s
user	0m52.467s
sys	0m32.372s

So for truly huge images, deletion should still be done as follows:

format 1:

[root@osd2 ceph]# rbd create myrbd/huge-image -s 1024000000
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
	size 976 TB in 256000000 objects
	order 22 (4096 kB objects)
	block_name_prefix: rb.0.14a5.6b8b4567
	format: 1
[root@osd2 ceph]# rados -p myrbd rm huge-image.rbd
[root@osd2 ceph]# time rados -p myrbd ls|grep '^rb.0.14a5.6b8b4567'|xargs -n 200  rados -p myrbd rm
[root@osd2 ceph]# time rbd rm myrbd/huge-image
2014-11-06 09:54:12.718211 7ffae55747e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete...done.

real	0m0.191s
user	0m0.015s
sys	0m0.010s

format 2:

[root@osd2 ceph]# rbd create myrbd/huge-image -s 1024000000 --image-format=2
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
	size 976 TB in 256000000 objects
	order 22 (4096 kB objects)
	block_name_prefix: rbd_data.14986b8b4567
	format: 2
	features: layering
[root@osd2 ceph]# rados -p myrbd rm rbd_id.huge-image
[root@osd2 ceph]# rados -p myrbd rm rbd_header.14986b8b4567
[root@osd2 ceph]# rados -p myrbd ls | grep '^rbd_data.14986b8b4567' | xargs -n 200  rados -p myrbd rm
[root@osd2 ceph]# time rbd rm myrbd/huge-image
2014-11-06 09:59:26.043671 7f6b6923c7e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete...done.

real    0m0.192s
user    0m0.016s
sys    0m0.010s

Note: if the image is empty, the xargs line is not needed; if the image contains data, it is.

So for images over 100 TB, it is still best to delete the image's ID/header objects first and only then run rbd rm.
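
For convenience, the format 2 steps above can be collected into one small script. A hedged sketch, assuming the image is format 2, not mapped anywhere, and has no snapshots or clones:

#!/bin/bash
# Fast-delete a large format 2 rbd image: $1 = pool, $2 = image name
pool=$1; image=$2
# Read the image ID (suffix of block_name_prefix) before deleting the header
id=$(rbd -p "$pool" info "$image" | awk -F. '/block_name_prefix/ {print $NF}')
rados -p "$pool" rm "rbd_id.$image"
rados -p "$pool" rm "rbd_header.$id"
# Delete the data objects; -r skips the run if the image was empty (GNU xargs)
rados -p "$pool" ls | grep "^rbd_data.$id" | xargs -r -n 200 rados -p "$pool" rm
# Now rbd rm only has to clean up the leftover bookkeeping and returns quickly
rbd rm "$pool/$image"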

4. Checking whether kvm or qemu supports ceph

$ sudo qemu-system-x86_64 -drive format=?
Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd dmg tftp ftps ftp https http cow cloop bochs blkverify blkdebug
$ qemu-img -h
...
...
Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd dmg tftp ftps ftp https http cow cloop bochs blkverify blkdebug
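
If rbd appears in both lists, the binaries were built with ceph support. For use in scripts, a minimal one-line form of the same check:

$ qemu-img -h | grep -qw rbd && echo "qemu-img supports rbd"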

If rbd is not listed, you can download the latest rpm or deb packages from http://ceph.com/packages/.

5. Serving NFS on top of a ceph rbd

This is a simple and practical way to provide storage; the steps are as follows:

# Install the nfs packages
[root@osd1 current]# yum install nfs-utils rpcbind
Loaded plugins: fastestmirror, priorities, refresh-packagekit, security
Loading mirror speeds from cached hostfile
epel/metalink                                                                                                  | 5.5 kB     00:00     
 * base: mirrors.cug.edu.cn
 * epel: mirrors.yun-idc.com
 * extras: mirrors.btte.net
 * rpmforge: ftp.riken.jp
 * updates: mirrors.btte.net
Ceph                                                                                                           |  951 B     00:00     
Ceph-noarch                                                                                                    |  951 B     00:00     
base                                                                                                           | 3.7 kB     00:00     
ceph-source                                                                                                    |  951 B     00:00     
epel                                                                                                           | 4.4 kB     00:00     
epel/primary_db                                                                                                | 6.3 MB     00:01     
extras                                                                                                         | 3.4 kB     00:00     
rpmforge                                                                                                       | 1.9 kB     00:00     
updates                                                                                                        | 3.4 kB     00:00     
updates/primary_db                                                                                             | 188 kB     00:00     
69 packages excluded due to repository priority protections
Setting up Install Process
Package rpcbind-0.2.0-11.el6.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.x86_64 1:1.2.3-39.el6 will be updated
---> Package nfs-utils.x86_64 1:1.2.3-54.el6 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================================
 Package                         Arch                         Version                                Repository                  Size
======================================================================================================================================
Updating:
 nfs-utils                       x86_64                       1:1.2.3-54.el6                         base                       326 k

Transaction Summary
======================================================================================================================================
Upgrade       1 Package(s)

Total download size: 326 k
Is this ok [y/N]: y
Downloading Packages:
nfs-utils-1.2.3-54.el6.x86_64.rpm                                                                              | 326 kB     00:00     
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating   : 1:nfs-utils-1.2.3-54.el6.x86_64                                                                                    1/2 
  Cleanup    : 1:nfs-utils-1.2.3-39.el6.x86_64                                                                                    2/2 
  Verifying  : 1:nfs-utils-1.2.3-54.el6.x86_64                                                                                    1/2 
  Verifying  : 1:nfs-utils-1.2.3-39.el6.x86_64                                                                                    2/2 

Updated:
  nfs-utils.x86_64 1:1.2.3-54.el6           

# Create an image, then format and mount it
[root@osd1 current]# rbd create myrbd/nfs_image -s 1024000 --image-format=2
[root@osd1 current]# rbd map myrbd/nfs_image
/dev/rbd0
[root@osd1 current]# mkdir /mnt/nfs
[root@osd1 current]# mkfs.xfs /dev/rbd0
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/rbd0              isize=256    agcount=33, agsize=8190976 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=262144000, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=128000, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@osd1 current]# mount /dev/rbd0 -o rw,noexec,nodev,noatime,nobarrier /mnt/nfs

# Edit the exports file, adding one line
[root@osd1 current]#  vim /etc/exports
/mnt/nfs 192.168.108.0/24(rw,no_root_squash,no_subtree_check,async)
[root@osd1 current]# exportfs -r
You also need to run service rpcbind start at this point.
[root@osd1 current]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]

At this point clients can mount the export. On a client, run:
showmount -e 192.168.108.2
and then mount it:
mount -t nfs 192.168.108.2:/mnt/nfs /mnt/nfs

If the mount fails, try running service rpcbind start or service portmap start.
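
To make the client mount persistent across reboots, an entry can be added to /etc/fstab. A hedged sketch, reusing the server address and paths from the example above:

# /etc/fstab on the NFS client; _netdev delays the mount until networking is up
192.168.108.2:/mnt/nfs  /mnt/nfs  nfs  defaults,_netdev  0  0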

That is all for these ceph usage tips. Thanks for reading, and I hope they prove useful!

