1. Add the MDS configuration
# vi ceph.conf
[mds.a]
host = hostname
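For context, a fuller ceph.conf sketch might look like the following. This is an assumption about the surrounding config, not taken verbatim from the cluster; the fsid and monitor address are copied from the `ceph -s` output later in these notes, and the MDS host name from the `service ceph start` output.

```ini
; Sketch only -- adapt fsid, mon address and host to your cluster.
[global]
fsid = 1c7ec934-1595-11e5-aa3f-06aed00006d5
mon host = 10.20.15.156:6789

[mds.a]
host = DEV-L0003542
```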
2. Create a directory for the MDS (metadata server)
# mkdir -p /var/lib/ceph/mds/ceph-a
3. Create a key for the bootstrap-mds client
# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring --gen-key -n client.bootstrap-mds
4. Register the bootstrap-mds client in the Ceph auth database, granting it the bootstrap-mds profile and importing the key created above
# ceph auth add client.bootstrap-mds mon 'allow profile bootstrap-mds' -i /var/lib/ceph/bootstrap-mds/ceph.keyring
5. Create the mds.a user in the Ceph auth database, grant its capabilities and generate its key; the key is written to /var/lib/ceph/mds/ceph-a/keyring
# ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.a osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-a/keyring
6. Start the MDS daemon
# service ceph start mds.a
=== mds.a ===
Starting Ceph mds.a on DEV-L0003542...
starting mds.a at :/0
# ceph mds stat        # check MDS status
e4: 1/1/1 up {0=a=up:active}
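The `ceph mds stat` summary line above can also be checked programmatically, e.g. from a monitoring script. A minimal sketch, using plain string parsing rather than any Ceph API (the sample line is copied from the output above; the function name is my own):

```python
import re

def parse_mds_stat(line):
    """Parse a `ceph mds stat` summary such as
    'e4: 1/1/1 up {0=a=up:active}' into (epoch, up_count, states)."""
    m = re.match(r"e(\d+): (\d+)/(\d+)/(\d+) up \{(.*)\}", line)
    if not m:
        raise ValueError("unrecognized mds stat line: %r" % line)
    epoch = int(m.group(1))
    up = int(m.group(2))
    states = {}
    # Each entry looks like '0=a=up:active' (rank=name=state).
    for entry in m.group(5).split(","):
        rank, name, state = entry.split("=")
        states[name] = state
    return epoch, up, states

epoch, up, states = parse_mds_stat("e4: 1/1/1 up {0=a=up:active}")
print(epoch, up, states)  # 4 1 {'a': 'up:active'}
```

A cluster is healthy for CephFS purposes when the single active rank reports `up:active`, as in the output above.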
7. Check the cluster status
# ceph -s
    cluster 1c7ec934-1595-11e5-aa3f-06aed00006d5
     health HEALTH_WARN 1360 pgs degraded; 4820 pgs stuck unclean; recovery 62423/138288 objects degraded (45.140%)
     monmap e1: 1 mons at {mon1=10.20.15.156:6789/0}, election epoch 2, quorum 0 mon1
     mdsmap e4: 1/1/1 up {0=a=up:active}
     osdmap e143: 7 osds: 7 up, 7 in
      pgmap v5095: 4820 pgs, 14 pools, 9919 kB data, 46096 objects
            9251 MB used, 150 GB / 159 GB avail
            62423/138288 objects degraded (45.140%)
                 667 active
                1360 active+degraded
                2793 active+remapped
  client io 3058 B/s wr, 5 op/s
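The recovery figure in the health line (62423/138288 objects, i.e. about 45.1% degraded) can be extracted the same way. A small sketch against the exact text above (again plain string parsing, not a Ceph API):

```python
import re

def degraded_ratio(health_line):
    """Extract (degraded, total, percent) from a HEALTH_WARN recovery
    clause like 'recovery 62423/138288 objects degraded (45.140%)'.
    Returns None when no recovery clause is present (e.g. HEALTH_OK)."""
    m = re.search(r"recovery (\d+)/(\d+) objects degraded \(([\d.]+)%\)",
                  health_line)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2)), float(m.group(3))

line = ("health HEALTH_WARN 1360 pgs degraded; 4820 pgs stuck unclean; "
        "recovery 62423/138288 objects degraded (45.140%)")
print(degraded_ratio(line))  # (62423, 138288, 45.14)
```

As a sanity check, 62423 / 138288 is indeed roughly 0.4514, matching the percentage Ceph reports.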
8. Mount CephFS on a client (FUSE)
# yum install ceph-fuse -y
# mkdir ~/mycephfs
# ceph-fuse -m 10.20.15.156:6789 ~/mycephfs
# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/Volgroup00-LV_root   16G  3.9G   11G  27% /
tmpfs                           1.9G     0  1.9G   0% /dev/shm
/dev/vda1                       194M   34M  150M  19% /boot
/dev/sda                        100G  1.9G   99G   2% /mnt/share
/dev/vdb                         10G  1.3G  8.8G  13% /var/lib/ceph/osd/ceph-0
/dev/vdc                         10G  1.2G  8.8G  12% /var/lib/ceph/osd/ceph-1
/dev/vdd                         10G  1.2G  8.9G  12% /var/lib/ceph/osd/ceph-5
/dev/sda                        100G  1.9G   99G   2% /var/lib/ceph/osd/ceph-6
ceph-fuse                       160G  9.1G  151G   6% /root/mycephfs
9. Mount directly through the kernel client (requires a kernel with a recent ceph module)
Note: the secretfile option expects a file containing only the bare base64 key, not a full keyring, so extract the key first with `ceph auth get-key`.
# ceph auth get-key client.admin > /etc/ceph/admin.secret
# mount -t ceph 10.20.15.156:6789:/ /mycephfs -v -o name=admin,secretfile=/etc/ceph/admin.secret
10. Notes
CephFS stores its metadata in the metadata pool by default:
# rados ls -p metadata
609.00000000
mds0_sessionmap
608.00000000
601.00000000
602.00000000
mds0_inotable
1.00000000.inode
200.00000000
604.00000000
605.00000000
mds_anchortable
mds_snaptable
600.00000000
603.00000000
100.00000000
200.00000001
606.00000000
607.00000000
100.00000000.inode
1.00000000
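Most of the object names listed above follow the CephFS convention `<inode number in hex>.<object index in hex>` (e.g. `609.00000000` is the first object of inode 0x609); the remaining names (`mds0_sessionmap`, `mds_anchortable`, ...) are special MDS tables. A small hypothetical helper to decode the listing, written for illustration only:

```python
def decode_cephfs_object(name):
    """Split a CephFS RADOS object name like '609.00000000' into
    (inode, index, suffix). Returns None for named MDS tables such
    as 'mds_snaptable'. A trailing suffix like '.inode'
    (e.g. '100.00000000.inode') is preserved."""
    parts = name.split(".")
    try:
        inode = int(parts[0], 16)   # inode number, hexadecimal
        index = int(parts[1], 16)   # object index within the inode
    except (IndexError, ValueError):
        return None                 # e.g. 'mds0_sessionmap'
    suffix = parts[2] if len(parts) > 2 else None
    return inode, index, suffix

print(decode_cephfs_object("609.00000000"))        # (1545, 0, None)
print(decode_cephfs_object("100.00000000.inode"))  # (256, 0, 'inode')
print(decode_cephfs_object("mds_snaptable"))       # None
```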