Ceph Introduction - Management and Usage (Part 2)


Chapter 1 Using Ceph

1.1 Block Devices

1.1.1 Creating a Block Device Pool

  • Syntax:
# {pg-num} and {pgp-num} must be set to the same value
ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] [replicated] [crush-rule-name] [expected-num-objects]
ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] erasure [erasure-code-profile] [crush-rule-name] [expected_num_objects] [--autoscale-mode=<on,off,warn>]
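
For reference, the pg_num and pgp_num of an existing pool can be inspected and adjusted later; a minimal sketch, assuming a pool named rbd:

[root@ceph-manager ~]# ceph osd pool get rbd pg_num
pg_num: 64
# keep pgp_num in sync with pg_num when changing it
[root@ceph-manager ~]# ceph osd pool set rbd pg_num 128
[root@ceph-manager ~]# ceph osd pool set rbd pgp_num 128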

1.1.1.1 Create the block device pool

[root@ceph-manager ~]# ceph osd pool create rbd 64 64
[root@ceph-manager ~]# rbd pool init rbd
[root@ceph-manager ~]# ceph -s
  cluster:
    id:     55355315-f0b4-4901-9691-f7d23e9379d6
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-node01 (age 4m)
    mgr: ceph-node01(active, since 6m)
    osd: 3 osds: 3 up (since 81m), 3 in (since 81m)

  data:
    pools:   1 pools, 64 pgs
    objects: 1 objects, 19 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     64 active+clean
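
The pool and its replication/PG settings can be double-checked as follows (a quick sketch; the exact output depends on the cluster):

# show replication size, pg_num and other per-pool settings
[root@ceph-manager ~]# ceph osd pool ls detail
# show per-pool capacity usage
[root@ceph-manager ~]# ceph df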

1.1.2 Configuring the Block Device

1.1.2.1 Map the block device image

[root@ceph-client ~]# rbd create testrbd --size=10G
# If this command fails, see the troubleshooting section 1.4.1 (rbd: sysfs write failed)
[root@ceph-client ~]# rbd map testrbd --name client.admin
/dev/rbd0

[root@ceph-client ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000de1ce

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200     6293503     2097152   82  Linux swap / Solaris
/dev/sda3         6293504   104857599    49282048   83  Linux

Disk /dev/rbd0: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
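
Mapped images and their local device names can be listed at any time; a minimal check, assuming the mapping above:

# should list testrbd mapped to /dev/rbd0
[root@ceph-client ~]# rbd showmapped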

1.1.2.2 Create a filesystem on the block device

[root@ceph-client ~]# mkfs.xfs /dev/rbd0
[root@ceph-client ~]# mount /dev/rbd0 /mnt

[root@ceph-client ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        476M     0  476M   0% /dev
tmpfs           487M     0  487M   0% /dev/shm
tmpfs           487M  7.7M  479M   2% /run
tmpfs           487M     0  487M   0% /sys/fs/cgroup
/dev/sda3        47G  2.1G   45G   5% /
/dev/sda1      1014M  132M  883M  13% /boot
tmpfs            98M     0   98M   0% /run/user/0
/dev/rbd0        10G   33M   10G   1% /mnt
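
If the image later needs more space, it can be grown online and the XFS filesystem extended to match; a hedged sketch, assuming the testrbd image and /mnt mount point from above:

# grow the RBD image to 20 GiB
[root@ceph-client ~]# rbd resize testrbd --size 20G
# grow the mounted XFS filesystem to fill the new size
[root@ceph-client ~]# xfs_growfs /mnt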

1.1.3 Deleting a Block Device Image

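If the image is still mounted from the previous section, unmount it before unmapping (assuming the /mnt mount point used above):

[root@ceph-client ~]# umount /mnt
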
# Unmap the mapped image
[root@ceph-client ~]# rbd unmap testrbd
# Delete the block device image
[root@ceph-client ~]# rbd remove testrbd
Removing image: 100% complete...done.

1.1.4 Deleting a Block Device Pool

  • Syntax:
ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]

1.1.4.1 Modify the mon node configuration file

[root@ceph-node01 ~]# vim /etc/ceph/ceph.conf
[global]
fsid = 55355315-f0b4-4901-9691-f7d23e9379d6
mon_initial_members = ceph-node01
mon_host = 10.10.10.201
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
mon_allow_pool_delete = true    # add this line
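
Instead of editing ceph.conf and restarting the mon, the option can also be injected at runtime; a sketch (values set this way are not persistent across a mon restart unless they are also written to the configuration):

[root@ceph-node01 ~]# ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'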

1.1.4.2 Restart the mon service

[root@ceph-node01 ~]# systemctl list-units --type=service |grep ceph
ceph-crash.service                 loaded active running Ceph crash dump collector
ceph-mgr@ceph-node01.service       loaded active running Ceph cluster manager daemon
ceph-mon@ceph-node01.service       loaded active running Ceph cluster monitor daemon
ceph-osd@0.service                 loaded active running Ceph object storage daemon osd.0

[root@ceph-node01 ~]# systemctl restart ceph-mon@ceph-node01.service

1.1.4.3 Delete the block device pool

[root@ceph-manager ~]# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
pool 'rbd' removed

Note: pool operations: https://ceph.readthedocs.io/en/latest/rados/operations/pools/

1.2 File System

All metadata operations in a Ceph file system go through the metadata server, so at least one metadata server is required.

1.2.1 Creating a Metadata Server

[cephnodes@ceph-manager ~]$ ceph-deploy mds create ceph-node01
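
The new daemon can be verified from the admin node; a minimal check (it shows up as standby until a filesystem is created in the next step):

[cephnodes@ceph-manager ~]$ ceph mds stat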

1.2.2 Creating a File System

[root@ceph-client ~]# ceph osd pool create cephfs_data 32
pool 'cephfs_data' created
[root@ceph-client ~]# ceph osd pool create cephfs_meta 32
pool 'cephfs_meta' created
[root@ceph-client ~]# ceph fs new testcephfs cephfs_meta cephfs_data
new fs with metadata pool 4 and data pool 3

[root@ceph-client ~]# ceph -s
  cluster:
    id:     55355315-f0b4-4901-9691-f7d23e9379d6
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-node01 (age 93m)
    mgr: ceph-node01(active, since 96m)
    mds: testcephfs:1 {0=ceph-node01=up:active}
    osd: 3 osds: 3 up (since 2h), 3 in (since 2h)

  task status:
    scrub status:
        mds.ceph-node01: idle

  data:
    pools:   3 pools, 128 pgs
    objects: 25 objects, 2.3 KiB
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     128 active+clean
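
The new filesystem and its pools can also be listed directly (a quick check; the output format may vary slightly between Ceph releases):

[root@ceph-client ~]# ceph fs ls
name: testcephfs, metadata pool: cephfs_meta, data pools: [cephfs_data ]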

1.2.3 Mounting the File System

1.2.3.1 View the user key

[cephnodes@ceph-manager ~]$ cat ceph.client.admin.keyring
[client.admin]
    key = AQBEdw1fMxaWCRAAnM6d++XzbapVXrFbOuxsHg==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
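
The key can also be read non-interactively, which is convenient for writing the secret file used later; a minimal sketch:

[cephnodes@ceph-manager ~]$ ceph auth get-key client.admin
AQBEdw1fMxaWCRAAnM6d++XzbapVXrFbOuxsHg==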

1.2.3.2 Mount directly

# The IP address used here is the mon server's IP address
[root@ceph-client ~]# mount -t ceph 10.10.10.201:6789:/ /mnt -o name=admin,secret=AQBEdw1fMxaWCRAAnM6d++XzbapVXrFbOuxsHg==

[root@ceph-client ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             476M     0  476M   0% /dev
tmpfs                487M     0  487M   0% /dev/shm
tmpfs                487M  7.6M  479M   2% /run
tmpfs                487M     0  487M   0% /sys/fs/cgroup
/dev/sda3             47G  2.1G   45G   5% /
/dev/sda1           1014M  132M  883M  13% /boot
tmpfs                 98M     0   98M   0% /run/user/0
10.10.10.201:6789:/   27G     0   27G   0% /mnt

1.2.3.3 Mount using a secret file

[root@ceph-client ~]# vim /etc/ceph/secret.key
AQBEdw1fMxaWCRAAnM6d++XzbapVXrFbOuxsHg==
[root@ceph-client ~]# mount -t ceph 10.10.10.201:6789:/ /mnt -o name=admin,secretfile=/etc/ceph/secret.key

[root@ceph-client ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             476M     0  476M   0% /dev
tmpfs                487M     0  487M   0% /dev/shm
tmpfs                487M  7.6M  479M   2% /run
tmpfs                487M     0  487M   0% /sys/fs/cgroup
/dev/sda3             47G  2.1G   45G   5% /
/dev/sda1           1014M  132M  883M  13% /boot
tmpfs                 98M     0   98M   0% /run/user/0
10.10.10.201:6789:/   27G     0   27G   0% /mnt

1.2.3.4 Mount automatically at boot via /etc/fstab

[root@ceph-client ~]# vim /etc/fstab
10.10.10.201:6789:/     /mnt    ceph    name=admin,secretfile=/etc/ceph/secret.key,noatime,_netdev    0       2
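
The fstab entry can be tested without rebooting (assuming /mnt is not already mounted):

[root@ceph-client ~]# mount -a
[root@ceph-client ~]# df -h /mnt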

1.2.3.5 Mount using ceph-fuse

[root@ceph-client ~]# yum install -y ceph-fuse
# If /etc/ceph/ceph.client.admin.keyring already exists, the -k option can be omitted
[root@ceph-client ~]# ceph-fuse -k /etc/ceph/ceph.client.admin.keyring -m 10.10.10.201:6789 /mnt
ceph-fuse[19430]: starting ceph client
2020-07-14 20:43:27.386 7fa07dea3f80 -1 init, newargv = 0x55f290d53cb0 newargc=9
ceph-fuse[19430]: starting fuse

[root@ceph-client ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        476M     0  476M   0% /dev
tmpfs           487M     0  487M   0% /dev/shm
tmpfs           487M  7.6M  479M   2% /run
tmpfs           487M     0  487M   0% /sys/fs/cgroup
/dev/sda3        47G  2.2G   45G   5% /
/dev/sda1      1014M  132M  883M  13% /boot
tmpfs            98M     0   98M   0% /run/user/0
ceph-fuse        27G     0   27G   0% /mnt
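
A ceph-fuse mount is released like any other FUSE filesystem; either of the following works:

[root@ceph-client ~]# fusermount -u /mnt
# or
[root@ceph-client ~]# umount /mnt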

Note: creating a Ceph file system: https://ceph.readthedocs.io/en/latest/cephfs/createfs/

1.3 Object Storage

1.3.1 Creating the Object Storage Gateway

[cephnodes@ceph-manager ~]$ ceph-deploy install --rgw ceph-client --no-adjust-repos
[cephnodes@ceph-manager ~]$ ceph-deploy rgw create ceph-client

[root@ceph-manager ~]# ceph -s
  cluster:
    id:     55355315-f0b4-4901-9691-f7d23e9379d6
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-node01 (age 2h)
    mgr: ceph-node01(active, since 2h)
    mds: testcephfs:1 {0=ceph-node01=up:active}
    osd: 3 osds: 3 up (since 3h), 3 in (since 3h)
    rgw: 1 daemon active (ceph-client)

  task status:
    scrub status:
        mds.ceph-node01: idle

  data:
    pools:   7 pools, 256 pgs
    objects: 212 objects, 7.0 KiB
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     256 active+clean

  io:
    client:   85 B/s rd, 0 op/s rd, 0 op/s wr

# Port 7480 is used by default
[root@ceph-client ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name  
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1210/sshd          
tcp        0      0 0.0.0.0:7480            0.0.0.0:*               LISTEN      19750/radosgw      
tcp6       0      0 :::22                   :::*                    LISTEN      1210/sshd          
tcp6       0      0 :::7480                 :::*                    LISTEN      19750/radosgw      
udp        0      0 127.0.0.1:323           0.0.0.0:*                           837/chronyd        
udp6       0      0 ::1:323                 :::*                                837/chronyd
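
To actually use the gateway over the S3 API, a RadosGW user is needed; a hedged sketch (the uid and display name are just examples). The command prints a JSON document containing the generated access_key and secret_key:

[root@ceph-client ~]# radosgw-admin user create --uid=testuser --display-name="Test User"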

1.3.2 Changing the Default Port

[root@ceph-client ~]# vim /etc/ceph/ceph.conf
[global]
fsid = 55355315-f0b4-4901-9691-f7d23e9379d6
mon_initial_members = ceph-node01
mon_host = 10.10.10.201
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
mon_allow_pool_delete = true

[client.rgw.ceph-client]
rgw_frontends = "civetweb port=80"

[root@ceph-client ~]# systemctl restart ceph-radosgw@rgw.ceph-client.service

[root@ceph-client ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name  
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      20547/radosgw      
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1210/sshd          
tcp6       0      0 :::22                   :::*                    LISTEN      1210/sshd          
udp        0      0 127.0.0.1:323           0.0.0.0:*                           837/chronyd        
udp6       0      0 ::1:323                 :::*                                837/chronyd

1.3.3 Checking the Result

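A quick way to verify that the gateway is responding (assuming ceph-client resolves to the gateway host and the port has been changed to 80 as above). An anonymous request should return an empty bucket listing similar to:

[root@ceph-manager ~]# curl http://ceph-client/
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>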

1.4 Troubleshooting

1.4.1 rbd: sysfs write failed

Symptom:

[root@ceph-client ~]# rbd map testrbd --name client.admin
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable testrbd object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address

Cause:

  • RBD images support two formats (--image-format format-id): format-id is 1 or 2, and the default is 2.
  1. format 1: the original format used when creating RBD images. It is compatible with all versions of librbd and the kernel module, but it does not support newer features such as cloning.
  2. format 2: the second RBD format, supported by librbd and the kernel since 3.11 (except for striping). This format adds clone support, makes it easier to extend, and allows more features to be added in the future.
  • Check dmesg as suggested in the error message:
[root@ceph-client ~]# dmesg | tail
[ 1173.382421] libceph: loaded (mon/osd proto 15/24)
[ 1173.389249] rbd: loaded (major 253)
[ 1173.393763] libceph: no secret set (for auth_x protocol)
[ 1173.393782] libceph: error -22 on auth protocol 2 init
[ 1199.498891] libceph: mon0 10.10.10.201:3300 socket closed (con state CONNECTING)
[ 1199.924710] libceph: mon0 10.10.10.201:3300 socket closed (con state CONNECTING)
[ 1200.927736] libceph: mon0 10.10.10.201:3300 socket closed (con state CONNECTING)
[ 1202.926760] libceph: mon1 10.10.10.201:6789 session established
[ 1202.928441] libceph: client14129 fsid 55355315-f0b4-4901-9691-f7d23e9379d6
[ 1202.947305] rbd: image testrbd: image uses unsupported features: 0x38

The dmesg output reports unsupported features 0x38. 0x38 is a hexadecimal value; converted to decimal it is 3*16+8 = 56, which decomposes into the RBD feature bits the kernel does not support:

32 + 16 + 8 = deep-flatten, fast-diff, object-map

(Bit 4, exclusive-lock, is not flagged as unsupported here, but it is also disabled in the fix below.)

[root@ceph-client ~]# rbd info rbd/testrbd
rbd image 'testrbd':
    size 10 GiB in 2560 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 372615e75ca7
    block_name_prefix: rbd_data.372615e75ca7
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Tue Jul 14 19:03:07 2020
    access_timestamp: Tue Jul 14 19:03:07 2020
    modify_timestamp: Tue Jul 14 19:03:07 2020

Solution:

Disable the unsupported feature flags dynamically, then map the image again:

[root@ceph-client ~]# rbd feature disable rbd/testrbd deep-flatten
[root@ceph-client ~]# rbd feature disable rbd/testrbd fast-diff
[root@ceph-client ~]# rbd feature disable rbd/testrbd object-map
[root@ceph-client ~]# rbd feature disable rbd/testrbd exclusive-lock

[root@ceph-client ~]# rbd info rbd/testrbd
rbd image 'testrbd':
    size 10 GiB in 2560 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 372615e75ca7
    block_name_prefix: rbd_data.372615e75ca7
    format: 2
    features: layering
    op_features:
    flags:
    create_timestamp: Tue Jul 14 19:03:07 2020
    access_timestamp: Tue Jul 14 19:03:07 2020
    modify_timestamp: Tue Jul 14 19:03:07 2020
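
To avoid hitting this on older kernels in the first place, new images can be created with only the layering feature, or a layering-only default can be set on the client; a hedged sketch (the image name testrbd2 is just an example):

# create an image with only the layering feature enabled
[root@ceph-client ~]# rbd create testrbd2 --size 10G --image-feature layering

# or make layering-only the default for new images created on this client
[root@ceph-client ~]# vim /etc/ceph/ceph.conf
[client]
rbd_default_features = 1    # 1 = layering only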

1.5 References

https://www.dazhuanlan.com/2019/08/16/5d560e805987f/

https://blog.csdn.net/weixin_44691065/article/details/93082965

https://www.jianshu.com/p/a4e1ba361cc9
