Cluster status

command | example | description |
---|---|---|
ceph version | | Show the versions of the cluster components |
ceph -s | | Cluster health, daemon status, and other summary info |
ceph health detail | | Detailed cluster health information |
ceph df | | Overview and breakdown of the cluster's storage usage |
ceph osd stat | | Overview of the cluster's OSD instances |
ceph osd df | | Capacity usage of each OSD |
rados df | | Usage of each pool |
ceph osd pool ls | | List pools |
ceph osd pool ls detail | | List pools with details |
ceph crash ls | | List all crash reports |
ceph crash ls-new | | List newly appeared crash reports |
ceph crash info | ceph crash info 2023-02-01….. | Show the details of a specific crash |
ceph crash archive | ceph crash archive 2023-02-01.. | Archive a specific crash |
ceph crash archive-all | | Archive all crash reports (clears the related WARN) |
ceph daemon osd. | ceph daemon osd.3 config show | Show the running configuration of an OSD daemon |
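These status commands are easy to wrap in a monitoring script. A minimal sketch: the first word of `ceph health` output is the overall status (`HEALTH_OK`, `HEALTH_WARN`, or `HEALTH_ERR`), so a script can branch on it. The sample output string below is hypothetical, stood in for a live `$(ceph health detail)` call:

```shell
# Hypothetical sample output, for illustration only; on a real cluster use:
#   health_output=$(ceph health detail)
health_output="HEALTH_WARN 1 daemons have recently crashed"

# The overall status is the first whitespace-separated word.
status=$(printf '%s\n' "$health_output" | awk '{print $1}')

if [ "$status" = "HEALTH_OK" ]; then
  echo "cluster healthy"
else
  echo "cluster needs attention: $status"
fi
```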
osd

command | example | description |
---|---|---|
ceph osd tree | | List every OSD along with its class, weight, status, host, and any reweight or priority settings |
ceph osd find | ceph osd find 8 | Quickly locate an OSD's host and address information |
ceph osd down | ceph osd down 0 | Mark an OSD down; it stops serving reads and writes, but the daemon is still alive |
ceph osd up | ceph osd up 0 | Mark an OSD up; it resumes serving reads and writes |
ceph osd out | ceph osd out 0 | Evict the OSD from the cluster; it can then be maintained |
ceph osd in | ceph osd in 0 | Bring the OSD back into the cluster |
ceph osd rm | ceph osd rm 0 | Remove an OSD from the cluster; its daemon may need to be stopped first, i.e. systemctl stop ceph-osd@0 |
ceph osd crush rm | ceph osd crush rm node1 | Remove a host node from the cluster's CRUSH map |
ceph osd getmaxosd | | Show the maximum OSD count and the current OSD count (with dynamic adjustment enabled this adapts automatically and needs no setting) |
ceph osd setmaxosd | ceph osd setmaxosd 20 | Set the maximum OSD count |
ceph osd pause | | Pause the OSD service; the whole cluster stops accepting data |
ceph osd unpause | | Resume the OSD service; the cluster accepts data again |
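The down/out/rm rows above are normally combined in a fixed order when an OSD is removed for good. A dry-run sketch of the commonly documented sequence, assuming OSD id 0; the `run` helper only prints each command, so nothing here touches a cluster:

```shell
OSD_ID=0

# Dry-run helper: print the command instead of executing it.
run() { echo "+ $*"; }

run ceph osd out "$OSD_ID"              # evict: data rebalances off this OSD
run systemctl stop "ceph-osd@$OSD_ID"   # stop the daemon on its host
run ceph osd crush rm "osd.$OSD_ID"     # remove it from the CRUSH map
run ceph auth del "osd.$OSD_ID"         # delete its cephx key
run ceph osd rm "$OSD_ID"               # finally remove it from the OSD map
```

Replacing `run` with nothing (or `eval "$@"`) would execute the sequence for real; review it against your cluster's state first.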
pool

command | example | description |
---|---|---|
ceph osd pool create | ceph osd pool create test_pool | Create a pool; pg_num is optional; usually combined with declaring the pool type, below |
ceph osd pool application enable | ceph osd pool application enable test_pool rbd | Supported types: cephfs, rbd, rgw |
ceph osd pool set-quota | ceph osd pool set-quota test_pool max_bytes $((10 * 1024 * 1024)) | Set a pool quota |
ceph osd pool get-quota | ceph osd pool get-quota test_pool | Show a pool's quotas |
ceph osd pool set | ceph osd pool set test_pool pg_num 64 | Change a pool's pg_num |
ceph osd pool set | ceph osd pool set test_pool pgp_num 64 | Change a pool's pgp_num |
ceph osd pool set | ceph osd pool set test_pool size 2 | Change the replica count |
ceph osd pool set | ceph osd pool set test_pool min_size 1 | Change the minimum replica count for serving I/O in a degraded state |
ceph osd pool rename | ceph osd pool rename test_pool test_new_pool | Rename a pool |
ceph osd pool delete | ceph osd pool delete test_new_pool test_new_pool --yes-i-really-really-mean-it | Delete a pool; mon_allow_pool_delete=true must be set temporarily for the deletion to succeed |
ceph --admin-daemon /var/run/ceph/ceph-mon. | ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-pre-01.asok config set mon_allow_pool_delete true | Temporarily set mon_allow_pool_delete=true |
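The `$(( ))` in the set-quota example is ordinary shell arithmetic, expanded before ceph ever sees it; max_bytes takes a raw byte count, so the shell does the unit math:

```shell
# max_bytes is a plain byte count; compute common sizes with shell arithmetic.
mib10=$((10 * 1024 * 1024))          # 10 MiB
gib1=$((1 * 1024 * 1024 * 1024))     # 1 GiB
echo "$mib10"   # 10485760
echo "$gib1"    # 1073741824

# e.g.: ceph osd pool set-quota test_pool max_bytes "$gib1"
```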
rgw

command | example | description |
---|---|---|
radosgw-admin user create --uid= | radosgw-admin user create --uid=ohmyuser --display-name="ohmyuser" --email=ohmyuser@ohops.org | Create an RGW user |
s3cmd mb s3:// | s3cmd mb s3://test-backup | Create a bucket |
radosgw-admin bucket link --uid= | radosgw-admin bucket link --uid=ohmyuser --bucket=test-backup | Link a bucket to a user |
radosgw-admin quota set --uid= | radosgw-admin quota set --uid=ohmyuser --quota-scope=user --max-size=100TB | Set a user quota |
radosgw-admin quota enable --quota-scope=user --uid= | radosgw-admin quota enable --quota-scope=user --uid=ohmyuser | Enable the user quota |
radosgw-admin quota disable --quota-scope=user --uid= | radosgw-admin quota disable --quota-scope=user --uid=ohmyuser | Disable the user quota |
radosgw-admin quota set --uid= | radosgw-admin quota set --uid=ohmyuser --quota-scope=bucket --max-size=100TB | Set a bucket quota |
radosgw-admin quota enable --quota-scope=bucket --uid= | radosgw-admin quota enable --quota-scope=bucket --uid=ohmyuser | Enable the bucket quota |
radosgw-admin quota disable --quota-scope=bucket --uid= | radosgw-admin quota disable --quota-scope=bucket --uid=ohmyuser | Disable the bucket quota |
radosgw-admin user info --uid= | radosgw-admin user info --uid=ohmyuser | Show RGW user info |
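Strung together, the rows above form the usual create-user, create-bucket, link, then quota workflow. A dry-run sketch using the table's example names (`run` only prints, nothing is executed):

```shell
# Dry-run helper: print the command instead of executing it.
run() { echo "+ $*"; }

run radosgw-admin user create --uid=ohmyuser --display-name=ohmyuser
run s3cmd mb s3://test-backup
run radosgw-admin bucket link --uid=ohmyuser --bucket=test-backup
run radosgw-admin quota set --uid=ohmyuser --quota-scope=user --max-size=100TB
run radosgw-admin quota enable --quota-scope=user --uid=ohmyuser
run radosgw-admin user info --uid=ohmyuser   # verify the result
```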
Please credit the source when republishing. Checking the references cited here is welcome, as are reports of anything incorrect or unclear, by email to chinaops666@gmail.com