1. View the current CRUSH rules
ceph osd crush rule dump
[{"rule_id":0,"rule_name":"replicated_rule","ruleset":0,"type":1,"min_size":1,"max_size":10,"steps":[{"op":"take","item":-1,"item_name":"default"},{"op":"chooseleaf_firstn","num":0,"type":"host"...
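To check which rule a specific pool is actually using, the pool's crush rule can be queried directly. A minimal sketch, assuming a pool named demo and a Luminous-or-newer release (older releases expose the same setting under the crush_ruleset key):
# ceph osd pool get demo crush_rule
The output is the name of the rule currently assigned to that pool.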
# Export the CRUSH map in binary format to the file test.bin
ceph osd getcrushmap -o test.bin
# Use the crushtool utility to convert the binary data in test.bin into text form and save it as test.txt
crushtool -d test.bin -o test.txt
The most important part of the crushmap configuration is the rule. A crush rule decides three important things: 1. From the OSDMap...
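The decompile step above is normally part of a full edit cycle: export, decompile, edit the text, recompile, and inject the result. A minimal sketch of that cycle, where test.bin, test.txt and new.bin are just placeholder file names:
Export the current binary map
# ceph osd getcrushmap -o test.bin
Decompile it into editable text
# crushtool -d test.bin -o test.txt
(edit test.txt: buckets, rules, failure domain, weights ...)
Compile the edited text back into a binary map
# crushtool -c test.txt -o new.bin
Inject the new map into the cluster
# ceph osd setcrushmap -i new.bin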
When adding new OSDs to a cluster, to reduce the impact of data movement on cluster performance, Ceph officially recommends increasing an OSD's crush weight gradually: start at 0, set it to 0.2, wait for the data migration to finish, then set it to 0.4, and so on, stepping through 0.6, 0.8 up to 1 or even higher (a sketch of this ramp-up follows below). When decommissioning an OSD, the opposite is recommended: gradually reduce the OSD's crush weight until it reaches 0.
3. Modify...
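A sketch of that gradual ramp-up for a newly added OSD, assuming osd.12 is a placeholder ID and the OSD was added with an initial crush weight of 0; run each step only after the previous rebalance has finished (check with ceph -s):
# ceph osd crush reweight osd.12 0.2
(wait for recovery/backfill to complete)
# ceph osd crush reweight osd.12 0.4
# ceph osd crush reweight osd.12 0.6
# ceph osd crush reweight osd.12 0.8
# ceph osd crush reweight osd.12 1.0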
$ ceph osd crush rule ls
[ "replicated_rule", "erasurepool" ]
$ ceph osd crush rule dump erasurepool
{ "rule_id": 1, "rule_name": "erasurepool", "ruleset": 1, "type": 3, "min_size": 3, "max_size": 20, "steps": [ { "op": "take", "item": -1, "item_name": "def...
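For reference, an erasure rule like the erasurepool rule dumped above is normally created automatically when an erasure-coded pool is built from a profile. A hedged sketch, where the profile name myprofile, the k/m values and the pool name ecpool are illustrative only (older releases spell the failure-domain option ruleset-failure-domain):
# ceph osd erasure-code-profile set myprofile k=2 m=1 crush-failure-domain=host
# ceph osd pool create ecpool 64 64 erasure myprofile
# ceph osd crush rule ls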
Levels such as host/chassis/rack are called buckets in Ceph terminology. New bucket levels can be defined, for example a host-SSD bucket level used specifically to group the OSDs that sit on SSD disks. The default CRUSH rule can also be modified to change the failure domain from host to OSD (a sketch of this edit follows below). CRUSH is what allows Ceph to heal itself. pg_num and pgp_num must be equal for rebalancing to work...
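As an illustration of swapping the failure domain, the replicated rule in the decompiled text (test.txt above) can be edited so that the chooseleaf step selects type osd instead of type host. This is only a sketch of what the edited rule might look like, not a dump from a real cluster, and older releases use ruleset instead of id in the header:
rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type osd    # was: type host
    step emit
}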
root@ceph-deploy:~# ceph osd setcrushmap -i /data/ceph/newcrushmap
Verify that the CRUSH map has taken effect:
root@ceph-deploy:~# ceph osd crush rule dump
[ { "rule_id": 0, "rule_name": "replicated_rule", "ruleset": 0, "type": 1, "min_size": 1, "max_size": 8, "steps": ...
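Besides dumping the rules, a compiled map can be sanity-checked with crushtool's test mode, which simulates placements for a given rule; a sketch, reusing the binary map file and rule id 0 from the steps above:
# crushtool --test -i /data/ceph/newcrushmap --rule 0 --num-rep 3 --show-mappings
# crushtool --test -i /data/ceph/newcrushmap --rule 0 --num-rep 3 --show-utilization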
# ceph mon dump
View a mon's admin socket
# ceph-conf --name mon.ceph-xx-mon00 --show-config-value admin_socket
/var/run/ceph/ceph-mon.ceph-xx-mon00.asok
CRUSH Map
Create a bucket
# ceph osd crush add-bucket host-xx host
Move a bucket
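A hedged sketch of placing a new bucket into the hierarchy and moving a host under it, where rack-xx and host-xx are illustrative names:
# ceph osd crush add-bucket rack-xx rack
# ceph osd crush move rack-xx root=default
# ceph osd crush move host-xx rack=rack-xx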
9.2 CRUSH bucket algorithms
Uniform, List, Tree, Straw, Straw2  # Straw2 is used by default
9.3 Adjusting the PG-to-OSD mapping
By default the CRUSH algorithm assigns OSDs to the PGs of a newly created pool on its own, but the placement bias can be tuned manually through crush weights: for example, a 1TB disk gets a weight of 1 and a 2TB disk a weight of 2. Using devices of the same size is recommended.
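The bucket algorithm and the per-item weights are both visible in the decompiled CRUSH map text. A sketch of a host bucket that uses the default straw2 algorithm, with the host name and OSD items as placeholders:
host ceph-node1 {
    id -2              # do not change unnecessarily
    alg straw2         # bucket algorithm, the default
    hash 0             # rjenkins1
    item osd.0 weight 1.000
    item osd.1 weight 2.000    # e.g. a 2TB disk gets weight 2
}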
[root@ceph1:~]# ceph osd crush rule dump
[ { "rule_id": 0, "rule_name": "replicated_ruleset", "ruleset": 0, "type": 1, "min_size": 1, "max_size": 10, "steps": [ { "op": "take", "item": -1, "item_name": "default"}, { "op": "chooseleaf_firstn", "num": ...
View the rule's ID
# ceph osd crush rule dump | grep testrule
# ceph osd pool set demo crush_ruleset 1
Here 1 refers to the rule_id value set in the rule.
6) Create the pool
# ceph osd pool create demo 64 64 replicated testrule
7) Add the following to ceph.conf so that the crush map is not reset when the cluster restarts (a sketch of the fragment follows below)
osd crush update on start = false...
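A sketch of the corresponding ceph.conf fragment; the option stops OSDs from updating their own crush location when the daemon starts, which would otherwise overwrite a hand-edited map:
[osd]
osd crush update on start = false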