Issue

Ceph health reports a "too many PGs per OSD (250 > max 200)" warning.

Environment

Red Hat Ceph Storage 3.x
1. Symptom

Checking the Ceph cluster status shows "too many PGs per OSD (698 > max 300)":

# ceph -s
    cluster e2ca994a-00c4-477f-9390-ea3f931c5062
     health HEALTH_WARN
            too many PGs per OSD (698 > max 300)
     monmap e1: 3 mons at {hz-01-ops-tc-ceph-02=172.16.2.231:6789/0,hz-01-ops-tc-ceph-03=172.16.2.172:...
            too many PGs per OSD (652 > max 300)    <== the reported error
     monmap e1: 1 mons at {node241=192.168.2.41:6789/0}
            election epoch 1, quorum 0 node241
     osdmap e408: 5 osds: 5 up, 5 in
      pgmap v23049: 1088 pgs, 16 pools, 256 MB data, 2889 objects
            6100 MB used, 473 GB / 479 GB avail...
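The number in the warning can be reproduced from the status output above: each PG is counted once per replica, so the per-OSD load is roughly (total PGs × replica size) / number of OSDs. A minimal sketch with the figures from this cluster (1088 PGs, 5 OSDs, from the pgmap/osdmap lines above; a replica size of 3 is an assumption for illustration):

```shell
# Estimate the PG-per-OSD ratio that HEALTH_WARN compares against
# mon_max_pg_per_osd. replica_size=3 is an assumed pool size.
total_pgs=1088
replica_size=3
num_osds=5
echo $(( total_pgs * replica_size / num_osds ))   # prints 652, matching "652 > max 300"
```

This is why the warning can appear even when each pool's pg_num looks modest: the replica count multiplies the per-OSD load.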
Naturally the mon_max_pg_per_osd value looked like the thing to change, so I raised it to 1000:

[mon]
mon_max_pg_per_osd = 1000

Strangely, it did not take effect. Checking via --show-config:

# ceph --show-config | grep mon_max_pg
mon_max_pg_per_osd = 250

Still 250. Digging into the source, in options.cc we see ...
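For reference, a sketch of the ceph.conf change, under the assumption that the option must be visible to the daemon that enforces the limit and that the monitors are restarted after editing (whether [mon] or [global] is the right section depends on the release, which is what the source-code dig above is chasing):

```
# ceph.conf fragment (assumption: monitors restarted after editing)
[global]
mon_max_pg_per_osd = 1000
```

One caveat worth knowing when verifying: `ceph --show-config` prints the configuration as parsed by the local ceph client process, so a value set only in a daemon section can still display the compiled-in default there. Querying the daemon itself, e.g. via its admin socket with `ceph daemon mon.<id> config get mon_max_pg_per_osd`, reflects the monitor's actual running value.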