Option("mon_max_pg_per_osd",Option::TYPE_UINT,Option::LEVEL_ADVANCED).set_min(1).set_default(250).add_service("mgr").set_description("Max number of PGs per OSD the cluster will allow").set_long_description("If the number of PGs per OSD exceeds this, a ""health warning will be ...
HEALTH_WARN 1 hosts fail cephadm check; 1 pools have many more objects per pg than average; Degraded data redundancy: 1665/56910 objects degraded (2.926%), 1 pg degraded, 1 pg undersized; too many PGs per OSD (259 > max 250)
[WRN] CEPHADM_HOST_CHECK_FAILED: 1 hosts fail cephadm c...
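To see which OSDs are actually over the limit, the per-OSD placement group count appears in the PGS column of ceph osd df; a quick check on any cluster reporting the warning above:

    # the PGS column shows the PG count per OSD
    ceph osd df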
too many PGs per OSD (652 > max 300)  <== error message
monmap e1: 1 mons at {node241=192.168.2.41:6789/0}
        election epoch 1, quorum 0 node241
osdmap e408: 5 osds: 5 up, 5 in
pgmap v23049: 1088 pgs, 16 pools, 256 MB data, 2889 objects
        6100 MB used, 473 GB / 479 GB avail...
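The 652 in this output follows from the cluster totals shown: the check counts every PG replica, so with 1088 PGs, an assumed replica size of 3, and 5 OSDs, the ratio is 1088 × 3 / 5 ≈ 652 PGs per OSD. A one-line check of that arithmetic:

    echo $((1088 * 3 / 5))    # prints 652, matching the warning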
1. Symptom: checking the Ceph cluster status shows: too many PGs per OSD (698 > max 300)
# ceph -s
cluster e2ca994a-00c4-477f-9390-ea3f931c5062
 health HEALTH_WARN
        too many PGs per OSD (698 > max 300)
 monmap e1: 3 mons at {hz-01-ops-tc-ceph-02=172.16.2.231:6789/0,hz-01-ops-tc-ceph-03=172.16.2.172:...
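The "max 300" here is the default of the older mon_pg_warn_max_per_osd option used by pre-Luminous releases. A hedged sketch of silencing the warning on such a cluster, either by injecting the value into the running monitors or by persisting it in ceph.conf (500 is an arbitrary example value):

    # inject into the running monitors
    ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd=500'

    # or persist in /etc/ceph/ceph.conf under [global], then restart the mons:
    # [global]
    # mon_pg_warn_max_per_osd = 500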
Hello all, after the latest update, done this morning, I get a HEALTH_WARN in Ceph -> "too many PGs per OSD (256 > max 200)". Before the update...
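Raising the threshold only hides the warning; the cleaner fix is to lower the PG count itself. A sketch assuming a Nautilus-or-later cluster, where pg_num can be decreased and the autoscaler can manage it per pool ("mypool" is a placeholder pool name):

    # review per-pool PG counts and the autoscaler's recommended targets
    ceph osd pool autoscale-status

    # let the autoscaler shrink an oversized pool
    ceph osd pool set mypool pg_autoscale_mode on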