Ceph cluster state: pgs not scrubbed in time. Check the cluster status:

# ceph -s
  cluster:
    id:     83738b81-56e4-4d34-bdc2-3a60d789d224
    health: HEALTH_WARN
            75 pgs not scrubbed in time

  services:
    mon: 3 daemons, quorum master,node01,node02 (age 4m)
    mgr: master (active, since 4m), standbys: node01, node02
    mds: 3 up:standby
    osd: 3 osds: 3 up (sin...
[root@node437 ~]# ceph health detail
HEALTH_WARN 3 pgs not deep-scrubbed in time; 3 pgs not scrubbed in time; 3 pool(s) have no replicas configured
[WRN] PG_NOT_DEEP_SCRUBBED: 3 pgs not deep-scrubbed in time
    pg 3.19 not deep-scrubbed since 2021-12-27T14:54:35.417018+0800
    pg 3.10 not deep-...
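The PG IDs the warning refers to can be pulled straight out of ceph health detail; a minimal sketch, assuming the "pg <id> not (deep-)scrubbed since ..." line format shown above:

    # list the affected PG IDs (second field of each warning line)
    ceph health detail | awk '/not (deep-)?scrubbed since/ {print $2}' | sort -u
    # check the last scrub timestamps of a single PG
    ceph pg 3.19 query | grep -E 'last_(deep_)?scrub_stamp'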
[1] PGs not scrubbed in time. [2] PG stuck in recovering. Resolution: To resolve the issue, follow the steps in the Workaround. If time allows, follow the Diagnostic Steps to gather artifacts regarding this issue, then open a Support case with Red Hat referencing this KCS Article, #7063971. For non...
Handling the Ceph "pgs not deep-scrubbed in time" warning. Procedure: check the Ceph status with ceph -s, check the detailed health status with ceph health detail, then repair with ceph pg deep-scrub 2.7f. Batch repair command: for i in $(seq 0 7); do ceph pg deep-scrub 3.$i; done. This only works when the PG IDs are more or less consecutive; the sketch below covers scattered IDs. As shown in the figure...
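When the overdue PG IDs are scattered, one option is to loop over whatever ceph health detail reports instead of a seq range; a minimal sketch, assuming the warning lines have the "pg <id> not deep-scrubbed since ..." form:

    # deep-scrub every PG reported as overdue, consecutive or not
    for pg in $(ceph health detail | awk '/not deep-scrubbed since/ {print $2}'); do
        ceph pg deep-scrub "$pg"
    done

The requests are only queued; the OSDs still throttle actual scrubbing via osd_max_scrubs, so the warning clears gradually rather than immediately.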
HEALTH_WARN pgs not scrubbed / deep-scrubbed in time while some PGs remain in scrubbing / deep-scrubbing "indefinitely". It is possible for the PG currently being scrubbed / deep-scrubbed to hang (become a zombie or be in a defunct state). If this issue persists, it will prevent other ...
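To see whether a scrub really is stuck, it helps to list the PGs currently in a scrubbing state together with their acting primary OSD; a minimal sketch, assuming the pgs_brief column order (PG_STAT, STATE, ..., ACTING_PRIMARY), which can differ between releases:

    # PGs whose state includes "scrubbing", with their acting primary OSD
    ceph pg dump pgs_brief 2>/dev/null | awk '$2 ~ /scrubbing/ {print $1, $2, $NF}'

A commonly reported workaround for a scrub that never finishes is to restart the acting primary OSD of that PG so the scrub is aborted and rescheduled.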
I thought the option mon_warn_pg_not_deep_scrubbed_ratio = 0 would remove this message, but it does not. How can I disable this message? Thanks
History
#1 Updated by Sten Aus over 5 years ago: Exactly the same experience here.
#2
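Rather than silencing the warning outright, the usual approach is to bring the monitor's expectation in line with how often the cluster can actually scrub; a minimal sketch, assuming a release with the centralized config database (ceph config set) and the default 0.75 warn ratio:

    # allow deep scrubs to be up to two weeks apart (default is one week)
    ceph config set osd osd_deep_scrub_interval 1209600
    # add more grace before the warning fires (grace = interval * ratio, default ratio 0.75)
    ceph config set global mon_warn_pg_not_deep_scrubbed_ratio 1.0
    # or mute just this health code temporarily instead of disabling it
    ceph health mute PG_NOT_DEEP_SCRUBBED 4h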
Ceph is a distributed object, block, and file storage platform - doc/rados: add "pgs not deep scrubbed in time" info · ceph/ceph@d620a51
If I understand the code in PGMap.cc correctly, it should warn when a PG was last scrubbed earlier than:

    if pool.last_deep_scrub_stamp < now - (deep_scrub_interval * mon_warn_pg_not_deep_scrubbed_ratio + deep_scrub_interval)

So that would be: 2020-03-24 < 2020-04-06 - (31 * ...
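Spelled out with the defaults, the cutoff works out as below; a minimal sketch, assuming osd_deep_scrub_interval = 604800 s (7 days), mon_warn_pg_not_deep_scrubbed_ratio = 0.75, and GNU date for the epoch conversion:

    interval=604800    # osd_deep_scrub_interval default, 7 days
    ratio_pct=75       # mon_warn_pg_not_deep_scrubbed_ratio default 0.75, scaled to avoid floats
    cutoff=$(( $(date +%s) - interval - interval * ratio_pct / 100 ))
    date -d "@$cutoff"  # PGs whose last_deep_scrub_stamp is older than this are flagged

With the defaults that is 7 + 5.25 = 12.25 days before now.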
> root@heku1 ~# ceph daemon osd.0 config get mon_warn_not_deep_scrubbed
> {
>     "mon_warn_not_deep_scrubbed": "691200"    # 691200 s = 8 days
> }
>
> Can anyone help me?
>
> Best Regards
> Alex
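To check which thresholds a running cluster is actually using, the current values can be read back from the cluster or from a daemon; a minimal sketch, assuming a release where ceph config get is available (older releases expose mon_warn_not_deep_scrubbed as shown above instead):

    # scrub / deep-scrub intervals the OSDs work with
    ceph config get osd osd_scrub_max_interval
    ceph config get osd osd_deep_scrub_interval
    # grace ratios the warnings are based on
    ceph config get mon mon_warn_pg_not_scrubbed_ratio
    ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio
    # or ask one daemon directly over its admin socket
    ceph daemon osd.0 config show | grep -E 'scrub.*interval'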
:~$ sudo ceph health detail
HEALTH_WARN 1 MDSs report slow metadata IOs; Reduced data availability: 27 pgs inactive, 23 pgs incomplete; 23 pgs not deep-scrubbed in time; 23 pgs not scrubbed in time; 5271 slow ops, oldest one blocked for 94976 sec, daemons [osd.16,osd.17,osd.18,osd...
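When inactive or incomplete PGs and slow ops show up alongside the scrub warnings, the scrub messages are usually a symptom rather than the root cause; a sketch of where to start digging, assuming standard Ceph CLI and admin-socket commands (replace the placeholder PG ID):

    # which PGs are stuck, and in what state
    ceph pg dump_stuck inactive
    ceph pg dump_stuck stale
    ceph pg <pgid> query            # <pgid>: one of the incomplete PGs, e.g. from dump_stuck
    # what the slow ops on an affected OSD are waiting for
    ceph daemon osd.16 dump_ops_in_flight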