1. Symptom:
  cluster:
    id:     7338b120-e4a3-4acd-9d05-435d9c4409d1
    health: HEALTH_WARN
            4 slow ops, oldest one blocked for 59880 sec, mon.ceph-node01 has slow ops

  services:
    mon: 3 daemons, quorum ceph-node…
ceph slow ops references:
https://zhuanlan.zhihu.com/p/440464163
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/troubleshooting_guide/troubleshooting-osds#slow-requests-and-requests-are-blocked
An OSD with slow requests is every OSD that is not able to service the I/O operation...
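To narrow down which daemon is actually holding the slow requests, the standard Ceph CLI and admin-socket commands below can be used; this is a minimal sketch, and the OSD id (osd.0) is only a placeholder:

# Show which daemons currently report slow ops
ceph health detail

# Per-OSD latency overview; unusually high commit/apply latency often points at a failing disk
ceph osd perf

# Dump in-flight and recently completed ops on a suspect OSD (osd.0 is a placeholder id)
ceph daemon osd.0 dump_ops_in_flight
ceph daemon osd.0 dump_historic_ops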
    1 slow ops, oldest one blocked for 1416 sec, mon.node142 has slow ops

  services:
    mon: 2 daemons, quorum node140,node142 (age 8d)
    mgr: admin(active, since 8d), standbys: node140
    mds: cephfs:1 {0=node140=up:active} 1 up:standby
    osd: 22 osds: 22 up (since 23m), 18 in (sinc...
Ceph SLOW OPS occur during disk expansion or replacement. The output of ceph status shows HEALTH_WARN with SLOW OPS. Example:

# ceph -s
  cluster:
    id:     b0fd22b0-xxxx-yyyy-zzzz-6e79c93b366c
    health: HEALTH_WARN
            2 slow ops, oldest one blocked for 1395 sec, mon.node02 has slow ops
En...
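When the warning names a monitor rather than an OSD, one way to see what the monitor is stuck on is its admin socket; a rough sketch, assuming the mon name node02 from the example above and that the commands are run on the host carrying that monitor:

# List the ops the monitor currently reports as slow / in flight
ceph daemon mon.node02 ops

# Clock skew between monitors is a common cause of mon slow ops
ceph time-sync-status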
[root@edon-00 ~]# ceph health detail
HEALTH_WARN Too many repaired reads on 1 OSDs; Degraded data redundancy: 1/104988123404 objects degraded (0.000%), 1 pg degraded; 1 slow ops, oldest one blocked for 25234 sec, osd.33 has slow ops
[WRN] OSD_TOO_MANY_REPAIRS: Too many repaired rea...
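A short sketch of how the flagged OSD can be examined and, once the underlying disk problem has been dealt with, how the repaired-reads warning can be reset; the clear_shards_repaired call assumes a reasonably recent release (Octopus or later):

# Inspect recent ops on the OSD named in the warning (osd.33 from the output above)
ceph daemon osd.33 dump_historic_ops

# After replacing/verifying the failing disk, reset the repaired-reads counter
ceph tell osd.33 clear_shards_repaired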
    1 slow ops, oldest one blocked for 44 sec, mon.ceph26 has slow ops

  services:
    mon: 3 daemons, quorum ceph25,ceph26,ceph27 (age 11m)
    mgr: ceph25.bassgu(active, since 23m), standbys: ceph26.bgrvgp
    osd: 3 osds: 3 up (since 47s), 3 in (since 47s)
    ...
ceph: v15.2.11 / rook: 1.6.3
Disk provisioning: for Rook versions newer than 1.3 you can no longer create a cluster from a directory; you must use a dedicated raw disk. In other words, add a new disk, attach it to the host, and use it as-is without formatting it.
sda
├─sda1 xfs 2a65c467-96ee-406c-b275-2b299f95e3c7 /boot
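A minimal sketch for verifying that the new device really is raw before handing it to Rook; /dev/sdb is a placeholder device name:

# FSTYPE must be empty for Rook/ceph-volume to consider the device usable
lsblk -f /dev/sdb

# If the disk was used before, wipe old filesystem signatures and partition tables first
wipefs --all /dev/sdb
sgdisk --zap-all /dev/sdb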
13 slow ops, oldest one blocked for 74234 sec, mon.hv4 has slow ops
On node hv4 we were seeing:
Dec 22 13:17:58 hv4 ceph-mon[2871]: 2019-12-22 13:17:58.475 7f552ad45700 -1 mon.hv4@0(leader) e22 get_health_metrics reporting 13 slow ops, oldest is osd_failure(fail...
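Slow ops pinned on a monitor that reference old osd_failure messages usually clear once that monitor is restarted; a hedged sketch, assuming a package-based systemd deployment (not cephadm containers) and the mon name hv4 from the log above:

# Confirm which ops the monitor is holding (run on hv4)
ceph daemon mon.hv4 ops

# Restart only the affected monitor; quorum is preserved by the remaining mons
systemctl restart ceph-mon@hv4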
POOL_HAS_TARGET_SIZE_BYTES_AND_RATIO
One or more pools have both target_size_bytes and target_size_ratio set to estimate the expected pool size. Use the command ceph osd pool set _POOL_NAME_ target_size_bytes 0 to set the pool's target_size_bytes to zero.
TOO_FEW_OSDS
The number of OSDs in the storage cluster is below the configurable threshold osd_pool_default_siz...
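A short sketch showing how to see which pools carry both settings and how to clear one of them; 'mypool' is a placeholder pool name:

# Show per-pool target size/ratio as seen by the pg autoscaler
ceph osd pool autoscale-status

# Clear target_size_bytes on the offending pool so only target_size_ratio remains
ceph osd pool set mypool target_size_bytes 0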
Alternatively, if you are not a GitHub user, you should run the following command on a system that has git installed: git clone https://github.com/ceph/ceph.git When the ceph/ceph repository has been cloned to your system, run the following commands to move into the cloned ceph/ceph repository...
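A minimal sketch of the clone-and-enter step described above; the submodule update is an assumption that only matters if you intend to build Ceph from source:

# Clone the Ceph source tree and move into it
git clone https://github.com/ceph/ceph.git
cd ceph

# Fetch the bundled submodules (needed for source builds)
git submodule update --init --recursive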