1. Before creating a pool
```
[root@cluster9 ceph-cluster]# ceph -s
  cluster:
    id:     d81b3ce4-bcbc-4b43-870e-430950652315
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum cluster9
    mgr: cluster9(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.06 GiB used, 1...
```
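The output above shows 0 pools and therefore 0 PGs; placement groups only come into existence once a pool is created. A minimal sketch of creating a first pool so that PGs appear and peer (the pool name `rbd` and the PG count of 64 are assumptions for a small three-OSD test cluster; the `application enable` step applies to Luminous and later):
```
# Create a replicated pool with 64 placement groups (pg_num = pgp_num = 64)
ceph osd pool create rbd 64 64

# Luminous and later expect each pool to be tagged with the application using it
ceph osd pool application enable rbd rbd

# Watch the new PGs move from creating/peering to active+clean
ceph -s
ceph pg stat
```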
```
systemctl stop ceph-osd@1
```
* b. Check PG status
```
ceph pg stat
20 pgs: 20 active+undersized+degraded; 14512 kB data, 302 GB used, 6388 GB / 6691 GB avail; 12/36 objects degraded (33.333%)
```
* c. Check cluster health status
```
ceph health detail
HEALTH_WARN 1 osds down; Degraded dat...
```
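Stopping one OSD is what drives the PGs into `active+undersized+degraded`. A hedged sketch of bringing the daemon back and confirming recovery, assuming the same OSD id `1` and systemd-managed daemons:
```
# Start the OSD that was stopped above
systemctl start ceph-osd@1

# Confirm it is reported up and in again
ceph osd tree

# Follow the cluster log until all PGs are back to active+clean
ceph -w
```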
```
ceph health detail
HEALTH_WARN 24 pgs stale; 3/300 in osds are down
...
pg 2.5 is stuck stale+active+remapped, last acting [2,0]
...
osd.10 is down since epoch 23, last address 192.168.106.220:6800/11080
osd.11 is down since epoch 13, last address 192.168.106.220:6803/11539
osd.12 is down since epoch 24, last address 192.168.106.220:6806/11861
...
```
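The health detail above points at three OSD daemons on 192.168.106.220 being down, which is what leaves the PGs stale. A hedged sketch of restarting them and re-checking, assuming systemd-managed OSDs with the ids from that output:
```
# Restart the OSD daemons reported as down
systemctl restart ceph-osd@10 ceph-osd@11 ceph-osd@12

# Verify they rejoined and the stale PGs clear
ceph osd tree
ceph health detail
```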
6. PG error: 1 pg undersized
Error description:
```
root@node1:/var/lib/ceph/osd# ceph health detail
HEALTH_WARN 4 mgr modules have failed dependencies; Degraded data redundancy: 1 pg undersized
[WRN] MGR_MODULE_DEPENDENCY: 4 mgr modules have failed dependencies
...
```
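An undersized PG has fewer replicas than the pool's `size`, typically because an OSD is down/out or because the pool's replica count exceeds the number of available OSDs. A hedged sketch of narrowing it down; the pool name `mypool` is a placeholder, and lowering `size` is only reasonable on a test cluster:
```
# List the PGs stuck undersized and the OSDs they currently map to
ceph pg dump_stuck undersized

# If there are simply fewer OSDs than replicas, shrinking the replica count
# (test clusters only) lets the PG become fully sized again
ceph osd pool set mypool size 2

# For the separate MGR warning: list mgr modules to see which dependencies failed
ceph mgr module ls
```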
```
  services:
    mon: 1 daemons, quorum ceph-1
    mgr: ceph-1(active)
    mds: cephfs-1/1/1 up {0=ceph-1=up:active}, 2 up:standby
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   2 pools, 188 pgs
    objects: 22 objects, 2.2 KiB
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     188 active+...
```
```
  pgmap v26: 64 pgs, 1 pools, 0 bytes data, 0 objects
        15459 MB used, 45944 MB / 61404 MB avail
              64 active+clean
```
If the output above appears, the Ceph distributed storage cluster has been set up successfully!

=== If you hit error 1:
```
2018-02-07 21:53:54.886552 7fdc5cf5a700 -1 monclient(hunting): ERROR: missing keyring, cannot...
```
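The `missing keyring` error usually means the node running the `ceph` command has no admin keyring under `/etc/ceph`. A hedged sketch of the usual fix on a ceph-deploy based install; the hostname `node1` and the working directory are assumptions:
```
# From the ceph-deploy working directory, push ceph.conf and the admin keyring
ceph-deploy admin node1

# Or copy the keyring into place by hand and make it readable
cp ceph.client.admin.keyring /etc/ceph/
chmod +r /etc/ceph/ceph.client.admin.keyring

# Retry
ceph -s
```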
```
$ bin/ceph pg stat
20 pgs: 20 active+undersized+degraded; 14512 kB data, 302 GB used, 6388 GB / 6691 GB avail; 12/36 objects degraded (33.333%)
```
c. Check cluster health status
```
$ bin/ceph health detail
HEALTH_WARN 1 osds down; Degraded data redundancy: 12/36 objects degraded (33.333%), 20 pg...
```
```
1416 pgs: 6 active+clean+remapped, 1288 active+clean, 3 stale+active+clean, 119 active+undersized+degraded; 74940 MB data, 250 GB used, 185 TB / 185 TB avail; 1292/48152 objects degraded (2.683%)

$ ceph pg dump | grep remapped
...
```
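To dig into the remapped and stale PGs beyond a grep of `ceph pg dump`, individual PGs can be listed by stuck state and queried directly; the PG id `2.5` below is only an example taken from the earlier health output, not necessarily one of the PGs in this listing:
```
# Show only the PGs stuck in a given state
ceph pg dump_stuck stale
ceph pg dump_stuck undersized

# Query a single PG to see its up/acting sets and why it is not active+clean
ceph pg 2.5 query
```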
Next we need to start the OSDs so that the PGs can be mapped onto them and the cluster can return to a healthy state.

4. Edit the CRUSH map and enable the OSD nodes

Once your initial monitor is running correctly, you can add OSDs. For the cluster to reach the active+clean state, enough OSDs must be deployed to handle the PGs. After the monitors are up, the cluster has a default CRUSH map, but at this point the map is still empty and contains no OSD mappings...
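A hedged sketch of the usual export/decompile/edit/recompile cycle for the CRUSH map; the file names are arbitrary, and on most deployments the map is populated automatically when OSDs are created, so hand-editing is only needed when the default map really is empty:
```
# Export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# ... edit crushmap.txt to add hosts and osd devices to the hierarchy ...

# Recompile and inject the edited map
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

# The OSDs should now show up in the CRUSH tree
ceph osd tree
```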