Next, process the additionally attached disks and create the OSDs; once this succeeds, the Ceph cluster becomes available.

# List the disks on each node
ceph-deploy disk list local-debian{0..2}
# Wipe the disk data
ceph-deploy disk zap local-debian0 /dev/sdb
ceph-deploy disk zap local-debian1 /dev/sdb
ceph-deploy disk zap local-debian2 /dev/sdb
# ...
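The three zap commands above differ only in the hostname, so they can be generated with a loop. A minimal dry-run sketch (the `echo` wrapper is mine; hostnames and `/dev/sdb` come from the commands above):

```shell
# Dry-run sketch: print one "disk zap" command per node instead of running it.
# Drop the echo to actually execute (destructive: it erases /dev/sdb!).
for host in local-debian0 local-debian1 local-debian2; do
    echo "ceph-deploy disk zap ${host} /dev/sdb"
done
```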
[SWAP]
└─centos-home 253:2  0 21.4G 0 lvm  /home
vdb           252:16 0   20G 0 disk
├─vdb1        252:17 0   15G 0 part /var/lib/ceph/osd/ceph-2   # mounted successfully
└─vdb2        252:18 0    5G 0 part

Check the OSDs:
[cephuser@ceph-admin cluster]$ ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
......
[cephuser@ceph-admin cluster]$ ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3

Use the zap option to delete the partitions on all OSD nodes:
[cephuser@ceph-admin cluster]$ ceph-deploy disk zap ceph-node1:/dev/vdb ceph-node2:/dev/vdb ceph-node3:/dev/vdb

Prepare the OSDs (with the prepare command):
[cephuser@ceph-admin cluster...
$ ceph-deploy disk list ceph-node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information ...
11. List the available disks on each node: ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
Delete all partitions on the disks: ceph-deploy disk zap ceph-node1:/dev/sdb ceph-node2:/dev/sdb ceph-node3:/dev/sdb
Prepare the OSDs: ceph-deploy osd prepare ceph-node1:/dev/sdb ceph-node2:/dev/sdb ceph-node3:/dev/sdb ...
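Step 11 boils down to a per-node zap → prepare → activate sequence. The activate step (run against the data partition that prepare creates, here assumed to be /dev/sdb1) is not shown in the truncated text above; it is my assumption based on the standard pre-2.0 ceph-deploy workflow. A dry-run sketch:

```shell
# Dry-run sketch of the per-node OSD workflow; remove the echo to execute.
for n in ceph-node1 ceph-node2 ceph-node3; do
    echo "ceph-deploy disk zap ${n}:/dev/sdb"       # wipe the disk
    echo "ceph-deploy osd prepare ${n}:/dev/sdb"    # partition and format it
    echo "ceph-deploy osd activate ${n}:/dev/sdb1"  # mount it, start the OSD daemon
done
```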
ceph-deploy disk list HOST

Subcommand zap zaps/erases/destroys a device's partition table and contents. It actually uses ceph-volume lvm zap remotely, alternatively allowing someone to remove the Ceph metadata from the logical volume.

osd

Manage OSDs by preparing a data disk on the remote host. osd makes use of...
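The excerpt above is from the ceph-deploy 2.x documentation, where zap wraps ceph-volume lvm zap. Note that 2.x also changed the CLI: the host:disk colon syntax used in the older snippets here was replaced by space-separated arguments, and the separate prepare/activate steps were merged into osd create. A hedged sketch of the 2.x equivalents (hostname and device are placeholders):

```shell
# ceph-deploy 2.x style: space-separated host and device, not host:/dev/disk
ceph-deploy disk zap ceph-node1 /dev/sdb
# prepare + activate collapsed into a single "osd create"
ceph-deploy osd create --data /dev/sdb ceph-node1
```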
# List all the disks on a node
ceph-deploy disk list {node-name [node-name]...}
# Clear the partitions on a given disk (used when reinstalling the ceph cluster)
# ceph-deploy disk zap {osd-server-name}:{disk-name}
For example: ceph-deploy disk zap ceph-1:/dev/vdb

1. monitor operations
# Remove the Ceph MON process from a node
ceph-deploy mon destroy {host-name [...