ceph-disk list /dev/vdb filters the devices with /dev//dev/vdb, so it does not work. It should be called as ceph-disk list vdb instead, but there is no indication of the problem, which is confusing. The best fix would be for ceph-disk list to work the same with...
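As a stopgap, the difference between the two invocations can be shown directly; this is a minimal sketch assuming the device in question is /dev/vdb, as in the report above:

  ceph-disk list vdb        # matches the device by name and prints its partitions
  ceph-disk list /dev/vdb   # the filter becomes /dev//dev/vdb, so nothing is listed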
ceph-disk [-h] [-v] [--log-stdout] [--prepend-to-path PATH] [--statedir PATH] [--sysconfdir PATH] [--setuser USER] [--setgroup GROUP] {prepare,activate,activate-lockbox,activate-block,activate-journal,activate-all,list,suppress-activate,unsuppress-activate,deactivate,destroy,zap,trigger...
Note: In the ceph-disk list output, the highlighted sde1 is the journal partition for sdb2. Check the output of ceph-disk list and map the journal disk partition in the command used for Ceph preparation. As soon as you run the command below, OSD.7 came up/in and data rebalance (backfill/re...
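A minimal sketch of such a preparation command, assuming /dev/sdb is the data disk being rebuilt and /dev/sde1 is the existing journal partition identified above (cluster options and UUIDs from the original procedure are omitted):

  sudo ceph-disk prepare /dev/sdb /dev/sde1   # data device first, journal partition second
  sudo ceph-disk activate /dev/sdb1           # activate the newly created data partition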
source /opt/ceph-container/bin/disk_list.sh

function osd_activate {
  if [[ -z "${OSD_DEVICE}" ]] || [[ ! -b "${OSD_DEVICE}" ]]; then
    log "ERROR: you either provided a non-existing device or no device at all."
    log "You must provide a device to build your OSD ie: /dev/sdb"
    ...
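For context, a minimal sketch of how this guard is exercised, assuming osd_activate is invoked with OSD_DEVICE exported by the caller (the variable name comes from the snippet above; the device path is illustrative):

  export OSD_DEVICE=/dev/sdb   # block device handed to the container
  osd_activate                 # passes the guard because /dev/sdb exists and is a block device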
Ceph-deploy - Bug #9122: ceph-deploy: disk list throws an exception. https://tracker.ceph.com/issues/9122?journal_id=39554 (updated 2014-08-15 by Alfredo Deza <adeza@redhat.com>)
$ snap list microceph
Name       Version        Rev  Tracking       Publisher   Notes
microceph  0+git.6208776  220  latest/stable  canonical✓  -

By following the tutorial: https://canonical-microceph.readthedocs-hosted.com/en/latest/tutorial/add_osds/ microceph t...
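For reference, the add-OSDs step of that tutorial boils down to attaching a disk to MicroCeph; a minimal sketch, assuming /dev/sdb is a spare disk on the node (the device name and the --wipe flag are illustrative assumptions):

  sudo microceph disk add /dev/sdb --wipe   # enrol the disk as an OSD, clearing any old data
  sudo microceph disk list                  # confirm the disk now appears as an OSD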
ceph-disk list. suppress-activate: suppresses activation of a device (by prefix). Devices that should not be activated are marked with a file such as /var/lib/ceph/tmp/suppress-activate.sdb, where the last component of the file name is the suppressed device name (/dev/X with the /dev/ prefix stripped). The function is_suppressed() checks and matches by prefix (without /dev/), which means that suppressing sdb also suppresses sdb1, sdb2, and other dev...
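A minimal sketch of the suppress/unsuppress cycle described above, assuming the target disk is /dev/sdb (both subcommands appear in the ceph-disk usage listed earlier):

  sudo ceph-disk suppress-activate /dev/sdb     # creates /var/lib/ceph/tmp/suppress-activate.sdb
  ls /var/lib/ceph/tmp/suppress-activate.*      # marker files for suppressed devices
  sudo ceph-disk unsuppress-activate /dev/sdb   # removes the marker so the device can activate again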
[ceph_deploy][ERROR ]     disk_list(args, cfg)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_deploy/osd.py", line 376, in disk_list
[ceph_deploy][ERROR ]     distro.conn.logger(line)
[ceph_deploy][ERROR ] TypeError: 'Logger' object is not callable
...
problem hasn't really been discussed on the list before, so I'm hoping that I can get this figured out since it's stopping me from learning more about ceph. I've tried this with the journal on the same disk and on a separate SSD, both with the same error stopping me. ...
1. Use the ceph-disk list command in order to understand the mapping of OSD to journal and identify the disk to be isolated and replaced.

[heat-admin@pod1-osd-compute-3 ~]$ sudo ceph-disk list
/dev/sda :
 /dev/sda1 other, iso9660
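Further down in the same listing, lines of the following shape are what reveal the OSD-to-journal mapping; this is an illustrative continuation only (device names and the OSD id are assumptions consistent with the earlier snippet, where sde1 is the journal for sdb2):

/dev/sdb :
 /dev/sdb2 ceph data, active, cluster ceph, osd.7, journal /dev/sde1
/dev/sde :
 /dev/sde1 ceph journal, for /dev/sdb2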