Replace disks in Ceph

  • ceph, proxmox, osd

If you would like to use an existing host that is already part of the cluster, and there is sufficient free space on that host so that all of its data can be migrated off, then you can instead do:

ceph osd crush unlink $OLDHOST default

where “default” is the immediate ancestor in the CRUSH map. (For smaller clusters with unmodified configurations this will normally be “default”, but it might also be a rack name.) You should now see the host at the top of the OSD tree output with no parent:

$ ceph osd tree
-5             0 host oldhost
10   ssd 1.00000     osd.10        up  1.00000 1.00000
11   ssd 1.00000     osd.11        up  1.00000 1.00000
12   ssd 1.00000     osd.12        up  1.00000 1.00000
-1       3.00000 root default
-2       3.00000     host foo
 0   ssd 1.00000         osd.0     up  1.00000 1.00000
 1   ssd 1.00000         osd.1     up  1.00000 1.00000
 2   ssd 1.00000         osd.2     up  1.00000 1.00000

After the move, execute the following commands to start the rebalance. If you only have 2 nodes, you need to tell Ceph to keep the data on a single host while the old one is unlinked. (Note that size 1 means there is no redundancy until you raise it again.)

ceph osd pool set {data} size 1
ceph osd pool set {data} min_size 1

where {data} is the name of the pool (you can list pool names with "ceph osd pool ls").

Now check whether the OSDs on this host are still in use:

while ! ceph osd safe-to-destroy $(ceph osd ls-tree $OLDHOST); do sleep 60; done

Once this reports that the OSDs are safe to destroy, you can mark all disks on that host as out and destroy them (or replace only the disks that actually need replacing).
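The out/stop/destroy step can be sketched as follows. This is a dry run, not the article's exact procedure: the OSD ids (10, 11, 12, taken from the tree output above) are assumptions, and each command is only echoed. On a live cluster you would drop the leading echo after double-checking each id.

```shell
# Dry-run sketch: print the out/stop/destroy commands for every OSD on the
# old host instead of executing them. Remove `echo` once verified.
OSD_IDS="10 11 12"   # on a live cluster: OSD_IDS=$(ceph osd ls-tree $OLDHOST)
for id in $OSD_IDS; do
  echo ceph osd out "$id"                             # stop placing new data on the OSD
  echo systemctl stop "ceph-osd@$id"                  # stop the OSD daemon
  echo ceph osd destroy "$id" --yes-i-really-mean-it  # mark the OSD as destroyed
done
```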

Then execute the following commands to bring the host back up and running as before:

ceph osd crush move $OLDHOST root=default
ceph osd pool set {data} size 3
ceph osd pool set {data} min_size 2

Now the cluster should start rebuilding data back onto the host.

The rebuild can take some time; while Ceph is recovering, it is possible that some VPS servers will not respond until the rebuild is complete.
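To know when the rebuild has finished, you can poll the cluster health. A minimal sketch: ceph_health is a hypothetical wrapper that falls back to HEALTH_OK when no ceph CLI or cluster is reachable, so the loop cannot hang outside a real cluster; on an actual node you would call ceph health (or watch ceph -s) directly.

```shell
# Poll cluster health once a minute until recovery is done.
# ceph_health is a stand-in helper: it reports HEALTH_OK if the ceph CLI
# is not available, so this sketch terminates outside a real cluster.
ceph_health() { ceph health 2>/dev/null || echo HEALTH_OK; }
until ceph_health | grep -q HEALTH_OK; do
  sleep 60
done
echo "rebuild complete"
```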

Repeat the steps above for every server whose disks you need to replace.

