How to migrate from RAID 6 to RAID 5 using mdadm and LVM
Hello!
I read through different forums and blogs but haven't found a detailed enough hint on how to do it. I currently have a RAID 6 with 4 hard drives:
Code:
root@Gnoccho:~# cat /proc/mdstat
Can somebody tell me what commands I have to use? I don't know whether it is important: I have LVM configured to use the whole array with one volume group and two volumes (root and swap). Thank you very much, haerta |
I've never tried anything as drastic as what you are doing, but if you want to give it a try, it should work.
You want to run mdadm --grow with the new RAID parameters, just as you would for a create. md is smart enough to keep track of both the old and the new parameters. It will reshape one stripe at a time from old to new and record how far it has gotten, so reads and writes to the old part use the old shape and those to the new part use the new shape. It will take a long time, and an interruption would be bad, since you could lose the stripe it is currently working on. So the commands would be something like:
Code:
cat /proc/mdstat
Code:
mdadm --grow /dev/md0 --level=raid5 --raid-devices=3
Code:
mdadm --manage --remove /dev/md0 /dev/sdd1
Usual caveats: back up first, IANACS, following my advice may open the velociraptor cage, etc. Good luck! Feel free to yell and scream if it doesn't work and I will refund your fee for my advice. |
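While the reshape described above runs, its progress shows up as a `reshape` line in /proc/mdstat. A small sketch for pulling the percentage out of that line; the sample text below mimics typical mdstat output and is not from this thread:

```shell
#!/bin/sh
# Hypothetical helper: extract the reshape progress from mdstat-style text.
# The sample mimics the usual /proc/mdstat layout; on a real system you
# would pipe `cat /proc/mdstat` in instead of this canned string.
sample='md0 : active raid6 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      234372096 blocks level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
      [=>...................]  reshape =  7.3% (8554496/117186048) finish=412.5min speed=4389K/sec'

# Print whichever field on the reshape line contains a percent sign.
echo "$sample" | awk '/reshape/ { for (i = 1; i <= NF; i++) if ($i ~ /%/) print $i }'
# -> 7.3%
```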
Thanks
Thanks for your answer!
I don't need to yell or scream, I guess you gave the right hints. I was unable to remove sdd1 because it was busy:
Code:
mdadm --manage --remove /dev/md0 /dev/sdd1
So I started with the grow instead:
Code:
mdadm --grow /dev/md0 --level=raid5 --raid-devices=3
That only worked once I added a backup file:
Code:
mdadm --grow /dev/md0 --level=raid5 --raid-devices=3 --backup-file=/media/120GB\ extern/mdadm-backupfile
The reshape is now running:
Code:
cat /proc/mdstat |
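Putting the order that worked above into one place, a sketch (device names and the backup-file path are the ones from this thread; the `run` wrapper only echoes each command so the sequence can be inspected without touching a real array):

```shell
#!/bin/sh
# Echo-only sketch of the sequence from this thread.
# Drop the echo in run() to execute for real -- on a live array, only
# after a verified backup.
run() { echo "$@"; }

# 1. Reshape RAID 6 -> RAID 5 first; mdadm refused this reshape here
#    without a backup file on a separate disk.
run mdadm --grow /dev/md0 --level=raid5 --raid-devices=3 \
    --backup-file='/media/120GB extern/mdadm-backupfile'

# 2. Only once the reshape has finished does the fourth drive stop
#    being busy and become removable.
run mdadm --manage --remove /dev/md0 /dev/sdd1
```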
All Done
I successfully migrated from RAID 6 to RAID 5 with one spare drive.
I then grew my RAID 5 to use all four drives and get more free space (actually, that was the reason why I wanted to get rid of my RAID 6):
Code:
mdadm --grow /dev/md0 --level=raid5 --raid-devices=4 --backup-file=/media/120GB\ extern/mdadm-backupfile
Thanks! |
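The space gained is easy to estimate: RAID 6 keeps n-2 drives' worth of data, RAID 5 keeps n-1, so on the same four drives RAID 5 yields one extra drive of capacity. A back-of-envelope check (the 120 GB per-drive size is a placeholder, not taken from this thread):

```shell
#!/bin/sh
# Usable-capacity estimate: RAID 6 = (n-2) drives of data, RAID 5 = (n-1).
n=4          # drives in the array
size=120     # per-drive size in GB (assumed for illustration only)
raid6=$(( (n - 2) * size ))
raid5=$(( (n - 1) * size ))
echo "RAID 6: ${raid6} GB usable, RAID 5: ${raid5} GB usable"
# -> RAID 6: 240 GB usable, RAID 5: 360 GB usable
```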
There still was work to do. After growing my RAID 5 to use all four drives, I realized that only the volume group on top of the RAID was resized to the whole /dev/md0, but not the logical volume in it:
Code:
lvdisplay /dev/MyVolumeGroup/MyRootVolume
I only have one root volume and one swap volume (8GB). Let's verify that the volume group on /dev/md0 has free space available:
Code:
vgdisplay /dev/MyVolumeGroup
Now I can grow the root volume into that free space:
Code:
lvresize -L +111,78GB /dev/MyVolumeGroup/MyRootVolume
Code:
lvdisplay /dev/MyVolumeGroup/MyRootVolume
According to the guide referenced below, I have to make sure that the file system of the logical volume has grown too. Let's see if it has:
Code:
df -kh
I can resize the root volume's file system (mountpoint /) with
Code:
btrfs filesystem resize max /
Code:
df -kh
We are done! A good guide to LVM resizing: http://www.tcpdump.com/kb/os/linux/l...de/expand.html |
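Condensing the post above, the growth has to ripple up through each layer of the stack: array, then physical volume, then logical volume, then file system. An echo-only sketch using the names from this thread; the pvresize step is my addition (the post reports the volume group already saw the new space, but on most systems you must run it yourself), and `-l +100%FREE` stands in for the explicit `-L +111,78GB` used above:

```shell
#!/bin/sh
# Echo-only sketch of the full resize chain after growing /dev/md0.
# Drop the echo in run() to execute for real.
run() { echo "$@"; }

run pvresize /dev/md0                                      # grow the PV to the new array size
run lvresize -l +100%FREE /dev/MyVolumeGroup/MyRootVolume  # hand all free extents to root
run btrfs filesystem resize max /                          # grow btrfs to fill the LV
```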
Now there is an interesting concoction: btrfs on LVM on RAID ...
People seem to go one way or the other, not both. Glad you got it all working. |