LinuxQuestions.org


haerta 04-12-2012 01:56 PM

How to migrate from raid 6 to raid 5 using mdadm and LVM
 
Hello!

I have read through different forums and blogs but haven't found a detailed enough hint on how to do it.

I currently have a raid 6 with 4 hard drives:

Code:

root@Gnoccho:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdc1[2] sda1[0] sdd1[3] sdb1[1]
      234435584 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

I would like to get rid of one hard drive and migrate to raid 5 with three hard drives (and one spare).

Can somebody tell me what commands I have to use? I don't know whether it is important, but I have LVM configured to use the whole array, with one volume group and two logical volumes (root and swap).
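
For reference, this is roughly how the whole stack can be inspected (just a sketch; /dev/md0 is the array from my mdstat output above):

Code:

# md array status and detail
cat /proc/mdstat
mdadm --detail /dev/md0
# LVM physical volumes, volume groups and logical volumes on top of it
pvs
vgs
lvs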

Thank you very much,

haerta

smallpond 04-13-2012 04:14 PM

I've never tried doing anything as drastic as what you are doing, but if you want to give it a try it should work.

You want to run mdadm --grow with the new raid parameters, just as if you were doing a create. md is smart enough to keep track of both the old and the new parameters. It will reshape one stripe at a time from old to new and record how far it has gotten, so that reads and writes to the part it has already converted use the new shape and those to the rest still use the old shape. It will take a long time, and it will not be good if it gets interrupted, since you could lose part of the stripe it is currently working on. So the commands would be something like:
Code:

cat /proc/mdstat
Make sure that it isn't in the middle of a rebuild. It should show a happy raid6 named /dev/md0 with 4 drives - let's call them sda1, sdb1, sdc1, sdd1 (YMMV)
Code:

mdadm --grow /dev/md0 --level=raid5 --raid-devices=3
mdadm may want to do this in two steps (like I said, I've never tried this), in which case you would first need to remove a drive:
Code:

mdadm --manage --remove /dev/md0 /dev/sdd1

Usual caveats: backup first, IANACS, following my advice may open the velociraptor cage, etc.
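
It also can't hurt to save the current array layout somewhere that is not on the array itself before you start. A rough sketch (the output paths here are just placeholders, and the config file may live at /etc/mdadm.conf on some distributions):

Code:

# record the current layout somewhere off the array, in case things go wrong
mdadm --detail /dev/md0 > /path/to/other/disk/md0-before.txt
mdadm --examine --scan >> /path/to/other/disk/md0-before.txt
cp /etc/mdadm/mdadm.conf /path/to/other/disk/mdadm.conf.bak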

Good luck! Feel free to yell and scream if it doesn't work and I will refund your fee for my advice.

haerta 04-16-2012 02:01 PM

Thanks
 
Thanks for your answer!

I don't need to yell or scream, I guess you gave the right hints.

I was unable to remove sdd1 because it was busy:
Code:

mdadm --manage --remove /dev/md0 /dev/sdd1
mdadm: hot remove failed for /dev/sdd1: Device or resource busy

So I tried to grow without removing the fourth drive.

Code:

mdadm --grow /dev/md0 --level=raid5 --raid-devices=3
mdadm: /dev/md0: Cannot grow - need backup-file

OK, since I had an external hard drive available, I tried the following:

Code:

mdadm --grow /dev/md0 --level=raid5 --raid-devices=3 --backup-file=/media/120GB\ extern/mdadm-backupfile
At least my computer is doing something:

Code:

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdc1[2] sdd1[3] sda1[0] sdb1[1]
      234435584 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  reshape =  1.6% (1908736/117217792) finish=767.6min speed=2503K/sec
     
unused devices: <none>
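
2503K/sec is fairly slow. While the reshape runs, the progress can be followed, and the md sync speed limits can be raised if the disks can keep up. A sketch (the limit values are arbitrary examples; in my case the reshape may simply be held back by the external backup-file drive):

Code:

# watch the reshape progress
watch -n 60 cat /proc/mdstat

# raise the minimum/maximum resync/reshape speed (values in KB/s)
echo 50000 > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max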

I will post again if it does not work. Maybe then it's time to ask for a refund. ;-)

haerta 04-19-2012 05:26 PM

All Done
 
I successfully migrated from raid 6 to raid 5 with one spare drive.

Then I grew my raid 5 to use all four drives and get more free space (that was actually the reason why I wanted to get rid of the raid 6):
Code:

mdadm --grow /dev/md0 --level=raid5 --raid-devices=4 --backup-file=/media/120GB\ extern/mdadm-backupfile
I ended up with a nice raid 5 across 4 drives.
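
One note for anyone following along: if the LVM volume group does not show the extra space after growing the array, the physical volume on /dev/md0 probably needs to be resized first. A sketch:

Code:

# grow the physical volume to the new size of the array, then check it
pvresize /dev/md0
pvdisplay /dev/md0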

Thanks!

haerta 04-20-2012 05:17 PM

There was still work to do. After growing my raid 5 to use all four drives, I realized that only the volume group on top of the raid had been resized to the whole of /dev/md0, but not the logical volume in it:

Code:

lvdisplay /dev/MyVolumeGroup/MyRootVolume
  --- Logical volume ---
  LV Name                /dev/MyVolumeGroup/MyRootVolume
  VG Name                MyVolumeGroup
  LV UUID                ud63WI-Pwu4-97Rx-lSE5-mYdF-yPmP-mEDX0a
  LV Write Access        read/write
  LV Status              available
  # open                1
  LV Size                216,12 GiB
  Current LE            55327
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:0

MyRootVolume should occupy almost all of the space on the RAID (with one LVM volume group spanning the whole RAID device). With 4 x 120 GB hard drives I should have about 3 x 120 GB = 360 GB available (one drive's worth of capacity goes to parity in RAID 5), which is roughly the 335 GiB the volume group reports below.
I only have one root volume and one swap volume (8 GB).

Let's verify that the VolumeGroup on /dev/md0 has free space available:

Code:

vgdisplay /dev/MyVolumeGroup
  --- Volume group ---
  VG Name              MyVolumeGroup
  System ID           
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access            read/write
  VG Status            resizable
  MAX LV                0
  Cur LV                2
  Open LV              2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size              335,36 GiB
  PE Size              4,00 MiB
  Total PE              85853
  Alloc PE / Size      57235 / 223,57 GiB
  Free  PE / Size      28618 / 111,79 GiB
  VG UUID              PHmGGp-nKv1-ewUr-9TcX-j2Cu-YbH3-0DZp7P

So I needed to add those 111,79 GiB (28618 free extents x 4 MiB) to the logical volume:

Code:

lvresize -L +111,78GB /dev/MyVolumeGroup/MyRootVolume
  Rounding up size to full physical extent 111,78 GiB
  Extending logical volume MyRootVolume to 327,90 GiB
  Logical volume MyRootVolume successfully resized
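
(As an alternative to working out the size by hand, the logical volume can be told to take all remaining free extents. A sketch of that variant, which I did not use:)

Code:

# extend MyRootVolume by all free extents left in the volume group
lvresize -l +100%FREE /dev/MyVolumeGroup/MyRootVolume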

Let's verify that the logical volume has grown:
Code:

lvdisplay /dev/MyVolumeGroup/MyRootVolume
  --- Logical volume ---
  LV Name                /dev/MyVolumeGroup/MyRootVolume
  VG Name                MyVolumeGroup
  LV UUID                ud63WI-Pwu4-97Rx-lSE5-mYdF-yPmP-mEDX0a
  LV Write Access        read/write
  LV Status              available
  # open                1
  LV Size                327,90 GiB
  Current LE            83943
  Segments              2
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    6144
  Block device          253:0

So MyRootVolume has grown from 216,12 to 327,90 GiB.

According to the guide referenced below, I have to make sure that the file system on the logical volume has grown too. Let's see if it has:

Code:

df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/MyVolumeGroup-MyRootVolume
                      217G  188G  22G  90% /
udev                  995M  8,0K  995M  1% /dev
tmpfs                402M  1,1M  401M  1% /run
none                  5,0M    0  5,0M  0% /run/lock
none                1005M  160K 1004M  1% /run/shm
/dev/mapper/MyVolumeGroup-MyRootVolume
                      217G  188G  22G  90% /home

So we see that MyRootVolume is still only 217 GB in size, reflecting the old size. Only 22 GB are left on /.

I can resize the file system on MyRootVolume (mount point /) with:
Code:

btrfs filesystem resize max /
Resize '/' of 'max'
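
(The resize command depends on the file system. My root is btrfs, hence the command above; for an ext3/ext4 root the equivalent online grow would look roughly like this sketch:)

Code:

# grow an ext3/ext4 file system to fill the resized logical volume
resize2fs /dev/MyVolumeGroup/MyRootVolume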

Verify that we now have about 111 GB more space for files:
Code:

df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/MyVolumeGroup-MyRootVolume
                      328G  188G  134G  59% /
udev                  995M  8,0K  995M  1% /dev
tmpfs                402M  1,1M  401M  1% /run
none                  5,0M    0  5,0M  0% /run/lock
none                1005M  160K 1004M  1% /run/shm
/dev/mapper/MyVolumeGroup-MyRootVolume
                      328G  188G  134G  59% /home


We are done!

A good guide to LVM resizing: http://www.tcpdump.com/kb/os/linux/l...de/expand.html

syg00 04-20-2012 06:16 PM

Now there is an interesting concoction - btrfs on LVM RAID ...
People seem to go one way or the other, not both.

Glad you got it all working.

