LinuxQuestions.org


circus78 01-16-2017 04:17 PM

Migrate /boot to RAID 1
 
Hi,
I have a physical server with two 80 GB hard disks.

Code:

# fdisk -l /dev/sda

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005c12b

  Device Boot      Start        End      Blocks  Id  System
/dev/sda1  *          1          64      512000  83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        9605    76635136  8e  Linux LVM
root@server:~# fdisk -l /dev/sdb

Disk /dev/sdb: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005c12b

  Device Boot      Start        End      Blocks  Id  System
/dev/sdb1  *          1          64      512000  83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2              64        9605    76635136  8e  Linux LVM


I don't remember why, but I didn't create /boot on a RAID device.
Now I would like to "upgrade" sda1 and sdb1 to a RAID 1 array.

Which steps should I follow?

I think:

1. change the partition type with fdisk (on both disks at the same time?)
2. create the software RAID device with mdadm
3. mount the new RAID device somewhere (e.g. /mnt/newboot)
4. copy /boot/* to /mnt/newboot

... at this point I need some help with the remaining steps: grub? /etc/fstab?
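
A rough sketch of steps 1-4, just to make the question concrete (untested; the /dev/md0 name, the degraded-array order and the 1.0 superblock choice are assumptions that the replies below weigh in on):

Code:

# 1. mark the partition as "Linux raid autodetect" (type fd) - here only on sdb,
#    assuming sda1 keeps serving /boot until the copy is done
fdisk /dev/sdb          # t -> fd -> w
# 2. create a degraded RAID 1 with a 1.0 superblock (stored at the END of the
#    partition, so a RAID-unaware bootloader still sees a plain filesystem)
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 missing /dev/sdb1
mkfs.ext4 /dev/md0
# 3. + 4. mount it and copy the current /boot over
mkdir -p /mnt/newboot
mount /dev/md0 /mnt/newboot
cp -a /boot/. /mnt/newboot/
# later, once /etc/fstab points at /dev/md0 (or its UUID) and GRUB is installed
# on both disks, sda1 would be re-typed and added to the array:
#   mdadm --add /dev/md0 /dev/sda1
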

Some other info:

Code:

# uname -ar
Linux server 2.6.32-642.6.1.el6.x86_64 #1 SMP Wed Oct 5 00:36:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux


# cat /etc/redhat-release
CentOS release 6.8 (Final)

# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed Nov 14 10:50:34 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_server-lv_root /                      ext4    defaults        1 1
UUID=a5533f23-9746-4bf5-8085-1fd0626cae22 /boot                  ext4    defaults        1 2
/dev/mapper/vg_server-lv_swap swap                    swap    defaults        0 0
tmpfs                  /dev/shm                tmpfs  defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                  /sys                    sysfs  defaults        0 0
proc                    /proc                  proc    defaults        0 0



Thank you

Pearlseattle 01-27-2017 02:18 PM

Hi

I have absolutely no experience with LVM.

Anyway, for a "pure" server using only RAID (and no LVM), this is the /etc/fstab of my server, which boots from a RAID 1 (the 2nd line is shown just to demonstrate that there are no other tricks for the other RAID partitions):
Code:

/dev/md1                /boot          ext4            noatime        0 2
/dev/md2                /              ext4            noatime        0 1

Important - 1
You'll need to install GRUB on the MBR of both disks (e.g. "grub-install /dev/sda" AND "grub-install /dev/sdb"), so that if your 1st HDD is kaputt and you are lucky enough to have a BIOS which understands that and decides to boot from the 2nd HDD, the PC/server will be able to load GRUB from the 2nd HDD.
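
For grub2 those two commands are all there is to it; on the OP's CentOS 6 / legacy GRUB setup the same binary may or may not be available (see the next replies), so take this purely as an illustration of the "bootloader on both MBRs" idea:

Code:

# put the boot loader on the MBR of each disk, so either one can boot on its own
grub-install /dev/sda
grub-install /dev/sdb
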

(maybe) Important - 2
Not 100% sure if this is really needed, but I'm currently passing the kernel the option "domdadm" (for some kind of early RAID-member auto-discovery? Cannot remember...).
Therefore, in "/etc/default/grub" of grub2 I currently have this line:
Code:

GRUB_CMDLINE_LINUX_DEFAULT="net.ifnames=0 domdadm"

Not sure if this is really needed: it is explicitly mentioned in Gentoo's instructions, but I've seen other threads that don't mention it - in any case it won't hurt.
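
One detail worth adding: an edit to /etc/default/grub only takes effect once grub.cfg is regenerated. On a grub2 system that is usually something like the following (the output path differs per distro, e.g. grub2-mkconfig and /boot/grub2/grub.cfg on CentOS 7):

Code:

# rebuild grub.cfg so the new kernel command line is actually used at boot
grub-mkconfig -o /boot/grub/grub.cfg
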


I don't remember if having the mdadm stuff compiled into the kernel (instead of as modules) was a must, but I would go for it if you can - just to be on the safe side...

syg00 01-27-2017 06:17 PM

Nothing is ever as simple as it first looks.
As well as copying the data, you have to install grub to both devices (note grub-install is a Debian-ism, CentOS likely won't have it). You have to ensure that the install of grub only refers to the disk it is installed on - i.e. you can't simply run setup from the good system against the second disk, as it will refer back to the first disk for stage2 loading. Bad things happen if the first disk fails.
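
With legacy GRUB the usual workaround is to temporarily remap the second disk as (hd0) inside the grub shell, so the stage1 written to sdb's MBR looks for stage2 on sdb itself rather than on sda (a sketch, assuming /boot is the first partition, as in the fdisk output above):

Code:

# map sdb as the "first" BIOS disk for the duration of this grub session,
# then install stage1 to its MBR pointing at its own /boot partition
grub --batch <<EOF
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF
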
Then you have to make sure grub and the initrd can handle mdadm devices - on both disks. Very version/distro specific.
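
On CentOS 6 the initrd side would presumably be handled by dracut; something along these lines (untested, kernel version taken from the uname output above):

Code:

# record the array so the initramfs can assemble it at boot
mdadm --examine --scan >> /etc/mdadm.conf
# rebuild the initramfs for the running kernel, pulling in mdadm.conf / mdraid support
dracut --force --mdadmconf /boot/initramfs-2.6.32-642.6.1.el6.x86_64.img 2.6.32-642.6.1.el6.x86_64
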

Can be done, but as you are currently stuck with legacy grub, you might as well upgrade to CentOS 7 and set it up from the start - and get grub2 as a bonus.
Not to mention systemd... :p

rknichols 01-27-2017 08:54 PM

GRUB legacy doesn't understand RAID devices. The only way /boot can be on a RAID device is if you use RAID header format 0.9 or 1.0, which are placed at the end of the device. An unaware program will just see the filesystem. The danger there is that anything that writes to the filesystem in that condition (and that includes just mounting it read/write) will desynchronize the array and compromise the data.
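
So if the array is created explicitly with one of those formats (e.g. --metadata=1.0, as in the sketch further up), GRUB legacy keeps seeing an ordinary ext4 filesystem. Checking which format an existing member actually carries is straightforward (command sketch only):

Code:

# "Version : 1.0" (or 0.90) means the RAID superblock sits at the end of the
# partition, out of the bootloader's way
mdadm --examine /dev/sdb1 | grep -i version
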

Beyond that, if you're lucky and the BIOS treats whatever disk it booted from as the "first BIOS disk" (0x80), then simply running "grub-install /dev/sdb" should "just work". If not, the suggestions I've seen are to have a fallback stanza in the GRUB menu with "root (hd1,0)" in place of "root (hd0,0)". No, I don't think that fallback is going to be automatic, and that would pose a problem for an unattended reboot. Testing whether all that works for the various ways the first disk might fail and how your BIOS might handle it is quite a challenge.
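
For reference, GRUB legacy does have a "fallback" directive, so the suggested second stanza would look roughly like this in /boot/grub/menu.lst (sketch only; the kernel, initrd and root= names are taken from the uname and fstab earlier in the thread, and as noted above the fallback only helps for failure modes GRUB itself can detect):

Code:

default 0
fallback 1

title CentOS 6 (/boot on first disk)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-642.6.1.el6.x86_64 ro root=/dev/mapper/vg_server-lv_root
        initrd /initramfs-2.6.32-642.6.1.el6.x86_64.img

title CentOS 6 (/boot on second disk)
        root (hd1,0)
        kernel /vmlinuz-2.6.32-642.6.1.el6.x86_64 ro root=/dev/mapper/vg_server-lv_root
        initrd /initramfs-2.6.32-642.6.1.el6.x86_64.img
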

