Arch: This forum is for the discussion of Arch Linux.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I am trying to install Arch on a server with a two-disk software RAID 1 (mirroring) array. The setup and install go fine following the instructions here.
However, the last step, installing GRUB, always fails.
The problem is that when I do
Code:
grub> root (hd0,0)
It just says "filesystem type unknown, partition type 0x83"
My partition scheme is as follows:
/dev/sda1 and /dev/sdb1 are 1024 MB /boot partitions, ext4
/dev/sda2 and /dev/sdb2 are 2048 MB swap partitions
/dev/sda3 and /dev/sdb3 are the rest of the drives, /, ext4
/dev/md0 uses /dev/sda1 and /dev/sdb1, and I used --metadata=0.90 to create it with mdadm. As far as I can tell from my googling, newer metadata (which GRUB legacy can't read) is what usually causes this problem, but I have included the metadata flag.
Take your /boot partitions out of the RAID and you will be fine. GRUB can't see your RAID arrays at boot time anyway, because they are started after the kernel is loaded, and the kernel is itself loaded by GRUB. I had the same problem; now I have my /boot on a separate disk and everything works fine.
OK, so I got fed up with it and just decided to do it the Gentoo way. So, in order to get Arch installed with software RAID1, I used the following procedure, which is partly from the Arch wiki that I linked to previously and partly from the Gentoo wiki. If anyone else has the problem I did, you can consider this a workaround, but not a solution.
Furthermore, in the Arch wiki link I referred to initially, it says that
Quote:
Nowadays (2009.02), with the mdadm hook in the initrd it is no longer necessary to add kernel parameters concerning the RAID array(s).
I did not find this to be the case. As you can see below, I still need those kernel lines in there. If anyone knows why, please let me know.
Code:
- Boot into the Arch CD.
- When you get to a root shell, manually partition the drives before entering the installer.
- Run 'fdisk /dev/sda'
- Partition the first drive
- 1024 MB boot partition, type fd
- 1024 MB swap partition, type 82
- Remaining is root, type fd
- Run 'fdisk /dev/sdb' and use identical settings
- modprobe raid1
- Now we'll create the RAID partitions for boot and root (not swap)
- mknod /dev/md1 b 9 1
- mknod /dev/md3 b 9 3
- mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
- mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
- For info, check "cat /proc/mdstat" (you don't need to wait for the sync to finish to continue the installation) and "mdadm --misc --detail /dev/md1"
- Launch the installer with /arch/setup
- The drives are already partitioned, so we just need to make the filesystems.
- Choose "Manually configure block devices, filesystems and mountpoints"
- Make ext4 filesystems on the /dev/md1 and /dev/md3 devices
- Make swap filesystems on the /dev/sda2 and /dev/sdb2 devices
- Update the RAID configuration
- rm /mnt/etc/mdadm.conf
- mdadm --examine --scan >> /mnt/etc/mdadm.conf
- Add the dm_mod module to MODULES in /etc/mkinitcpio.conf
- Add the mdadm and lvm2 hooks to the HOOKS list in /etc/mkinitcpio.conf (before 'filesystems', NOT after), although in this setup I don't need lvm2
- Set USELVM="yes" in /etc/rc.conf
- I installed GRUB manually. I don't know if it will work through the installer, but it might.
- cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub
- sync
- mount -o bind /dev /mnt/dev
- mount -t proc none /mnt/proc
- chroot /mnt /bin/bash
- grub
- grub> root (hd0,0)
- grub> setup (hd0)
- grub> quit
- Now I had to edit the grub menu.lst file to use the kernel line referred to in the Arch wiki as the "old style." If I don't include the md= parameters, it can't find the md1 and md3 devices on boot.
- title Arch Linux
- root (hd0,0)
- kernel /vmlinuz26 root=/dev/md3 ro md=1,/dev/sda1,/dev/sdb1 md=3,/dev/sda3,/dev/sdb3
- initrd /kernel26.img
- Reboot
- Once it boots on its own, we'll install grub on the second disk as well.
- grub
- grub> device (hd0) /dev/sdb
- grub> root (hd0,0)
- grub> setup (hd0)
- grub> quit
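For reference, the ARRAY lines that `mdadm --examine --scan` appends to mdadm.conf in the step above look roughly like this (the UUIDs shown here are placeholders, not real values):

```
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=00000000:00000000:00000000:00000000
```

It is worth eyeballing this file after the scan to confirm both arrays were picked up before rebuilding the initrd.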
Quote:
Nowadays (2009.02), with the mdadm hook in the initrd it is no longer necessary to add kernel parameters concerning the RAID array(s).
I did not find this to be the case. As you can see below, I still need those kernel lines in there. If anyone knows why, please let me know.
This is true for everything except the /boot partition. Notice how you are still specifying /dev/sdx as kernel args? Without those the boot loader can't find the kernel and initrd, which are required for the RAID module to load.
Have you tried physically removing a disk and booting with that config?
Quote:
This is true for everything except the /boot partition. Notice how you are still specifying /dev/sdx as kernel args?
Yes. That's my point though. The way I interpret the statement in the wiki, I shouldn't have to include those as long as the mdadm hook is in the initrd (and it is).
Quote:
Without those the boot loader can't find the kernel and initrd, which are required for the RAID module to load.
How could it ever be possible to NOT include those lines in the kernel line, then? Obviously I'm still missing a key concept...
It's the chicken-and-egg scenario. Even though mdadm is in your initrd, the initrd isn't loaded yet while you are looking at the GRUB menu. The kernel isn't aware of /dev/md<x> until the initrd containing the RAID modules is loaded, so the md<n>= arguments tell it, even though it doesn't know what RAID is at that point, that you want to refer to the following disks as md<n>. Once the initrd is loaded, normal RAID operation is established.
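The old-style md= syntax being discussed, as I understand the kernel's md boot-parameter documentation, is just the array number followed by its member partitions (for arrays with a persistent 0.90 superblock):

```
md=<md device number>,<dev0>,<dev1>,...
e.g.  md=1,/dev/sda1,/dev/sdb1
```

So the parameter tells the not-yet-RAID-aware kernel to assemble the listed partitions as /dev/md1 without needing any userspace or initrd hook first.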
I guess what I'm getting at is this: should we remove that part from the Arch wiki? Seems to me that there's no way around it... it can't possibly work correctly without those arguments.
Hrm...that's a tough call. I think it should be amended at least; explaining that the arguments are necessary for the /boot partition.
I use the System Rescue CD version 1.3 to create a boot record with my RAID.
Mount /dev/md0.
I move /boot/grub to /boot/grub.org and then copy the grub directory from the rescue disk. This is the old GRUB legacy version.
I then run grub as normal:
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
I then modify the menu.lst file:
root (hd0,0)
kernel /vmlinuz root=/dev/md0 (you can use /boot/vmlinuz----.generic also)
If I do not use the menu.lst file the system will not boot; it will not boot from the command line even though it can find /vmlinuz.
I also find that it often fails to write a boot record to /dev/sdb, so I just do a
dd if=/dev/sda of=/dev/sdb count=1 bs=446
when I finally get the system running.
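The bs=446 count=1 numbers come from the MBR layout: of the 512-byte sector, bytes 0-445 hold the boot code, 446-509 the four partition-table entries, and 510-511 the boot signature, so copying 446 bytes clones GRUB's stage1 without disturbing the partition table. Here is a sketch of the same copy run against scratch files instead of real disks, so nothing is at risk (conv=notrunc is only needed because these are regular files; on block devices the plain command behaves the same way):

```shell
# Stand-ins for /dev/sda (has GRUB) and /dev/sdb (missing the boot record).
src=$(mktemp) && dst=$(mktemp)
head -c 512 /dev/urandom > "$src"
head -c 512 /dev/zero    > "$dst"

# Same command as on the real disks, plus conv=notrunc because these are
# regular files (dd would otherwise truncate dst to 446 bytes).
dd if="$src" of="$dst" bs=446 count=1 conv=notrunc 2>/dev/null

# Bytes 0-445 (boot code) now match...
cmp -s -n 446 "$src" "$dst" && echo "boot code copied"
# ...while bytes 446-511 (partition table + signature) are still the
# original zeros, i.e. untouched by the copy.
tail -c 66 "$dst" | tr -d '\0' | wc -c
rm -f "$src" "$dst"
```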
The new Ubuntu distros all have GRUB 2 and, in my experience, do not work on a RAID setup.
I also modify the mdadm.conf file, taking out the UUID entries and replacing them with /dev device names, and then:
bind-mount the /sys, /proc and /dev directories
chroot
update-initramfs -u
Otherwise you just boot to the initramfs level.
I regularly create RAIDs and fix them for the local schools' Tuxlabs. If anyone is interested I can let you have the scripts.
I am going to set up a blog soon with all the details. Our wiki disappeared due to people moving away.
Peter
I set up another machine with identical hardware... same RAID setup. This time, rather than doing the GRUB install manually (as in the procedure I posted previously), I let the installer handle it.
It turns out that I do NOT need the kernel arguments on the machine with GRUB installed via the installer! It boots without them, just like the wiki says it should.
The problem now is that I don't know what the installer does differently, and I'm concerned. It asked me if I wanted to install on both drives in the array, and I told it yes, and it did, supposedly. How can I be sure that GRUB was ACTUALLY installed on both drives in the arrays of each machine? I assume that once they're both done syncing their arrays, I can just remove the first drive in the array and try to boot. Is that a worthwhile test? If they are both capable of booting from either drive, but one has different kernel arguments, then I'm not sure I know what is different between installing GRUB manually and via the installer.
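Pulling a drive is the definitive test, but there is a cheaper first check: GRUB legacy's stage1 embeds a few short literal strings ("GRUB", "Geom", "Read", "Error") in the boot sector it writes, so scanning the first 512 bytes of each disk for "GRUB" shows whether boot code was written there at all (it doesn't prove the disk boots, just that stage1 is present). A sketch, demonstrated on a scratch file standing in for /dev/sdb so nothing real is read:

```shell
# Build a fake 512-byte MBR containing a stage1-like string fragment; on a
# real system you would point dd at /dev/sda or /dev/sdb instead.
mbr=$(mktemp)
{ printf 'GRUB \0Geom\0Hard Disk\0Read\0 Error\0'; head -c 479 /dev/zero; } > "$mbr"

# The actual check: look for "GRUB" in the first sector (tr strips the NUL
# bytes so grep sees plain text).
if dd if="$mbr" bs=512 count=1 2>/dev/null | tr -d '\0' | grep -q 'GRUB'; then
    echo "GRUB boot code present"
else
    echo "no GRUB boot code found"
fi
rm -f "$mbr"
```

Running this against both /dev/sda and /dev/sdb after the installer finishes would confirm it really wrote to both members of the array.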