RAID device names changed from /dev/md[0-2] to /dev/md12[5-7]
I have two disks, /dev/sdd and /dev/sde, each containing 4 logical partitions
(sd[de][5-8]).
I created a RAID 0 array at /dev/md0 from two partitions, /dev/sdd5 and /dev/sde5,
a RAID 1 array at /dev/md1 from two other partitions, /dev/sd[de]6,
and a RAID 5 array at /dev/md2 from the last four partitions.
Everything looked fine when I ran mdadm --query /dev/md[0-2],
but I noticed that I don't have an /etc/mdadm.conf file.
Then I rebooted the machine, and afterwards /dev/md[0-2] had changed to /dev/md12[5-7].
I don't know what happened. What should I do?
It sounds like something slowed down the disk scan, which reordered the discovery.
Mine comes up as md127, and there is only one. I also gave up on using the /dev device names for mounting purposes - there are just too many different ways for disks/partitions to come up ready.
I use volume labels now, as they are easy to deal with, easily configured, and just as reliable as UUIDs.
The mdadm.conf file is nice to have, but it isn't necessary. The information is recorded in each partition's superblock and is used to reassemble the md devices.
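If you do want a config file, mdadm can generate the ARRAY lines itself from those superblocks. A sketch of the usual recipe (run as root; the exact output depends on your arrays, and on RHEL 6 the initramfs step uses dracut):

```shell
# Scan the superblocks of all assembled arrays and print ARRAY lines
# (device name, metadata version, UUID) suitable for mdadm.conf.
mdadm --detail --scan

# Append those lines to the config file so the arrays keep their
# names across reboots (back up any existing file first).
mdadm --detail --scan >> /etc/mdadm.conf

# Rebuild the initramfs so the config is also seen at early boot.
dracut -f
```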
I removed all my RAID devices and tried to create the arrays again.
--level=0 was OK,
--level=1 was not:
[root@rollyRHEL6 Desktop]# mdadm --create /dev/md0 --level=0 --raid-device=2 /dev/sdd5 /dev/sde5
mdadm: Defaulting to version 1.2 metadata
[root@rollyRHEL6 Desktop]# mdadm --create /dev/md1 --level=1 --raid-device=2 /dev/sdd6 /dev/sde6
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array?
What does that mean?
I'm not sure I understand what the problem is. Your arrays are still there, still working, they just have different device names, correct? You shouldn't be using device names for mounting anyway, so what's the harm?
As for your latest output, it's pretty self-explanatory. Do you plan to store /boot on your level 1 array? If so, make sure your boot loader can use md/v1.x metadata. If not, then it doesn't matter.
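For context, if /boot really were going on that RAID 1 array, the warning can be satisfied by asking for the old metadata format explicitly. A sketch, using the same partitions as in the thread (this re-creates the array and wipes those partitions):

```shell
# v0.90 metadata lives at the END of each member device, so the
# filesystem starts at sector 0 and legacy boot loaders can read
# the mirror halves as if they were plain partitions.
mdadm --create /dev/md1 --level=1 --raid-devices=2 \
      --metadata=0.90 /dev/sdd6 /dev/sde6
```

v0.90 has its own limits (for example, member devices larger than 2 TiB are not supported), so only use it if the boot loader actually requires it.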
Last edited by suicidaleggroll; 02-15-2016 at 09:59 AM.
I wrote this to the /etc/mdadm.conf file:
DEVICE /dev/sdd[5678] /dev/sde[5678]
ARRAY /dev/md0 devices=/dev/sdd5,/dev/sde5
ARRAY /dev/md1 devices=/dev/sdd6,/dev/sde6
ARRAY /dev/md2 devices=/dev/sdd7,/dev/sdd8,/dev/sde7,/dev/sde8
Then I rebooted the machine. Everything is still good; nothing changed.
Thank you.
Well, using raw device names is STRONGLY discouraged. They can change from boot to boot; it all depends on which disk spins up and responds first, which is how the order of the /dev/sd* names is set. Adding or moving a disk can change the list of names.
If you run mdadm --detail, it will report the UUIDs to use instead. Those remain until the disk is wiped and reinitialized, no matter where it is plugged in.
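A sketch of what that looks like in mdadm.conf (the UUIDs below are placeholders; substitute the values that `mdadm --detail --scan` prints for your arrays):

```
# Arrays identified by the UUID stored in the member superblocks,
# so assembly works no matter how the /dev/sd* names come up.
ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
ARRAY /dev/md1 UUID=eeeeeeee:ffffffff:11111111:22222222
ARRAY /dev/md2 UUID=33333333:44444444:55555555:66666666
```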