/dev/sd* devices change their names on each virtual machine reboot
Posted on LinuxQuestions.org, in the Linux - Virtualization and Cloud forum.
Hello,
I have a virtual machine (QEMU/KVM) used for Oracle, and I needed to add some emulated disk devices (four 2 GB disks and four 1 GB disks) to use as ASM disks. Then I discovered that on each reboot the virtual machine changes the names of its disk drives, which is a big problem for me.
For example, this is how my disk drives are named before I reboot:
Code:
[root@node01 ~]# fdisk -l |grep /dev/sd
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
/dev/sdb1 2 20480 20970496 8e Linux LVM
Disk /dev/sdc: 2147 MB, 2147483648 bytes
/dev/sdc1 1 1009 2095662 83 Linux
Disk /dev/sdd: 2147 MB, 2147483648 bytes
Disk /dev/dm-1 doesn't contain a valid partition table
/dev/sdd1 1 1009 2095662 83 Linux
Disk /dev/sde: 2147 MB, 2147483648 bytes
/dev/sde1 1 1009 2095662 83 Linux
Disk /dev/sdf: 2147 MB, 2147483648 bytes
/dev/sdf1 1 1009 2095662 83 Linux
Disk /dev/sdg: 1073 MB, 1073741824 bytes
/dev/sdg1 1 1011 1048376+ 83 Linux
Disk /dev/sdh: 1073 MB, 1073741824 bytes
/dev/sdh1 1 1011 1048376+ 83 Linux
Disk /dev/sdi: 1073 MB, 1073741824 bytes
/dev/sdi1 1 1011 1048376+ 83 Linux
Disk /dev/sdj: 1073 MB, 1073741824 bytes
/dev/sdj1 1 1011 1048376+ 83 Linux
Disk /dev/dm-2 doesn't contain a valid partition table
Disk /dev/dm-3 doesn't contain a valid partition table
Disk /dev/dm-4 doesn't contain a valid partition table
/dev/sda and /dev/sdb are physical volumes for two VGs available on the virtual machine;
/dev/sdc to /dev/sdj are the mentioned four 2 GB and four 1 GB drives (each formatted as one primary partition) which I intend to use for ASM.
And here is after reboot:
Code:
[root@node01 ~]# fdisk -l |grep /dev/sd
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/sda: 2147 MB, 2147483648 bytes
/dev/sda1 1 1009 2095662 83 Linux
Disk /dev/sdb: 2147 MB, 2147483648 bytes
/dev/sdb1 1 1009 2095662 83 Linux
Disk /dev/sdi: 21.5 GB, 21474836480 bytes
/dev/sdi1 * 1 64 512000 83 Linux
/dev/sdi2 64 2611 20458496 8e Linux LVM
Disk /dev/sdc: 2147 MB, 2147483648 bytes
Disk /dev/dm-1 doesn't contain a valid partition table
/dev/sdc1 1 1009 2095662 83 Linux
Disk /dev/sdg: 1073 MB, 1073741824 bytes
/dev/sdg1 1 1011 1048376+ 83 Linux
Disk /dev/sde: 1073 MB, 1073741824 bytes
/dev/sde1 1 1011 1048376+ 83 Linux
Disk /dev/sdj: 21.5 GB, 21474836480 bytes
/dev/sdj1 2 20480 20970496 8e Linux LVM
Disk /dev/sdd: 2147 MB, 2147483648 bytes
/dev/sdd1 1 1009 2095662 83 Linux
Disk /dev/sdf: 1073 MB, 1073741824 bytes
/dev/sdf1 1 1011 1048376+ 83 Linux
Disk /dev/sdh: 1073 MB, 1073741824 bytes
/dev/sdh1 1 1011 1048376+ 83 Linux
Disk /dev/dm-2 doesn't contain a valid partition table
Disk /dev/dm-3 doesn't contain a valid partition table
Disk /dev/dm-4 doesn't contain a valid partition table
This is how I attached the newly created LVs from the virtualization host machine as block devices to the virtual machine:
Code:
[root@c4 ~]# lvcreate -L 2G -n virtdisks_node01_asm1 vg_c4
Logical volume "virtdisks_node01_asm1" created
[root@c4 ~]# virsh # attach-disk node01 /dev/mapper/vg_c4-virtdisks_node01_asm1 sdc --persistent
Disk attached successfully
From the virtual machine dmesg log:
scsi 2:0:2:0: Direct-Access QEMU QEMU HARDDISK 0.15 PQ: 0 ANSI: 5
scsi target2:0:2: tagged command queuing enabled, command queue depth 16.
scsi target2:0:2: Beginning Domain Validation
scsi target2:0:2: Domain Validation skipping write tests
scsi target2:0:2: Ending Domain Validation
sd 2:0:2:0: Attached scsi generic sg3 type 0
sd 2:0:2:0: [sdc] 4194304 512-byte logical blocks: (2.14 GB/2.00 GiB)
sd 2:0:2:0: [sdc] Write Protect is off
sd 2:0:2:0: [sdc] Mode Sense: 1f 00 00 08
sd 2:0:2:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sdc:
sd 2:0:2:0: [sdc] Attached SCSI disk
... I did this with all of the LVs.
On the qemu guest .xml file the disks are added with their corresponding /dev/sd* names as I want them:
Do NOT use SCSI emulation with qemu. It's buggy, unsupported, and not actively developed. Use virtio_blk instead.
Thanks for that hint.
So you are telling me that this naming mess comes from using SCSI emulation?
And can you please tell me how to safely convert from SCSI emulation to virtio_blk?
I can't see any virtio emulation type that I can choose in the attach-disk command:
Code:
attach-disk domain-id source target [--driver driver] [--subdriver subdriver] [--cache cache] [--type type] [--mode mode] [--persistent] [--sourcetype sourcetype] [--serial serial] [--shareable] [--address address]
Attach a new disk device to the domain. source and target are paths for the files and devices. driver can be file, tap or phy for the Xen hypervisor, depending on the kind of access, or qemu for the QEMU emulator. type can indicate cdrom or floppy as an alternative to the disk default, although this use only replaces the media within the existing virtual cdrom or floppy device; consider using update-device for this usage instead. mode can specify one of the two specific modes, readonly or shareable. persistent indicates the changes will affect the next boot of the domain. sourcetype can indicate the type of source (block|file). cache can be one of "default", "none", "writethrough", "writeback", or "directsync". serial is the serial of the disk device. shareable indicates the disk device is shareable between domains. address is the address of the disk device, in the form pci:domain.bus.slot.function, scsi:controller.bus.unit or ide:controller.bus.unit.
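For what it's worth, attach-disk has no explicit virtio flag; the bus is determined by the disk's <target> element in the domain XML, and with attach-disk a vdX target name typically makes libvirt infer bus='virtio'. A sketch of what a converted stanza might look like, assuming the domain is edited with `virsh edit node01` (the serial value here is an illustrative assumption, not from the thread):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/mapper/vg_c4-virtdisks_node01_asm1'/>
  <!-- bus='virtio' selects virtio_blk in the guest; the guest sees /dev/vdX -->
  <target dev='vda' bus='virtio'/>
  <!-- optional: a serial gives the guest a stable /dev/disk/by-id/virtio-<serial> symlink -->
  <serial>asm1</serial>
</disk>
```

Note that even with virtio the vdX letters are assigned by guest-side enumeration, so stable addressing should still go through UUIDs or the by-id symlinks rather than the vdX names themselves.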
[root@node01 ~]# fdisk -l |grep /dev/
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
/dev/sdb1 2 20480 20970496 8e Linux LVM
Disk /dev/vda: 2147 MB, 2147483648 bytes
/dev/vda1 1 1009 2095662 83 Linux
Disk /dev/vdb: 2147 MB, 2147483648 bytes
/dev/vdb1 1 1009 2095662 83 Linux
Disk /dev/vdc: 2147 MB, 2147483648 bytes
/dev/vdc1 1 1009 2095662 83 Linux
Disk /dev/vdd: 2147 MB, 2147483648 bytes
/dev/vdd1 1 1009 2095662 83 Linux
Disk /dev/vde: 1073 MB, 1073741824 bytes
Disk /dev/dm-1 doesn't contain a valid partition table
/dev/vde1 1 1011 1048376+ 83 Linux
Disk /dev/vdf: 1073 MB, 1073741824 bytes
/dev/vdf1 1 1011 1048376+ 83 Linux
Disk /dev/vdg: 1073 MB, 1073741824 bytes
/dev/vdg1 1 1011 1048376+ 83 Linux
Disk /dev/vdh: 1073 MB, 1073741824 bytes
/dev/vdh1 1 1011 1048376+ 83 Linux
Disk /dev/dm-0: 4227 MB, 4227858432 bytes
Disk /dev/dm-2 doesn't contain a valid partition table
Disk /dev/dm-3 doesn't contain a valid partition table
Disk /dev/dm-4 doesn't contain a valid partition table
Disk /dev/dm-5 doesn't contain a valid partition table
Disk /dev/dm-1: 16.7 GB, 16710107136 bytes
Disk /dev/dm-2: 5368 MB, 5368709120 bytes
Disk /dev/dm-3: 5368 MB, 5368709120 bytes
Disk /dev/dm-4: 5368 MB, 5368709120 bytes
Disk /dev/dm-5: 5268 MB, 5268045824 bytes
[root@node01 ~]#
It seems to me that the 'target dev=' directive in the <disk> clause of the XML doesn't work, because the disks were named sequentially from vda to vdh, which is not how they are set up in the XML config file. The same applies to /dev/sda and /dev/sdb; they should have been named hda and hdb...
And if I decide to add/edit/drop other disk(s) some other time, should I expect the device names to change again?
Thanks for that info.
Unfortunately I don't think that will work for the Oracle ASM volumes, because they aren't ext* filesystems; they are ASM-formatted:
So I guess that upon boot, no matter what (KVM's changes to the device naming, etc.), the /boot partition is recognized regardless of how the kernel sees and loads the block device (under whatever /dev/* name) - is that correct?
However, if I edit my fstab to refer to these partitions by their UUIDs instead of their /dev/mapper/* names, what will that achieve with regard to the device names?
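For reference, UUID entries in /etc/fstab make mounts independent of whichever /dev/sdX or /dev/vdX name the kernel hands out on a given boot, because the UUID is read from the filesystem itself at mount time. A minimal sketch (the UUID values below are placeholders; the real ones come from `blkid`):

```
# /etc/fstab: mount by UUID so device renumbering across reboots doesn't matter
# (UUID values are placeholders - obtain the real ones with blkid)
UUID=11111111-2222-3333-4444-555555555555  /boot  ext4  defaults  1 2
UUID=66666666-7777-8888-9999-000000000000  /      ext4  defaults  1 1
```

This solves mounting, but not raw-device naming for ASM, which is a separate problem.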
If you go up a little you will see that no matter what names the kernel gives the block devices, the system boots up properly, identifying the disks, LVM partitions, VGs and LVs, etc. - but I guess this has nothing to do with the device names...
So my question is still: how do I control the names of the devices the kernel will detect upon boot in the guest virtual machine?
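As far as I know you cannot pin the kernel's sdX/vdX enumeration order itself; the usual approach for ASM is to address the disks through stable identifiers instead. If each virtio disk is given a <serial> in the domain XML, the guest gets a persistent /dev/disk/by-id/virtio-<serial> symlink, and a udev rule can additionally create an ASM-friendly name with the right ownership. A sketch only - the rule file name, serial value, and owner/group are all illustrative assumptions:

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules (illustrative)
# Match the first partition of the virtio disk whose serial is "asm1"
# and give it a stable symlink plus oracle-owned permissions for ASM.
KERNEL=="vd?1", ENV{ID_SERIAL}=="asm1", SYMLINK+="oracleasm/asm1", OWNER="oracle", GROUP="dba", MODE="0660"
```

The exact udev property carrying the serial can vary by distribution and udev version (check with `udevadm info`), so verify the match keys on the actual guest before relying on the rule.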