Linux - Virtualization and Cloud
This forum is for the discussion of all topics relating to Linux Virtualization and Linux Cloud platforms. Xen, KVM, OpenVZ, VirtualBox, VMware, Linux-VServer and all other Linux Virtualization platforms are welcome. OpenStack, CloudStack, ownCloud, Cloud Foundry, Eucalyptus, Nimbus, OpenNebula and all other Linux Cloud platforms are welcome. Note that questions relating solely to non-Linux OS's should be asked in the General forum.
Hi, I am attempting to migrate a virtual machine onto a physical machine and have managed to do so, but when I attempt to boot the physical machine I get the following errors:
Code:
Red Hat nash version 5.1.19.6 starting
Reading all physical volumes. This may take a while...
Volume group "VolGroup00" not found
Unable to access resume device (/dev/VolGroup00/LogVol01)
mount: could not find filesystem '/dev/root'
setuproot: moving /dev failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!
The VM runs Oracle Linux, which is based on the Red Hat distribution, and the physical machine is a Sun x6250 box.
I assume it is due to a hardware mismatch between the hardware the VM was created on (using VMware Workstation) and the hardware on the physical machine, but I am fairly new to Linux and could do with some guidance/suggestions on how to go about fixing the issue.
Well, for starters I'd ask "Why the V2P? Why not just set up the Sun x6250, then migrate the database(s)?".
Then I'd ask "What did Oracle Support say?" (But I'm sure you've had a case open for days, and they've just failed to call you back.)
As for your current problem; I would boot a Linux Live CD (like Knoppix), and see what you can see.
If it finds your drive(s), physical volume(s), volume group(s), and logical volume(s), then you're in business.
However, if it can't, then you either have hardware the Linux Live CD cannot see (which I have not had happen with the latest Knoppix builds), or something went horribly, horribly wrong with your V2P.
Did the V2P process generate any log files?
I've not played with Oracle Linux (their downloads haven't worked for me for a few months now). Does it use the GRUB boot loader?
If so, when presented with the menu, do the following:
1) Hit "c", which should take you to the grub prompt.
2) Run either "find /boot/grub/stage1" or "find /grub/stage1". It should list the drives it can boot from, like so:
Code:
(hd0,0)
(hd1,0)
(hd2,0)
3) Now change the "root (hd#,#)" line to match what it found and try to boot that.
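As an illustration, steps 1-3 at the grub prompt might look like the session below. The kernel/initrd file names and the LogVol00 root device are assumptions based on a typical RHEL5-style install with a separate /boot partition; use whatever "find" actually reports on your box.
Code:

```
grub> find /grub/stage1
 (hd0,0)
grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
grub> kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00
grub> initrd /initrd-2.6.18-194.el5.img
grub> boot
```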
If you still have booting issues, then you may have to rebuild your initrd (if Oracle Linux uses that).
Thanks for your response, xeleema. Unfortunately, V2P is the only way to get this to work!
When I run "find /grub/stage1" I get the result "(hd0,0)". I have run "root (hd0,0)" and I get the message "Filesystem type is ext2fs, partition type 0x83". Then running "setup (hd0)" seems to run OK and ends with "Done".
I have tried to rebuild my initrd and again, this seemed to be OK.
The only thing I am still trying to run is "grub-install --recheck /dev/sda1", which returns:
Code:
Probing devices to guess BIOS drives. This may take a long time.
Could not find device for /boot
I presume this is linked to the fact that when I ran "find /boot/grub/stage1" it didn't find anything, but it did with "find /grub/stage1". Do you know what I should be doing to get grub-install to work?
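One thing worth checking: grub-install works out the device for /boot from the mounted filesystems, so "Could not find device for /boot" often just means /boot isn't mounted (or has no /etc/fstab entry) in the environment you're running it from, and grub-install normally targets the whole disk rather than a partition. A rough sketch, assuming the /dev/sda1 boot partition from the posts above:
Code:

```
# inside the chroot (or rescue environment, with paths adjusted)
mount /dev/sda1 /boot            # make sure the boot partition is mounted
grep /boot /etc/fstab            # there should be an fstab entry for /boot
grub-install --recheck /dev/sda  # target the disk (MBR), not /dev/sda1
```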
Quote:
Originally Posted by kilgour
Thanks for your response xeleema. Unfortunately V2P is the only way to get this to work!
Drat. Even though I'm a staunch believer that there's always more than one way to skin a penguin.
Quote:
Originally Posted by kilgour
When I run find /grub/stage1 I get the result "(hd0,0)"...
Good, make sure your grub.conf has a line in it that says "root (hd0,0)"
Quote:
Originally Posted by kilgour
...but when I run "root (hd0,0)" I get the message "Filesystem type is ext2fs, partition type 0x83". I am not sure what to do with this?
Don't type "root (hd0,0)" at the "grub>" prompt. That line should be in your grub.conf.
Quote:
Originally Posted by kilgour
I have also attempted to rebuild my initrd and these are the steps that I have taken
1. Boot systemrescuecd
2. Open terminal or console and run "su" for root.
3. Mount root partition or logical volume (VolGroup00/LogVol00)
3a. cd /mnt
3b. mkdir sysroot
3c. mount /dev/VolGroup00/LogVol00 ./sysroot
4. Mount your boot partition (/dev/sda1)
4a. mount /dev/sda1 ./sysroot/boot
5. Mount w/ bind the /dev from rescue system to problem system
5a. mount --bind /dev ./sysroot/dev
6. Go into the chroot environment
6a. chroot sysroot
7. Mount /proc and /sys
7a. mount /proc
7b. mount /sys
8. run mkinitrd
8a. cd /boot
8b. mkinitrd -v -f initrd-2.6.18.194.el5.img 2.6.18.194.el5
Unfortunately, at the last step I get the error "No modules available for kernel initrd-2.6.18-194.el5". I have run "ls -l /lib/modules/2.6.18-194.el5/" and there seem to be module files and folders. Do you know what I might be doing wrong?
Thanks for any help you can provide,
Paul
I know you have to move your current initrd to initrd.bak first.
Just be sure that the last three characters there are "eee" "ell" "five" (el5) and not "eee" "fifteen" (e15)
(I've goofed up like that before.)
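The reason a one-character slip matters: mkinitrd resolves modules by looking for a directory under /lib/modules whose name exactly matches the version string you pass, so "e15" where "el5" should be, or dots where the hyphen should be, means no match at all. Here's a small self-contained demonstration of that exact-match lookup, using a throwaway temp directory rather than the real /lib/modules:

```shell
# Fake a modules tree in a temp dir and mimic the existence check that
# mkinitrd effectively performs on /lib/modules/<kernel-version>.
tmp=$(mktemp -d)
mkdir -p "$tmp/lib/modules/2.6.18-194.el5"

check() {
  if [ -d "$tmp/lib/modules/$1" ]; then
    echo "modules found for $1"
  else
    echo "No modules available for kernel $1"
  fi
}

check "2.6.18-194.el5"   # hyphen, lowercase "l": matches
check "2.6.18.194.el5"   # dots instead of a hyphen: no match
check "2.6.18-194.e15"   # digit "1" instead of letter "l": no match
```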
The root command is in my grub.conf and I managed to successfully run the mkinitrd command (my bad!).
When I restart the system without the rescue disk, I am getting a similar (but not exactly the same) error message. It still reports Volume group "VolGroup00" not found and mount: could not find filesystem '/dev/root'. However, it now says "Kernel panic - not syncing: Attempted to kill init!"
For some reason it is unable to find the /var folder even though this is definitely there. When I get the grub console up and manually attempt to boot, it is noticeable that when I type "kernel /vm[tab]" it finds the /vmlinuz... file, but when I continue to type "ro root=/dev[tab]" it does not find the /dev folder.
Does this give any clues? I am really out of ideas! I have managed to run the command "grub-install --recheck /dev/sda" (I was typing sda1 before) and this seems to work OK. Any ideas would be greatly appreciated right now!
I think it might be to do with my initrd not being created with the correct drivers for the hardware on the Sun server, so it is not picking up the hard drives correctly. Does anybody know how I can find out which drivers are needed for the initrd, and how I can specify the correct ones?
I have seen that the mkinitrd command looks in the /lib/modules/2.6.18-194.el5/kernel/drivers folder for the drivers. Can I assume the drivers in this folder are for a different architecture and need replacing with the correct architecture's drivers, or does mkinitrd just "pick" the drivers it needs for the architecture it thinks it is on? In that case, how do I point it to the correct drivers?
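One way to answer this yourself: on RHEL5-era systems the initrd is just a gzipped cpio archive, so you can list which driver modules actually got included and compare that against the storage controller the rescue CD sees. A sketch, using the image name from this thread:
Code:

```
# what storage hardware does the live/rescue system see?
lspci | grep -i -e raid -e scsi

# which kernel modules ended up inside the initrd?
zcat /boot/initrd-2.6.18-194.el5.img | cpio -it | grep '\.ko$'
```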
I have managed to fix this! I had to run the mkinitrd command with "--with aacraid" and now it boots up fine. Obviously it wasn't finding the RAID controller, because the original VM was created on a Windows system that didn't have one.
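For anyone landing here with the same panic: the fix above boils down to forcing the RAID-controller module into the initrd so the kernel can see the disks at boot. A sketch of the working command (RHEL5's mkinitrd also accepts the --with=<module> spelling; the version string is the one used earlier in this thread):
Code:

```
# rebuild the initrd with the aacraid driver forced in
mkinitrd -v -f --with=aacraid /boot/initrd-2.6.18-194.el5.img 2.6.18-194.el5
```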