CentOS: This forum is for the discussion of CentOS Linux. Note: This forum does not have any official participation.
Oh, I see; the drive we've been dealing with all this time is actually the 8 GB USB stick you've been using to boot the system. And of course this is using EFI.
No wonder the LVM subsystem found no volumes; all this time we've been barking up the wrong tree. No worries, we'll just have to take a step back and try again.
The first question is: Where are your drives? It seems really odd that a CentOS-based rescue system would be unable to detect an AHCI disk controller.
See what the kernel log has to say about disk controllers and drives, and post the results: dmesg | grep "ata\|scsi"
Every supported disk controller is registered as a SCSI controller, and there will be a message saying something like "scsi host0: Fusion MPT SAS Host" (the last part will identify your actual hardware). Then there will be an "ataX.YY" entry for every detected SATA drive, something like this: "ata2.00: ATA-8: HGST HTS721010A9E630, JB0OA3J0, max UDMA/133".
Also post the output from lspci as it will show the actual hardware present in your server. It should then be pretty straightforward to identify the disk controller and figure out what we have to do to load the requisite modules and have the drives show up.
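As a concrete sketch from the rescue shell (the grep patterns here are just reasonable guesses; widen them if your hardware uses different wording):

```shell
# Filter kernel messages for disk controller and drive detection lines.
dmesg | grep -iE 'ata|scsi|ahci'

# List PCI devices and pick out anything storage-related; the controller
# should appear here even if no driver has claimed it yet.
lspci | grep -iE 'sata|ide|raid|storage'
```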
Remember, it does say that I have no Linux partitions right before I get into the shell. It reports "(nothing to fix)".
dmesg:
Firmware bug TSC_Deadline disabled...
Memory...
Write protecting the kernel read-only data: 12288k
iscsi: registered transport (tcp)
scsi host0: usb-storage 3-1:1.0
libata version 3.00 loaded
scsi 0:0:0:0: Direct-Access A-Data USB Flash Drive 0.00 PQ: 0 ANSI: 2
sd 0:0:0:0: [sda] Asking for cache data failed
EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
sd 0:0:0:0: Attached scsi generic sg0 type 0
Maybe this takes us back to the UEFI settings? Seems nothing is visible at all. I have two SSDs in there; one has the OS on it, the other is a storage unit that's linked to /home/backups as a share point.
The internal drive isn't showing up. Either the drive died or a setting in the BIOS is wrong. Do you have an option in the BIOS settings to restore defaults? If so, I recommend trying that.
OK I'm back into the UEFI (what I call BIOS) now. I loaded UEFI defaults and restarted.
OK, now we're making progress. I can chroot /mnt/sysimage now. Done.
Actually it doesn't, because the UEFI/legacy setting doesn't affect how the disk controller is being presented to the system via the PCIe bus. (The "IDE / AHCI" setting discussed earlier does have an effect on this, though.)
Quote:
Originally Posted by BeeRich
Seems nothing is visible at all. I have two SSD's in there, one has the OS on it, the other is a storage unit that's linked to /home/backups as a share point.
And the controller these drives are attached to should be visible somewhere.
What about the output from lspci?
Sorry, resetting the UEFI defaults brought back the drives.
Generating grub conf file...
Script `/boot/efi/EFI/centos/grub.cfg.new' contains no commands and will do nothing
Syntax errors are detected in generated GRUB config file
Ensure that there are no errors in /etc/default/grub
and /etc/grub.d/* files or please file a bug report...
I had a look at a previous grub file and it was long, not some tiny .conf setup. Now this install is on the rescue image, correct?
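For reference, this is roughly how the EFI grub config gets regenerated on CentOS 7 (paths assume the stock CentOS UEFI layout). The "contains no commands" error usually means the generator scripts in /etc/grub.d/ produced no output, commonly because they lost their executable bit or /etc/default/grub has a syntax error:

```shell
# Run inside the chroot; paths assume a standard CentOS 7 UEFI install.
# Any numbered generator script that is NOT executable gets skipped by
# grub2-mkconfig, which can yield an empty grub.cfg:
find /etc/grub.d -maxdepth 1 -name '[0-9]*' ! -perm -u+x

# /etc/default/grub is plain shell; sourcing it in a subshell is a
# quick syntax check:
( set -e; . /etc/default/grub ) && echo "defaults parse OK"

# Then regenerate the config:
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
```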
I have to jump out to lunch for 30 min or so. Back at it after.
Without the USB plugged in, when you first boot, what key do you press to get the option to select the USB? Is there an option to select an EFI file to boot from?
Nothing. I always have that USB drive in for the rescue OS. Otherwise it says "Reboot and Select proper Boot device or Insert Boot Media in selected Boot device and press a key".
Actually, there are two grub files in that directory from a year ago. Can I reuse one of those?
Possibly, as long as the kernel/initrd files they point to are still present on the system.
And if none of them work, one of them can surely be edited to match whatever currently exists in the /boot directory. After all, at one point, probably prior to an update, they did work with your system.
Keep in mind, though, that these files were autogenerated, and as such are bound to contain tons of superfluous fluff, even though a working grub.cfg only needs to contain less than ten lines of text per kernel image.
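A quick way to see whether an old config still points at existing kernels might look like this (the config file name below is a placeholder; substitute whichever old file you're testing):

```shell
# Pull the kernel/initramfs paths out of an old grub config and check
# each one against /boot. On a typical setup with a separate /boot
# partition, the paths in linuxefi/initrdefi lines are relative to the
# filesystem holding /boot.
cfg=/boot/efi/EFI/centos/grub.cfg.rpmsave   # placeholder name
grep -oE '/(vmlinuz|initramfs)[^ ]*' "$cfg" | sort -u | while read -r f; do
    if [ -e "/boot$f" ]; then
        echo "present: $f"
    else
        echo "MISSING: $f"
    fi
done
```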
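For illustration, a hand-written grub.cfg for a single kernel really can be that small. Every value below is a placeholder; substitute your real filesystem UUID, root device, and kernel version:

```
# Minimal hypothetical grub.cfg for one kernel on a UEFI CentOS system.
set default=0
set timeout=5

menuentry 'CentOS Linux' {
    # UUID of the filesystem holding the kernel (placeholder value):
    search --no-floppy --fs-uuid --set=root 1234-ABCD
    linuxefi /vmlinuz-3.10.0-1160.el7.x86_64 root=/dev/mapper/centos-root ro
    initrdefi /initramfs-3.10.0-1160.el7.x86_64.img
}
```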
OK, that didn't work. Replaced it with the latest .rpmsave file. You say one can be generated to match whatever is in /boot. Not sure what you mean by that.