Linux - Server
This forum is for the discussion of Linux software used in a server-related context.
I'm working with an external LaCie 4big RAID array connected to a RHEL4 server (via eSATA) that has one large VG with five 100 GB logical volumes inside (so far).
They're all formatted and mounted perfectly and I've copied my data onto them. The problem is that after a reboot, none of my logical volumes remains active. The 'lvdisplay' command shows their status as "not available". I can manually issue an "lvchange -a y /dev/<volgroup>" and they're back, but I need them to automatically come up with the server.
Shouldn't Logical Vols be persistently active by default?
How can I make them so?
Is it possible to place commands (I'm thinking the "lvchange -a y /dev/<volgroup>") into /etc/fstab so they'd be executed first, and then have the volumes mounted? That's a workaround, but I'd settle for it.
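fstab itself can't execute commands, but a late-boot hook such as /etc/rc.d/rc.local can. A rough sketch of that approach, keeping the "<volgroup>" placeholder from above (not a real VG name), might look like:

```shell
#!/bin/sh
# Appended to /etc/rc.d/rc.local (sketch): activate all logical
# volumes in the volume group, then mount whatever is listed in
# /etc/fstab that isn't mounted yet.
/sbin/vgchange -a y <volgroup>
mount -a
```

The matching fstab entries would likely need the `noauto` option so the initial boot-time mount pass doesn't fail before rc.local runs; the `mount -a` here would then need `-O` filtering or explicit mount commands for those entries.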
I'm guessing that the problem may be that the drives powered down and take too long to power up. Later when you tried manually they were spun up and you didn't have a problem. Look at the kernel boot messages for clues.
Hmmm ... I don't think that's quite right. If that were the problem, then wouldn't the symptom be drives not available for mounting in fstab during boot, but when I later got to a login prompt and checked by hand, I'd "mysteriously" find solid and happy, "LV STATUS: Available" logical volumes waiting for me? What I'm finding is that the drive/Volume Group is up, but the Logical Volumes inside the VG are inactive.
To more directly address your comment, I did look in /var/log/dmesg and I think these are the only relevant parts -
Code:
sata_sil 0000:00:08.0: version 2.0
ACPI: PCI Interrupt 0000:00:08.0[A] -> GSI 29 (level, low) -> IRQ 185
ata1: SATA max UDMA/100 cmd 0xF8830080 ctl 0xF883008A bmdma 0xF8830000 irq 185
ata2: SATA max UDMA/100 cmd 0xF88300C0 ctl 0xF88300CA bmdma 0xF8830008 irq 185
scsi2 : sata_sil
ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
ata1.00: ATA-6, max UDMA/133, 2930019504 sectors: LBA48
ata1.00: ata1: dev 0 multi count 1
ata1.00: configured for UDMA/100
scsi3 : sata_sil
ata2: SATA link down (SStatus 0 SControl 310)
Vendor: ATA Model: LaCie 4big Qua Rev: 0
Type: Direct-Access ANSI SCSI revision: 05
SCSI device sdg: 2930019504 512-byte hdwr sectors (1500170 MB)
SCSI device sdg: drive cache: write back
SCSI device sdg: 2930019504 512-byte hdwr sectors (1500170 MB)
SCSI device sdg: drive cache: write back
sdg: sdg1
Attached scsi disk sdg at scsi2, channel 0, id 0, lun 0
Attached scsi generic sg13 at scsi2, channel 0, id 0, lun 0, type 0
The SATA link (ata1) comes up fine, which is what I see onscreen in POST; the link shown as down is ata2, which looks like a second, unconnected port. Right after that, the array (sdg) and its one partition (sdg1) are seen as attached, and I'd think the array would need to be spun up for that.
LASTLY, an idea - if I activate the logical volumes by putting the "lvchange -a y <volgroup>" command into the boot process in the /etc/rc.d/rc.sysinit script, right before the part where it mounts all the other local (i.e. non-/) filesystems, would that work? I'm not accustomed to playing around in this area, but I can see that logical volume management is already set up earlier in this script, so this *should* work. This is a production server, so I can't take it down at a whim to experiment.
Still a workaround though, if so. The volumes should be persistently active!!! Grrr!
Ugh ugh ugh! Spinup/spindown does not seem to be the problem. The array has a setting where it will power up and power down as the server does and I did have that enabled. I turned it off and rebooted. No array power cycling. Same problem. No change.
Later I tried my idea of putting the "lvchange -a y <volgroup>" command into the boot process in the /etc/rc.d/rc.sysinit script, right before the part where it mounts all the other local (i.e. non-/) filesystems. Logical volume management does seem to be established in the OS before that section.
Well this not only did NOT work on reboot (logical volume not found), but now both the VG and LVs on the array seem to be GONE!! The partition is still there and I think only the metadata is somehow hosed. I've taken the array off of the production server and will experiment with it hooked to my desktop. Since the data on it is not in production yet, I'm not asking for help on recovery. I'll give it a quick whirl for practice, but the data is not yet important. I can reformat the whole thing cavalierly.
But I expect to be back where I started from once I get this back up on my other machine.
Anybody have an answer for why active Logical Volumes won't maintain their active status across reboots?
Last edited by Vanyel; 05-06-2009 at 10:54 PM.
Reason: typo
Hooked up to my desktop now. Whew. Only needed to do a vgscan and it was all there. Same problem, after reboot, active LogVols are inactive.
Might the problem be with the Volume Group and not the Logical Volumes?
Quote:
[van@mournblade ~]$ sudo vgdisplay
Password:
--- Volume group ---
VG Name mailserver_lacie
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 5
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.36 TB
PE Size 4.00 MB
Total PE 357667
Alloc PE / Size 128000 / 500.00 GB
Free PE / Size 229667 / 897.14 GB
VG UUID BOBUfI-fJos-QBzY-GuA2-PsKj-TPJY-grN3AU
==========================================================================================
[van@mournblade ~]$ sudo lvdisplay
--- Logical volume ---
LV Name /dev/mailserver_lacie/usr1
VG Name mailserver_lacie
LV UUID LOwVH9-3h6v-NGFo-pdPZ-C3K9-tGgQ-k2Q4Tw
LV Write Access read/write
LV Status NOT available
LV Size 100.00 GB
Current LE 25600
Segments 1
Allocation inherit
Read ahead sectors 0
--- Logical volume ---
LV Name /dev/mailserver_lacie/usr2
VG Name mailserver_lacie
LV UUID eMqOYy-fgtk-ZTNe-4e3I-3GKU-zKql-v0Q33u
LV Write Access read/write
LV Status NOT available
LV Size 100.00 GB
Current LE 25600
Segments 1
Allocation inherit
Read ahead sectors 0
--- Logical volume ---
LV Name /dev/mailserver_lacie/usr3
VG Name mailserver_lacie
LV UUID qxMDLz-AJQV-dGJP-YNKy-JXto-0CTH-qc9L2m
LV Write Access read/write
LV Status NOT available
LV Size 100.00 GB
Current LE 25600
Segments 1
Allocation inherit
Read ahead sectors 0
--- Logical volume ---
LV Name /dev/mailserver_lacie/usr4
VG Name mailserver_lacie
LV UUID GN33bv-avzz-JU3z-QIST-7Wa2-nOlp-xVPtyf
LV Write Access read/write
LV Status NOT available
LV Size 100.00 GB
Current LE 25600
Segments 1
Allocation inherit
Read ahead sectors 0
--- Logical volume ---
LV Name /dev/mailserver_lacie/usr5
VG Name mailserver_lacie
LV UUID CsSizM-ZLQB-pReC-M0yz-54Xd-qwXv-0h2gBE
LV Write Access read/write
LV Status NOT available
LV Size 100.00 GB
Current LE 25600
Segments 1
Allocation inherit
Read ahead sectors 0
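As a side note, with five LVs it gets tedious to scan the whole listing for status lines. A small awk filter over lvdisplay output (assuming the format shown above) can print only the inactive LV names:

```shell
# Print the names of logical volumes whose status is "NOT available",
# by remembering each "LV Name" line and printing it when the matching
# "LV Status" line says the volume is inactive.
lvdisplay | awk '/LV Name/ {name=$3} /LV Status/ && /NOT available/ {print name}'
```

Against the listing above this should print all five /dev/mailserver_lacie/usrN paths; after a successful `vgchange -a y` it should print nothing.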
After a LOT of reading and consulting a SysAdmin friend ... I still have NO IDEA what the problem is! But I've got a successful workaround so I'm happy and can go on. For anyone else who ever has the same problem ...
As near as I can tell, even though the eSATA card and the connected array are physically recognized during POST, the VG/LVs on the array aren't recognized by the OS that early. Why so late? No clue.
So I went back to my idea of manually inserting the lvchange command into the boot process (decided to use /sbin/vgchange -a y instead) and went about tracking down where.
What worked was inserting this into /etc/inittab as the next to last entry, just before X11 starts in runlevel 5 -
md:35:once:/sbin/vgchange -a y
SUCCESS!
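For anyone unfamiliar with the file, inittab entries follow the id:runlevels:action:process format; an annotated version of the entry above (the id "md" is just an arbitrary unique tag):

```text
# id:runlevels:action:process
# md   -> arbitrary unique identifier (up to four characters)
# 35   -> applies in runlevels 3 and 5
# once -> run once on entering the runlevel; don't respawn
md:35:once:/sbin/vgchange -a y
```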
Of course, on my production machine I'll have to chkconfig off sendmail and httpd, then replace the vgchange command with a custom shell script that runs it and starts both services manually, since the array will be hosting the volumes where they keep their data and this inittab entry executes after those services would normally start. But after everything else I've been through to get this far, that's trivial.
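A minimal sketch of such a wrapper script, under the assumptions in the post (the script path is hypothetical; the VG and service names are the ones mentioned above):

```shell
#!/bin/sh
# Hypothetical /usr/local/sbin/lacie-late-start.sh, run from inittab
# instead of the bare vgchange: activate the volume group, mount its
# filesystems, then start the services whose data lives on them.
/sbin/vgchange -a y mailserver_lacie
mount -a
/sbin/service sendmail start
/sbin/service httpd start
```

With this in place, sendmail and httpd would stay chkconfig'd off so they only ever start after their volumes are mounted.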
- Van
Seems possible. Thanks jschiwal. I have to get the project that the array is needed for underway, so I probably won't test that. But will keep it in mind.
Oh, and as a postscript to all this, I remade the array one more time and discovered that ONLY LVs on it fail to come up normally. A plain old fdisk'd ext3 partition on it can be put in fstab and mounts fine. So anyone having the same problem, consider whether you can do without logical volumes for a "quick and dirty" solution (I don't *need* them for my purpose, but I much prefer them).
I don't know if anybody ever answered this question, but I have found that in SUSE 11 there is an init script in the /etc/init.d directory called boot.lvm. I added it to the service database and set it to start at runlevels 2, 3 and 5. Upon reboot the Logical Volume Manager starts, runs the appropriate commands, and my 3.1 TB logical volume is immediately available.
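Registering that script would look roughly like this, assuming SUSE's chkconfig wrapper around insserv (command names only; this obviously needs root and the boot.lvm script present):

```shell
# Add boot.lvm to the service database, then enable it
# for runlevels 2, 3 and 5.
chkconfig --add boot.lvm
chkconfig boot.lvm 235
```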
I have not tried this on RedHat and other Linux variants.