Linux From Scratch
This Forum is for the discussion of LFS. LFS is a project that provides you with the steps necessary to build your own custom Linux system.
I agree about rust, but what do you use instead of GIMP? Some things can be done with ImageMagick, but I do quite a bit of graphics work, so I'd be interested in something to replace GIMP.
No, you misunderstood me. I used to use LFS as an alternative day-to-day system but for the last couple of years I've just used it as a rescue system for emergencies. No applications needed.
OK, it builds. Whether it will boot or not is another matter. It's a long time since I built a kernel and I may have lost the knack of it. The point is that I didn't see any weirdness along the way and I didn't have to carry out any unauthorised patching.
Mind you, I'm talking about basic LFS only. I'll have to put in some bits and pieces from the early chapters of BLFS, but I'm not building this system for daily use as I used to do in the good old days, only as an emergency rescue device.
I agree with @derguteweka that the pip/wheel/python stuff is an unpleasant nuisance. You copy those commands over and then you look at them and think, "wtf does all that mean??" The whole point of LFS used to be that once you'd done it, you understood exactly how a Linux system was put together. I don't think that's the case any more.
I can't boot it! Usually with a bad kernel, it boots and then panics. But this one won't boot at all. I select it from the elilo menu, the kernel appears to load and then it halts and buzzes at me: dum-dum-dum-daaaa. Over and over. I wonder what that means; I've never heard it before on this machine.
Some time when I have the energy, I'll try a hybrid boot with my Slackware kernel and initrd. Right now, I'm cheesed off.
Further info: Just found this in a Lenovo manual on POST beep symptoms:
Beep symptom: 3 short beeps followed by 1 long beep.
Beep meaning: Memory not detected.
Suggested actions: Investigate the memory subsystem. Ensure that any memory module(s) are properly seated in the connectors.
Now that is nonsense! Slackware and AntiX both use the memory normally. And in this case, the pattern isn't occurring at POST but after a kernel has loaded, so it must be a kernel config problem. This version of LFS sets a number of kernel config parameters that I never messed with before but afaik I followed those instructions religiously.
Last edited by hazel; 04-29-2024 at 04:38 AM.
Reason: Further info
Oh, the dreaded 'beeps' - that's not good. I always use a config from a previously built LFS that I know works on this machine, so there's not (much) drama on first boot. It's just 'make oldconfig', then answer (usually No) to any new config options/hardware drivers. Good idea to try the Slack kernel (I think I would try the 'huge' variant if they still do those).
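For anyone following along, that workflow is just the following (the old config's path here is an example; run it in the new kernel's source tree):

```shell
# Copy the known-good config from the previous build into the new tree
cp /boot/config-old-lfs .config

# Prompt only for options added since the old kernel was configured;
# pressing Enter accepts each default. 'make olddefconfig' instead
# takes every default without asking.
make oldconfig
```

The advantage over starting from 'make defconfig' is that every answer you gave for the old kernel carries over; only the genuinely new options need a decision.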
Quote:
I agree with @derguteweka that the pip/wheel/python stuff is an unpleasant nuisance. You copy those commands over and then you look at them and think, "wtf does all that mean??" The whole point of LFS used to be that once you'd done it, you understood exactly how a Linux system was put together. I don't think that's the case any more.
Ah, yes. I built several LFS systems, then the (now defunct) HLFS, followed by Kevux, before I promoted myself to Slackware. You could understand things then. But we have become accustomed to layers of sophistication and complication that the ordinary mortal just takes for granted - until they go belly up.
Twenty years ago, it was two hours of reading help and/or Google to build a kernel from scratch. Today, it would be three at least. I'd suggest doing it once, as a learning experience. Then when it goes belly up, copy any distro's config file (usually in /boot) and use that. It will have safe enough options.
Quote:
I always use a config from a previously built LFS that I know works on this machine, so there's not (much) drama on first boot. It's just 'make oldconfig', then answer (usually No) to any new config options/hardware drivers.
I used to do that on my old drive. I had two alternating LFS partitions, one for the current LFS and one for the previous one. Each new LFS went onto the old partition, with the current one as build host. But this new drive is much smaller, so I can only have one LFS partition at a time. I built this new LFS out of Slackware but, like a fool, I forgot to save the kernel config before I cleared the partition. I won't make that mistake again!
I have two irons in the fire now. I can do a hybrid boot with the Slackware kernel and initrd (they're all on the ESP, so it's just a matter of tweaking the elilo.conf file). But I'm also rebuilding the proper LFS kernel (6.7.4) using a doctored version of my Slackware config. We'll see how it goes.
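For reference, the hybrid stanza in elilo's config file would look something like this (the labels, file names and root device here are all illustrative, not my actual setup):

```
# Hypothetical extra stanza: boot the Slackware kernel and initrd
# against the LFS root partition (adjust names and device to taste)
image=vmlinuz-slackware
    label=slack-hybrid
    initrd=initrd-slackware.gz
    root=/dev/sda3
    read-only
```

Since elilo reads the kernel and initrd from the ESP itself, nothing on the root partition needs to change; selecting the label at the elilo menu is enough to test it.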
I always start with the currently running config:
Code:
gunzip < /proc/config.gz > /path/for/config
A quick look to see if anything major changed, then boot; if it fails, go back and try again. Usually the running config works fine, especially for minor kernel updates.
Hazel, I don't know if it matters, but I boot from GRUB with no initrd, just a fubar'ed system.
Quote:
A quick look to see if anything major changed, then boot; if it fails, go back and try again. Usually the running config works fine, especially for minor kernel updates.
I usually do that too. I've already explained why I couldn't do the same this time around.
Quote:
Hazel, I don't know if it matters, but I boot from GRUB with no initrd, just a fubar'ed system.
I don't think it makes any difference what bootloader you use. Elilo had already loaded the kernel when the machine halted and the buzzing started. I don't intend to use an initrd either. I never used one in LFS before. It's just a matter of getting a kernel that boots. Then perhaps you can give me some commands that failed for you and I'll try them out.
Nope! The same thing happens with this kernel (6.7.4) even if I build it using a modified version of the Slackware config. But I know that 6.1.42 works because that's the kernel that AntiX uses. So I shall have to do a bisection.
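Doing the bisection by hand means building a lot of kernels; with a git clone of the mainline tree, git bisect at least picks the versions for you. A sketch, with tags chosen from what's known so far (6.1.x good, 6.7.4 bad - adjust to taste):

```shell
# In a clone of the mainline kernel tree (tags are illustrative)
git bisect start
git bisect bad v6.7.4   # known bad: halts and buzzes
git bisect good v6.1    # known good: the 6.1.x line boots

# git now checks out a commit roughly halfway between the two.
# Build it, try to boot it, and report the result:
#   git bisect good     # it booted
#   git bisect bad      # it buzzed
# Repeat until git names the first bad commit, then clean up:
git bisect reset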
Slackware-current uses 6.6.29. I'll download the source tomorrow morning and build it with the same config file. Then we'll see.
My build of the 6.1.42 kernel boots normally. The boot didn't go all the way because of kernel misconfiguration problems, but that's something I know how to correct. Seemingly there's something about the 6.7 kernel that my machine just doesn't like.
Curious. It doesn't seem to be a misconfiguration but a system problem. Syslogd and klogd are not coming onstream when they should; in a normal boot, the visible output switches from kernel console to klogd output soon after the sound card is registered. In this LFS system, that doesn't happen. The output simply stops and there is no login prompt. Yet the keyboard works as if a console had been started. There are no "Kernel panic: unable to find init" messages, so init must be running. That means that at least the root partition must have loaded normally; I don't know about the dynamic ones. And if I reboot with ctrl-alt-del, I can see udevd and gpm closing down, so they must have been started by their respective startup scripts. It's just that I can't see them or get in to check them.
This is a quite separate problem from the complete boot failure, which occurs for all kernels later than 6.3.6 and has to be the fault of my hardware.
It's all Intel inside. In any case, the initial output from the kernel wouldn't be readable if the video driver had failed. And we're talking basic svga here, not graphics.
But I've been going over the book and have noticed something. istr that in the earlier lfs versions, the /dev directory had a couple of static nodes put into it when it was created. I can't recall what they were, but they were apparently necessary to allow the creation of all the dynamic nodes at boot. And I can't find them in this version of the book. I wonder if they are still needed. Probably not, because my AntiX partition doesn't have them either.
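If I remember rightly, those nodes were /dev/console and /dev/null, and the older books created them with something like this (from memory, so treat as approximate; $LFS is the new system's mount point, and mknod needs root):

```shell
# Initial device nodes as created in older LFS books
mknod -m 600 $LFS/dev/console c 5 1
mknod -m 666 $LFS/dev/null    c 1 3
```

They shouldn't be needed nowadays if CONFIG_DEVTMPFS and CONFIG_DEVTMPFS_MOUNT are set, since the kernel then populates /dev itself before init runs - which would explain both the book dropping them and AntiX not having them.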
Framebuffer? That's the usual suspect when the system boots but there's no terminal output…
Code:
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
Ah! I think I may have found it:
Code:
CONFIG_FB_SVGALIB=m
CONFIG_FB_VGA16=m
As I'm not using an initrd image, I need to compile these in, don't I? Otherwise there won't be a framebuffer to put the console in.
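One way to flip those from module to built-in without re-running menuconfig is a quick sed on .config. Sketched below on a two-line stand-in for the real file so the edit can be shown end to end; in the actual source tree you'd skip the printf, run the sed against the real .config, and then run 'make olddefconfig' to re-resolve any dependent options:

```shell
# Two-line stand-in for the real .config, just for illustration
printf 'CONFIG_FB_SVGALIB=m\nCONFIG_FB_VGA16=m\n' > .config

# Flip both framebuffer drivers from module (=m) to built-in (=y)
sed -i -e 's/^CONFIG_FB_SVGALIB=m$/CONFIG_FB_SVGALIB=y/' \
       -e 's/^CONFIG_FB_VGA16=m$/CONFIG_FB_VGA16=y/' .config

cat .config
# CONFIG_FB_SVGALIB=y
# CONFIG_FB_VGA16=y
```

Built-in (=y) means the driver is inside the kernel image itself, so the console framebuffer is available at boot without any initrd to load modules from.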
Last edited by hazel; 05-08-2024 at 11:00 AM.
Reason: Added paragraph