Linux From Scratch
This Forum is for the discussion of LFS.
LFS is a project that provides you with the steps necessary to build your own custom Linux system.
I'm thinking about getting finish scripts made soon for my next testing phase. I'm going to take some time to rebuild against the latest LFS dev release.
I think that although finish scripts are not strictly necessary for every service, it is probably a good idea to supply one anyway, since it shows people how to use them. After all, this is about learning, not just having something the way we want it. And of course, if you have to take down a service manually for whatever reason (an update, new configs, etc.), it's better to shut the service down cleanly rather than force-quit it.
P.S. Sorry I am not very active on the forum at the moment. I definitely haven't lost interest in this project, but I am trying to do a load of other stuff as well.
I'm beginning to think we may need finish scripts for any and all services just to make sure everything is shut down, because other Runit implementations use SysVinit as a backbone in some way, while we're doing pure Runit without SysVinit.
Because of this, we're using a completely different implementation from the bookwork we've all read up on. Everything we've read points back to using SysV's rc script in /etc/init.d to initiate the startup of the bootscripts, plus halt, reboot, and sendsignal scripts that trigger the SysVinit toolkit for their respective commands. Those scripts all depend on sysvinit's triggers for the startup, reboot, and shutdown sequences. But our implementation is different, so we're more than likely going to need a full set of stage 2 startup and shutdown scripts that work without sysvinit triggers.
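For anyone following along, the basic shape of a supervised service under pure Runit is a run/finish pair like this (the service name, daemon, and paths here are made up for illustration):

```shell
### /etc/sv/example/run (illustrative) ###
#!/bin/sh
# The daemon must stay in the foreground so runsv can supervise it.
exec /usr/sbin/exampled -f

### /etc/sv/example/finish (illustrative) ###
#!/bin/sh
# runsv invokes this after the daemon exits (or on "sv down"); clean up
# pid files, sockets, etc. here so stage 3 can unmount everything cleanly.
rm -f /run/exampled.pid
```

These are init configuration fragments, so treat them as a template rather than something to run as-is.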
While we are using sysvinit bootscripts for stage 1 basic services, we may be lacking shutdown scripts to fully stop services in stage 2, which stage 3 needs in order to work correctly. I'm not surprised: in a chroot it's impossible to unmount any drive or virtual partition while anything on it is still executing. A mount can only be detached once all services are stopped and nothing is running from it.
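As a quick aside, when a umount refuses, something like this shows what's holding the mount busy (fuser comes from psmisc; the path is just an example):

```shell
# List processes still holding the mount, then try the unmount again;
# it only succeeds once those processes have exited.
fuser -mv /mnt/lfs/dev/pts
umount /mnt/lfs/dev/pts
```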
A bit of reworking for stage 1 might be in order on my part.
I'm planning to do a full re-copy of the LFS-Bootscripts to move all the bootscripts required for Runit-LFS into /etc/runit/init.d and strip out everything not needed for stage 1. That way, you won't even need to have LFS-Bootscripts, and hardly any of the BLFS-Bootscripts either, except those for dhcpcd and dhclient, which are more system trigger scripts than sysvinit bootscripts.
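A stage 1 driver for those relocated one-shot scripts could be as simple as this sketch (run_dir is a helper name I'm inventing here; the real /etc/runit/1 would do more):

```shell
#!/bin/sh
# Sketch of a stage-1 driver: run every executable one-shot script in a
# directory, in lexical (glob) order, passing "start" to each.
run_dir() {
    for script in "$1"/*; do
        [ -x "$script" ] && "$script" start
    done
    return 0
}

# /etc/runit/1 would then end with something like:
# run_dir /etc/runit/init.d
```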
Stage 2 run scripts will need finish scripts so that all services can be properly shut down during stage 3. That should allow stage 3 to unmount the drive(s) correctly. I was blind not to realize it until yesterday, when I exited a chroot while a service was still active and /dev/pts wouldn't unmount. I'd kick myself in the ass if my foot would reach for not realizing it sooner. And because we're forging ahead on a non-sysvinit toolkit where other implementations used a SysVinit backbone, we'll need those finish scripts.
Only one thing to do from here... Forge ahead. Onward and upward.
I decided that finish scripts are a good idea and made them for all my stage 2 services. But even without them, I'm not sure that SysV does much more to tidy up at shutdown time. From what Keith Hedger said, that issue with unmounting non-virtual filesystems doesn't sound like something related to using Runit.
SysV runs a PID kill switch in its scripts that kills all processes in the process tree regardless of what started them. That kills runsvdir, sv, and runsv, effectively shutting down stage 2.
It's a cheap and dirty hack in my opinion, because you're using two different init systems for one goal rather than a single init source and proper scripting techniques. Read the Arch Linux scripts. They're using sysvinit alongside Runit for startup and shutdown. I had to re-read them several times to see the command structures, but the commands they're using are sysvinit commands, not just Runit's.
That's why virtual mount points unmount so easily under Arch Linux's implementation. They lack many finish scripts, but they have sysvinit doing the work. It's even in the wiki that you have to have sysvinit installed to use their Runit implementation.
Basically it's like this...
Runit's stage 3 triggers a sysvinit script that hands part of the system shutdown to SysV to fire the PID kill-all. Once that kill has executed, Runit's stage 3 takes over again and runs the remainder of the shutdown process. So not only are they effectively running a hybrid init system, they're also skipping the proper scripting techniques for loading and unloading services with Runit alone.
To be certain, I stripped all the services without finish scripts out of the /service directory, and Runit triggered all the finish scripts; the virtual partition mount points all unmounted successfully with your script, Stoat.
The reason you might not have seen the error is that you created finish scripts for all your services. You actually did things properly for a full stage 2 shutdown sequence in parallel, so stage 3 worked as it was supposed to.
Now the real issue will be getting every service not just a run script but a matching finish script.
However, this again is yet another step forward in this process.
FYI, I decided to return iptables and alsa stuff to stage 1. First, they're not daemons. They're one-shot things that don't require monitoring. I was using the pause applet to hold them in the "running" status. But secondly, occasional weird things would happen regarding the firewall. Every now and then I would not be able to reach the Internet with my browser even though the hardware and Ethernet connection were good. Recycling the firewall made everything okay. Today when that happened, I checked iptables -L and not all of my rules had been loaded. It's been very intermittent (about three or four times). I don't know why, and I don't feel like investigating it right now. I'm making the pure guess that maybe the more initscript-style sequential launching method of stage 1 will make a difference. It will take a while to know.
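For reference, the pause workaround I had been using for those one-shots looked roughly like this (the rules path is illustrative):

```shell
#!/bin/sh
# A supervised "one-shot": load the firewall rules once, then exec the
# pause applet so runsv sees the service sitting in "run" status.
# Moving one-shots like this into stage 1 avoids the workaround entirely.
iptables-restore < /etc/iptables/rules.save
exec pause
```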
I'm not really sure I see any advantage to the Arch way of using Runit. I strongly want to go one way or the other. At the moment, I'm satisfied overall with this purely Runit system. If I had to incorporate a bunch of SysV processes to make Runit work, I would just go back to SysV.
There is no real advantage to the Arch way. It's a cheap, lazy, hackish use of Runit that doesn't exploit its full potential as a startup, management, and shutdown system for the daemons.
In fact, I'm wanting to possibly go a step further than originally mentioned. I'd like to see about eliminating all the start scripts and moving everything into Stage 1 and Stage 3 for startup and shutdown.
Runit shouldn't have to rely on sysvinit processes to work. Runit is Runit, not sysvinit. It can work alongside another init, but then you're not using its full potential. Then again, this shows the weakness of certain distributions not using tools the proper way and half-assing things to the point where they don't work as intended.
This is where I see so many distributions pulling the bail-out flag and migrating to systemd, because systemd is easy to implement.
Properly done, Runit can be a fast, reliable, and fully self-contained init system. But it's getting all the proper tools, scripts, and commands in the proper places and sequences that will make or break Runit, and Runit does not come with everything pre-written. It requires some elbow grease and ibuprofen to work correctly.
We're doing this the right way, Stoat, and our work has come a long way. We are being patient with the system, whereas others are being hasty and impatient.
I'm constantly trying things and making adjustments. I've decided to edit my collection of service scripts so that the seven that relate to both stage 1 and 3 accept start|stop arguments. I am calling those in stage 1 with start arguments and in stage 3 with stop arguments (instead of issuing the commands from the stage 3 script as I was doing). Everything is still working normally. The scripts are better "cosmetically" and for maintenance and adjustments (as Keith suggested earlier). Just FYI for where I am at the moment.
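Roughly, the pattern is this (svc_swap is a made-up name, and it only echoes here; a real script would run swapon -a / swapoff -a instead):

```shell
#!/bin/sh
# Sketch of the start|stop pattern shared by stage 1 and stage 3.
svc_swap() {
    case "$1" in
        start) echo "swap: start" ;;  # stage 1 calls the script with "start"
        stop)  echo "swap: stop" ;;   # stage 3 calls the script with "stop"
        *)     echo "usage: svc_swap {start|stop}" >&2; return 1 ;;
    esac
}
```

One file then covers both sequences, which keeps maintenance and adjustments in a single place.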
Good deal Stoat. I'm a day away from completing my next LFS incarnation so a full battery of testing will take place. Perhaps we can share our scripts when this effort is good and ready for deployment so we can see where we're at.
Runit is still working well for me. I have created a network script and moved its start and stop stuff to stage 1 and 3 (instead of supervised run/finish scripts). For me, it's better this way.
I have seen that umount failure message at shutdown about two times. So it's infrequent and sort of random. Or, at least I can't make a connection between that and something I was doing. I figure the umount command fails because a filesystem is/was busy, and maybe the SysV scripts handle that in some way that Runit does not.
Yes, that's one of the reasons I'm thinking we need to move all the startup and shutdown command sequences into stage 1 and 3 respectively for Runit.
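Something along these lines is what I have in mind for stage 3: stop supervision first, then retry the unmounts, since a filesystem can stay busy for a moment while the last processes exit (the timeout and retry count are illustrative):

```shell
#!/bin/sh
# Sketch of an /etc/runit/3 fragment.
# Bring every supervised service down; kill any that ignore the timeout.
sv -w 30 force-stop /service/*

# Retry the unmounts; -r remounts read-only anything that stays busy.
for try in 1 2 3; do
    umount -a -r && break
    sleep 1
done
```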
As of now, though, the system is usable and works well enough to be generally useful to anyone. All the work now should shift towards the s6 implementation. I'm hoping we can get the service script packages from Runit usable between the two init systems without too much of a problem. The logging code in the scripts might have to be redrafted to be if/else driven, but that'll give us some breathing room.
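A log/run script shared between the two could be as simple as picking whichever logger is installed (the log directory is illustrative; s6-log's "t" directive timestamps lines much as svlogd's -t does):

```shell
#!/bin/sh
# Init-agnostic log/run sketch: prefer s6-log if present, else svlogd.
if command -v s6-log >/dev/null 2>&1; then
    exec s6-log t /var/log/myservice
else
    exec svlogd -tt /var/log/myservice
fi
```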
Edit:
After some work, I'm going to try to clean up the scripts to operate with LFS-7.5 stable as a baseline model. I want to start moving everything critical to booting the system into stage 1 and not worry about the handling. One-shot items shouldn't need scripts to control them, but in some cases it might be necessary, like ALSA and a few others.
If everything builds properly, we'll finally have a drop-in-ready init solution for not just LFS, but any distribution.
Just FYI and for anybody interested, here are the Runit scripts I currently am using. Pointing out anything wrong with them, or anything that could be better, will be appreciated.
Everything continues to work well with these. But I sort of have to disclose that I run a very simple and lean desktop BLFS system that I use for Internet and everyday work such as finance software, word processing, spreadsheets, audio/video work, email, and so on. I use Fluxbox instead of a fancy DE, and I don't run things like sshd, mail servers, etc. Anyway, it's the only operating system that I possess nowadays.
For logging, I use a Runit log run script only for D-Bus. CUPS has its own logging arrangement, and all of the other stage 2 things are set up either to use syslogd's facilities or don't get logged at all.
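For the curious, that D-Bus log/run script is nothing fancy; something like this (the log directory is illustrative):

```shell
#!/bin/sh
# /etc/sv/dbus/log/run -- runsv pipes the service's stdout/stderr into
# this script; svlogd timestamps and rotates it in the named directory.
exec svlogd -tt /var/log/runit/dbus
```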
I still don't yet see a reason to turn back or to look at any other init methods.
Good deal, Stoat. I'm confident that once we get a full-featured script set devised, we can package it, along with the installer script and maybe even an auto-build script, in a single package and present it to Bruce and other distributions.
One thing I want to research is a way to remove everything in /etc/rc.d/init.d and have a setup similar to Slackware's: a SysV-compatible script such as /etc/sv/sysv-retro/run, and maybe /etc/sv/bsd-retro/run, that picks up and runs any external sysv-init or bsd-init scripts we may not have. Thoughts?
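What I'm picturing for sysv-retro is something like this sketch (hypothetical and untested):

```shell
#!/bin/sh
# Hypothetical /etc/sv/sysv-retro/run: start any leftover SysV-style
# scripts, then exec the pause applet so runsv sees the service as "run".
# A matching finish script would call the same scripts with "stop".
for script in /etc/rc.d/init.d/*; do
    [ -x "$script" ] && "$script" start
done
exec pause
```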
Edit: Stoat, I liked the idea of the stage 1 you have, and I might revert to this style, but we do need to figure out how to execute an emergency shell from it, just in case.
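For the emergency shell, maybe a fragment like this in stage 1 (emergency_shell is a name I'm inventing here; sulogin comes from sysvinit/util-linux):

```shell
#!/bin/sh
# Stage-1 fragment: if a critical step fails, offer an emergency shell
# instead of continuing into a half-booted system.
emergency_shell() {
    echo "Stage 1 failed at: $1 -- dropping to an emergency shell" >&2
    /sbin/sulogin || exec /bin/sh
}

mount -t proc proc /proc || emergency_shell "mounting /proc"
```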