Slackware: This forum is for the discussion of Slackware Linux.
Code:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme1n1 259:0 0 7.3T 0 disk
...
nvme2n1 259:2 0 7.3T 0 disk
...
nvme0n1 259:3 0 1.8T 0 disk
...
nvme3n1 259:10 0 7T 0 disk
...
Code:
# cat /etc/slackware-version
Slackware 15.0
Edit: I read that as "LVM and/or LUKS" but I think you meant "LUKS on top of LVM". In that case, no. I don't use LVM. Only LUKS.
Correct: LUKS on top of LVM.
I'm attempting to locate where my system hangs during boot, and to eliminate potential causes.
Thanks.
The info you, @rogan, and @ctrlaltca have supplied indicates it doesn't seem to be the LVM or LUKS systems.
I am directing current efforts toward the video device or module as a possible culprit.
My previous post describes the challenge: https://www.linuxquestions.org/quest...ml#post6470302
Thanks
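If the video driver theory pans out, one quick test is to boot with the suspect module blacklisted. A minimal sketch; nouveau is only a placeholder, and the config file name is hypothetical:

```shell
# One-off test: at the boot prompt, append to the kernel command line:
#   modprobe.blacklist=nouveau
# Persistent alternative via modprobe configuration:
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist-video.conf
```

If the hang disappears with the module blacklisted, that narrows the diff between 6.1.66 and the newer kernels to that driver.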
Last edited by linux91; 12-18-2023 at 01:16 PM.
Reason: Trimmed quoted text to streamline reply.
Yes, LVM over LUKS is working fine here.
I would post my /proc/version and lsblk output, but Cloudflare won't let me.
This is the third time I've tried to post this comment. Fsck Cloudflare.
This is very helpful, thanks.
My current take is that the hangup occurs when executing the command:
Code:
/sbin/udevadm trigger --type=devices --action=add
either during startup or when run directly at the CLI.
This is a very broad swag, but it's where I'm starting.
I configured the system to dual-boot kernel-6.1.66 (the last working kernel) and the latest kernel. This lets me test new kernels to see whether the issue is addressed upstream; in the meantime I will continue to compare the later kernels with kernel-6.1.66, e.g. diffing the output of lsmod, lsdev, udevadm monitor, etc. for each kernel, and running the udevadm trigger command above.
I will also diff the kernel config files to see if something was enabled or disabled.
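A sketch of that comparison workflow, assuming Slackware's usual config location under /boot (adjust the file names to your installed kernels):

```shell
# Show only the options that differ between the last working config and
# the new one (scripts/diffconfig in the kernel source gives terser output).
diff <(sort /boot/config-generic-6.1.66) <(sort /boot/config-generic-6.6.7) | less

# Snapshot the loaded-module list under each kernel, then diff the files.
lsmod | awk 'NR > 1 { print $1 }' | sort > /root/modules-$(uname -r).txt
```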
Last edited by linux91; 12-18-2023 at 01:34 PM.
Reason: Clarification
Quote:
Originally Posted by linux91
My current take is that the hangup occurs when executing the command:
/sbin/udevadm trigger --type=devices --action=add
either during startup or when run directly at the CLI.
This is a very broad swag, but it's where I'm starting.
I configured the system to dual-boot kernel-6.1.66 (the last working kernel) and the latest kernel. This lets me test new kernels to see whether the issue is addressed upstream; in the meantime I will continue to compare the later kernels with kernel-6.1.66, e.g. diffing the output of lsmod, lsdev, udevadm monitor, etc. for each kernel, and running the udevadm trigger command above.
I will also diff the kernel config files to see if something was enabled or disabled.
You could try adding debug echo statements to the startup scripts to help narrow down the problem area. I did that at work while trying to pin down a problem with Ubuntu not booting. Found the line that failed but still couldn't solve the problem...
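A sketch of the technique: define a marker helper, then bracket each suspect line in /etc/rc.d/rc.S with it; the last marker printed before the hang points at the failing command. boot_mark is a hypothetical name, not part of Slackware's rc scripts:

```shell
# Print a marker to the console; the last one seen before the hang
# identifies the command that never returned.
boot_mark() {
    echo "rc.S marker: $1"
}

boot_mark "before udev trigger"
# /sbin/udevadm trigger --type=devices --action=add   # the suspect command
boot_mark "after udev trigger"
```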
Distribution: VM Host: Slackware-current, VM Guests: Artix, Venom, antiX, Gentoo, FreeBSD, OpenBSD, OpenIndiana
Quote:
Originally Posted by marav
Linux 6.7 Introduces "make hardening.config" To Help Build A Hardened Kernel
Code:
The hardening updates for the Linux 6.7 kernel bring a new hardening configuration
profile to help in building a security hardened kernel with some sane defaults.
https://github.com/torvalds/linux/bl...rdening.config
Of course, run it on top of your existing .config.
It's pretty much safe for everybody. I have been running a hardened kernel for a long time (these options are mostly in 5.x and all in 6.x). These settings are really reasonable hardening options.
Recently I compiled a Slackware generic 6.6.3 kernel: ~4850 modules... so compiling may not be very enticing. But it really is worth keeping the system as safe as possible.
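For reference, merging the profile on top of an existing .config in a 6.7 source tree looks roughly like this; the paths are assumptions (Slackware's shipped configs live under /boot, names vary):

```shell
cd /usr/src/linux-6.7
cp /boot/config-generic-6.6.7 .config   # start from your current config
make hardening.config                   # merge kernel/configs/hardening.config
make olddefconfig                       # resolve any newly exposed options
```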
Quote:
You could try adding debug echo statements to the startup scripts to help narrow down the problem area. I did that at work while trying to pin down a problem with Ubuntu not booting. Found the line that failed but still couldn't solve the problem...
I'll look into that, thanks.
Apparently many things have changed in the kernel-6.6.7 config since kernel-6.1.66. This looks like it will take some time to figure out.
I have encountered a kernel bug on 6.6.7 possibly related to yours.
It manifests as commands taking "forever" to complete, for instance
depmod on a newly created /lib/modules/5.15.143 took about ten minutes,
and btrfs send had to be stopped, as I gave up waiting.
All works normally with a similarly configured 5.15.143.
I suspect some interaction with atomic file read/write and the all-new
scheduler they introduced, but there may be other reasons, who knows...
I'll try to nail this down.
Quote:
I suspect some interaction with atomic file read/write and the all-new
scheduler they introduced, but there may be other reasons, who knows...
What scheduler do you mean? I have recently noticed intermittent delays of maybe ten seconds when running commands like 'ls' in a directory containing few files. I use the bfq I/O scheduler, as before (it's a HDD).
The scheduler I refer to is the new EEVDF scheduler, which replaced
the old CFS that distributes cpu time between processes.
Yes, I suspect this one, because commands sometimes just don't seem
to get any cpu time.
OK, my 10 sec pause reading a directory not in cache can't be it, because according to my notes I had seen it in 6.6.1, and then I tried 6.1.62 and saw it there too. The first time it appeared was somewhat earlier than that, but I don't remember when. I guess I should try 5.15.
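For what it's worth, the active I/O scheduler can be checked and switched per block device at runtime; sda here is an assumption, substitute your disk:

```shell
cat /sys/block/sda/queue/scheduler        # active scheduler shown in brackets
echo bfq > /sys/block/sda/queue/scheduler # switch at runtime (needs root)
```

Ruling the I/O scheduler in or out this way is cheaper than rebuilding a kernel, since the cpu scheduler (EEVDF) has no equivalent runtime switch.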
What about running top alongside to confirm cpu spikes?
You can always try the lqx or zen kernels with alternative cpu schedulers: https://github.com/zen-kernel/zen-kernel/releases. This would also confirm or exclude the cpu scheduler as the culprit.
I've tried quite a few different kernel configurations,
excluding options that I think might have anything to
do with our problems. So far no luck...
This bug is probably deep, whatever it is.
Random hangs from scanning (reading/writing) processes;
no btrfs send/receive, no matter what.
In the case of btrfs send/receive: after a normal start,
cpu usage drops to zero, as does io, then the process just
sits there doing nothing until killed.
Example: a 16G btrfs send from a hdd to an nvme which receives:
6.6.7:
send: cpu time about 3 sec user, 22 sec system; killed after 14 minutes.
receive: cpu time about 3 sec user, 53 sec system; likewise killed.
5.15.143:
send: cpu time about 5 sec user, 39 sec system; 4 min 40 sec real.
receive: cpu time about 5 sec user, 1 min 28 sec system; same run.
It's still early days for the 6.6 series. This is going to get fixed.
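The timings above can be gathered with something like the following; the subvolume paths are placeholders, and the source snapshot must be read-only for btrfs send:

```shell
# Time a send/receive from a hdd filesystem to an nvme filesystem. On an
# affected 6.6.x kernel the pipeline stalls with near-zero cpu and io;
# on 5.15.x it completes in a few minutes.
time sh -c 'btrfs send /mnt/hdd/@snap-ro | btrfs receive /mnt/nvme/backups/'
```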
Aeterna: I might try the zen kernel when I get the time. Thanks for the tip.
Didier: I usually use iostat, top, and time, because that's simple and often enough.