@chrisretusn
Thanks for all the useful suggestions. I took some time to edit my SlackBuild to implement some of them:
re. the /dev bind mount:
In my original testing I didn't seem to need the /dev mount, so I went with the minimum requirements for my setup. It's still a good idea, so I added it in. The /dev/null catch makes it necessary anyway.
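For anyone following along, the mount setup I'm describing looks roughly like this. This is only an illustrative sketch, not the exact code from the SlackBuild; the function name and default path are mine:

```shell
# Hypothetical sketch of the chroot mount setup.  /dev is bind-mounted so
# the .run installer can write to /dev/null inside the chroot.  The CHROOT
# default below is illustrative, not necessarily what the SlackBuild uses.
setup_chroot_mounts() {
  CHROOT=${CHROOT:-/tmp/SBo/chroot-nvidia-drivers}
  mount --bind /dev  "$CHROOT/dev"
  mount --bind /proc "$CHROOT/proc"
  mount --bind /sys  "$CHROOT/sys"
}
```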
re. the /home mount:
The build directories I used were based on the standard SBo ones. I didn't want to change those so that builds are kept in a common place with all the other SlackBuilds I use. You can certainly use other build directories, as your method demonstrates.
re. the strip and permissions:
Thanks for the input here. I had the strip in there from the template I used and didn't actually check whether it was needed. I checked as you suggested, saw that it's not needed, and took it out.
I also added the standard permissions "find and chmod" fix so that all those 444 files are made 644. Thanks for the pointer.
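For reference, the "find and chmod" fix I mean is the usual SlackBuilds.org permissions block. Here's a self-contained sketch run against a scratch directory instead of a real $PKG, just to show the effect on a 444 file:

```shell
# Demonstrate the standard SlackBuilds.org permission normalization on a
# throwaway directory: 444 files become 644 (and odd exec perms like 555
# would become 755).  In a real SlackBuild this runs inside $PKG.
PKG=$(mktemp -d)
touch "$PKG/README"
chmod 444 "$PKG/README"
cd "$PKG"
find -L . \
 \( -perm 777 -o -perm 775 -o -perm 750 -o -perm 711 -o -perm 555 \
  -o -perm 511 \) -exec chmod 755 {} \; -o \
 \( -perm 666 -o -perm 664 -o -perm 640 -o -perm 600 -o -perm 444 \
  -o -perm 440 -o -perm 400 \) -exec chmod 644 {} \;
```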
re. the nvidia_drm modprobe -r:
The nvidia_drm kernel module gets loaded by the installer during the .run script's installation process. If I do not unload nvidia_drm then the umount /sys step fails for me with:
Code:
umount: /tmp/SBo/chroot-nvidia-drivers/sys: target is busy.
This happens every time if I do not unload the nvidia_drm driver before trying to umount /sys. The mountpoint then gets stuck, along with the parent directory of the chroot mount.
If I unload the nvidia_drm module from within the chroot first, then the umount cleanup is fine. Taking it out would put me back to umount failing so I'll leave it in. I did streamline the code around it a little to utilize the _cleanup function better.
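In sketch form, the ordering that works for me looks like this (names are illustrative, not the exact code from the SlackBuild):

```shell
# Illustrative cleanup ordering: unload nvidia_drm from inside the chroot
# first, otherwise "umount $CHROOT/sys" fails with "target is busy".
_cleanup() {
  chroot "$CHROOT" /sbin/modprobe -r nvidia_drm 2>/dev/null || true
  umount "$CHROOT/sys" "$CHROOT/proc" "$CHROOT/dev"
}
```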
re. Detecting kernels in /boot:
I use elilo and copy the kernels I am using to the EFI partition. I don't use the symlinks; I just call the kernels by name in my elilo.conf instead. In my EFI partition I usually keep a copy of the latest kernel, the previous version as a backup from the last kernel upgrade, and the original kernel from the initial install. Checking for links doesn't quite work in this case.
I do like the idea of looking at /boot for the kernels though. What if I did something like:
Code:
KERNEL_LIST="${KERNEL_LIST:-$(ls /boot/vmlinuz-* | grep -o '[0-9]\+\.[0-9]\+\.[0-9]\+' | sort -Vu)}"
This would also catch -huge kernels. I haven't used a -huge kernel in years, but I would guess nvidia modules would need to be built for those as well?
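To sanity-check the extraction, here it is run over sample filenames instead of a real /boot (the sample names are mine; `\+` and `sort -Vu` keep the match strict and version-ordered):

```shell
# Run the version-extraction pipeline on mock kernel names: both -generic
# and -huge kernels yield their version string, and duplicates collapse.
versions=$(printf '%s\n' vmlinuz-generic-5.15.80 vmlinuz-huge-5.15.80 \
    vmlinuz-generic-5.15.19 \
  | grep -o '[0-9]\+\.[0-9]\+\.[0-9]\+' | sort -Vu)
echo "$versions"    # prints 5.15.19 and 5.15.80, one per line
```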
I also added the ':-' parameter expansion so that other kernel versions can be supplied on the command line. For example, the following would build only for the running kernel (in case someone wants just that rather than modules for all installed kernels):
Code:
KERNEL_LIST=`uname -r` ./nvidia-drivers.SlackBuild
Or a defined list of kernels can be passed:
Code:
KERNEL_LIST="5.15.19 5.15.80" ./nvidia-drivers.SlackBuild
Or any other custom expression to generate a kernel list could be used.
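The override behavior of the ':-' expansion can be seen in isolation; in this sketch, list_kernels is just a stand-in I made up for the /boot scan:

```shell
# ${KERNEL_LIST:-...} keeps an environment-supplied value and only falls
# back to the generated default when the variable is unset or empty.
list_kernels() { echo "5.15.19 5.15.80"; }   # stand-in for scanning /boot

unset KERNEL_LIST
KERNEL_LIST="${KERNEL_LIST:-$(list_kernels)}"
default_result=$KERNEL_LIST                  # "5.15.19 5.15.80"

KERNEL_LIST="6.1.0"
KERNEL_LIST="${KERNEL_LIST:-$(list_kernels)}"
override_result=$KERNEL_LIST                 # "6.1.0" -- supplied value wins
```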
I uploaded nvidia-drivers.SlackBuild to my GitHub so the latest version I'm using lives there. I could keep posting it here like the last two iterations, but those posts become un-editable at some point. I'll leave the old ones up for reference; going forward, I'll just keep the working copy here:
https://github.com/0xBOBF/slackbuild...ers.SlackBuild
Thanks for the useful testing and pointers.