Old 12-03-2022, 04:31 PM   #1
0XBF
Member
 
Registered: Nov 2018
Distribution: Slackware
Posts: 770

Rep: Reputation: 872
An alternative approach to packaging NVIDIA's *.run drivers.


I've been a longtime user of NVIDIA's .run script-installed drivers. The only complaint I have against it is that it installs itself outside of Slackware's existing package management system, leaving no way to track its files or use pkgtools to manage them.

I came up with a script that lets me properly package it while still using the standard .run script from NVIDIA. I'd like to offer it up to the Slackware community to critique, test, and tear apart, and to get general feedback from the script gurus around here.

I am aware of the existing nvidia driver build on slackbuilds.org. It takes a more traditional approach to building nvidia's software and is split across a few packages. I have nothing against that, but I prefer the simplicity of NVIDIA's supplied .run script, which is why I took this approach.

The script is based on the SlackBuild standard and has some boilerplate code, but then uses some unconventional build techniques. Some key highlights:
  • Running the script with no options will run NVIDIA's .run script with its UI "as-is", allowing you to go through its process and select what options you want. It should appear no different than the regular .run script usage.
  • The NVIDIA script is run chrooted in an overlay filesystem so that all its changes are "captured" in the upper directory of the overlay, rather than letting it install onto the real root filesystem.
  • After NVIDIA's script is run, I take down the overlay system and clean up the changes to conform to packaging standards like proper .new file handling and whatnot.
  • The script will also accept 'UI=none' as an option to prevent using NVIDIA's UI. This will automatically build the package with all options enabled (i.e. it will also include the 32-bit compatibility libs and xorg.conf.new).

The end result is a package that can be managed with installpkg/upgradepkg/etc. I just updated my nvidia-drivers to 525.60.11 this morning with this packaging method and it's all working as expected.
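For example, with the script's defaults the resulting package name works out as below (the same string its PRINT_PACKAGE_NAME branch prints), and from there it is handled with ordinary pkgtools commands:

```shell
#!/bin/sh
# Reconstruct the package filename the script's default variables produce:
PRGNAM=nvidia-drivers
VERSION=525.60.11
ARCH=x86_64
BUILD=1
TAG=_0XBF
PKGTYPE=tgz
PKGNAME="$PRGNAM-$VERSION-$ARCH-$BUILD$TAG.$PKGTYPE"
echo "$PKGNAME"
# The package lands in $OUTPUT (/tmp by default) and is managed as usual:
#   installpkg /tmp/nvidia-drivers-525.60.11-x86_64-1_0XBF.tgz
#   upgradepkg --reinstall /tmp/nvidia-drivers-525.60.11-x86_64-1_0XBF.tgz
#   removepkg nvidia-drivers
```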

I'll share the script and relevant .info below.

Edit Note: The latest version of this script will be here: https://github.com/0xBOBF/slackbuild...ers.SlackBuild The code below is the original version.
Code:
#!/bin/bash

# Slackware build script for nvidia-drivers

# Copyright 2022, Bob Funk, Winnipeg, Canada
# All rights reserved.
#
# Redistribution and use of this script, with or without modification, is
# permitted provided that the following conditions are met:
#
# 1. Redistributions of this script must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#
#  THIS SOFTWARE IS PROVIDED BY THE AUTHOR "AS IS" AND ANY EXPRESS OR IMPLIED
#  WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
#  MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO
#  EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
#  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
#  PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
#  OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
#  WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
#  OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
#  ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

# NOTE ABOUT THE BUILD METHOD:
# This build uses NVIDIA's .run script to install their drivers. Instead of 
# trying to redirect the script to install to a package build directory, it 
# installs the drivers in an overlay and uses that to construct a package.

cd $(dirname $0) ; CWD=$(pwd)

PRGNAM=nvidia-drivers
SRCNAM=NVIDIA-Linux-x86_64
VERSION=${VERSION:-525.60.11}
BUILD=${BUILD:-1}
TAG=${TAG:-_0XBF}
PKGTYPE=${PKGTYPE:-tgz}

# Sanity checks:
if [ ! -e "$CWD/${SRCNAM}-${VERSION}.run" ]; then
  echo "${SRCNAM}-${VERSION}.run is missing."
  echo "Please download a copy of ${SRCNAM}-${VERSION}.run from nvidia.com"
  exit 1
fi
if [ ! -z "$(who -r | grep -o 'run-level 4')" ]; then
  echo "You appear to be in run-level 4."
  echo "The NVIDIA installer can't be run while Xorg or Wayland are running."
  exit 1
elif pgrep Xorg 1> /dev/null; then
  echo "The NVIDIA installer can't be run while Xorg is running."
  exit 1
elif pgrep Xwayland 1> /dev/null; then
  echo "The NVIDIA installer can't be run while Xwayland is running."
  exit 1
fi

if [ -z "$ARCH" ]; then
  case "$( uname -m )" in
    i?86) ARCH=i586 ;;
    arm*) ARCH=arm ;;
       *) ARCH=$( uname -m ) ;;
  esac
fi

if [ ! -z "${PRINT_PACKAGE_NAME}" ]; then
  echo "$PRGNAM-$VERSION-$ARCH-$BUILD$TAG.$PKGTYPE"
  exit 0
fi

TMP=${TMP:-/tmp/SBo}
PKG=$TMP/package-$PRGNAM
OUTPUT=${OUTPUT:-/tmp}

if [ "$ARCH" = "x86_64" ]; then
  SLKCFLAGS="-O2 -fPIC"
  LIBDIRSUFFIX="64"
else
  echo "Arch type '$ARCH' is not supported."
  echo "Must be 'x86_64' to use this driver."
  exit 1
fi

# Allow package to be built with UI=none. This will automatically
# answer 'yes' to all questions, i.e. it also installs the 32-bit compat
# libs and runs the xorg configuration step:
if [ "$UI" = "none" ]; then
  NVOPTS="--no-questions --run-nvidia-xconfig --ui=none"
  echo "Building NVIDIA drivers without the UI"
  echo "Using options: $NVOPTS"
else
  echo "Building NVIDIA drivers with the UI"
fi

set -e

# Some directories for the overlay and chroot mount points:
WORKDIR=$TMP/workdir-$PRGNAM
CHROOT=$TMP/chroot-$PRGNAM
SRCDIR=$TMP/srcdir-$PRGNAM

# Initialize directories:
rm -rf $PKG $WORKDIR $CHROOT $SRCDIR
mkdir -p $TMP $PKG $OUTPUT $WORKDIR $CHROOT $SRCDIR
cd $TMP

# Copy the .run script into SRCDIR and set executable:
cp -a $CWD/${SRCNAM}-${VERSION}.run ${SRCDIR}
chmod +x ${SRCDIR}/${SRCNAM}-${VERSION}.run

# Set up the overlayfs and bind mounts:
mount -t overlay overlay -o lowerdir=/,upperdir=${PKG},workdir=${WORKDIR} ${CHROOT}
mount -o bind /proc ${CHROOT}/proc
mount -o bind /sys ${CHROOT}/sys

# Set trap to prevent leaving mounts if running the installer in chroot fails:
_cleanup () {
  if lsmod | grep -q nvidia_drm ; then
    chroot ${CHROOT} modprobe -r nvidia_drm
  fi
  umount ${CHROOT}{/proc,/sys,}
}
trap "_cleanup" EXIT

# Enter the chroot and run NVIDIA's script:
chroot ${CHROOT} ${SRCDIR}/${SRCNAM}-${VERSION}.run ${NVOPTS}

echo "Finished building NVIDIA drivers."
echo "Cleaning up. This may take a few seconds..."

# Unload nvidia_drm in the chroot so /sys bind can unmount:
chroot ${CHROOT} modprobe -r nvidia_drm

# Cleanup mounts and unset trap:
umount ${CHROOT}{/proc,/sys,}
rm -rf $WORKDIR $CHROOT $SRCDIR
trap "" EXIT

# Cleanup some unneeded bits:
rm -rf $PKG/etc/ld.so.cache $PKG/tmp $PKG/var

# Remove installer/uninstaller. Keeping these would defeat the purpose of packaging for pkgtools:
rm -f $PKG/usr/bin/nvidia-{uninstall,installer}

# Remove all 0 byte files (these are blanking files created by the overlayfs):
find ${PKG} -size 0 -exec rm {} +;

# Handle xorg.conf:
if [ -e "$PKG/etc/X11/xorg.conf" ]; then
  mv $PKG/etc/X11/{xorg.conf,xorg.conf.new}
  # Remove nvidia's backup. We will use new-config anyway:
  rm -f $PKG/etc/X11/xorg.conf.backup
fi

find $PKG -print0 | xargs -0 file | grep -e "executable" -e "shared object" | grep ELF \
  | cut -f 1 -d : | xargs strip --strip-unneeded 2> /dev/null || true

mkdir -p $PKG/usr/doc/$PRGNAM-$VERSION
cat $CWD/$PRGNAM.SlackBuild > $PKG/usr/doc/$PRGNAM-$VERSION/$PRGNAM.SlackBuild

# Move docs to proper location:
mv $PKG/usr/doc/NVIDIA* $PKG/usr/doc/$PRGNAM-$VERSION/

# Fix permissions on .desktop files:
chmod 0644 $PKG/usr/share/applications/*

mkdir -p $PKG/install
cat << EOF > $PKG/install/slack-desc
              |-----handy-ruler------------------------------------------------------|
nvidia-drivers: nvidia-drivers (NVIDIA Linux x86_64 Drivers)
nvidia-drivers:
nvidia-drivers: A packaging of NVIDIA's Linux drivers. This is for x86_64 only.
nvidia-drivers:
nvidia-drivers:
nvidia-drivers:
nvidia-drivers:
nvidia-drivers:
nvidia-drivers:
nvidia-drivers: https://www.nvidia.com/en-us/drivers/unix/
nvidia-drivers:
EOF
cat << EOF > $PKG/install/doinst.sh
config() {
  NEW="\$1"
  OLD="\$(dirname \$NEW)/\$(basename \$NEW .new)"
  # If there's no config file by that name, mv it over:
  if [ ! -r \$OLD ]; then
    mv \$NEW \$OLD
  elif [ "\$(cat \$OLD | md5sum)" = "\$(cat \$NEW | md5sum)" ]; then
    # toss the redundant copy
    rm \$NEW
  else
    # Otherwise, we leave the .new copy for the admin to consider...
    # And let's remind them too:
    echo
    echo "A new xorg.conf file is in this package."
    echo "Please process it by running 'slackpkg new-config'."
  fi
}

if [ -e etc/X11/xorg.conf.new ]; then
  config etc/X11/xorg.conf.new
fi

if [ -x /usr/bin/update-desktop-database ]; then
  /usr/bin/update-desktop-database -q usr/share/applications >/dev/null 2>&1
fi
echo 
echo "NVIDIA drivers have been (re)installed. Please reboot for changes to take effect!"
echo
EOF

cd $PKG
/sbin/makepkg -l y -c n $OUTPUT/$PRGNAM-$VERSION-$ARCH-$BUILD$TAG.$PKGTYPE
Code:
PRGNAM="nvidia-drivers"
VERSION="525.60.11"
HOMEPAGE="https://www.nvidia.com/en-us/drivers/unix/"
DOWNLOAD=""
MD5SUM=""
DOWNLOAD_x86_64="https://us.download.nvidia.com/XFree86/Linux-x86_64/525.60.11/NVIDIA-Linux-x86_64-525.60.11.run"
MD5SUM_x86_64="9c8e8d318555faa68eb3c0e014cbce56"
REQUIRES=""
MAINTAINER="Bob Funk"
EMAIL=""
Cheers,

Bob

Edit: This package will build fine with existing nvidia drivers installed. However, I would recommend uninstalling any existing nvidia drivers with their ordained method before installing the package built from this script.

I.e., if you have installed prior versions of the nvidia drivers directly to the root filesystem with the NVIDIA *.run script, use that same .run script to uninstall its own files first by passing it the --uninstall option. Then you can install the Slackware package this script built without worrying about overwrites or conflicts. Once you are on the packaged version of the drivers, future upgrades can be done by building a newer package with this same script and using "upgradepkg" to upgrade it on the system.
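A tiny sketch of a pre-install check along those lines (the helper name and the path parameter are my own, not part of the SlackBuild): since the packaged build deliberately strips /usr/bin/nvidia-uninstall, finding that binary is a reasonable hint that a raw .run install is still sitting on the root filesystem:

```shell
#!/bin/sh
# Hedged sketch: warn about a leftover .run-style install before installing
# the package. The path is a parameter purely so the logic is testable.
check_run_install() {
  uninstaller=${1:-/usr/bin/nvidia-uninstall}
  if [ -x "$uninstaller" ]; then
    echo "Found a .run-based install: run '$uninstaller' (the .run script's --uninstall) first."
  else
    echo "No .run-based install detected; installpkg/upgradepkg away."
  fi
}
check_run_install "$@"
```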

Last edited by 0XBF; 12-12-2022 at 05:16 PM.
 
Old 12-04-2022, 03:06 AM   #2
chrisretusn
Senior Member
 
Registered: Dec 2005
Location: Philippines
Distribution: Slackware64-current
Posts: 2,978

Rep: Reputation: 1556
Wow, I like this. Something I've kind of wanted to do for a while. Should have thought about using chroot, awesome idea. Thanks. Only thing I would do differently would be to have the script download the *.run file. I do this with all of my SlackBuilds. Nice touch on checking for the runlevel, I had not thought of that one.
 
2 members found this post helpful.
Old 12-04-2022, 05:04 AM   #3
kjhambrick
Senior Member
 
Registered: Jul 2005
Location: Round Rock, TX
Distribution: Slackware64 15.0 + Multilib
Posts: 2,159

Rep: Reputation: 1512
0XBF --

I REALLY like your NVidia.SlackBuild script !

Thank you.

Like chrisretusn, I've been thinking about doing something similar for years-n-years but I've never gotten around to it.

I too really like your chroot idea, especially with operations that mess with my /lib/modules/ directories !

As a general rule, I try to always use SlackBuilds, even for the 'simple stuff'.

But the NVidia.run files are an exception because they're so simple to install and they include the --uninstall feature.

And also because the NVidia.run approach was simpler for me than using the existing SBo SlackBuilds.

One Q before I start using your script ...

I try to build and install each 5.15.y Kernel on my Slackware64 15.0 Laptop as soon as a new Kernel is released.

Isn't it true that I would need to rerun the NVidia.SlackBuild for each new Kernel and then reinstall the Package to install Kernel Modules for the new Kernel ?

Something like:
Code:
# Boot runlevel 3 and Log in as root and then:
 
# NVidia-0XBF.SlackBuild                 # rebuild 0XBF's NVidia Package
# upgradepkg --reinstall /tmp/nvidia-drivers-525.60.11-x86_64-1_0XBF.tgz  
# reboot
If so, would it make sense to include the running Kernel Version in the Package Name ?

Maybe simply append something like "_$(uname -r)" to the NVidia VERSION ?
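A minimal sketch of that idea (a hypothetical tweak, not part of the posted SlackBuild): fold the running kernel release into VERSION. One wrinkle is that hyphens have to be translated, since pkgtools splits package names on '-' into name-version-arch-build.

```shell
#!/bin/sh
# Append the running kernel release to the driver version, with '-'
# mapped to '_' so the package name still parses cleanly:
KREL=$(uname -r | tr - _)
VERSION="525.60.11_${KREL}"
echo "$VERSION"
```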

Or maybe even get all fancy and apply the appropriate `Nvidia.run --advanced-options` to build for any installed Kernel ?

Then I wouldn't have to remember to apply the --reinstall flag to the upgradepkg command

My only other suggestion would be to maybe change the Package BaseName( nvidia-drivers ) so that it's different than the existing SBo nvidia-drivers SlackBuild but that's just because I am so easily confused

Thanks again 0XBF.

This is REALLY great work !

-- kjh

Last edited by kjhambrick; 12-04-2022 at 05:05 AM.
 
1 members found this post helpful.
Old 12-04-2022, 06:50 AM   #4
mlangdn
Senior Member
 
Registered: Mar 2005
Location: Kentucky
Distribution: Slackware64-current
Posts: 1,845

Rep: Reputation: 452
How could this be used for multiple kernels? I use a custom build, yet I keep the stock kernels in case I screw up, and I don't eliminate the custom kernel until after the new build is proofed. I'd hate to uninstall / install a lot. I like that -K -k option.
 
2 members found this post helpful.
Old 12-04-2022, 08:03 AM   #5
kjhambrick
Senior Member
 
Registered: Jul 2005
Location: Round Rock, TX
Distribution: Slackware64 15.0 + Multilib
Posts: 2,159

Rep: Reputation: 1512
Dang mlangdn !

Like you, I never execute upgradepkg on { kernel-generic, kernel-huge, kernel-modules, kernel-source }.

I execute installpkg instead so I've always got at least one working Kernel on my system.

I realize now after reading your post that with 0XBF's SlackBuild ...

If I upgradepkg, I will lose the NVidia Kernel Modules for all-but the Kernel linked to the NVidia.run File.

If I installpkg, I MIGHT end up with dupe /usr/bin/nvidia-* and /usr/lib{,64}/libnvidia-*.so files.

Thanks for making me think about it

-- kjh
 
2 members found this post helpful.
Old 12-04-2022, 09:56 AM   #6
0XBF
Member
 
Registered: Nov 2018
Distribution: Slackware
Posts: 770

Original Poster
Rep: Reputation: 872
Thanks for the feedback on multiple kernel versions. My method for kernel upgrades is always that I "installpkg" the new kernel and modules, reboot and test it first, then finish up by adding the kernel-source and building the nvidia drivers for it once it's known working. I guess I always just have one version that's working and don't expect to go back once I'm at the point where I am building nvidia drivers for it.

However, I made a change to the script to support building additional kernel modules to handle that use case. The idea is to build the drivers for the running kernel first, then find the remaining kernels and also build modules for those. This would allow a single package to support multiple kernel versions at once, requiring a single upgradepkg to cover all.

The modified script is below, with the differences from the first version highlighted. What do y'all think?

Edit: This does mean that the package is intended to be built on a specific machine, not built on one machine and distributed to many, since the installed kernels on each machine may differ.

Edit Note: The latest version of this script will be here: https://github.com/0xBOBF/slackbuild...ers.SlackBuild The code below is the original version from this post.
Code:
#!/bin/bash

# Slackware build script for nvidia-drivers

# Copyright 2022, Bob Funk, Winnipeg, Canada
# All rights reserved.
#
# Redistribution and use of this script, with or without modification, is
# permitted provided that the following conditions are met:
#
# 1. Redistributions of this script must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#
#  THIS SOFTWARE IS PROVIDED BY THE AUTHOR "AS IS" AND ANY EXPRESS OR IMPLIED
#  WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
#  MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO
#  EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
#  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
#  PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
#  OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
#  WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
#  OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
#  ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

# NOTE ABOUT THE BUILD METHOD:
# This build uses NVIDIA's .run script to install their drivers. Instead of 
# trying to redirect the script to install to a package build directory, it 
# installs the drivers in an overlay and uses that to construct a package.
#
# Edited Dec 4, 2022 - Support building modules for ALL present kernels.

cd $(dirname $0) ; CWD=$(pwd)

PRGNAM=nvidia-drivers
SRCNAM=NVIDIA-Linux-x86_64
VERSION=${VERSION:-525.60.11}
BUILD=${BUILD:-1}
TAG=${TAG:-_0XBF}
PKGTYPE=${PKGTYPE:-tgz}

# Sanity checks:
if [ ! -e "$CWD/${SRCNAM}-${VERSION}.run" ]; then
  echo "${SRCNAM}-${VERSION}.run is missing."
  echo "Please download a copy of ${SRCNAM}-${VERSION}.run from nvidia.com"
  exit 1
fi
if [ ! -z "$(who -r | grep -o 'run-level 4')" ]; then
  echo "You appear to be in run-level 4."
  echo "The NVIDIA installer can't be run while Xorg or Wayland are running."
  exit 1
elif pgrep Xorg 1> /dev/null; then
  echo "The NVIDIA installer can't be run while Xorg is running."
  exit 1
elif pgrep Xwayland 1> /dev/null; then
  echo "The NVIDIA installer can't be run while Xwayland is running."
  exit 1
fi

if [ -z "$ARCH" ]; then
  case "$( uname -m )" in
    i?86) ARCH=i586 ;;
    arm*) ARCH=arm ;;
       *) ARCH=$( uname -m ) ;;
  esac
fi

if [ ! -z "${PRINT_PACKAGE_NAME}" ]; then
  echo "$PRGNAM-$VERSION-$ARCH-$BUILD$TAG.$PKGTYPE"
  exit 0
fi

TMP=${TMP:-/tmp/SBo}
PKG=$TMP/package-$PRGNAM
OUTPUT=${OUTPUT:-/tmp}

if [ "$ARCH" = "x86_64" ]; then
  SLKCFLAGS="-O2 -fPIC"
  LIBDIRSUFFIX="64"
else
  echo "Arch type '$ARCH' is not supported."
  echo "Must be 'x86_64' to use this driver."
  exit 1
fi

# Allow package to be built with UI=none. This will automatically
# answer 'yes' to all questions, i.e. it also installs the 32-bit compat
# libs and runs the xorg configuration step:
if [ "$UI" = "none" ]; then
  NVOPTS="--no-questions --run-nvidia-xconfig --ui=none"
  echo "Building NVIDIA drivers without the UI"
  echo "Using options: $NVOPTS"
else
  echo "Building NVIDIA drivers with the UI"
fi

# Get a list of installed kernels so we can supply each with modules:
KERNEL_LIST="$(ls /lib/modules/ | grep '[0-9]*\.[0-9]*\.[0-9]*')"

set -e

# Some directories for the overlay and chroot mount points:
WORKDIR=$TMP/workdir-$PRGNAM
CHROOT=$TMP/chroot-$PRGNAM
SRCDIR=$TMP/srcdir-$PRGNAM

# Initialize directories:
rm -rf $PKG $WORKDIR $CHROOT $SRCDIR
mkdir -p $TMP $PKG $OUTPUT $WORKDIR $CHROOT $SRCDIR
cd $TMP

# Copy the .run script into SRCDIR and set executable:
cp -a $CWD/${SRCNAM}-${VERSION}.run ${SRCDIR}
chmod +x ${SRCDIR}/${SRCNAM}-${VERSION}.run

# Set up the overlayfs and bind mounts:
mount -t overlay overlay -o lowerdir=/,upperdir=${PKG},workdir=${WORKDIR} ${CHROOT}
mount -o bind /proc ${CHROOT}/proc
mount -o bind /sys ${CHROOT}/sys

# Set trap to prevent leaving mounts if running the installer in chroot fails:
_cleanup () {
  if lsmod | grep -q nvidia_drm ; then
    chroot ${CHROOT} modprobe -r nvidia_drm
  fi
  umount ${CHROOT}{/proc,/sys,}
}
trap "_cleanup" EXIT


# First build everything, including modules for the running kernel:
chroot ${CHROOT} ${SRCDIR}/${SRCNAM}-${VERSION}.run ${NVOPTS}

# Now build modules for each installed kernel:
for kernel in ${KERNEL_LIST}
do
  # Skip the running kernel, since we did that already:
  if [ "$(uname -r)" != "${kernel}" ]; then
    echo "Building kernel modules for version ${kernel}"
    # Do these without the UI, since it's all automatic anyway:
    chroot ${CHROOT} ${SRCDIR}/${SRCNAM}-${VERSION}.run -k ${kernel} -K --no-questions --ui=none
  fi
done

echo "Finished building NVIDIA drivers."
echo "Cleaning up. This may take a few seconds..."

# Unload nvidia_drm in the chroot so /sys bind can unmount:
chroot ${CHROOT} modprobe -r nvidia_drm

# Cleanup mounts and unset trap:
umount ${CHROOT}{/proc,/sys,}
rm -rf $WORKDIR $CHROOT $SRCDIR
trap "" EXIT

# Cleanup some unneeded bits:
rm -rf $PKG/etc/ld.so.cache $PKG/tmp $PKG/var

# Remove installer/uninstaller. Keeping these would defeat the purpose of packaging for pkgtools:
rm -f $PKG/usr/bin/nvidia-{uninstall,installer}

# Remove all 0 byte files (these are blanking files created by the overlayfs):
find ${PKG} -size 0 -exec rm {} +;

# Handle xorg.conf:
if [ -e "$PKG/etc/X11/xorg.conf" ]; then
  mv $PKG/etc/X11/{xorg.conf,xorg.conf.new}
  # Remove nvidia's backup. We will use new-config anyway:
  rm -f $PKG/etc/X11/xorg.conf.backup
fi

find $PKG -print0 | xargs -0 file | grep -e "executable" -e "shared object" | grep ELF \
  | cut -f 1 -d : | xargs strip --strip-unneeded 2> /dev/null || true

mkdir -p $PKG/usr/doc/$PRGNAM-$VERSION
cat $CWD/$PRGNAM.SlackBuild > $PKG/usr/doc/$PRGNAM-$VERSION/$PRGNAM.SlackBuild

# Move docs to proper location:
mv $PKG/usr/doc/NVIDIA* $PKG/usr/doc/$PRGNAM-$VERSION/

# Fix permissions on .desktop files:
chmod 0644 $PKG/usr/share/applications/*

mkdir -p $PKG/install
cat << EOF > $PKG/install/slack-desc
              |-----handy-ruler------------------------------------------------------|
nvidia-drivers: nvidia-drivers (NVIDIA Linux x86_64 Drivers)
nvidia-drivers:
nvidia-drivers: A packaging of NVIDIA's Linux drivers. This is for x86_64 only.
nvidia-drivers:
nvidia-drivers:
nvidia-drivers:
nvidia-drivers:
nvidia-drivers:
nvidia-drivers:
nvidia-drivers: https://www.nvidia.com/en-us/drivers/unix/
nvidia-drivers:
EOF
cat << EOF > $PKG/install/doinst.sh
config() {
  NEW="\$1"
  OLD="\$(dirname \$NEW)/\$(basename \$NEW .new)"
  # If there's no config file by that name, mv it over:
  if [ ! -r \$OLD ]; then
    mv \$NEW \$OLD
  elif [ "\$(cat \$OLD | md5sum)" = "\$(cat \$NEW | md5sum)" ]; then
    # toss the redundant copy
    rm \$NEW
  else
    # Otherwise, we leave the .new copy for the admin to consider...
    # And let's remind them too:
    echo
    echo "A new xorg.conf file is in this package."
    echo "Please process it by running 'slackpkg new-config'."
  fi
}

if [ -e etc/X11/xorg.conf.new ]; then
  config etc/X11/xorg.conf.new
fi

if [ -x /usr/bin/update-desktop-database ]; then
  /usr/bin/update-desktop-database -q usr/share/applications >/dev/null 2>&1
fi
echo 
echo "NVIDIA drivers have been (re)installed. Please reboot for changes to take effect!"
echo
EOF

cd $PKG
/sbin/makepkg -l y -c n $OUTPUT/$PRGNAM-$VERSION-$ARCH-$BUILD$TAG.$PKGTYPE
@chrisretusn: The "sanity check" for the *.run file would be a good place to add an automatic download, if you would like that as well.
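For what it's worth, a sketch of how that could look, reusing the URL layout already recorded in the .info file's DOWNLOAD_x86_64 entry (that layout is only verified for this version, so treat the wget line as an assumption for others):

```shell
#!/bin/sh
# Build the download URL the same way the .info file records it:
SRCNAM=NVIDIA-Linux-x86_64
VERSION=525.60.11
URL="https://us.download.nvidia.com/XFree86/Linux-x86_64/${VERSION}/${SRCNAM}-${VERSION}.run"
echo "$URL"
# In the SlackBuild's sanity check this might become something like:
#   [ -e "$CWD/${SRCNAM}-${VERSION}.run" ] || wget -P "$CWD" "$URL"
```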

Last edited by 0XBF; 12-12-2022 at 05:18 PM.
 
8 members found this post helpful.
Old 12-04-2022, 10:44 AM   #7
0XBF
Member
 
Registered: Nov 2018
Distribution: Slackware
Posts: 770

Original Poster
Rep: Reputation: 872
Quote:
Originally Posted by kjhambrick View Post
My only other suggestion would be to maybe change the Package BaseName( nvidia-drivers ) so that it's different than the existing SBo nvidia-drivers SlackBuild but that's just because I am so easily confused
It's subtle but I used "nvidia-drivers", while the SBo one is "nvidia-driver". I used "drivers", since it's an all-in-one package.
 
1 members found this post helpful.
Old 12-04-2022, 10:57 AM   #8
mlangdn
Senior Member
 
Registered: Mar 2005
Location: Kentucky
Distribution: Slackware64-current
Posts: 1,845

Rep: Reputation: 452
That was quick! Nice work!
 
2 members found this post helpful.
Old 12-04-2022, 11:10 AM   #9
kjhambrick
Senior Member
 
Registered: Jul 2005
Location: Round Rock, TX
Distribution: Slackware64 15.0 + Multilib
Posts: 2,159

Rep: Reputation: 1512
Quote:
Originally Posted by 0XBF View Post
It's subtle but I used "nvidia-drivers", while the SBo one is "nvidia-driver". I used "drivers", since it's an all-in-one package.
Oops, thanks 0XBF.

I missed it because I am not known for my subtlety and I often need a tap with a clue-stick

But I see the difference now that you've pointed it out for me.

And since you're Packaging the NVidia Kernel Modules for each Kernel Version Directory in /lib/modules, there is no need to associate the Package with any particular Kernel.

I'll definitely play with it.

Thanks again !

-- kjh
 
1 members found this post helpful.
Old 12-04-2022, 12:22 PM   #10
0XBF
Member
 
Registered: Nov 2018
Distribution: Slackware
Posts: 770

Original Poster
Rep: Reputation: 872
Quote:
Originally Posted by kjhambrick View Post
Oops, thanks 0XBF.

I missed it because I am not known for my subtlety and I often need a tap with a clue-stick

But I see the difference now that you've pointed it out for me.

And since you're Packaging the NVidia Kernel Modules for each Kernel Version Directory in /lib/modules, there is no need to associate the Package with any particular Kernel.

I'll definitely play with it.

Thanks again !

-- kjh
I'm not sure about submitting it to SBo, so I didn't put too much thought into naming it, other than making it different from the version already on there to avoid conflict. The slackbuild script is unconventional by their standards, and the sbopkglint tool complains when the package includes the 32-bit libraries.

For now I just wanted to share it to get feedback and input from other slackers who might be interested in doing something similar.

Thanks for taking a look at it!

-Bob
 
2 members found this post helpful.
Old 12-04-2022, 05:22 PM   #11
Paulo2
Member
 
Registered: Aug 2012
Distribution: Slackware64 15.0 (started with 13.37). Testing -current in a spare partition.
Posts: 932

Rep: Reputation: 522
I don't know if I'm recalling correctly, but /lib/modules sometimes has old directories that aren't used anymore,
left over from builds of modules for those versions.
Maybe run a match of /lib/modules against /usr/src/linux-* to see which versions really have a source counterpart.
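That filter could be sketched like this (the helper name and its parameters are mine, and the paths are parameters purely so the logic can be exercised anywhere): only keep /lib/modules entries that have a matching /usr/src/linux-&lt;version&gt; tree.

```shell
#!/bin/sh
# List kernel versions that have both a modules directory and a source tree.
list_buildable_kernels() {
  moddir=${1:-/lib/modules}
  srcdir=${2:-/usr/src}
  for d in "$moddir"/*/; do
    k=$(basename "$d")
    # Keep only versions with a matching source directory:
    [ -d "$srcdir/linux-$k" ] && echo "$k"
  done
}
```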
 
Old 12-04-2022, 05:38 PM   #12
kjhambrick
Senior Member
 
Registered: Jul 2005
Location: Round Rock, TX
Distribution: Slackware64 15.0 + Multilib
Posts: 2,159

Rep: Reputation: 1512
Quote:
Originally Posted by Paulo2 View Post
I don't know if I'm recalling correctly, but /lib/modules sometimes has old directories that aren't used anymore,
left over from builds of modules for those versions.
Maybe run a match of /lib/modules against /usr/src/linux-* to see which versions really have a source counterpart.
Paulo2 --

Yes, I believe you're correct, but YMMWV.

When I run `removepkg kernel-modules-5.15.y-x86_64-1` I get warnings like: 'cannot remove non-empty directory'

This is because I've got NVidia, Tuxedo(*) and VMWare Modules in my /lib/modules/5.15.y directories which require me to `rm -rf /lib/modules/5.15.y` after I run `removepkg kernel-modules-5.15.y-x86_64-1`

-- kjh

(*) Tuxedo Computers Provides LED Drivers for my Sager NP9672M Laptop ( a rebranded Clevo X170KM-G )
 
1 members found this post helpful.
Old 12-05-2022, 05:51 PM   #13
0XBF
Member
 
Registered: Nov 2018
Distribution: Slackware
Posts: 770

Original Poster
Rep: Reputation: 872
Quote:
Originally Posted by Paulo2 View Post
I don't know if I'm recalling correctly, but /lib/modules sometimes has old directories that aren't used anymore,
left over from builds of modules for those versions.
Maybe run a match of /lib/modules against /usr/src/linux-* to see which versions really have a source counterpart.
The times I have had leftover /lib/modules directories were from the nvidia drivers still being there after installing with the run script. I guess the same can happen from other 3rd party kernel modules installed to /lib/modules and left behind there.

It gets a little tricky being more specific about legitimately installed kernels though. If only slackware's kernel packages are considered then something like this should catch them all:
Code:
ls /var/lib/pkgtools/packages/kernel-{generic,huge}* 2> /dev/null | rev | cut -d- -f3 | rev | sort | uniq
But then someone might make their own package with a different naming scheme and get missed. You also don't need the kernel source installed to build nvidia kernel modules and a user might not have the sources installed for all kernels. I'm not sure what a foolproof "catch-all" would be.
 
1 members found this post helpful.
Old 12-05-2022, 10:37 PM   #14
Paulo2
Member
 
Registered: Aug 2012
Distribution: Slackware64 15.0 (started with 13.37). Testing -current in a spare partition.
Posts: 932

Rep: Reputation: 522
Quote:
Originally Posted by 0XBF View Post
The times I have had leftover /lib/modules directories were from the nvidia drivers still being there after installing with the run script. I guess the same can happen from other 3rd party kernel modules installed to /lib/modules and left behind there.

It gets a little tricky being more specific about legitimately installed kernels though. If only slackware's kernel packages are considered then something like this should catch them all:
Code:
ls /var/lib/pkgtools/packages/kernel-{generic,huge}* 2> /dev/null | rev | cut -d- -f3 | rev | sort | uniq
But then someone might make their own package with a different naming scheme and get missed. You also don't need the kernel source installed to build nvidia kernel modules and a user might not have the sources installed for all kernels. I'm not sure what a foolproof "catch-all" would be.
OK, that's a better point of view. In the end, I agree that if the user has more than the stock kernel package,
the user must keep /lib/modules and /usr/src clean.
 
Old 12-09-2022, 11:18 AM   #15
chrisretusn
Senior Member
 
Registered: Dec 2005
Location: Philippines
Distribution: Slackware64-current
Posts: 2,978

Rep: Reputation: 1556
0XBF, again, thanks for this awesome idea and script.

I am using the 470.161.03 driver set. Occasionally the build would go haywire after the first run in the chroot; more often it would do this when building for the other kernels. My solution, which has held up over several successful builds so far, was to add /dev to the chroot. I normally add /dev, /proc, and /sys to any chroot I set up. This also has the benefit of eliminating the stray /dev/null file that otherwise ends up in the package. This is what I am using now (I use $PKGDIR in place of $PKG):
Code:
mount -t overlay overlay -o lowerdir=/,upperdir=$PKGDIR,workdir=$WORKDIR $CHROOT
mount --bind /proc $CHROOT/proc
mount --bind /sys $CHROOT/sys
mount --bind /dev $CHROOT/dev
mount --bind $INSTLRDIR $CHROOT/home/
My umount in the script uses:
Code:
umount $CHROOT{/dev,/proc,/sys,/home,}
That last mount probably needs an explanation ($INSTLRDIR=/home/non-slack/slackbuilds/nvidia-linux/build). I use a nonstandard build environment for my packages, it does not use the system /tmp/ directory. I have been using this setup for years. This is my build layout, all packages I build are a sub-directory of slackbuilds.
Code:
/home/non-slack/slackbuilds/nvidia-linux/
├── build
│   ├── NVIDIA-Linux-x86_64-470.161.03.run
│   ├── NVIDIA-Linux-x86_64-470.161.03.run.sha256sum
│   ├── nvidia-drivers.SlackBuild
│   ├── nvidia-linux.SlackBuild
├── nvidia-linux-470.161.03-x86_64-1cgs.lst
├── nvidia-linux-470.161.03-x86_64-1cgs.meta
├── nvidia-linux-470.161.03-x86_64-1cgs.txt
├── nvidia-linux-470.161.03-x86_64-1cgs.txz
├── nvidia-linux-470.161.03-x86_64-1cgs.txz.asc
├── nvidia-linux-470.161.03-x86_64-1cgs.txz.md5
└── tmp
    ├── chroot
    ├── package
    └── workdir
        └── work
I left the tmp directories in to show the layout better. As you can see, the .run file is located under /home/..., which is mounted from another hard drive, so I had to bind mount it explicitly to use it inside the chroot. Copying it to the source directory had the same issue, since it is not located under /tmp/. This setup lets me just keep the downloaded .run file in place. Hope it makes sense.
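A side note on the umount line earlier in this post: the trailing comma in the brace expansion is what makes bash produce $CHROOT itself as the final argument, so the chroot mount point is unmounted after the binds inside it. A harmless way to see the expansion order (CHROOT here is just an example path):

```shell
# Show the order bash expands the umount arguments in; the trailing comma
# yields $CHROOT itself as the last word of the expansion.
CHROOT=/tmp/chroot
echo $CHROOT{/dev,/proc,/sys,/home,}
# → /tmp/chroot/dev /tmp/chroot/proc /tmp/chroot/sys /tmp/chroot/home /tmp/chroot
```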

I used my template.SlackBuild to integrate your build script into my build environment. This is my strip stanza; I noted you used "true":
Code:
find $PKGDIR | xargs file | grep -E 'ELF.*(executable|shared object)' \
  | cut -f 1 -d : | xargs strip --strip-unneeded 2> /dev/null || exit 1
Running the find without the strip part shows that everything is already stripped except for the *.ko files, so stripping is not needed.
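One way to check that yourself is to ask file(1) which regular files in the package tree are still unstripped. In this sketch, PKGDIR defaults to an empty scratch directory so the snippet is safe to try anywhere; in a real build you would point it at the package staging directory:

```shell
# List files that file(1) reports as "not stripped"; per the post above,
# only the *.ko kernel modules should show up in a finished package tree.
# PKGDIR falls back to an empty scratch directory for a safe dry run.
PKGDIR=${PKGDIR:-$(mktemp -d)}
find "$PKGDIR" -type f -print0 | xargs -0 -r file | grep 'not stripped' | cut -f 1 -d :
```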

I also noted that the permissions for most of the files were 444, so I just use the standard SlackBuild "set sane ownership, permissions" stanza to fix that. This also eliminates the need to fix the .desktop file separately.
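For reference, the permissions half of that template stanza looks like the sketch below (in a real build it runs as root inside $PKG and is paired with chown -R root:root). Here it is demonstrated against a scratch file so the 444 → 644 fix is visible without touching a real package tree:

```shell
# Standard "set sane permissions" find from the SlackBuild templates,
# demonstrated on a scratch directory instead of a real $PKG tree.
cd "$(mktemp -d)"
touch demo.desktop
chmod 444 demo.desktop
find -L . \
 \( -perm 777 -o -perm 775 -o -perm 750 -o -perm 711 -o -perm 555 \
 -o -perm 511 \) -exec chmod 755 {} \; -o \
 \( -perm 666 -o -perm 664 -o -perm 640 -o -perm 600 -o -perm 444 \
 -o -perm 440 -o -perm 400 \) -exec chmod 644 {} \;
stat -c '%a %n' demo.desktop
# → 644 demo.desktop
```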

In all my test runs with this, there did not seem to be a need to unload nvidia_drm in the chroot to umount everything; the umount command I use above works every time. The only thing left after the SlackBuild finishes is the tmp/package directory. Perhaps it's different on different systems.

Regarding other kernels: I normally have two kernels installed, sometimes three if there is one in testing. Occasionally there are other kernels installed that I just have not run removepkg on, and I don't want those kernels compiled for nvidia. My setup with lilo and symlinks in /boot makes it simple to build only for the kernels "in use", so to speak. These are the symlinks currently in /boot for the kernels lilo is set up to use; a testing-branch kernel would be named similarly.
Code:
vmlinuz-generic-stock -> vmlinuz-generic-5.19.17
vmlinuz-generic-working -> vmlinuz-generic-5.19.16
This is what I am using to create KERNEL_LIST:
Code:
KERNEL_LIST=$(find /boot -name "vmlinuz-generic-*" -type l -printf "%l\n" | cut -d- -f3)
Using a package for the nvidia drivers is great. After the recent update of mesa, I just reinstalled the package and all was well. That beats what I used to do: uninstall the driver (remembering to do so before upgrading mesa), do the upgrade, then install nvidia again. I'm waiting on an xorg-server and kernel update to test further.

I have some other ideas in the works. For example, when there is a kernel update, just rebuild the kernel modules, repackage, and reinstall or upgrade.

Last edited by chrisretusn; 12-09-2022 at 11:19 AM.
 
3 members found this post helpful.