Old 11-21-2023, 04:59 AM   #1
lazardo
Member
 
Registered: Feb 2010
Location: SD Bay Area
Posts: 274

Rep: Reputation: Disabled
simple non-root openzfs


Long-time mdraid experience and the recent mixed-licensing thread led to a desire to experiment with zfs without too much initial investment. First build and install openzfs from slackbuilds.org, then:

Create '/etc/modprobe.d/zfs.conf' with
Code:
softdep zfs pre: spl
options zfs zfs_arc_max=4294967296
This gets the modules loaded early enough to use traditional mounting without requiring an initrd, and limits main memory consumption to 4 GB (or some other value):
Code:
GB=4; echo $(( $GB * 1024 * 1024 * 1024 ))
4294967296
During boot, the modules are loaded after ACPI and before USB discovery.
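A quick way to confirm the modules came up and the ARC limit took effect:
Code:
lsmod | grep -E '^(zfs|spl)'
cat /sys/module/zfs/parameters/zfs_arc_max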

Jumpstart without reboot:
Code:
sudo /etc/rc.d/rc.zfs start
Create a simple mirror pool. Two whole disks, two partitions, or even files can be used for experiments; I had two unused 400M partitions on one NVMe drive.
Code:
sudo zpool create -m legacy -o ashift=12 zfs_test mirror /dev/nvme0n1p4 /dev/nvme0n1p6
zpool status
zpool list
If the disks/partitions previously held mdraid metadata or a filesystem, run 'wipefs' on them before 'zpool create'.
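For example, to clear old signatures from the two partitions used above (destructive, so double-check the device names):
Code:
sudo wipefs -a /dev/nvme0n1p4 /dev/nvme0n1p6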

Create a mount point and /etc/fstab entry:
Code:
zfs_test   /mnt/test   zfs   defaults,lazytime,noatime,_netdev   0   0
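The mount point itself is just an ordinary directory matching the fstab entry:
Code:
sudo mkdir -p /mnt/test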
Mount it:
Code:
sudo mount -av
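An optional sanity check that the mount landed where expected:
Code:
df -h /mnt/test
zfs list zfs_test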
Create a /etc/rc.d/rc.local entry for next boot:
Code:
logger -st rc.local "$( zpool import zfs_test; zpool list -H; mount -v /mnt/test )"
You now have a fully functional zfs mirror for the currently running kernel. For any kernel change (a rebuild or booting a different kernel), you must rebuild the SBo package.

Cheers,

Last edited by lazardo; 11-21-2023 at 12:57 PM.
 
Old 11-21-2023, 12:54 PM   #2
lazardo
Member
 
Registered: Feb 2010
Location: SD Bay Area
Posts: 274

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by lazardo
...
You now have a fully functional zfs mirror for the current running kernel. For any kernel change (rebuild or booting to a different kernel) you must rebuild the SBo package.
Multiple kernels:
Code:
sudo KERNEL=6.1.24 sh ./openzfs.SlackBuild
and use 'installpkg', once for each kernel.
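For example (the version and build tag below are assumptions; use whatever filename your build left in /tmp):
Code:
sudo installpkg /tmp/openzfs-2.2.0_6.1.24-x86_64-1_SBo.tgz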

To check:
Code:
sudo zpool scrub zfs_test
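Scrub runs in the background; progress and any repaired errors show up in the status output:
Code:
zpool status zfs_test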

Last edited by lazardo; 11-21-2023 at 12:58 PM.
 
3 members found this post helpful.
Old 11-22-2023, 02:38 PM   #3
lazardo
Member
 
Registered: Feb 2010
Location: SD Bay Area
Posts: 274

Original Poster
Rep: Reputation: Disabled
zfs 1.5x over mdraid (simple streaming read)

Replicated the above openzfs install/config on a small server originally configured as mdraid mirrors.

Seagate IronWolf 4TB - CMR 3.5 Inch SATA 6Gb/s 5400 RPM 64MB Cache

2x raid10, far2 layout
2x zfs mirror, 4k, lz4 compression

Code:
$ alias drop
alias drop='echo 3 | sudo tee /proc/sys/vm/drop_caches'

$ cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] 
md10 : active raid10 sda1[0] sdb1[1]
      3906884608 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
      bitmap: 0/466 pages [0KB], 4096KB chunk, file: /Bitmaps/bitmap.md10

unused devices: <none>
$ zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
backupZ  3.62T   186G  3.44T        -         -     0%     5%  1.00x    ONLINE  -

$ df -h | grep -e md10 -e md20
/dev/md10       3.6T  2.6T 1006G  73% /md10
backupZ         3.6T  186G  3.4T   6% /md20
raw
Code:
$ drop; dd if=/dev/md10 of=/dev/null bs=1M count=2048 |& grep -v records
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 5.93904 s, 362 MB/s
$ drop; for i in /dev/sd{a,b}; do sudo dd if=$i of=/dev/null bs=1M count=2048 & done |& grep -v records
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 10.6781 s, 201 MB/s
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 10.9092 s, 197 MB/s

$ drop; for i in /dev/sd{c,d}; do sudo dd if=$i of=/dev/null bs=1M count=2048 & done |& grep -v records
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 10.8389 s, 198 MB/s
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 11.3723 s, 189 MB/s
cooked
Code:
$ drop; for i in {/md10,}/md20/other/alan.watts.lectures.zip; do dd if=$i of=/dev/null bs=1M; done |& grep -v records
3900026367 bytes (3.9 GB, 3.6 GiB) copied, 10.9994 s, 355 MB/s
3900026367 bytes (3.9 GB, 3.6 GiB) copied, 7.45778 s, 523 MB/s

$ echo "scale=2; 523/355" | bc -q
1.47

Last edited by lazardo; 11-22-2023 at 02:54 PM.
 
2 members found this post helpful.
Old 11-23-2023, 06:24 AM   #4
guanx
Senior Member
 
Registered: Dec 2008
Posts: 1,183

Rep: Reputation: 237
Quote:
Originally Posted by lazardo
Replicated the above openzfs install/config on a small server originally configured as mdraid mirrors.
...
1) openzfs read-ahead is much more aggressive than mdraid.
2) mdraid10 of 4 disks might need at least 2 parallel processes to saturate read throughput.
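A way to test point 2 on an actual 4-disk md10 might be two parallel reads from disjoint regions of the array (device name and offsets are assumptions, in the style of the tests above):
Code:
$ drop; for off in 0 4096; do sudo dd if=/dev/md10 of=/dev/null bs=1M count=2048 skip=$off & done |& grep -v records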
 
Old 11-23-2023, 03:44 PM   #5
lazardo
Member
 
Registered: Feb 2010
Location: SD Bay Area
Posts: 274

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by guanx
1) openzfs read-ahead is much more aggressive than mdraid.
2) mdraid10 of 4 disks might need at least 2 parallel processes to saturate read throughput.
1) Great.
2) It's a 2-disk mirror created with the f2 layout, so md calls it 'raid10'. In this case it's very close to raw:
Code:
$ drop; dd if=/dev/md10 of=/dev/null bs=1M count=2048 |& grep -v records
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 5.93904 s, 362 MB/s
Cheers,

Last edited by lazardo; 11-28-2023 at 01:26 PM.
 
Old 11-23-2023, 05:49 PM   #6
guanx
Senior Member
 
Registered: Dec 2008
Posts: 1,183

Rep: Reputation: 237

Quote:
Originally Posted by lazardo
...

2) It's a 2-disk mirror created with the f2 layout, so md calls it 'raid10'.

...
Sorry for my misunderstanding!
This made it clear. Thanks!
 
Old 11-25-2023, 04:10 PM   #7
lazardo
Member
 
Registered: Feb 2010
Location: SD Bay Area
Posts: 274

Original Poster
Rep: Reputation: Disabled
[BUGFIX] upgrade openzfs 2.2.0 -> 2.2.2

Update: The bug has been resolved with the 2.2.2 release; the zfs_dmu_offset_next_sync workaround is still recommended, however. Exciting times, eh?
===========================

Update: Getting an unexplained hard lockup on 'zpool import' with 2.2.1, so back to 2.2.0 with the suggested safety workaround:
Code:
echo 0 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
No issues or data loss. Custom 6.1.62 kernel, patched Slackware 15.0.
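To keep the workaround across reboots, the same parameter can also be set from the modprobe config used in the first post (a sketch; append to /etc/modprobe.d/zfs.conf):
Code:
options zfs zfs_dmu_offset_next_sync=0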

Original upgrade post below
===========================

There was a block_clone issue which [rarely] corrupted data; also, 2.2.1 is required for 6.6.x kernels: https://github.com/openzfs/zfs/releases

To upgrade in the context of the earlier example: grab 2.2.1 (https://github.com/openzfs/zfs/relea...s-2.2.1.tar.gz) into your openzfs SlackBuild directory and rebuild. For a faster build, edit the 'make' command around line 119 of openzfs.SlackBuild:
Code:
make -j $( nproc )
Code:
sudo KERNEL=$( uname -r ) VERSION=2.2.1 sh ./openzfs.SlackBuild
Then:
Code:
sudo zpool export zfs_test
sudo /etc/rc.d/rc.zfs stop

sudo upgradepkg openzfs-2.2.1_$( uname -r )-x86_64-1_SBo.tgz

sudo /etc/rc.d/rc.zfs start
sudo zpool import zfs_test
sudo mount -av
sudo zpool scrub zfs_test
zpool status

Last edited by lazardo; 12-03-2023 at 04:30 PM. Reason: 2.2.2 release
 
1 member found this post helpful.
Old 12-27-2023, 01:51 PM   #8
lazardo
Member
 
Registered: Feb 2010
Location: SD Bay Area
Posts: 274

Original Poster
Rep: Reputation: Disabled
30 days later ...

Quote:
Originally Posted by lazardo
Long time mdraid experience and the recent mixed licensing thread resulted in a desire to experiment with zfs without too much initial investment. ...
Net-net: all mdadm is now zfs

openzfs 2.2 continues to be consistently solid:
  • generally 1.5x faster vs mdadm far2 mirrors (both 2-disk zraid1 and 6-disk zraid2)
  • no more manual bitrot protection (5% PAR2 calculations)
  • no more double backup to a borg repository
  • no memory issues (ARC max set to 1/3 of physical RAM, 4G to 32G systems; see the sketch after this list)
  • zfs compression just works
  • the "burden" of recompile after a new kernel is an illusion
    Code:
    cd $ZFS_SLACKBUILD &&
    sudo VERSION=$ZFS_VERSION KERNEL=$KERNEL JOBS=4 sh ./openzfs.SlackBuild &&
    sudo installpkg /tmp/openzfs-*_${KERNEL}-x86_64-1_SBo.tgz ||
    logger -st $0 "zfs for $KERNEL is not happy"
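A sketch of the 1/3-of-physical-RAM arithmetic used above; it reads MemTotal (in kB) from /proc/meminfo and prints the matching modprobe.d line:
Code:
# prints e.g. 'options zfs zfs_arc_max=5726623061' on a 16G box
awk '/MemTotal/ { printf "options zfs zfs_arc_max=%d\n", $2 * 1024 / 3 }' /proc/meminfo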
I did stick with ext4 root (mix of BIOS and UEFI machines).

Cheers,

https://forum.level1techs.com/t/zfs-...hooting/196035 is an easier place to start than the official online openzfs docs.

Last edited by lazardo; 12-27-2023 at 03:48 PM. Reason: added easy start link
 
  

