Long-time mdraid experience and the recent mixed-licensing thread resulted in a desire to experiment with zfs without too much initial investment. First, build and install openzfs from slackbuilds.org, then:
This gets the modules loaded early enough to use traditional mounting without requiring an initrd, and it limits main memory consumption to 4GB (or some other value).
During boot, the modules are loaded after ACPI and before USB discovery.
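As an illustration of the ARC cap (zfs_arc_max is a standard OpenZFS module parameter taking a value in bytes; the file name here is an assumption, not necessarily the exact setup used above):
Code:
# /etc/modprobe.d/zfs.conf -- cap the ARC at 4 GiB (value is in bytes)
options zfs zfs_arc_max=4294967296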
Jumpstart without reboot:
Code:
sudo /etc/rc.d/rc.zfs start
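A quick optional check that the module actually loaded (plain lsmod, nothing openzfs-specific assumed):
Code:
lsmod | grep -w zfs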
Create a simple mirror pool. Two whole disks, two partitions, or even files can be used for experiments; I had two 400M unused partitions on one NVMe drive.
Code:
sudo zpool create -m legacy -o ashift=12 zfs_test mirror /dev/nvme0n1p4 /dev/nvme0n1p6
zpool status
zpool list
If the disks/partitions previously held mdraid metadata or a file system, run 'wipefs' on them before 'zpool create'.
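For example, to clear any old signatures from the partitions used above (device names are the ones from this example; double-check before running):
Code:
sudo wipefs -a /dev/nvme0n1p4 /dev/nvme0n1p6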
At boot, import the pool and mount it from /etc/rc.d/rc.local (as the logger tag suggests):
Code:
logger -st rc.local "$( zpool import zfs_test; zpool list -H; mount -v /mnt/test )"
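Since the pool was created with '-m legacy', the 'mount -v /mnt/test' above presumably relies on an /etc/fstab entry along these lines (the mount point is illustrative):
Code:
zfs_test  /mnt/test  zfs  defaults  0  0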
You now have a fully functional zfs mirror for the current running kernel. For any kernel change (rebuild or booting to a different kernel) you must rebuild the SBo package.
...
Quote:
You now have a fully functional zfs mirror for the current running kernel. For any kernel change (rebuild or booting to a different kernel) you must rebuild the SBo package.
$ alias drop
alias drop='echo 3 | sudo tee /proc/sys/vm/drop_caches'
$ cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10]
md10 : active raid10 sda1[0] sdb1[1]
3906884608 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
bitmap: 0/466 pages [0KB], 4096KB chunk, file: /Bitmaps/bitmap.md10
unused devices: <none>
$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
backupZ 3.62T 186G 3.44T - - 0% 5% 1.00x ONLINE -
$ df -h | grep -e md10 -e md20
/dev/md10 3.6T 2.6T 1006G 73% /md10
backupZ 3.6T 186G 3.4T 6% /md20
raw
Code:
$ drop; dd if=/dev/md10 of=/dev/null bs=1M count=2048 |& grep -v records
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 5.93904 s, 362 MB/s
$ drop; for i in /dev/sd{a,b}; do sudo dd if=$i of=/dev/null bs=1M count=2048 & done |& grep -v records
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 10.6781 s, 201 MB/s
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 10.9092 s, 197 MB/s
$ drop; for i in /dev/sd{c,d}; do sudo dd if=$i of=/dev/null bs=1M count=2048 & done |& grep -v records
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 10.8389 s, 198 MB/s
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 11.3723 s, 189 MB/s
cooked
Code:
$ drop; for i in {/md10,}/md20/other/alan.watts.lectures.zip; do dd if=$i of=/dev/null bs=1M; done |& grep -v records
3900026367 bytes (3.9 GB, 3.6 GiB) copied, 10.9994 s, 355 MB/s
3900026367 bytes (3.9 GB, 3.6 GiB) copied, 7.45778 s, 523 MB/s
$ echo "scale=2; 523/355" | bc -q
1.47
1) openzfs read-ahead is much more aggressive than mdraid.
2) mdraid10 of 4 disks might need at least 2 parallel processes to saturate read throughput.
Quote:
1) openzfs read-ahead is much more aggressive than mdraid.
2) mdraid10 of 4 disks might need at least 2 parallel processes to saturate read throughput.
1) great
2) It's a 2-disk mirror created with the f2 (far-copies) layout, so md calls it 'raid10'. In this case it is very close to raw:
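For reference, a 2-disk far-2 'raid10' like that can be created with something along these lines (a sketch, not the original creation command):
Code:
sudo mdadm --create /dev/md10 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1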
Update: the bug has been resolved with the 2.2.2 release; the zfs_dmu_offset_next_sync workaround is still recommended, however. Exciting times, eh?
===========================
Update: getting an unexplainable hard lockup with 'zpool import' on 2.2.1, so back to 2.2.0 with the suggested safety workaround.
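The safety workaround referred to is the zfs_dmu_offset_next_sync module parameter; one way to apply it (the parameter and paths are standard OpenZFS, but treat this as a sketch rather than the exact steps used here):
Code:
# disable the code path implicated in the bug, effective immediately
echo 0 | sudo tee /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
# and make it persist across module reloads
echo "options zfs zfs_dmu_offset_next_sync=0" | sudo tee -a /etc/modprobe.d/zfs.conf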
To upgrade in the previous example context:
Grab 2.2.1 (https://github.com/openzfs/zfs/relea...s-2.2.1.tar.gz) into your openzfs SlackBuild directory and rebuild. For a faster build, edit the 'make' command around line 119 in openzfs.SlackBuild:
Code:
make -j $( nproc )
Code:
sudo KERNEL=$( uname -r ) VERSION=2.2.1 sh ./openzfs.SlackBuild
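Then install the resulting package; the /tmp path and naming here follow the SBo defaults shown later in the thread (adjust if your output directory differs):
Code:
sudo installpkg /tmp/openzfs-2.2.1_$( uname -r )-x86_64-1_SBo.tgz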
Quote:
Long-time mdraid experience and the recent mixed-licensing thread resulted in a desire to experiment with zfs without too much initial investment. ...
Net-net: all mdadm is now zfs
openzfs 2.2 continues to be consistently solid:
generally 1.5x faster than mdadm far-2 mirrors (both the 2-disk raidz1 and the 6-disk raidz2)
no more separate bitrot prevention (5% PAR2 calculations)
no more double backup to a borg repository
no memory issues (ARC max set to 1/3 of physical RAM, on 4G to 32G systems)
zfs compression just works
the "burden" of a recompile after a new kernel is an illusion
Code:
# ZFS_SLACKBUILD, ZFS_VERSION, and KERNEL are set earlier in the surrounding (not shown) script
cd $ZFS_SLACKBUILD &&
sudo VERSION=$ZFS_VERSION KERNEL=$KERNEL JOBS=4 sh ./openzfs.SlackBuild &&
sudo installpkg /tmp/openzfs-*_${KERNEL}-x86_64-1_SBo.tgz ||
logger -st $0 "zfs for $KERNEL is not happy"
I did stick with an ext4 root (mix of BIOS and UEFI machines).