Old 06-10-2022, 07:07 PM   #16
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792


Quote:
Originally Posted by Jeebizz View Post
My rationale about this: while you can technically use any FS, the problem with ext*, btrfs, xfs, and jfs is that they were developed mostly when conventional drives were still in play, and flash/NAND devices were an afterthought, with TRIM bolted on later. This is also why I pushed during the requests for -current to have at least F2FS added for install support. I kinda also wish we had JFFS/JFFS2 support for comparison too, but I'll take what I can get and just use F2FS, since it is written from the ground up for SSDs. With that in mind, the purpose of this thread is: is TRIM even necessary at this point if I am using a FS that is purposed for a NAND device?
The thing to think about when using modern NVMe or SSD drives is that the filesystem has no control over where data physically gets written on the device. That is all handled by the NAND device's onboard controller.

F2FS is very beneficial when there isn't a controller onboard (think USB sticks or SD cards), as it is aware of NAND's write limitations and will place data accordingly (handling wear leveling itself). For devices with a controller, there is no need to worry about which filesystem you use in regards to write limitations: the filesystem has no control over where data is physically written, as wear leveling is handled by the controller.
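If you want to double-check that a given drive sits behind a controller that accepts TRIM, something like the following should show it (just a quick sketch; device names are examples):
Code:
# Non-zero DISC-GRAN/DISC-MAX columns mean the device accepts discard (TRIM) requests
lsblk --discard /dev/nvme0n1

# For SATA SSDs, hdparm reports it too
hdparm -I /dev/sda | grep TRIM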

Outside of those situations, according to this benchmark, there is still no clear performance winner between F2FS, ext4, btrfs, and xfs. F2FS came out on top most often, but there were still several tests where it did poorly and other filesystems did much better.

I'm lazy and just use ext4 on my systems.

Quote:
Originally Posted by zeebra View Post
I'm curious about this too. It's difficult to say.
--snip--
I've not used trim for the last 4-5 years now, generally speaking, and once in a while I come across articles or other stuff that mention using trim, and I'm like, dang, perhaps I need to use it. So anyway, I've been doing it manually a few times like you show.

All SSDs greatly benefit from trim due to the way the NAND operates. When you "delete" data, with both magnetic storage and NAND, only the pointer to the data is removed; the data itself still resides on the device, just with nothing pointing to it. With magnetic storage, the system can simply write new data over the old, since the magnetic head can flip bits in place. With NAND storage (SSD, NVMe, USB drives, SD cards, etc.), you can't just write over old data... it has to be erased first.

All NAND devices are able to erase data, if needed, directly before a write occurs in that area, but this takes a bit of extra time for each sector that needs to be reset before data can be written to it. The device is smart enough to prefer sectors that don't need to be erased first, but over time, deletions will leave behind more and more old data that has no pointers to it and has not been reset. When this happens, the performance of the device degrades, since sectors now need to be erased before they can be written.

Trim alleviates this issue by resetting those sectors ahead of time, leaving unused space ready to have data written to it. Most filesystems support some form of trim, some working in the background (continuous trim), some requiring a program to be run (periodic trim), and some supporting both methods (though you're better off choosing just one).

F2FS enables continuous trim by default (so unless you disable it with the "nodiscard" mount option, there is no need for any further trim events on F2FS drives)... ext4 requires you to add the "discard" mount option if you want continuous trim. Periodic trim can be handled on most filesystems by running fstrim, either manually or via a cron job (weekly is probably a good default for most users unless your drive is close to full). For NAND devices mounted with continuous trim, there is no point in running periodic trim unless you want to ensure a file is gone for good immediately after deleting it. In the past, I noticed that discard with ext4 would generally get the sector reset within 5 minutes, whereas running fstrim was immediate.
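To make that concrete, here is roughly what each approach looks like with ext4 (devices and paths are examples; adjust for your setup):
Code:
# Continuous trim: add "discard" to the mount options in /etc/fstab, e.g.
#   /dev/sda1   /   ext4   defaults,discard   1 1

# Periodic trim: run fstrim by hand...
fstrim -v /

# ...or drop a weekly cron job in place (run as root)
printf '#!/bin/sh\n/usr/sbin/fstrim --all\n' > /etc/cron.weekly/fstrim
chmod +x /etc/cron.weekly/fstrim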

Quote:
Originally Posted by zeebra View Post
In any case, I'm pretty sure the Kernel generally knows if a disk is SSD or not, so would it not at some point automatically enable trim? There are even pure SSD drivers like nvme, and I would find it a bit strange if trim is not enabled automatically for them.
Trim is a product of the filesystem driver. The kernel itself doesn't do it.

====================

As for noatime, I think this recommendation for SSDs was a product of older devices, whose NAND tended not to withstand excessive writes. Adding noatime prevents access times from being updated on files whenever they are read, which, for a main partition with the OS installed on it, could noticeably reduce writes to the filesystem.

NAND devices have improved immensely over the last decade, and it is pretty much impossible for most users to surpass the warrantied TBW (terabytes written -- my 5-year-old 1TB NVMe drive is warrantied to 400TB; to hit that number, I'd need to write over 100GB per day for more than 10 years straight). Even if you managed to somehow surpass that crazy number, drives are able to go well past their warrantied amount (this experiment, started in 2013 and ending in 2015, found all drives were able to write well past their warrantied amount, with the earliest drive crapping out at 700TB -- and that was a drive from 2012). With my drive, I've used it heavily for 5 years, with /tmp, /var, and swap on there, as well as my /home partition, and I've used 5% of the drive's write capacity.
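If you're curious how your own drive is doing, smartmontools can report lifetime writes to compare against the rated TBW (assuming it's installed; exact field names vary by vendor):
Code:
# NVMe drives report total writes directly, e.g. "Data Units Written: ... [6.32 TB]"
smartctl -a /dev/nvme0 | grep 'Data Units Written'

# SATA SSDs usually expose a vendor-specific SMART attribute instead
smartctl -a /dev/sda | grep -i Total_LBAs_Written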

If you are adding noatime (and nodiratime, which noatime already implies) only to save your drive from writes, personally, I think there is no point. However, many still enable noatime simply to avoid the slight performance penalty of updating the access times of files whenever they're read.
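For what it's worth, you can check what atime behavior a mount is actually using before touching anything (recent kernels default to relatime, which already skips most atime writes):
Code:
# Show the options in effect for /
findmnt -o TARGET,OPTIONS /

# Try noatime without editing fstab (reverts on reboot)
mount -o remount,noatime /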

Last edited by bassmadrigal; 06-15-2022 at 11:15 AM. Reason: Fixed bbcode formatting
 
6 members found this post helpful.
Old 06-11-2022, 12:53 AM   #17
zeebra
Senior Member
 
Registered: Dec 2011
Distribution: Slackware
Posts: 1,833
Blog Entries: 17

Quote:
Originally Posted by bassmadrigal View Post
as the wear-leveling will be handled by the controller.
I've heard so many times now that the controller handles trim on modern SSDs and that manually enabling trim with discard is often not necessary.

... and then there is the complication of trim with encrypted drives/partitions as well
 
Old 06-15-2022, 11:15 AM   #18
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Quote:
Originally Posted by zeebra View Post
I've heard so many times now that the controller handles trim on modern SSD's and manually setting trim with discard often is not necessary.
As far as I know, automatic and periodic trim are not handled by the controller. On its own, the drive will only erase cells when it has to in order to write new data, which leads to a performance penalty.

Quote:
Originally Posted by zeebra View Post
and then there is the complication of trim with encrypted drives/partitions as well
If you fill an encrypted disk with random data (say from /dev/random) before using it (which is the typical recommendation) and then enable trim on a NAND device, trim will reset all of that random data in unused sectors back to the default state. This essentially allows a potential attacker to see which sections of the drive hold encrypted data, which theoretically gives them a better chance of decrypting it.
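(For reference, that pre-fill is typically just a dd run like the one below; the device name is a placeholder, and this wipes everything on the target.)
Code:
# Overwrite the whole device with random data before setting up encryption
dd if=/dev/urandom of=/dev/sdX bs=1M status=progress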

The reality, though, is that it is extremely hard to crack disk encryption. It is extremely unlikely anyone would spend the resources to try and crack it unless you're a super secret spy (please don't tell me if you are... I'd rather stay in the dark).

Whether the performance penalty of needing to reset that NAND before writing to it is worth the increased security will be up to the individual.
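If you do decide trim is worth it on an encrypted setup, note that LUKS blocks discards by default and you have to allow them explicitly, roughly like this (a sketch; names are examples):
Code:
# Open the LUKS container with discards allowed through the dm-crypt layer
cryptsetup open --allow-discards /dev/sdX2 cryptroot

# Once mounted, trim then works on the mapped filesystem as usual
mount /dev/mapper/cryptroot /mnt
fstrim -v /mnt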
 
Old 06-15-2022, 11:39 AM   #19
zeebra
Senior Member
 
Registered: Dec 2011
Distribution: Slackware
Posts: 1,833
Blog Entries: 17

Quote:
Originally Posted by bassmadrigal View Post
As far as I know, automatic and periodic trim are not handled by the controller.
--snip--
Whether the performance penalty of needing to reset that NAND before writing to it is worth the increased security will be up to the individual.
Well yeah, that was my impression too. And no, I don't even have an evil maid! I was trying to read up on what the issue was, and it seemed mostly relevant for special cases. I can't see how it's worth it in most cases to go after unencrypted data on, let's say, a destroyed partition, unless it's your own, or the owner is paying you to restore it.

Anyway, I was talking more in terms of regular users, since things like discard and fstrim don't work on encrypted partitions out of the box and you have to use things like LUKS trim options etc. I haven't used trim much at all in the last 5 years and I haven't noticed any trouble, but I would probably use fstrim if I noticed performance issues with SSD disks. But that brings me back to the point about controllers, because around 2015 I really heard a lot of talk that a lot of SSD disks handle trim automatically in the controller. I didn't really care or think much about it at the time. But it would entail the operating system/filesystem telling the disk when data is removed, and the disk knowing where it is and trimming it automatically upon deletion. Not sure that makes any sense...
 
Old 06-15-2022, 02:57 PM   #20
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Quote:
Originally Posted by zeebra View Post
Well yeah, that was my impression too. And no, I don't even have an evil maid! I was trying to read up on what the issue was, and it seemed mostly relevant for special cases. I can't even see how it's worth in most cases to even go after unencrypted data on let's say a destroyed partion, unless it's your own, or the owner is paying you to restore it.
Yeah, I don't think anyone needs to worry about nefarious doers decrypting their data, whether they have random bits everywhere or only where real data resides.

Quote:
Originally Posted by zeebra View Post
But that brings me back to the point about controllers, because I've really heard alot of talks around 2015 that alot of SSD disks use trim automatically in the controller. But I didn't really care or think much about it. But it would entail the operating system/filesystem telling the disk when removing the data, and the disk knowing where it is and trimming it automatically upon deletion. Not sure that makes any sense..
If you happen to come across that info again, I'd be interested in reading it and broadening my knowledge on the subject. I did a bit of searching before writing this comment, and these two articles both seem to state that trim is initiated by the OS, not the controller:

Quote:
SSD TRIM is complementary to garbage collection. The TRIM command enables the operating system (OS) to preemptively notify the SSD which data pages in a particular block can be erased, allowing the SSD's controller to more efficiently manage the storage space available for data. TRIM eliminates any unnecessary copying of discarded or invalid data pages during the garbage collection process to save time and improve SSD performance.
Quote:
SSDs typically cannot detect which pages contain data marked for deletion, causing them to erase and rewrite entire blocks during the garbage collection process. The TRIM command allows the host operating system to inform the SSD about the location of stale data (marked for deletion). The SSD then accesses the stale data and immediately wipes it out. With the TRIM command, the SSD controller can perform garbage collection on a page level instead of managing whole blocks, thereby reducing WAI and increasing SSD endurance.
 
2 members found this post helpful.
Old 06-16-2022, 03:52 AM   #21
zeebra
Senior Member
 
Registered: Dec 2011
Distribution: Slackware
Posts: 1,833
Blog Entries: 17

Quote:
Originally Posted by bassmadrigal View Post
If you happen to come across that info, I'd be interested in reading it. I haven't seen anything on that, but I'd definitely be interested in reading up on it and broadening my knowledge on the subject. I did a bit of searching before writing this comment and these two articles both seem to state it is initiated by the OS and not the controller:
It's nothing specific, it's just memories of a LOT of people saying those things around that time. I was always insistent on using discard/trim back then. I do read technical magazines etc. (from my small country in Europe), and by people I generally mean people on forums like this. I do always take things with a grain of salt, but those claims about fundamental changes to SSD disks were so persistent I started assuming they might be true.

It was probably a contributing factor, alongside encryption (including sometimes /), to why I generally stopped using the "discard" flag, which I had always used up until that point. So I guess I kind of bought into it.

Those two quotes clearly contradict what "those people" kept saying, since the disk can't determine it on its own without being notified by the OS specifically with TRIM. I did run fstrim on two computers the other day, both with unencrypted /boot and /, and strangely enough, the one with the NVMe drive trimmed a lot of data, which indicates that even an SSD-only driver (nvme) doesn't do it automatically with the Linux kernel. Secondly, the older Samsung 8xx Evo disk trimmed 0 bytes, nothing, nada, which was quite surprising.
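For anyone who wants to repeat that check, it's just the verbose flag; the numbers below are only illustrative:
Code:
fstrim -v /
# /: 8.9 GiB (9556799488 bytes) trimmed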

Last edited by zeebra; 06-16-2022 at 04:03 AM.
 
Old 06-17-2022, 09:45 PM   #22
zaphar
Member
 
Registered: Nov 2012
Distribution: Slackware
Posts: 37

This link describes flash storage specific to embedded devices, but the information probably overlaps with SSDs.
https://openwrt.org/docs/techref/flash.layout
 
1 members found this post helpful.
Old 06-22-2022, 02:09 PM   #23
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Quote:
Originally Posted by zaphar View Post
This link describes flash storage specific to embedded devices but the information probably overlaps with SSDs.
https://openwrt.org/docs/techref/flash.layout
I believe this is a bit different from SSD or NVMe devices, as that page states they are "raw flash memory" devices, and thus show up as /dev/mtd*. I don't believe they have the controllers that SSDs and NVMes have. The latter show up as /dev/sd* and /dev/nvme*, indicating they are presented differently than raw flash memory devices.

For devices that use embedded flash, it is more likely they will not include memory controllers, and you will absolutely want to use a built-from-the-ground-up flash filesystem, as the FS driver then has to take care of wear leveling itself.
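A quick way to tell which kind of device you're dealing with (a rough check; an empty /proc/mtd just means no raw-flash devices are present):
Code:
# Raw flash (no controller) is exposed through the MTD layer
cat /proc/mtd

# Controller-backed devices show up as ordinary block devices, with their transport
lsblk -d -o NAME,TRAN,TYPE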
 
2 members found this post helpful.
Old 07-01-2022, 01:13 AM   #24
zaphar
Member
 
Registered: Nov 2012
Distribution: Slackware
Posts: 37

Quote:
Originally Posted by bassmadrigal View Post
I believe this is a bit different than SSD or NVMe devices as that page states that they are "raw flash memory" devices, and thus show up as /dev/mtd*. I don't believe they have the controllers that SSDs and NVMes have. The latter devices show up as /dev/sd* and /dev/nvme*, indicating they are perceived differently than raw flash memory devices.

For devices that use embedded flash, it is more likely they will not include memory controllers and you will absolutely want to use a built-from-the-ground-up flash filesystem, as the FS driver should be able to take into account wear-leveling.
I posted that link as an example of what seems to be discussed, because I don't think this statement takes into account that most of the flash devices we use have built-in controllers, even USB sticks and SD cards:
Quote:
F2FS is very beneficial when there isn't a controller onboard (think USB sticks or SD cards), as it is aware of NAND's write limitations and will place data accordingly (handling wear leveling itself).
 
Old 07-02-2022, 11:31 PM   #25
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Quote:
Originally Posted by zaphar View Post
I posted that link as an example of what seems to be discussed, because I don't think this statement takes into account that most of the flash devices we use have built-in controllers, even USB sticks and SD cards:
Quote:
F2FS is very beneficial when there isn't a controller onboard (think USB sticks or SD cards), as it is aware of NAND's write limitations and will place data accordingly (handling wear leveling itself).
I believe most of the previous discussion was centered on devices that have controllers built in, like SSDs or NVMe devices.
 
Old 07-03-2022, 04:25 AM   #26
Didier Spaier
LQ Addict
 
Registered: Nov 2008
Location: Paris, France
Distribution: Slint64-15.0
Posts: 11,065

And btrfs...

Quote:
Originally Posted by Didier Spaier View Post
As I understand it, F2FS benefits flash devices like SD cards or eMMC drives the most. Not surprising, as it was initially used mainly on SD cards in mobile phones, cf. https://lkml.org/lkml/2012/10/5/205. Caveat: I have seen reports of damaged file systems in case of sudden loss of power, as noted in: https://wiki.archlinux.org/title/F2FS For this reason it is the default when installing Slint64-14.2.1 in "auto" mode only if the device name as reported by lsblk includes "mmc", which tells us that the device is either an SD card inserted in a card reader (i.e. not in a USB enclosure) or an eMMC storage device. This avoids the risk of loss of power caused by inadvertent unplugging. In Slint 15 we will probably use BTRFS everywhere when installing in "auto" mode, but I digress ("auto" here means that the installer partitions the drives and makes the file systems.)
Follow-up:
Code:
didier[~]$ LANG=C df -h --output=source,fstype,used,target|grep -e sdc3 -e sdc5 -e Filesystem
Filesystem     Type      Used Mounted on
/dev/sdc5      btrfs      12G /
/dev/sdc5      btrfs      12G /home
/dev/sdc5      btrfs      12G /snapshots
/dev/sdc5      btrfs      12G /swap
/dev/sdc3      ext4       24G /slackslint
didier[~]$
Comments on the output:
  • Two systems, each on a single partition of an external SSD: /dev/sdc3 (Slackware64-15.0 converted to Slint64-15.0, ext4) and /dev/sdc5 (Slint64-15.0 installed in "auto" mode in space freed beforehand on the drive, btrfs with one volume and 4 sub-volumes)
  • Using btrfs with zstd compression level 3 halves the space on "disk" needed by the system (same packages, with very few exceptions)
Conclusion: in "auto" mode I will use btrfs also for SD cards, eMMC drives, and USB sticks. To be honest, I didn't find any factual information comparing the wear-out speed and the risk of damaging the file system using btrfs vs f2fs, but I assume that btrfs writing half as many blocks to disk should help.

Below is /etc/fstab in the system using btrfs:
Code:
didier[~]$ cat /etc/fstab
# Initially /dev/sdc4
UUID=63DE-B559 /boot/efi vfat defaults 1 0
# Initially /dev/sdc5
UUID=1d65719d-eb97-4d30-a381-1e63a7db3bc1 / btrfs subvol=/@,compress=zstd:3,noatime 0 0
# Initially /dev/sdc5
UUID=1d65719d-eb97-4d30-a381-1e63a7db3bc1 /home btrfs subvol=/@home,compress=zstd:3,noatime 0 0
# Initially /dev/sdc5
UUID=1d65719d-eb97-4d30-a381-1e63a7db3bc1 /snapshots btrfs subvol=/@snapshots,compress=zstd:3,noatime 0 0
# Initially /dev/sdc5
UUID=1d65719d-eb97-4d30-a381-1e63a7db3bc1 /swap btrfs subvol=/@swap,compress=zstd:3,noatime 0 0
# Initially /dev/sda3
UUID=53ea8679-197e-437c-bdcb-61c55c509a2f /storage ext4 noatime 1 2
# Initially /dev/sdc1
UUID=47af727a-ecc4-4241-b5e0-c121302d8197 /data ext4 noatime 1 2
# Initially /dev/sdc3
UUID=b7dc09e9-58be-4b45-8f22-063e5e6ee5f3 /slackslint ext4 noatime 1 2
/swap/swapfile none swap pri=5 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
proc /proc proc defaults 0 0
tmpfs /dev/shm tmpfs nosuid,nodev,noexec 0 0
tmpfs /tmp tmpfs  rw,nodev,nosuid,mode=1777 0 0
PS. To go back to the trim/discard topic: to ease maintenance we will use btrfsmaintenance, with the periodicity shown below:
Code:
root[/]# btrfsmaintenance-refresh-cron.sh 
Refresh script btrfs-scrub.sh for monthly
Refresh script btrfs-defrag.sh for none
Refresh script btrfs-balance.sh for weekly
Refresh script btrfs-trim.sh for weekly
I attach the /etc/default/btrfsmaintenance which leads to that.
Attached Files
File Type: txt btrfsmaintenance.txt (5.1 KB, 19 views)

Last edited by Didier Spaier; 07-06-2022 at 04:14 AM. Reason: typo fix
 
1 members found this post helpful.
  

