Slackware: This forum is for the discussion of Slackware Linux.
My rationale about this: while you can technically use any FS, the problem with ext*, btrfs, xfs, and jfs is that they were developed mostly when conventional drives were still in play, and flash/NAND devices were an afterthought, with TRIM bolted on later. This is also why I pushed during the requests for -current to have at least F2FS added for install support. I kinda also wish we had JFFS/JFFS2 support for comparison too, but I'll take what I can get and just use F2FS, since it was written from the ground up for SSDs. With that in mind, the purpose of this thread is: is TRIM even necessary at this point if I am using a FS that is purposed for a NAND device?
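Since F2FS has install support in -current, trying it out is mostly a matter of making the filesystem and adding a mount entry. A minimal sketch (the UUID and mount point below are placeholders, not from this thread):

```
# /etc/fstab entry for an F2FS data partition (create the FS first with
# mkfs.f2fs). F2FS mounts with continuous trim (discard) enabled by
# default, so no extra trim-related mount option is needed.
UUID=01234567-89ab-cdef-0123-456789abcdef  /data  f2fs  defaults,noatime  0  2
```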
The thing to think about when using modern NVMe or SSD drives is that the filesystem has no control over where data physically gets written on the device. That is all handled by the onboard controller of the NAND device.
F2FS is very beneficial when there isn't a controller onboard (think USB or SD cards), as it is aware of NAND and its write limitations and will handle data accordingly (utilizing wear leveling). For devices with a controller, there is no need to worry about what filesystem you use in regards to write limitations. The filesystem will have no control over where data is written, as the wear leveling will be handled by the controller.
Outside of those situations, according to this benchmark, there is still no clear winner on performance between F2FS, ext4, btrfs, or xfs. F2FS came out on top most often, but there were still several tests where it didn't do so hot and other FSs did much better.
I'm lazy and just use ext4 on my systems.
Quote:
Originally Posted by zeebra
I'm curious about this too. It's difficult to say.
--snip--
I've not used trim the last 4-5 years now, generally speaking, and once in a while I come across articles or other stuff that mention using trim, and I'm like dang, perhaps I need to use it. So anyways, I've been doing it manually a few times like you show.
All SSDs greatly benefit from trim due to the way the NAND operates. When you "delete" data, on both magnetic storage and NAND, only the pointer to the data is removed; the data itself still resides on the device, just with nothing pointing to it. With magnetic storage, the system can simply write new data over the old data, since the write head can flip bits in place. With NAND storage (SSD, NVMe, USB drives, SD cards, etc), you can't just write over old data... it has to be reset (erased) first.
All NAND devices are able to reset data, if needed, right before a write occurs in that area, but this takes a bit of extra time for each sector that needs to be reset before the data can be written to it. A device is smart enough to prefer sectors that don't need to be reset first, but over time, deletions of other files will leave behind old data that no longer has any pointers to it and has not been reset. When this happens, the performance of the device degrades, since sectors need to be reset before they can be written.
Trim alleviates this issue by resetting all those sectors and leaving unused space ready to have data written to it. Most filesystems support some form of trim, some working in the background (continuous trim), some requiring a program to be run (periodic trim), and some supporting both methods (but you're better off just choosing one).
F2FS enables continuous trim by default (so unless you're disabling it with the "nodiscard" mount option, there is no need to do any further trim events with F2FS drives)... ext4 requires you to add the "discard" mount option if you want continuous trim. Periodic trim can be handled with most filesystems by running fstrim, either manually or by setting up a cron job to do it occasionally (weekly is probably a good default for most users unless your drive is close to full). For NAND devices that are mounted with continuous trim, there is no point in running periodic trim unless you want to ensure a file is gone for good immediately after deleting it. In the past, I noticed that discard with ext4 would generally get the sector reset within 5 minutes, whereas running fstrim was immediate.
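To make the periodic option concrete, here's a sketch of a weekly cron script (the path to fstrim and the log file are assumptions; adjust for your setup):

```
#!/bin/sh
# /etc/cron.weekly/fstrim -- weekly periodic trim of all mounted
# filesystems that support discard. --all skips devices that don't
# support it; --verbose logs how much was trimmed.
/usr/sbin/fstrim --all --verbose >> /var/log/fstrim.log 2>&1
```

Making the file executable (chmod +x) is enough for Slackware's run-parts-style weekly cron to pick it up.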
Quote:
Originally Posted by zeebra
In any case, I'm pretty sure the Kernel generally knows if a disk is SSD or not, so would it not at some point automatically enable trim? There are even pure SSD drivers like nvme, and I would find it a bit strange if trim is not enabled automatically for them.
Trim is a product of the filesystem driver. The kernel itself doesn't do it.
====================
As for noatime, I think this recommendation for SSDs was a product of older devices whose NAND tended not to withstand excessive writes. Adding noatime prevents access times from being updated on files whenever they are read, which for a main partition with the OS installed on it can cut out a lot of small writes to the filesystem.
NAND devices have improved immensely over the last decade and it is pretty much impossible for most users to surpass the warrantied TBW (terabytes written -- my 5 year old 1TB NVMe drive is warrantied to 400TB... to hit that number in 5 years, I'd need to write over 200GB per day, every day). Even if they managed to somehow surpass that crazy number, drives are able to go well past their warrantied amount (this experiment, started in 2013 and ending in 2015, found all drives were able to write well past their warrantied amount, with the earliest one crapping out at 700TB -- and that was a drive from 2012). With my drive, I've used it heavily for 5 years, including /tmp, /var, and swap on there, as well as my /home partition, and I've only used 5% of the drive's write capacity.
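If you want to sanity-check that math yourself, it's quick shell arithmetic (numbers are the 400TB TBW figure from above; integer division, so the result is truncated to whole GB):

```shell
# Daily writes needed to exhaust a 400TB warranty in 5 years.
tbw_gb=$((400 * 1000))            # 400 TB expressed in GB (decimal)
days=$((5 * 365))                 # 5 years in days
echo "$((tbw_gb / days)) GB/day"  # prints "219 GB/day"
```

Swap in your own drive's TBW rating (it's usually in the datasheet, or visible via smartctl) to see how unreachable the number is for typical desktop use.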
If you are adding noatime (and nodiratime) only to save your drive from writes, personally, I think there is no point. However, many still enable noatime to avoid the slight performance penalty of updating the access times of files whenever they're read.
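If you do want noatime anyway, it's just a mount option in /etc/fstab (UUID is a placeholder; note that on Linux, noatime already implies nodiratime, so listing both is redundant):

```
# /etc/fstab -- noatime stops access-time updates on every read.
UUID=01234567-89ab-cdef-0123-456789abcdef  /  ext4  defaults,noatime  0  1
```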
Last edited by bassmadrigal; 06-15-2022 at 11:15 AM.
Reason: Fixed bbcode formatting
I've heard so many times now that the controller handles trim on modern SSDs and that manually enabling trim with discard is often not necessary.
As far as I know, neither continuous nor periodic trim is handled by the controller on its own. It will only erase blocks when necessary to write data, which leads to a performance penalty.
Quote:
Originally Posted by zeebra
and then there is the complication of trim with encrypted drives/partitions as well
If you fill an encrypted disk with random data (say from /dev/random) before using it (which is the typical recommendation) and then enable trim on a NAND device, trimmed areas are reset back to the default erased state. This essentially allows a potential attacker to see which sections of the drive hold encrypted data, which theoretically gives them a better chance of decrypting it.
The reality of it, though, is that it is extremely hard to crack disk encryption. It is extremely unlikely anyone would spend the resources to try to crack it unless you're a super secret spy (please don't tell me if you are; I'd rather stay in the dark).
Whether the performance penalty of needing to reset that NAND before writing to it is worth the increased security will be up to the individual.
Well yeah, that was my impression too. And no, I don't even have an evil maid! I was trying to read up on what the issue was, and it seemed mostly relevant for special cases. I can't even see how it's worth it in most cases to go after unencrypted data on, let's say, a destroyed partition, unless it's your own, or the owner is paying you to restore it.
Anyways, I was talking more in terms of regular users, since things like discard and fstrim don't work on encrypted partitions out of the box and you have to allow discards through LUKS etc. I haven't used trim much at all the last 5 years and I haven't noticed any trouble, but I would probably use fstrim if I noticed performance issues with SSD disks. But that brings me back to the point about controllers, because around 2015 I heard a lot of talk that a lot of SSD disks do trim automatically in the controller. But I didn't really care or think much about it at the time. It would entail the operating system/filesystem telling the disk when data is removed, and the disk knowing where it is and trimming it automatically upon deletion. Not sure that makes any sense..
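On the LUKS point: the device-mapper layer blocks discards by default, so they have to be explicitly allowed before fstrim or the discard mount option can reach the SSD underneath. A sketch, assuming a LUKS2 device at the placeholder path /dev/sdX2 and a mapping named cryptroot:

```
# One-off: open the mapping with discards passed through to the device.
cryptsetup open --allow-discards /dev/sdX2 cryptroot

# Persistent (LUKS2 only): store the flag in the header so every
# future open allows discards without needing the option again.
cryptsetup refresh --allow-discards --persistent cryptroot
```

Whether to do this at all is the security trade-off discussed above: allowing discards leaks which regions of the drive hold data.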
Yeah, I don't think anyone needs to worry about nefarious doers decrypting data whether or not they have random bits everywhere or just where data resides.
Quote:
Originally Posted by zeebra
But that brings me back to the point about controllers, because I've really heard alot of talks around 2015 that alot of SSD disks use trim automatically in the controller. But I didn't really care or think much about it. But it would entail the operating system/filesystem telling the disk when removing the data, and the disk knowing where it is and trimming it automatically upon deletion. Not sure that makes any sense..
If you happen to come across that info, I'd be interested in reading it. I haven't seen anything on that, but I'd definitely be interested in reading up on it and broadening my knowledge on the subject. I did a bit of searching before writing this comment and these two articles both seem to state it is initiated by the OS and not the controller:
Quote:
SSD TRIM is complementary to garbage collection. The TRIM command enables the operating system (OS) to preemptively notify the SSD which data pages in a particular block can be erased, allowing the SSD's controller to more efficiently manage the storage space available for data. TRIM eliminates any unnecessary copying of discarded or invalid data pages during the garbage collection process to save time and improve SSD performance.
Quote:
SSDs typically cannot detect which pages contain data marked for deletion, causing them to erase and rewrite entire blocks during the garbage collection process. The TRIM command allows the host operating system to inform the SSD about the location of stale data (marked for deletion). The SSD then accesses the stale data and immediately wipes it out. With the TRIM command, the SSD controller can perform garbage collection on a page level instead of managing whole blocks, thereby reducing WAI and increasing SSD endurance.
It's nothing specific, it's just memories of a lot of people saying those things around that time. I was always insistent on using discard/trim back then. I do read technical magazines etc (from my small country in Europe), and by people I generally mean people on forums like this.. I do always take things with a grain of salt, but those claims about fundamental changes to SSD disks were so persistent I started assuming they might be true.
It was probably a contributing factor, alongside encryption (including sometimes /), to why I generally stopped using the "discard" flag, like I had always done up until that point. So I guess I kind of bought into it.
Those two quotes clearly contradict what "those people" kept saying, as the disk can't determine it on its own without being specifically notified by the OS with TRIM. I did run fstrim on 2 computers the other day, both with unencrypted /boot and /, and strangely enough, the one with the NVMe did trim a lot of data, which indicates an SSD-only driver (nvme) doesn't do it automatically with the Linux kernel. Secondly, the older Samsung 8xx Evo disk trimmed 0 bytes, nothing, nada, which was quite surprising.
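Before running fstrim, you can check what the kernel thinks each device supports; non-zero DISC-GRAN and DISC-MAX columns mean the device accepts discard requests (output naturally varies per machine):

```shell
# List discard (TRIM) capabilities for every block device, as
# reported by the kernel from sysfs.
lsblk --discard
```

From there, `fstrim -v /` (as root) trims one mounted filesystem and reports how much was reclaimed, which is presumably why the two machines above gave such different numbers.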
I believe this is a bit different than SSD or NVMe devices as that page states that they are "raw flash memory" devices, and thus show up as /dev/mtd*. I don't believe they have the controllers that SSDs and NVMes have. The latter devices show up as /dev/sd* and /dev/nvme*, indicating they are perceived differently than raw flash memory devices.
For devices that use embedded flash, it is more likely they will not include memory controllers and you will absolutely want to use a built-from-the-ground-up flash filesystem, as the FS driver should be able to take into account wear-leveling.
I posted that link as an example of what seems to be discussed, because I don't think this statement takes into account that most of the flash devices we use have built in controllers, even USB and SD cards.
Code:
F2FS is very beneficial when there isn't a controller onboard (think USB or SD cards), as it is aware of NAND and their write limitations and will handle data accordingly (utilizing wear leveling).
I believed most of the previous discussion was centered on devices that have controllers built in, like SSDs or NVMe devices.
As I understand it, F2FS benefits flash devices like SD cards or eMMC drives the most. Not surprising, as it was initially used mainly on SD cards in mobile phones, cf. https://lkml.org/lkml/2012/10/5/205. Caveat: I have seen reports of damaged file systems after a sudden loss of power, as noted in https://wiki.archlinux.org/title/F2FS. For this reason, it is the default when installing Slint64-14.2.1 in "auto" mode only if the device name reported by lsblk includes "mmc", which tells us that the device is either an SD card inserted in a card reader (i.e. not in a USB enclosure) or an eMMC storage device. This avoids the risk of a loss of power caused by inadvertent unplugging. In Slint 15 we will probably use btrfs everywhere when installing in "auto" mode, but I digress ("auto" here means that the installer partitions the drives and makes the file systems).
Two systems, each on only one partition of an external SSD: /dev/sdc3 (Slackware64-15.0 converted to Slint64-15.0, ext4) and /dev/sdc5 (Slint64-15.0 installed in auto mode in a beforehand freed space on the drive, btrfs with one volume and 4 sub-volumes)
Using btrfs with zstd compression level 3 halves the on-disk space needed by the system (same packages, with very few exceptions)
Conclusion: in "auto" mode I will use btrfs also for SD cards, eMMC drives and USB sticks. To be honest, I didn't find any factual information comparing the wear-out speed and the risk of damaging the file system using btrfs vs f2fs, but I assume that btrfs needing half as many blocks written to disk should help.
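For reference, compression like Didier describes is just a mount option; an /etc/fstab sketch (UUID is a placeholder, compression level 3 as above):

```
# /etc/fstab -- btrfs with zstd level 3 compression. Existing files are
# only compressed when rewritten (or via 'btrfs filesystem defragment
# -r -czstd <path>').
UUID=01234567-89ab-cdef-0123-456789abcdef  /  btrfs  compress=zstd:3,noatime  0  0
```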