LinuxQuestions.org
Old 12-20-2022, 06:54 PM   #1
rnturn
Senior Member
 
Registered: Jan 2003
Location: Illinois (SW Chicago 'burbs)
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,818

Rep: Reputation: 550
WD Gold SATA Drive User Experience?


I've been avoiding WD drives for the last few years because of the data formatting and performance scandal but, for several years, I was using their Red disks a fair amount as members of md devices configured as RAID1.

I'm now in the market for some replacements for some pretty old 500GB disks and I am wondering if the WD Gold 1TB drives would be good replacements (the price is pretty nice). The systems I'd be using these in would, again, be running them as RAID1 boot devices.
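
For context, these are plain mdadm mirrors. A minimal sketch of the kind of setup I mean (the md device and member partitions below are placeholders):

Code:
# Create a two-member RAID1 mirror from two partitions:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# Watch the initial resync, then confirm both members are active:
cat /proc/mdstat
mdadm --detail /dev/md0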

Not having any experience with the WD Golds, I'm hoping someone out there has some experience -- good or bad -- using these in their Linux systems. I'm all ears.

Good devices for RAID1? Or stay far, far, away?

TIA...
 
Old 12-21-2022, 12:06 AM   #2
ArchDoru
LQ Newbie
 
Registered: Dec 2022
Posts: 4

Rep: Reputation: 0
Quote:
Originally Posted by rnturn View Post
I've been avoiding WD drives for the last few years because of the data formatting and performance scandal but, for several years, I was using their Red disks a fair amount as members of md devices configured as RAID1.

I'm now in the market for some replacements for some pretty old 500GB disks and I am wondering if the WD Gold 1TB drives would be good replacements (the price is pretty nice). The systems I'd be using these in would, again, be running them as RAID1 boot devices.

Not having any experience with the WD Golds, I'm hoping someone out there has some experience -- good or bad -- using these in their Linux systems. I'm all ears.

Good devices for RAID1? Or stay far, far, away?

TIA...
Boot device on a platter spinning Hard Drive???
That is so 20 years ago...

A SATA SSD is the minimum nowadays, with M.2 NVMe of course being the best option.

The only HDDs I use nowadays are large-capacity 8 TB to 14 TB drives for file storage. Mostly external too; no need to stress my power supply or have the constant noise of those old drives spinning up and down in my ears all day long. Especially WD, which are well known for a piece-of-crap power-saving feature that always tries to spin them down every 5 to 10 minutes...
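
(For what it's worth, hdparm can usually tame that spin-down behavior; a sketch, with the caveat that not every drive honors APM and /dev/sdX is a placeholder:)

Code:
# Disable Advanced Power Management so the drive stops aggressively
# parking/spinning down (not all drives honor this):
hdparm -B 255 /dev/sdX
# Disable the standby (spin-down) timeout as well:
hdparm -S 0 /dev/sdX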
All my laptops and desktop PCs have either M.2 NVMe, if so equipped, or SATA SSDs in the older models. That is for the day-to-day system and user files (root and home drives).
Even my oldest desktop PC, built in 2014, boots from a 250 GB M.2 NVMe via a PCIe add-on card and runs the home folder from a 2 TB SSD.
It does still have an internal 8 TB HDD, but that too is used for storage only...

As far as WD goes, the only ones I've ever used are the 2.5-inch portable USB storage ones; I think they call them "My Passport" or something like that...
Never had any issues with those, and I have like 7 or 8 of them.

Last edited by ArchDoru; 12-21-2022 at 12:09 AM.
 
Old 12-21-2022, 07:53 AM   #3
jmgibson1981
Senior Member
 
Registered: Jun 2015
Location: Tucson, AZ USA
Distribution: Debian
Posts: 1,151

Rep: Reputation: 393
I bought an older used server from a guy on Craigslist. It has a 1TB Gold in it. It's been spinning away fine for 7+ years now with nary a hitch. It probably wasn't new when the previous owner put it in the machine. No complaints thus far. I can't speak to the noise of the drive itself, as this thing sounds like a jet while it's running anyway. I keep it in my outbuilding with a point-to-point wireless bridge connecting it to the rest of the LAN.
 
Old 12-21-2022, 09:28 AM   #4
rnturn
Senior Member
 
Registered: Jan 2003
Location: Illinois (SW Chicago 'burbs)
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,818

Original Poster
Rep: Reputation: 550
Quote:
Originally Posted by ArchDoru View Post
Boot device on a platter spinning Hard Drive???
That is so 20 years ago...

A SATA SSD is the minimum nowadays, with M.2 NVMe of course being the best option.
Maybe, but I can't be sure that some of the motley assortment of older systems I'm using will even support SSDs.
 
Old 12-21-2022, 09:29 AM   #5
obobskivich
Member
 
Registered: Jun 2020
Posts: 596

Rep: Reputation: Disabled
Quote:
Originally Posted by rnturn View Post
I've been avoiding WD drives for the last few years because of the data formatting and performance scandal but, for several years, I was using their Red disks a fair amount as members of md devices configured as RAID1.

I'm now in the market for some replacements for some pretty old 500GB disks and I am wondering if the WD Gold 1TB drives would be good replacements (the price is pretty nice). The systems I'd be using these in would, again, be running them as RAID1 boot devices.

Not having any experience with the WD Golds, I'm hoping someone out there has some experience -- good or bad -- using these in their Linux systems. I'm all ears.

Good devices for RAID1? Or stay far, far, away?

TIA...
What 'scandal' specifically are you referencing? (did I miss something? ha!)

Anyway, as far as I understand it, 'Gold' is part of WD's newer marketing language; this line was previously the 'RE' series of enterprise drives. (That doesn't mean a new one you buy in 2022 has the same guts as one you would've bought in 2012, just that historically these were the enterprise SATA drives.) They likely cost a pretty penny on a per-GB basis versus the standard client drives (which used to be called 'Caviar' and are now called things like 'Blue' or 'Green'), largely for firmware features that don't make sense in a client system (e.g. TLER).

For RAID1 you shouldn't need such enterprise-oriented features; they mainly help with parity RAID (e.g. RAID5 or RAID6) on a hardware controller, where TLER is the 'big one'. I've never seen a standard client drive have problems with RAID1, except for hybrid drives/SSHDs, which do a lot of weird stuff with caching, and manufacturers uniformly tell you not to put them in RAID as a result. (I was curious about this a few years ago and put a pair of Seagates in RAID0 and RAID1 with mdadm just to try it; spoiler alert: the manufacturers are right, don't put them in RAID. I'm also not sure anyone is still making SSHDs these days.)

So in short: they're probably fine as brand-new drives (as any brand-new drive would be), but unless you're getting them for a song, any client-focused drive (e.g. WD Blue, Seagate Barracuda, Toshiba L or P) will be just as fine for single-drive or RAID1 use.
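
(If you want to check whether a drive you already have exposes the TLER knob, smartmontools can query it; a sketch, assuming smartctl is installed and /dev/sdX stands in for the drive:)

Code:
# Query SCT Error Recovery Control (the TLER setting); enterprise drives
# usually report values, client drives often report it as unsupported:
smartctl -l scterc /dev/sdX
# Set 7-second read/write recovery limits on drives that allow it
# (on many drives this resets at power cycle):
smartctl -l scterc,70,70 /dev/sdX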


Not to put too fine a point on the price thing, and I don't know where you are in the world, but looking at Amazon in the US, a 1TB Gold is around $83 each; a Seagate Barracuda 1TB is around $46, a WD Blue 1TB around $40, and a Toshiba 1TB around $50. That's a massive price premium, and at $160+ for a pair you could very easily make the move to SSDs, a more complex (or much higher-capacity) RAID array, or some other solution. Basically: I think there are better ways to spend $160+ to get 1TB of usable storage. Again, this may not reflect your location, pricing, or availability; I'm just trying to put some actual numbers to the discussion for context. Also note that the 2TB model of most of the drives listed here is around the same price, and at $80+ you can do 4TB drives (including more enterprise/RAID-friendly drives like Seagate IronWolf, Toshiba X, or WD Red). Just food for thought.


Quote:
Originally Posted by ArchDoru View Post
Boot device on a platter spinning Hard Drive???
That is so 20 years ago...

A SATA SSD is the minimum nowadays, with M.2 NVMe of course being the best option.
Hyperbolic much?

Quote:
Originally Posted by rnturn View Post
Maybe, but I can't be sure that some of the motley assortment of older systems I'm using will even support SSDs.
'They' make both SATA and IDE/PATA SSDs. The SATA ones can be quite good; the PATA ones are usually more expensive per-GB than most folks would like (they're usually targeted at industrial/embedded applications where 'who cares about vibration' is the primary selling point), but it's likely you can find something compatible. (I'm guessing you aren't using something with 1TB+ of storage that's also so old it doesn't support SATA; that would actually be a 20-year-old system.)

Last edited by obobskivich; 12-21-2022 at 09:34 AM.
 
1 member found this post helpful.
Old 12-21-2022, 10:49 AM   #6
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1484
One thing to look at when buying a new spinning-platter HDD is the recording technology. If it is the older and extremely reliable CMR (conventional magnetic recording) tech, it should be good. If it is the newer SMR (shingled magnetic recording) tech, run away as fast as you can. SMR drives from all manufacturers take an extreme performance penalty on writes once the first layer is mostly full.

I have found that most drives sold as NAS or enterprise server drives are still CMR and, though more expensive for the same capacity, they tend to last longer and be more compatible overall for my uses. I totally avoid the WD Blue and Seagate Barracuda series, among others.

Looking at the spec data for the WD Gold drives, I found that WD states they are CMR.
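
If you want to check a drive already in hand: drive-managed SMR usually can't be detected from software, so the practical approach is pulling the exact model number and matching it against the manufacturer's published CMR/SMR lists. A sketch (assumes smartmontools is installed; /dev/sdX is a placeholder):

Code:
# Pull the exact model/firmware to match against WD's or Seagate's
# published CMR/SMR lists:
smartctl -i /dev/sdX | grep -E 'Model|Firmware'
# Host-aware/host-managed SMR shows up as a zoned block device; note
# that "none" here does NOT rule out drive-managed SMR:
cat /sys/block/sdX/queue/zoned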

Last edited by computersavvy; 12-21-2022 at 10:59 AM.
 
1 member found this post helpful.
Old 12-21-2022, 12:25 PM   #7
obobskivich
Member
 
Registered: Jun 2020
Posts: 596

Rep: Reputation: Disabled
Quote:
Originally Posted by computersavvy View Post
One thing to look at when buying a new spinning-platter HDD is the recording technology. If it is the older and extremely reliable CMR (conventional magnetic recording) tech, it should be good. If it is the newer SMR (shingled magnetic recording) tech, run away as fast as you can. SMR drives from all manufacturers take an extreme performance penalty on writes once the first layer is mostly full.

I have found that most drives sold as NAS or enterprise server drives are still CMR and, though more expensive for the same capacity, they tend to last longer and be more compatible overall for my uses. I totally avoid the WD Blue and Seagate Barracuda series, among others.

Looking at the spec data for the WD Gold drives, I found that WD states they are CMR.
I keep hearing this 'SMR is the worst thing ever' line repeated ad nauseam. Personally I've experienced no such issues with them in regular use. I wouldn't put them in a parity RAID array, but their documentation advises against that anyway (they also lack TLER, which is another contraindication for parity RAID). The claims of 56k-esque speeds are exaggerated in regular use, as best as I can tell. Basically SMR drives are kind of like TLC/QLC SSDs: they need time between large writes to do background garbage collection and physically move data around, so if you hammer them 24x7 (or put them in parity RAID) the performance will suffer, but for regular usage that doesn't involve huge amounts of writes it should never be an issue.

That said, due to the non-stop brigading about 'SMR' and 'CMR,' most manufacturers now publish labels on their drives. It should be noted that neither acronym is that specific; saying 'CMR is the old reliable way all drives have been' is kind of ludicrous - is it PMR? HAMR? There's quite a lot of complexity being hidden behind a label that marketers have figured out people want to see. Overall, IME this is a nothing-burger unless you need extreme amounts of regular writes or intend to use parity RAID.
 
Old 12-21-2022, 02:19 PM   #8
rclark
Member
 
Registered: Jul 2008
Location: Montana USA
Distribution: KUbuntu, Fedora (KDE), PI OS
Posts: 496

Rep: Reputation: 182
Quote:
some of the motley assortment of older systems I'm using will even support SSDs.
If they support the SATA interface, they should support SATA SSDs. I've had some 'old' laptops and upgraded the HDD to an SSD with no problem. The only thing I use HDDs for is backups. All OS drives are SSD, since a 500GB SSD (my minimum) is dirt cheap now and is overkill for an OS drive. The data drive depends on how much you need; if more than 1TB, an HDD is much more economical... just slower.

I buy the WD Red drives (8TB is the largest I own; the others are 4TB) and haven't had any problems with them other than eventually wearing out. Seagate drives are the ones I always seem to run into problems with in a relatively 'short' time period (a year or two). I've never bought a WD Gold drive.

Last edited by rclark; 12-21-2022 at 02:22 PM.
 
Old 12-21-2022, 05:48 PM   #9
Arnulf
Member
 
Registered: Jan 2022
Location: Hanover, Germany
Distribution: Slackware
Posts: 274

Rep: Reputation: 89
Quote:
Originally Posted by rnturn View Post
Maybe, but I can't be sure that some of the motley assortment of older systems I'm using will even support SSDs.

If these systems contain SATA controllers you can expect the following (a quick check for what a board actually has is shown below):
  • SATA controllers with AHCI support work with SSDs. AHCI should be enabled in the BIOS/UEFI.
  • SATA 3.0 Gb/s controllers without AHCI support (e.g. later nForce, later Promise) support SSDs.
  • Some SATA 1.5 Gb/s controllers (e.g. earlier nForce, earlier Promise, SiI3112A) may support SSDs. Individual incompatibilities may occur.
  • VIA SATA 1.5 Gb/s controllers don't support SSDs in many cases!
If a system only contains PATA controllers, a CF card in a CF-to-PATA adapter may be an alternative to an expensive PATA SSD. This solution isn't suitable for Windows systems due to the excessive write load caused by Windows.
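
A minimal way to check which SATA controller a board has and whether the kernel attached the AHCI driver (sketch; assumes the pciutils package is installed):

Code:
# List storage controllers; AHCI-capable ones typically show it in the
# device name or class:
lspci | grep -i -E 'sata|ide|raid'
# Confirm the kernel actually brought the controller up via AHCI:
dmesg | grep -i ahci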

Last edited by Arnulf; 12-22-2022 at 01:47 PM.
 
1 member found this post helpful.
Old 12-22-2022, 11:49 AM   #10
rnturn
Senior Member
 
Registered: Jan 2003
Location: Illinois (SW Chicago 'burbs)
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,818

Original Poster
Rep: Reputation: 550
Quote:
Originally Posted by jmgibson1981 View Post
I bought an older used server from a guy on Craigslist. It has a 1TB Gold in it. It's been spinning away fine for 7+ years now with nary a hitch. It probably wasn't new when the previous owner put it in the machine. No complaints thus far. I can't speak to the noise of the drive itself, as this thing sounds like a jet while it's running anyway. I keep it in my outbuilding with a point-to-point wireless bridge connecting it to the rest of the LAN.
So... they seem to run "forever" but are noisy.

Hmm... I've recently read about the noise of the Golds. My understanding was that they were noisy mainly during initial spin-up and afterwards only when the heads were seeking---a clicking sound that I wouldn't usually find objectionable. Sounding like a jet would be a deal breaker as I no longer have a basement in which to run the servers. Seems like I should start looking closely at the specs for a drive's sound levels, eh?
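
A side note for my own reference: older drives exposed an Automatic Acoustic Management knob that hdparm can query, though from what I've read most current drives have dropped AAM, so the spec sheet's idle/seek dBA figures may be the only guide. A sketch (/dev/sdX is a placeholder):

Code:
# Query Automatic Acoustic Management, if the drive still supports it:
hdparm -M /dev/sdX
# 128 = quietest seeks, 254 = fastest/loudest, on drives that honor it:
hdparm -M 128 /dev/sdX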

Thanks for the feedback.
 
Old 12-22-2022, 12:08 PM   #11
rnturn
Senior Member
 
Registered: Jan 2003
Location: Illinois (SW Chicago 'burbs)
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,818

Original Poster
Rep: Reputation: 550
Quote:
Originally Posted by obobskivich View Post
What 'scandal' specifically are you referencing? (did I miss something? ha!)
The so-called shingled storage format (SMR) that turned out to be a performance dud for many.

Quote:
I've never seen a standard client drive have problems with RAID1, except for hybrid drives/SSHDs (which do a lot of weird stuff with caching and uniformly manufacturers will tell you not to put them in RAID as a result - I was curious about this a few years ago and put a pair of Seagates in RAID0 and RAID1 with mdadm just to try it, spoiler alert: the manufacturers are right, don't put them in RAID - I'm also not sure if anyone is still making SSHDs these days).

[snip]

(I'm guessing you aren't using something with 1TB+ of storage that's also so old it doesn't support SATA - that would actually be a 20 year old system ).
I never said anything about them not supporting SATA. Most of my systems are running nothing BUT SATA. I have one motherboard with a SATA port that is, supposedly, the only one I can use for a solid-state drive---I figured I'd stay away from that for now. Maybe for my laptop, though boot times aren't the problem with that machine (aging CPU and limited RAM).
 
Old 12-22-2022, 12:20 PM   #12
rnturn
Senior Member
 
Registered: Jan 2003
Location: Illinois (SW Chicago 'burbs)
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,818

Original Poster
Rep: Reputation: 550
Quote:
Originally Posted by rclark View Post
I buy the RED WD drives (8TB is the largest I own, others are 4TB) and haven't had any problems with them other than eventually wearing out. Seagate drives are the ones I seem to have always run into problems with in a relatively 'short' time period (a year or two). I've never bought a Gold WD drive.
Interesting. I had been using the WD Red drives until the shingle-format fracas -- the guy at the local computer store reminded me to check the actual part number to tell whether a given drive was or wasn't "shingled" -- and switched to IronWolfs for most things. No failures in the time I've been using them. I stayed away from the WD "colored" drives after a wasted day trying to get a pair of the Black drives to initialize as a RAID1. I hadn't heard about the Gold drives until quite recently, hence the request for user experience.

I appreciate the feedback.
 
Old 12-22-2022, 07:19 PM   #13
obobskivich
Member
 
Registered: Jun 2020
Posts: 596

Rep: Reputation: Disabled
Quote:
Originally Posted by rnturn View Post
The so-called shingle storage format that turned out to be a dud for performance for many.
So roughly what I understand of it is this: SMR (and the other evolutions beyond how mechanical drives were built 20-30 years ago, like HAMR, PMR, etc.) is more or less necessary if we want capacity (or more accurately, density) to keep increasing, but the way SMR works creates significant latency if you're thrashing a full drive (because it has to rewrite several overlapping tracks at once). In regular use, as in you aren't writing out the capacity of the drive multiple times a day/week/month (or attempting to), you will likely never encounter this - just like with an SSD - because there's a lot of smarts in the firmware hiding all of this, using the 'unused' parts of the drive as it shuffles data around.

Where it tends to bite people (and I'm not trying to bash anyone over the head here) is when they throw it into some sort of parity mode (e.g. zfs, RAID5, RAID6) or have unrealistic expectations about workload. Remember: most hard drives are only rated for something like 180TB/yr of workload, and even higher-end ones usually only 300. For a 1TB drive, 180TB/yr is 'a lot' of total drive writes (around a complete write every 2 days), but at 16-18TB that's a different story (maybe 10-12 complete writes a year).

Years ago it was a similar story with variable-spindle-RPM drives: a loud chorus of 'this is a performance dud' rose up (and remember, usually only the unhappy bother to consistently leave reviews), but if you dug deeper, in most cases it was people trying to cut corners on a RAID array, throwing them into zfs, or otherwise doing something the specs counter-indicated. For folks whose experience was the 'very standard' hard drives of a number of years ago, which weren't so segmented by manufacturers (and for which the I in RAID still meant something), this all came as a shock.

Depending on what you're specifically doing, SMR may or may not be suitable, but I wouldn't look at that single variable and say 'any drive with this is bad.' Also bear in mind that, just as with SSDs, the firmware is not static, so what was true 2-3-10 generations ago may not hold today as programmers learn from past iterations. I remember when one of the first SMR drives came out, the Seagate Archive series, and its performance was pretty abysmal for more or less any writes, but I've seen modern WD Blues sustain 100-150MB/s into the multi-TB realm - depending on your use case that may or may not be fast enough. I think an SSD would be an easier answer to this question, especially as a system drive, but for 'storage' I would probably just get whatever is cheapest that offers the capacity and compatibility you need.

Quote:
I never said anything about them not supporting SATA. Most of my systems are running nothing BUT SATA. I have one motherboard with a SATA port that is, supposedly, the only one I can use for a solid-state drive---I figured I'd stay away from that for now. Maybe for my laptop, though boot times aren't the problem with that machine (aging CPU and limited RAM).
That sounds fishy - do you have more specifications on that system? Beyond boot times, random seek performance improves significantly with an SSD and access time goes way down, which helps various applications and use cases (in other areas you won't notice any change). At the capacities you're talking about, SATA SSDs probably make more sense in terms of price/performance, and you'd end-run a lot of the performance (and all of the noise) concerns too.

If you go this route, I'd suggest a model that has DRAM and a hefty specified write endurance (Crucial MX500 and ADATA SU800 are good examples, but there are loads of options out there; unlike mechanical drives, it isn't a duopoly). One brand I'd avoid, for Linux systems at least, is Samsung: many of their drives use Samsung-specific controllers and have firmware bugs related to TRIM that never seem to be fully resolved (I've seen relatively recent 870 and 970 drives exhibit it). Fortunately the era of Samsung being head and shoulders above the competition is largely past, so at this point it's more of a brand tax than anything else.
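
If the TRIM situation worries you on whatever model you pick, it's easy to see what a drive advertises and to trim manually; a quick sketch (/dev/sdX and the mountpoint are placeholders):

Code:
# Non-zero DISC-GRAN/DISC-MAX means the device advertises TRIM/discard:
lsblk --discard /dev/sdX
# One-off trim of a mounted filesystem (most distros also ship a
# periodic fstrim.timer):
fstrim -v /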
 
Old 12-22-2022, 09:05 PM   #14
rnturn
Senior Member
 
Registered: Jan 2003
Location: Illinois (SW Chicago 'burbs)
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,818

Original Poster
Rep: Reputation: 550
Quote:
Originally Posted by obobskivich View Post
So roughly what I understand of it is this: SMR (and the other evolutions beyond how mechanical drives were built 20-30 years ago, like HAMR, PMR, etc.) is more or less necessary if we want capacity (or more accurately, density) to keep increasing, but the way SMR works creates significant latency if you're thrashing a full drive (because it has to rewrite several overlapping tracks at once).
If memory serves, what upset some is that it was being introduced into drives that were advertised as appropriate for NAS setups---where a RAID[56] configuration wouldn't be unusual. I had a conversation with a sales guy at the time and we picked several "Red" drives off the shelf; some had SMR part numbers, some didn't. I simply shrugged and bought the IronWolfs.

Quote:
Beyond boot times, random seek performance will improve significantly, and access time will go way down, which improves performance for various applications and use-cases (in other areas, you won't notice any changes). At the capacities you're talking about, SATA SSDs probably make more sense in terms of price/performance, and you will probably end run a lot of performance (and all noise) concerns too.
Boot times aren't much of an issue. (If I can't tolerate ~30-~45 sec boot times...) SSDs? Maybe next hardware refresh.
 
Old 12-23-2022, 02:36 AM   #15
jmgibson1981
Senior Member
 
Registered: Jun 2015
Location: Tucson, AZ USA
Distribution: Debian
Posts: 1,151

Rep: Reputation: 393
Quote:
Originally Posted by rnturn View Post
So... they seem to run "forever" but are noisy.

Hmm... I've recently read about the noise of the Golds. My understanding was that they were noisy mainly during initial spin-up and afterwards only when the heads were seeking---a clicking sound that I wouldn't usually find objectionable. Sounding like a jet would be a deal breaker as I no longer have a basement in which to run the servers. Seems like I should start looking closely at the specs for a drive's sound levels, eh?

Thanks for the feedback.
I should have specified: it's the server that's loud (the fan), at least loud enough that I don't hear the mechanical drives at all. I can't speak to the noise of the drive itself.
 
  

