LinuxQuestions.org (/questions/)
-   Linux - Hardware (https://www.linuxquestions.org/questions/linux-hardware-18/)
-   -   To SSD or not to SSD (https://www.linuxquestions.org/questions/linux-hardware-18/to-ssd-or-not-to-ssd-4175429640/)

jlinkels 09-29-2012 03:15 PM

To SSD or not to SSD
 
I am planning to install a small fanless Atom computer to collect data from a data acquisition system. I will query the DAS every 10 seconds and write the data to a MySQL database. The amount of data written every 10 seconds is about 256 bytes. That is about 3 million MySQL write cycles a year. Projected life: 4-6 years.

I plan to run Debian on the box. Headless, no X.

The question is: should I install an SSD or a conventional drive? A mainstream SSD of 64 GB or 128 GB provides plenty of space. It would not be a major problem to replace the drive every two years as preventive maintenance. An unpredicted failure, however, would be a major problem and would also damage my credibility.
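
For what it's worth, the collection loop will be nothing fancier than something like this (a rough sketch only; das_read(), the samples table, and the credentials are placeholders, not real code):

Code:

import time
import MySQLdb  # Debian package: python-mysqldb

def das_read():
    # Placeholder for the real DAS query; returns a ~256-byte sample.
    return "x" * 256

db = MySQLdb.connect(host="localhost", user="logger",
                     passwd="secret", db="das")  # hypothetical credentials
cur = db.cursor()

while True:
    sample = das_read()
    cur.execute("INSERT INTO samples (ts, payload) VALUES (NOW(), %s)",
                (sample,))
    db.commit()  # one ~256-byte row every 10 seconds
    time.sleep(10)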

jlinkels

Celyr 09-29-2012 03:35 PM

An unpredicted failure may happen even with a rotating drive.
Use two SSDs if noise, power usage, or speed is an issue; of course, you will need at least RAID 1. Then, if you have a UPS, you can think about caching aggressively and avoiding too many write cycles to your SSDs.

jefro 09-29-2012 03:51 PM

The enterprise-level stuff may be the best choice for this. I agree that RAID is a good choice, but only if the speeds are OK too. I would be tempted to use a hardware-based RAID array with traditional mechanical server disks. I'm not too sure about the Atom, though, unless it is server quality.

Right now you have no way to predict the quality of the system, other than the general rule of thumb that enterprise-level gear tends to outperform consumer products.

H_TeXMeX_H 09-30-2012 02:26 AM

Well, an SSD is not what I would use... but why do you want to use one? Do you need high throughput or something? From your example, you don't.

jlinkels 09-30-2012 06:55 AM

Thanks all for your insights.

The reason I wanted to use one is that an SSD doesn't have moving parts. But it seems that with this number of write actions, the mechanical wear on a conventional hard disk is less than the memory-cell wear on an SSD.

jlinkels

TobiSGD 09-30-2012 07:32 AM

If you have a 256-byte write action every ten seconds you get 8640 writes a day. With a data size of 256 bytes that is 2,211,840 bytes, or 2160 KB, or about 2.11 MB a day. If you keep in mind that the usual flash erase block size is no larger than 512 KB, and that due to the flash technique used the whole block has to be rewritten, you get a total write volume of 4,423,680 KB, or 4320 MB, or about 4.2 GB a day.
Intel states for its SSDs that you can safely expect a lifetime of 5 years if you write 5 GB a day, so wear-out is absolutely not something you should fear here. The absence of mechanical parts is the main advantage. Since you want to run a fanless system, and I also recommend RAID 1 for fault tolerance, I would even recommend going for the slower (and cheaper) SATA 2 SSDs, since they produce much less heat than SATA 3 SSDs or mechanical drives and are much more tolerant of higher temperatures than mechanical drives.
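
If you want to check the arithmetic or plug in your own numbers, here it is as a few lines of Python (the 512 KB erase block is the assumption from above):

Code:

# Worst-case daily write volume, assuming every 256-byte insert
# forces a rewrite of one full 512 KB flash erase block.
WRITES_PER_DAY = 24 * 60 * 60 // 10   # one write every 10 s -> 8640
PAYLOAD = 256                         # bytes per sample
ERASE_BLOCK = 512 * 1024              # assumed erase block size in bytes

raw = WRITES_PER_DAY * PAYLOAD        # 2,211,840 bytes
worst = WRITES_PER_DAY * ERASE_BLOCK  # 4,529,848,320 bytes

print("raw data:   %.2f MB/day" % (raw / 1024.0 / 1024.0))  # ~2.11
print("worst case: %.2f GB/day" % (worst / 1024.0 ** 3))    # ~4.22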

jlinkels 09-30-2012 10:13 AM

TobiSGD, that looks good, but I am writing this data to a MySQL database. All the values are stored in one single table, which means that by the end of the year I have 800+ MB of files holding the table contents. I have no idea how much of a file is rewritten when I add a record to such a table. Maybe it is done very smartly, and the new data is simply appended to the file without large parts of it being rewritten?

jlinkels

TobiSGD 09-30-2012 10:37 AM

It would be very inefficient, especially for very large databases, if large parts of the file had to be rewritten just to add an entry, so I doubt that this happens.
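
If the write pattern still worries you, you can buffer a few samples in RAM and flush them as one multi-row INSERT, so the table file is touched once per batch instead of once per sample. A sketch, using the same hypothetical samples table as the loop earlier in the thread:

Code:

import time
from datetime import datetime
import MySQLdb  # Debian package: python-mysqldb

def das_read():
    # Placeholder for the real DAS query; returns a ~256-byte sample.
    return "x" * 256

db = MySQLdb.connect(host="localhost", user="logger",
                     passwd="secret", db="das")  # hypothetical credentials
cur = db.cursor()

BATCH = 30  # flush every 5 minutes at one sample per 10 seconds
rows = []
while True:
    # Take the timestamp at sample time, not at flush time.
    rows.append((datetime.now(), das_read()))
    if len(rows) >= BATCH:
        cur.executemany(
            "INSERT INTO samples (ts, payload) VALUES (%s, %s)", rows)
        db.commit()  # one burst of writes instead of 30 separate ones
        rows = []
    time.sleep(10)

The trade-off is that up to BATCH samples sit only in RAM and are lost on a power failure, which is where the UPS suggestion above comes in.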

masterclassic 10-01-2012 02:14 PM

Since SSD technology is quite new, there is much less experience with its behaviour compared to conventional rotating hard drives. So I would feel more confident with a conventional solution, taking into account the critical character of the application. One way to reduce power consumption in such a system is to underclock the processor.
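
On Linux, for example, you can pin the CPU at its lowest frequency by switching the cpufreq governor through sysfs. A minimal sketch (assumes the cpufreq driver is loaded and the script runs as root):

Code:

import glob

# Put every core into the "powersave" cpufreq governor, which holds it
# at its lowest available frequency.
for path in glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
    with open(path, "w") as f:
        f.write("powersave")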

However, there can be many reasons for a failure of the system, not just a hard drive failure: power supply issues, RAM failure, mechanical damage to the unit, problems with sensors and connection cables, unwanted vibrations, nearby sources of heat and humidity, and surely others too. So I mean that you have to think about other potential problems as well. Perhaps you could accept some small interruption in case of a severe problem, as long as the safety of the previously saved data is assured.

I have in mind the case of the server at my job, an HP small/medium enterprise server with 6 hard drives:
2 SAS 15,000 rpm drives in RAID 1 for the system and 4 SATA 2 7200 rpm enterprise-level drives in RAID 5 (3 working drives and 1 spare drive). The system has run for 3.5 years, almost day and night. Last summer one of the SATA hard drives failed and the spare drive went into use automatically. We replaced the faulty drive (within its 3-year warranty) and no data loss occurred; I didn't even stop the server.

However, I have in mind another very recent case: one of the hard drives in a mirror array failed. During the rebuild of the mirror on a new drive, data corruption was noticed and the second hard drive of the mirror failed too. The only solution was to reinstall the system and restore data from backup.

TobiSGD 10-01-2012 02:18 PM

Quote:

Originally Posted by masterclassic (Post 4794247)
Since the SSD technology is quite new, there is much less experience on its behaviour compared to the conventional rotating hard drives.

Not really:
Quote:

In 1995, M-Systems introduced flash-based solid-state drives.[18] They had the advantage of not requiring batteries to maintain the data in the memory (required by the prior volatile memory systems), but were not as fast as the DRAM-based solutions.[19] Since then, SSDs have been used successfully as HDD replacements by the military and aerospace industries, as well as for other mission-critical applications. These applications require the exceptional mean time between failures (MTBF) rates that solid-state drives achieve, by virtue of their ability to withstand extreme shock, vibration and temperature ranges.[20]
http://en.wikipedia.org/wiki/Ssd#Flash-based_SSDs

masterclassic 10-01-2012 02:32 PM

Thank you TobiSGD, it seems that I am too conservative!
However, we have to admit that the overall quality and security of such a system isn't a question of good components only ;)

TobiSGD 10-01-2012 02:37 PM

Quote:

Originally Posted by masterclassic (Post 4794261)
Thank you TobiSGD, it seems that I am too conservative!
However, we have to admit that the overall quality and security of such a system isn't a question of good components only ;)

Of course not, but they are the base one has to build on.

jefro 10-01-2012 03:34 PM

Many companies offer two versions of their products: one aimed at home users, the other at commercial or enterprise-level users. They do make high-quality SSDs for enterprise use and harsh conditions.

AllgoodGuy 10-02-2012 01:53 AM

Have you priced these?
 
The new TB drives run as high as 20K USD. I am sure they are quite worth it for trusted-computing requirements, but definitely pricey. I have seen the 256 GB disks for a few thousand USD. Honestly, I love my SSDs for all of the reasons above: heat and humidity tolerance, noise, power consumption, speed, and overall more efficient data storage. I believe you might be overly cautious about the failure of these devices when you have so many fault-tolerance options available to you. I would have zero worries about using them in a RAID 1 or 5 configuration and backing up the data per your normal procedures.

onebuck 10-02-2012 07:36 AM

Member Response
 
Hi,
Quote:

Originally Posted by jlinkels (Post 4792744)
I am planning to install a small fanless Atom computer to collect data from a data acquisition system. I will query the DAS every 10 seconds and write the data to a MySQL database. The amount of data written every 10 seconds is about 256 bytes. That is about 3 million MySQL write cycles a year. Projected life: 4-6 years.

I plan to run Debian on the box. Headless, no X.

The question is: should I install an SSD or a conventional drive? A mainstream SSD of 64 GB or 128 GB provides plenty of space. It would not be a major problem to replace the drive every two years as preventive maintenance. An unpredicted failure, however, would be a major problem and would also damage my credibility.

jlinkels

Your project sounds doable without too much effort. An SSD on the Atom would be a good fit for getting the DAS information into your DB, well within the life cycle of most consumer SSDs.

Not knowing the environment the equipment will be in makes it hard to recommend specifics. If the power is not clean then I would certainly look at power conditioning or use a UPS. Lab or industrial environment? Is the experiment local or remote (isolated)?

What is the term of the acquisition period(s)? With 10-second collections of a 256 B data set, when will it be necessary to offload the full set from this controller (your Atom)? Will you offload the DB, or data from the DB, within that 4-6 year life? Live transfers at some point, or only at shutdown of the experiment?

Which DAS? Most modern DAS units already allow you to program them, collect/transfer for a period, and then offload actively (live) to a service. Is this a commercial DAS or homebrew? It sounds as if the DAS is limited, or you are not using it to its full functionality. There is nothing wrong with using an intermediate controller, but be sure to manage the DAS's temporary/intermediate storage and know when overflow will occur so that no data is lost.

