Old 03-09-2015, 03:35 AM   #1
JohnLocke
Member
 
Registered: Jun 2004
Location: Denver, Colorado
Distribution: Ubuntu
Posts: 240

Rep: Reputation: 31
Help figuring out: NAS, FreeNAS, Linux "NAS", or Linux server


Hey all, not sure what forum this might belong in, but I'm trying to figure out the "right" solution for my home situation.

Currently I have a decently beefy Ubuntu 14.04 server built. It's got a ridiculous 12TB of data storage (I was going to try software RAID, but with an EFI mobo I ran into huge issues and gave up). It's got 16GB of RAM (DDR3 something or other) and an i5-3470.

It's used for just about everything in my house.
- I run MythTV from over-the-air signals, so it grabs, records, encodes, and stores loads of TV and movies (about 1TB worth currently, but it varies between 0.5TB and 3TB as I delete or don't delete things)
- I run Usenet grabbers and downloaders to get the shows that didn't record properly
- It's a Minecraft server for some friends and me
- It's a web server where I do a lot of project development for my side business of web / software development
- It's a C++ build / Java build server for the same
- It's a simple storage device for backups, photos, etc

Just recently, it had a disk problem. I'll be the first to say I knew this was coming and should have been using RAID, but as I said, I had issues getting that going. Second, I stupidly bought WD Green drives. Don't do that. They suck.

So finally, to my question:

I think that this server has become overloaded. It does a few too many things. I figure I have 4 options to update it, and I'd like to spend as little money as possible, though I don't mind spending /some/ money ($500 or less would be good). I also have another PC that will become available in May, once my thesis work on it is finished; it's a similar machine, but it only has 2TB of disk.

Option 1)

Buy a NAS device that has built-in RAID. Use that for the movie / TV storage, backups, and general storage. I'd need something that is easy to mount on other Linux computers, a Mac, and a Windows laptop. I'd be happy to buy a diskless system since I have four 3TB drives in the current server; two or three of them could be used for the NAS. The trick is, I'm having a hard time figuring out which ones would work with all those operating systems, preferably over NFS / Samba or something similar. It needs to stream HD TV files (about 20MB/s of bandwidth) across a gigabit network.
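
From the Linux side, I assume mounting any NAS that speaks NFS would just be something like this (hostname and export path invented; the Mac and Windows laptop would use SMB instead):

Code:
sudo apt-get install nfs-common
sudo mkdir -p /mnt/media
sudo mount -t nfs nas.local:/volume1/media /mnt/media

And 20MB/s is comfortably under what a gigabit link can actually carry (roughly 110MB/s real-world), so the network itself shouldn't be the bottleneck.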

Then I could turn the existing server over to the rest of the functionality ... web server, data cruncher, etc. ... and leave it with just a single 3TB drive in it (or less).

Option 2)

Build a FreeNAS system. I don't know a lot about FreeNAS yet, but it seems like a possible solution that could reuse my existing server. It wouldn't have a solid RAID solution, though, so I'd need to buy a RAID controller to make the whole thing a bit safer.

Then I turn the other PC that becomes available in May into the server that does data crunching.

Option 3)

Instead of FreeNAS, build a Linux NAS. Not sure what the best Linux distro for that is ... CentOS? Mint? I'm using Ubuntu currently, but that's largely because I do so much else on it, including using it graphically. I could leave this one headless. I'd still need a RAID controller, though, and this solution isn't vastly different from what I'm already doing. It may be the best general option, though. I could do the same as with the FreeNAS option and build a /mostly/ NAS server that also handles some of the "light" tasks like web hosting and being the Minecraft server (only lightly utilized), then turn the other computer into the data cruncher.

Option 4)

Maybe I'm already doing the "best" solution. Leave the beast alone and just replace disks as needed. Try to figure out software RAID again over time ... I might need a different mobo that doesn't require the EFI bootloader, or maybe EFI support has gotten better in the two years since I tried building it.

--------------------------

Maybe there are other options as well.

So ... thoughts? I'm interested in opinions and even in hardware suggestions for a possible NAS. I think I'd like a "real" NAS some day anyway and can happily use my own disks, but I haven't figured out which NAS works well as a streaming media server and still mounts directly on Linux / Windows (Linux does the media streaming; it would just be a backup storage location for the Windows laptop).
 
Old 03-09-2015, 08:59 PM   #2
T3RM1NVT0R
Senior Member
 
Registered: Dec 2010
Location: Internet
Distribution: Linux Mint, SLES, CentOS, Red Hat
Posts: 2,385

Rep: Reputation: 477
Nice monster machines you've got there :-)

To begin with, you have to decide whether you really want to spend money on hardware RAID or not. If you really need one, then get one, no question about it. If you ask me: if the data on the system is really critical, or something you want to preserve with redundancy, then get one.

Once you have figured that out, we can move on to the NAS stuff.

I have never used a hardware NAS, so I might not be able to give you insights on performance. However, if you are talking about a software NAS, I would rather go with a software SAN: I am talking about Openfiler. I used it when I wanted to set up a VMware test infrastructure on my test systems. I initially set it up as a NAS but didn't get the performance I was looking for; I switched the backend to Openfiler and it was pretty smooth. You can download / read about Openfiler here:

About openfiler: https://www.openfiler.com/products

Download: https://www.openfiler.com/community/download

You can use software RAID with Openfiler to provide redundancy, but obviously you can't compare software RAID with hardware RAID, so that is a decision you have to make based on the significance of the data your system holds.
 
1 member found this post helpful.
Old 03-10-2015, 01:28 PM   #3
replica9000
Senior Member
 
Registered: Jul 2006
Distribution: Debian Unstable
Posts: 1,130
Blog Entries: 2

Rep: Reputation: 260
I have software RAID running with WD green drives on an EFI board, no issues so far. I'm not booting from my software RAID though.

Hardware RAID for your NAS will be faster than software RAID. WD Red drives would also be a better choice for hardware RAID. Downside is that if something happens to the controller, your existing array might not work with a different controller.

FreeNAS is based on FreeBSD. You could probably do everything FreeNAS can do with Linux, but it would require more work on your end.
 
Old 03-10-2015, 02:05 PM   #4
enine
Senior Member
 
Registered: Nov 2003
Distribution: Slackʍɐɹǝ
Posts: 1,486
Blog Entries: 4

Rep: Reputation: 282
Quote:
Originally Posted by JohnLocke View Post
I think that this server has become overloaded. It does a few too many things.
I think this is the key: what leads you to believe your server is overloaded? Have you tracked CPU utilization / memory / swap usage over time?

I just went through this myself. I have a "server" at home running as my Samba/DLNA share, ownCloud, Drupal, etc. It was always running into swap and running very high CPU, so I bought a Raspberry Pi 2 and moved ownCloud and Drupal over there to split the load. Your list of uses doesn't look too heavy, and you have a much more powerful box than mine, so it doesn't look overloaded to me.
 
Old 03-10-2015, 02:06 PM   #5
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2142
Why not just get a little SSD for the OS and move your 12TB to a software RAID 5? The SSD will make the system faster and more responsive, and you won't need to worry about EFI and booting from the RAID directly.
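
Something along these lines would do it (device names made up; double-check them against lsblk first, since mdadm wipes the member disks):

Code:
# build a 4-disk RAID 5 out of the data drives
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# put a filesystem on it and mount it
sudo mkfs.xfs /dev/md0
sudo mkdir -p /data
sudo mount /dev/md0 /data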
 
Old 03-12-2015, 09:09 PM   #6
ron7000
Member
 
Registered: Nov 2007
Location: CT
Posts: 248

Rep: Reputation: 26
Check out synology.com and their ds###play NAS boxes. You can check pricing on Newegg.
I have their business-class 12-bay NAS boxes and they work well as simple data servers in RAID 5/6, and their software is easy to use.
I can't speak to the media streaming stuff, but I do know, just from using their DiskStation Manager (the same operating system runs on all their gear), that there are tons of apps for streaming this and that and connecting everything.
But I would not put all your eggs [drives] in one basket [NAS box]. Think about a backup on a separate file system somehow, someway.

Last edited by ron7000; 03-12-2015 at 09:11 PM.
 
Old 04-02-2015, 12:00 PM   #7
JohnLocke
Member
 
Registered: Jun 2004
Location: Denver, Colorado
Distribution: Ubuntu
Posts: 240

Original Poster
Rep: Reputation: 31
Sorry all, I guess notifications weren't turned on for this thread for me.

In any case, let me try to answer some of these questions and give an update. enine was right: I no longer think the box is overloaded. Checks of CPU / mem / iostat / top all confirm the box is barely doing anything at all most of the time. The slowness I'm seeing, I think, is one of the drives starting to fail. XFS is reporting things like this:

Apr 2 08:52:29 rama kernel: [413494.151404] ata1: EH complete
Apr 2 08:52:29 rama kernel: [413494.151444] XFS (sda2): metadata I/O error: block 0xe526eb60 ("xfs_trans_read_buf_map") error 5 numblks 16
Apr 2 08:52:29 rama kernel: [413494.151449] XFS (sda2): xfs_imap_to_bp: xfs_trans_read_buf() returned error 5.

Not all the time, but semi-frequently.
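
I'll probably confirm it from the SMART data before swapping the disk; something like this, assuming smartmontools is installed:

Code:
sudo smartctl -H /dev/sda                                     # overall health verdict
sudo smartctl -a /dev/sda | grep -i -e reallocated -e pending # bad-sector counters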

So I'm pretty sure it's going to be "rebuild the box" time. I don't know that a physical / separate NAS is required, as this box could serve as a NAS on the network pretty easily. I've got a spare 40 or 80GB SSD. The plan will be to run the OS off the SSD, make a dd backup of that OS once it's fully up and running, and run all the databases, NAS, web hosting, etc. (all the stuff that changes semi-frequently) off a software RAID of the other disks. Maybe a 3-disk RAID 5, since my current sda is having issues and that will leave me with three more 3TB drives.
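
The dd backup itself should be a one-liner, something like this (device name and destination invented; best run from a live USB so the SSD isn't mounted while it's being imaged):

Code:
sudo dd if=/dev/sdX of=/mnt/backup/os-ssd.img bs=4M   # replace sdX with the SSD's device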

I think I'd prefer hardware RAID, but looking at various RAID controllers, people seem to have all kinds of random issues with them, which just adds another layer of complexity. Not that mdadm is necessarily "simple".

This way, I keep simple NFS and Samba access between my boxes, and the big, single server box still stands without me having to explain to the wife why we need two servers in our house. She's already dubious about my "need" for three Raspberry Pi 2s.
 
Old 04-02-2015, 02:44 PM   #8
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2142
Quote:
Originally Posted by JohnLocke View Post
I think I'd prefer hardware RAID, but looking at various RAID controllers, people seem to have all kinds of random issues with them
Like what?

I use hardware RAID controllers in everything other than the "easy" RAIDs (1 or 10), because in my experience they're faster and much more stable when it comes to cross-distro compatibility, failing drives, on-the-fly rebuilds, etc. I typically stick with Adaptec, but I've used 3ware/LSI in the past (they're horrendously slow, though). My home server has an Adaptec 6405; works great.
 
Old 04-02-2015, 03:20 PM   #9
JohnLocke
Member
 
Registered: Jun 2004
Location: Denver, Colorado
Distribution: Ubuntu
Posts: 240

Original Poster
Rep: Reputation: 31
Heh, you're right. I should clarify: I'm not planning on buying a $300+ controller card for this setup. I was looking at the cheaper controller cards that can just handle 1, 0, or 10. I'd definitely still consider a card at $70 or less (I'd still need at least one more drive to get 4 drives working, so that's part of my budget too).

When looking at those cards, I see things like the HighPoint 640L, but then I also see tons of reviewers having issues getting these things working in Linux (or reporting that very few of them work in Linux at all). The ones that do work have reviews suggesting they promise more than they can deliver because they're cheap. That HighPoint, for example, sounds like it would work fine with 2 drives, but with 4 it'll have issues with disk access times since it only has two controller chips on it.

I'm happy to listen to people who've actually done it rather than random Newegg reviewers, though.
 
Old 04-02-2015, 04:43 PM   #10
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Rep: Reputation: 142
I get 450MB/s in both read and write with four 7200rpm HDDs in a sw-RAID 5 (mdadm), so with a strong CPU, performance shouldn't be a factor when deciding between HW and SW controllers.
Personally, I would use >real< HW controllers only where unexpected power surges are a concern, and only when coupled with HDDs that can handle the same thing (see "capacitors").
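
If you want to sanity-check your own array's sequential throughput, a quick-and-dirty test looks like this (path invented; it writes a 1GB scratch file, and the numbers are only indicative):

Code:
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=1024 oflag=direct   # sequential write
dd if=/mnt/raid/testfile of=/dev/null bs=1M iflag=direct              # sequential read
rm /mnt/raid/testfile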

For the rest, you have good know-how, so just decouple everything and choose appropriately. FreeNAS is just a collection of tools, which you can have with any distribution you feel most comfortable with.

Last edited by Pearlseattle; 04-02-2015 at 04:44 PM.
 
Old 04-02-2015, 05:22 PM   #11
JohnLocke
Member
 
Registered: Jun 2004
Location: Denver, Colorado
Distribution: Ubuntu
Posts: 240

Original Poster
Rep: Reputation: 31
I've got an i5-3470 in the server, so it's a pretty decently strong CPU for what it's being used for.

There aren't "surges" per se in how this box gets used, but in theory it might be possible if I were watching TV via the MythTV backend hosted on this server, plus streaming an HD movie from this server to another TV, plus unrar-ing / unpar-ing files, while friends are playing Minecraft at the same time. But that's pretty unlikely in my world.

So it currently sounds like the best idea might be putting the OS on my spare SSD, then running RAID 5 across the remaining (still working) three 3TB drives, and keeping all the configuration / databases / data storage on the RAID while the base OS sits on the SSD.
 
Old 04-05-2015, 02:35 PM   #12
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Rep: Reputation: 142
Do you think that 6TB will be enough for the future?
You could think about leaving out the SSD and installing the OS on a USB stick, plus buying an additional 3TB disk to then have a 4-HDD RAID 5 => this will give you additional performance compared to a 3-HDD RAID 5.

Additionally, it's generally a good idea to keep "small" files (a few KBs) separate from "big" files (a few dozen MBs or bigger) (again, performance) => if there is a chance that small and big files get written in a mixed fashion, it would be a good idea to create different partitions (or LVM volumes or whatever) on the RAID to store them separately => this way the small files will all be grouped together and the big files won't end up scattered all over the RAID or HDD.
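
With LVM on top of the md device, that could look roughly like this (volume names and sizes are only examples):

Code:
sudo pvcreate /dev/md0
sudo vgcreate raidvg /dev/md0
sudo lvcreate -L 200G -n small raidvg        # configs, databases, photos
sudo lvcreate -l 100%FREE -n media raidvg    # big media files
sudo mkfs.xfs /dev/raidvg/small
sudo mkfs.xfs /dev/raidvg/media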
 
Old 04-05-2015, 04:18 PM   #13
JohnLocke
Member
 
Registered: Jun 2004
Location: Denver, Colorado
Distribution: Ubuntu
Posts: 240

Original Poster
Rep: Reputation: 31
I don't know whether 6TB will be "enough" for the future, but for now it should be more than OK. I need to re-open the case and look more closely, but I have an ASRock Z77 Pro-4 M board. I can see 6 SATA ports and I'm pretty sure the board actually has 8. I need to take another look, but when I got the board no one was having much luck setting up the on-board RAID 5 or 10 it supposedly supports. Not sure if that's gotten better, or if it's better to just use Linux software RAID right out of the box.

At least with software RAID there's reporting of disk errors, which in theory I could set up to email me. A blinking light (if that even happens) on a headless server in a closet isn't going to be much use.
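
As far as I can tell, mdadm has that built in; a MAILADDR line in /etc/mdadm/mdadm.conf (address made up) plus its monitor daemon should do it:

Code:
MAILADDR me@example.com

# verify that alert mail actually goes out
sudo mdadm --monitor --scan --test --oneshot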

Also, I think that RAID setup is only for UEFI fast boot ... not sure if that's even going to work with Linux. So I guess I'm answering my own question there.

Anyway ... I've got 6 or 8 SATA ports (4 are SATA 2 and 4 are SATA 3), so no real worries about having enough. Everything I've read says there's no "real" difference between SATA 2 and 3 for a 7200rpm drive, so I'll build the RAID on SATA 2. That should be more than enough throughput, especially since most of the files there are meant to be accessed over the network anyway, not locally.

As to the big vs. small files, well, I won't have a lot of distinction between them. The movie files come with nfo and jpg files, so there's no real separating them. The Minecraft files are huge for the server and small for the config ... in theory I could separate them, but probably won't. Same with the database. Huge files, but small configs. Not much of a way around some of that.

I figure the biggest "savings" here is mostly in keeping the OS on the SSD and making the RAID its own separate "/data" partition or something. It'll be mostly reads and mostly smaller files on the OS side, and the storage I care to keep (and back up) is then isolated from changes pretty well. That should keep things moving smoothly.
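
Mounting it should just be one line in /etc/fstab, something like this (the UUID is a placeholder; the real one comes from blkid):

Code:
# /etc/fstab entry for the array
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  xfs  defaults  0  2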

If I really get the bug, I'll buy another 3TB drive and make it a 4-drive RAID 5. This time probably not a WD Green, though. That was a bad decision. I hear the Reds are actually worth buying. Maybe that's what I'll get this time.

Edit: You talked me into it. I just ordered another 3TB drive. I'll run RAID 5 and have 9TB. Plenty for quite a while, considering I'm using less than 4TB currently.

Last edited by JohnLocke; 04-05-2015 at 04:32 PM.
 
Old 04-10-2015, 12:32 PM   #14
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Rep: Reputation: 142
Quote:
I need to take another look, but when I got the board no one was having much luck setting up the on-board RAID 5 or 10 it supposedly supports. Not sure if that's gotten better, or if it's better to just use Linux software RAID right out of the box.

At least with software RAID there's reporting of disk errors, which in theory I could set up to email me. A blinking light (if that even happens) on a headless server in a closet isn't going to be much use.

Also, I think that RAID setup is only for UEFI fast boot ... not sure if that's even going to work with Linux. So I guess I'm answering my own question there.
Ugh, please don't use the motherboard HW-RAID in any case. If in the future you want to change the motherboard for any reason (it breaks down, a change of CPU, the wish for some fancy new functionality like Thunderbolt, wanting more RAM than the maximum supported, etc. ... or you just want to transfer the RAID HDDs to another box), you'll have to pull the data off and reformat (assuming the RAID controller of the new board isn't the same as the old one's).

Quote:
no "real" difference between sata 2 and 3 for a 7200 rpm drive
Agree

Quote:
Same with the database. Huge files, but small configs. Not much of a way around some of that.
Doing a reorganization of the tables from time to time should fix this.
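
With MySQL, for example, an occasional run of something like this rebuilds and defragments the tables (it locks them while running, so do it off-hours):

Code:
mysqlcheck --optimize --all-databases -u root -p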

Quote:
WD Green
Are these those drives that are supposed to vary their RPMs depending on the load? (or what's bad about them? I usually use Hitachi)

Quote:
Edit: You talked me into it. I just ordered another 3TB drive. I'll run RAID 5 and have 9TB. Plenty for quite a while, considering I'm using less than 4TB currently.
Congratulations - now you're getting serious

Now, if you're still thinking about SW-RAID, you should think about which type of SW-RAID you would like to use.
I'm currently using plain "mdadm", but if I were to set up a new RAID now, I would probably test the embedded raid5 functionality of the Btrfs filesystem.
I tried it out a long time ago, when Btrfs was still in its infancy, and it was too buggy - it's probably much more mature now.
Btrfs pros: it's supposed to be very comfortable to administer and use, especially because it would be the only layer you have to deal with (format the drives as btrfs-raid5 and you're done). Not to forget that Btrfs would allow you to do all kinds of resizing (adding/removing HDDs, shrinking/enlarging the RAID storage) in a very comfortable way.
Btrfs cons: I don't know 1) whether it performs well and 2) whether it has a working fsck for RAID => you would have to test this.
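
Creating such an array would be a one-liner along these lines (device names made up; keeping the metadata on raid1 is a common precaution while btrfs raid5 matures):

Code:
sudo mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mount /dev/sdb /mnt/raid    # mounting any member device mounts the whole array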

What do you think?
 
Old 04-10-2015, 02:13 PM   #15
JohnLocke
Member
 
Registered: Jun 2004
Location: Denver, Colorado
Distribution: Ubuntu
Posts: 240

Original Poster
Rep: Reputation: 31
The WD Greens are exactly what you think. They're supposed to vary their RPM based on load, and what I find is that they drop into low RPM far too frequently. With a multi-disk system, something is almost always "sleeping", and it takes multiple seconds to "wake up" to full access speed when I want it. So I get hit with these odd slowdowns in the middle of what would otherwise be fast processes. The new drive I got is a Hitachi as well; I've had good experiences with them in the past.

And yes, good note on the motherboard hardware raid.

I opted for Linux software RAID. I had heard of Btrfs, but my impression was that it's still too experimental for what I want. I'm looking for a server I can drop in a closet, forget about for months or a year at a time, and just keep working on all the time, both remotely and locally. That's why I hadn't touched the old one in so long, either.

I'm now running the OS off a 60GB SSD (Ubuntu Server 14.04) with an mdadm RAID 5. I'm still letting the RAID do its resync ... pretty sure that'll take another day or so to finish. After that, I can work on setting up notifications for RAID failure and test some failure states. Then I'll start looking into the fun things like moving the MySQL database onto the RAID, installing software again, and copying data again!
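
For anyone following along, watching the resync is just a matter of:

Code:
cat /proc/mdstat                  # shows rebuild progress, speed, and ETA
watch -n 60 cat /proc/mdstat      # refresh once a minute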

Last edited by JohnLocke; 04-10-2015 at 02:14 PM.
 
  

