Old 06-16-2008, 01:00 AM   #1
charrois
LQ Newbie
 
Registered: Jun 2008
Posts: 1

Rep: Reputation: 0
Internet switch redundancy for high availability cluster


Hi there. I'm looking at setting up a high availability Apache/MySQL cluster under Linux and trying to eliminate single points of failure as far as I can, while (like everyone) keeping the cost down.

I'm currently planning two front end machines (running heartbeat and likely HA-Proxy and Squid) behind our firewall, with one NIC each connected to the firewall and another to an internal network switch. They'll then query three Apache web servers on the internal network, which in turn will communicate with three machines running MySQL Cluster. Does anyone see any potential problems with that configuration so far, or have any suggestions?
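Roughly, the layout I have in mind looks like this (just a rough sketch; the names are placeholders):

Code:
                     [ firewall ]
                          |
              +-----------+-----------+
              |                       |
       [ frontend1 ]           [ frontend2 ]      heartbeat, HA-Proxy, Squid
              |                       |
              +--[ internal switch ]--+
                  |        |       |
              [ web1 ]  [ web2 ] [ web3 ]         Apache
                  |        |       |
              [ db1 ]   [ db2 ]  [ db3 ]          MySQL Cluster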

The services on the network should have reasonable redundancy, so with that configuration I'm not too concerned if any one of the 7 machines goes down. But, though less likely, I do have a concern about the internal network switch. If it were to die, I'd lose all connectivity, and hence all functionality.

Does anyone have any suggestions on the best way to harden this? I've just recently learned about Linux network bonding, but I'm still pretty green on how it all works. Would it be a good way to provide redundancy? I could add another NIC to each of the machines so that each is connected to two internal switches, and it seems that network bonding might be the software way of tying that redundancy together. But would it? And if traffic is split between the two NICs, I'm still pretty clueless as to how the receiving computer reassembles the packets. I haven't seen many examples of how bonding would be set up using two switches.

If anyone knows whether this would work, or could suggest a better way of accomplishing the same thing, I'm all ears. Normally I'd just try to figure things out on my own, but in this case the hardware we purchase depends on the strategy we settle on, so we want to make sure we buy the right gear.

Thanks for your help!

Dan
 
Old 06-16-2008, 03:54 AM   #2
acid_kewpie
Moderator
 
Registered: Jun 2001
Location: UK
Distribution: Gentoo, RHEL, Fedora, Centos
Posts: 43,417

Rep: Reputation: 1985
For a simple option, yes, a bond would be a good place to start, and it largely works the way you expect. With money to burn there are plenty of fancy ways to make the bonding really, really good, but even with dumb L2 switches you can get a good level of resilience purely from the Linux side. Just use an active-passive bonding mode (mode=1): as long as a slave has link to its switch (and *only* to the switch; remember there is still room for other issues to arise in end-to-end connectivity), the primary NIC is used, and upon failure the secondary one kicks in. Personally I would alternate actives and passives between the two switches rather than let a purely secondary switch gather dust and maybe even fail quietly. You could also use mode 0 for round-robin striping of packets across both switches if you want; not my personal favourite, but fine. Given that a single uplink cable between the two switches is fine for operational performance (cheap gigabit-uplink switches are readily available to make a decent enough backbone here), there shouldn't be any issues if volumes of data happen to have to cross between the two switches to reach an active interface.

http://www.linuxhorizon.ro/bonding.html
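As a rough sketch of what that looks like on a RHEL/CentOS-style box (assuming eth0 is cabled to switch A and eth1 to switch B; the addresses and interface names are just placeholders):

Code:
# /etc/modprobe.conf -- load the bonding driver in active-backup mode,
# polling link state every 100 ms and preferring eth0 when it has link
alias bond0 bonding
options bond0 mode=1 miimon=100 primary=eth0

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bond carries the IP address
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- slave config
# (ifcfg-eth1 is identical apart from DEVICE=eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

You can then cat /proc/net/bonding/bond0 to see which slave is currently active, and pull a cable to watch the failover happen.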
 
Old 06-17-2008, 01:24 PM   #3
coontie
Member
 
Registered: Jun 2003
Distribution: Fedora Core 5
Posts: 100

Rep: Reputation: 15
You will need switches that support STP (Spanning Tree Protocol). Then you tie them all together in a loop and voila.

A dumb, unmanaged switch will probably not support this, but even a lowly 29xx Cisco switch will. What you are trying to do is somewhat non-trivial, but it can be done.

Make sure that once you build this, you draw a very accurate diagram and then start pulling cables to simulate failures. That's the only way to make sure it all works.

Also, your FW is still a single point of failure. You will need two FWs with a VIP (virtual IP) failing over between them.
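Since you're already planning to run heartbeat on the front ends, the same tool can float a VIP between two firewalls. A minimal heartbeat v1 sketch (the node name, address and interface below are just placeholders):

Code:
# /etc/ha.d/haresources -- fw1 owns the virtual IP by default;
# fw2 takes it over if fw1 dies
fw1 IPaddr::192.168.1.254/24/eth0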

And I doubt you really need bonding; that eliminates a single network point of failure for one server, but you'll have multiple servers in the cluster, so your redundancy will come from the multi-server application setup rather than from a multipath network.

Also, you cannot bond 2 NICs into 2 different switches. Bonding is good for an aggregate bandwidth increase, not so much for redundancy.

Finally, don't forget about the ISP link. If that goes down, you're toast.

Last edited by coontie; 06-17-2008 at 01:33 PM.
 
Old 06-17-2008, 01:36 PM   #4
coontie
Member
 
Registered: Jun 2003
Distribution: Fedora Core 5
Posts: 100

Rep: Reputation: 15
There are also IP-layer issues to consider.

I'm counting at least three separate subnets: one between the FWs and the proxies, one between the proxies and the Apaches, and one south of the Apaches to the MySQL nodes. You can't stuff all of this into one network; it won't work.
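For example, a hypothetical addressing plan, one /24 per tier (all the ranges below are just placeholders):

Code:
192.168.10.0/24   firewall    <->  front ends (HA-Proxy / Squid)
192.168.20.0/24   front ends  <->  Apache web servers
192.168.30.0/24   Apache      <->  MySQL Cluster nodes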

Look into this also: http://www.linuxvirtualserver.org/

Last edited by coontie; 06-17-2008 at 01:41 PM.
 
Old 06-17-2008, 02:03 PM   #5
acid_kewpie
Moderator
 
Registered: Jun 2001
Location: UK
Distribution: Gentoo, RHEL, Fedora, Centos
Posts: 43,417

Rep: Reputation: 1985
You do not *need* STP capable switches at all. NIC bonding does not bridge the interfaces so at no point are you in any danger of creating a loop, so any switch will be technically fine.
 
Old 06-17-2008, 02:22 PM   #6
coontie
Member
 
Registered: Jun 2003
Distribution: Fedora Core 5
Posts: 100

Rep: Reputation: 15
Quote:
Originally Posted by acid_kewpie View Post
You do not *need* STP capable switches at all. NIC bonding does not bridge the interfaces so at no point are you in any danger of creating a loop, so any switch will be technically fine.
This goes beyond bonding.

Unless he buys a switch for every subnet, or implements some funky direct-routing voodoo (which is pretty complex, with ARP issues to deal with on top), he'll need VLAN support, so a managed switch is required.

That's one thing. The other thing is, how is he going to manage all this infrastructure? He'll need a PC that can reach all of these networks, which implies a switch with inter-VLAN routing (or a PC with N NICs for N networks, which doesn't scale).

Basically, this gets pretty complex pretty quick.

But I agree: an STP-capable switch is technically not required. It'll just tend to follow from the real-world requirements above.
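For what it's worth, the Linux side of tagged VLANs is straightforward once the switch ports are configured as trunks. A rough sketch with the 8021q module and vconfig (the VLAN ID and address are just placeholders):

Code:
# load the 802.1q tagging module
modprobe 8021q
# create a tagged sub-interface for VLAN 10 on eth0
vconfig add eth0 10
# give the new eth0.10 interface an address and bring it up
ifconfig eth0.10 192.168.10.1 netmask 255.255.255.0 up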
 
Old 06-17-2008, 03:06 PM   #7
acid_kewpie
Moderator
 
Registered: Jun 2001
Location: UK
Distribution: Gentoo, RHEL, Fedora, Centos
Posts: 43,417

Rep: Reputation: 1985
Well, it all depends on what budget and requirements you have to work with. *I* would have separate networks between each layer, but it's probably more likely they'll just sit on a flat network behind the firewall.
 
  

