Internet switch redundancy for high availability cluster
Hi there. I'm looking at setting up a high-availability Apache/MySQL cluster under Linux and trying to eliminate single points of failure as much as possible, yet (like everyone) trying to do it as inexpensively as possible.
I'm currently planning two front end machines (running heartbeat and likely HA-Proxy and Squid) behind our firewall with one NIC each connected to the firewall and another to an internal network switch. They'll then query three Apache web servers on the internal network, which in turn will communicate with three machines running MySQL cluster. Anyone see any potential problems or suggestions with that configuration so far?
The services on the network should have reasonable redundancy, so with that configuration I'm not too concerned if any one of the 7 machines goes down. But, though less likely, I do have a concern about the internal network switch. If it were to die, I'd lose all connectivity, and hence all functionality.
Does anyone have any suggestions on the best way to harden this? I've just recently learned about Linux network bonding, but I'm still pretty green on how it all works. Would this be a good way to provide redundancy? I could add another NIC to each of the machines, so that they are each connected to two internal switches, and it seems that network bonding might be the software way of tying together the redundancy I'd need. But would it? And if traffic is split between the two NICs, I'm still pretty clueless as to how the receiving computer reassembles the packets. I haven't seen many examples of how bonding would be set up using two switches.
If anyone knows whether this would work, or could suggest a better way of accomplishing the same thing, I'm all ears. Normally I'd just try to figure things out on my own, but in this case the hardware we purchase depends on the strategy we plan to use, so we want to make sure we purchase the right hardware.
For a simple option, yes, a bond would be a good starting point, and it works largely as you'd expect. With money to burn, there are plenty of fancy ways to make the bonding really, really good, but even with dumb L2 switches you can get a good level of resilience purely from the Linux side. Just use the active-backup bonding mode (mode=1): as long as a NIC has a link to its switch (and *only* to the switch... remember there is still room for other issues to arise in end-to-end connectivity), the primary NIC is used, and upon failure the secondary one kicks in. Personally, I would alternate actives and passives between the two switches rather than let a purely secondary switch gather dust and maybe even fail quietly. You could also use mode=0 for round-robin striping of packets across both switches if you want; not my personal favourite, but fine. Given that a single cable between the two switches is fine for operational performance (cheap gigabit uplink switches are readily available to make a decent enough backbone here), there shouldn't be any issues if volumes of data have to cross between the two switches to reach an active interface.
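To make the active-backup idea concrete, here's a minimal sketch of a mode=1 bond on a Debian-style system. The interface names, addresses, and file locations are illustrative assumptions, not something from the thread; adapt to your distro.

```
# /etc/modprobe.d/bonding.conf -- load the bonding driver in
# active-backup mode; miimon=100 polls link state every 100 ms
options bonding mode=1 miimon=100

# /etc/network/interfaces -- enslave eth0 and eth1 to bond0
# (eth0 cabled to switch A, eth1 to switch B)
auto bond0
iface bond0 inet static
    address 192.168.1.10      # example internal address
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100
    bond-primary eth0         # eth0 carries traffic until it loses link
```

With this, only one NIC is active at a time, so nothing has to "reassemble" split traffic; the second NIC simply takes over the bond's MAC/IP when the first loses carrier.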
You will need switches that support STP (Spanning Tree Protocol). Then you tie them all together in a loop and voila.
A dumb, unmanaged switch will probably not support this, but even a lowly Cisco 29xx switch will support STP. What you are trying to do is somewhat non-trivial, but it can be done.
Make sure once you build this, you draw a very accurate diagram and then start pulling cables to simulate failures. That's the only way to make sure it all works.
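While you're pulling cables, you can watch a bonded host fail over in real time. This assumes a bond named bond0 as set up elsewhere in the thread; the sample output lines are typical of the bonding driver's status file.

```
# Watch which slave is currently active while you pull cables
watch -n1 'cat /proc/net/bonding/bond0'

# The status file includes lines like:
#   Currently Active Slave: eth0
#   MII Status: up
# After pulling the primary's cable, the active slave should
# flip to eth1 within roughly the miimon interval.
```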
Also, your FW is still a single point of failure. You will need 2 FWs with a VIP between them.
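One common way to float a VIP between two firewall boxes is VRRP via keepalived. A minimal sketch follows; the interface name, router ID, and addresses are made-up examples, not a tested production config.

```
# /etc/keepalived/keepalived.conf on the primary firewall
vrrp_instance FW_VIP {
    state MASTER            # the standby box uses state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100            # standby uses a lower priority, e.g. 50
    advert_int 1            # advertise every second
    virtual_ipaddress {
        192.168.1.1/24      # the VIP the internal hosts point at
    }
}
```

The internal machines use the VIP as their gateway; if the master stops advertising, the backup claims the VIP automatically.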
And I doubt you really need bonding; that's for eliminating a single network point of failure on one server, but you'll have multiple servers in a cluster, so your redundancy will come from the multi-server app setup, not from a multipath network.
Also, you cannot bond 2 NICs into 2 different switches. Bonding is good for an aggregate bandwidth increase, not so much for redundancy.
Finally, don't forget about the ISP link. If that goes down, you're toast.
I'm counting at least 3 separate subnets: one between the FWs and the proxies, one between the proxies and the Apaches, and one south of the Apaches to the MySQL machines. You can't stuff all this into 1 network, it won't work.
You do not *need* STP capable switches at all. NIC bonding does not bridge the interfaces so at no point are you in any danger of creating a loop, so any switch will be technically fine.
This goes beyond bonding.
Unless he buys a switch for every subnet OR implements some funky direct-routing voodoo (which is pretty damn complex, plus ARP issues to deal with), he'll need VLAN support, so a managed switch is required.
That's one thing. Another thing is, how's he gonna manage all this infrastructure? He'll need a PC that will be able to access all these networks, which implies a switch with inter-VLAN routing (or a PC with N NICs for N networks -- doesn't scale).
Basically, this gets pretty complex pretty quick.
But I agree. An STP-capable switch is technically not required. But it'll pretty much follow from the real-world requirements above.
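For what it's worth, if the management PC hangs off a trunk port, Linux-side VLAN tagging lets a single NIC reach each subnet. A rough sketch using iproute2; the VLAN IDs and addresses are invented for illustration, and the switch port must be configured as a trunk carrying these IDs.

```
# Create tagged sub-interfaces on eth0 for three VLANs
ip link add link eth0 name eth0.10 type vlan id 10   # FW <-> proxies
ip link add link eth0 name eth0.20 type vlan id 20   # proxies <-> apaches
ip link add link eth0 name eth0.30 type vlan id 30   # apaches <-> mysql
ip addr add 10.0.10.5/24 dev eth0.10
ip addr add 10.0.20.5/24 dev eth0.20
ip addr add 10.0.30.5/24 dev eth0.30
ip link set eth0.10 up
ip link set eth0.20 up
ip link set eth0.30 up
```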
Well, it all depends what budgets and requirements you have to work with. *I* would have separate networks between each layer, but it's probably more likely they'll just be on a flat network behind the firewall.