Linux - Networking: This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
Hi,
We are maintaining an Apache web server that has to handle 1000 simultaneous requests, but currently our server is able to handle only about 100. We want to increase the scalability of the server by clustering. I want the cluster to handle only Apache requests, and we are using PHP for web scripting.
1) Which software should be used to build such a cluster?
2) Because we will have sessions in PHP, all requests from a client need to be forwarded to the same backend server for proper access. Is it possible to overcome this problem with the above cluster software?
1) There are a number of different options. Perhaps the easiest is to purchase a hardware load balancer. If you don't have the money for that, you could use round-robin DNS or some sort of reverse Squid proxy, though I've never actually tried these. If you're running RHEL, CentOS, or Fedora you could also try the Red Hat Cluster Suite, which I believe can do something like this.
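For the round-robin DNS approach, the idea is just to publish several A records for the same name; a minimal BIND-style zone sketch (the hostname and addresses are placeholders, not real values):

    ; round-robin DNS: multiple A records for one name. BIND rotates the
    ; order it returns them in, so clients get spread across the servers.
    www    IN  A   192.0.2.10
    www    IN  A   192.0.2.11
    www    IN  A   192.0.2.12

Keep in mind this gives you no health checking and no session awareness -- a client that caches one address just keeps hitting that box.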
2) A load balancer or proxy can probably be configured to track sessions and send them to the same server. Another idea is to store the session data on an NFS server accessible to all of the web servers -- the session.save_path configuration directive in your PHP config controls where session files are written.
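For example, if the NFS export were mounted at /mnt/sessions on every web server (a path I'm assuming just for illustration), the relevant php.ini lines would look roughly like:

    ; php.ini -- point the file-based session handler at the shared mount
    session.save_handler = files
    session.save_path = /mnt/sessions

The same value can also be set per-vhost with a php_value session.save_path line in the Apache config.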
BTW, have you tried playing with the Apache parameters (spare servers, timeouts, etc.) to try to increase the load it can handle? Depending on your hardware you might be able to do more than 100 connections (particularly if you have a big SMP box). The bottleneck might also be file access, in which case you could look into faster disks and/or a RAID configuration.
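Something along these lines in httpd.conf is where I'd start -- the numbers below are only placeholders, since the right values depend on how much RAM each Apache/PHP process eats on your box:

    # prefork MPM limits (Apache 2.x); ServerLimit must be >= MaxClients
    <IfModule prefork.c>
        StartServers         20
        MinSpareServers      20
        MaxSpareServers      50
        ServerLimit         256
        MaxClients          256
        MaxRequestsPerChild 4000
    </IfModule>
    # short keepalives free up workers faster under heavy load
    KeepAlive On
    KeepAliveTimeout 5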
Thanks for the reply. I checked the http://lcic.org/load_balancing.html link and found that Linux Virtual Server seems to be the best option for load balancing.
I am thinking of using LVS for load balancing, but
it's mentioned at http://www.linux-vs.org/docs/persistence.html that LVS will take care of HTTPS, cookies, etc. Will it also be able to take care of sessions?
I checked out the reverse proxy, round-robin DNS, and Red Hat Cluster Suite, but will they be able to handle sessions by default (I mean without using NFS, etc.)?
Here is my uname -a output.
Linux localhost.localdomain 2.6.9-34.EL #1 Wed Mar 8 00:07:35 CST 2006 i686 athlon i386 GNU/Linux
Will this kernel have any problems with LVS?
General question: Many web servers on the Internet use PHP, so how are they able to balance load and handle thousands of requests? Which clustering option are they using?
PHP stores sessions as files on the server, with a session key (usually in the form of a cookie) handed back to the client. As long as every web server can access the session state files, you're OK. LVS, round-robin DNS, etc. don't do anything AFAIK to make that happen -- you have to set it up yourself. If you use the Red Hat Cluster Suite you can use GFS to provide a global filesystem to put the shared data on, which is probably more robust than NFS. You could also use a database back-end. Many databases, including MySQL and Oracle, can be clustered for added performance/reliability. This might involve re-writing the app to handle a DB back-end, though.
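If you do go the database route, PHP lets you swap out the file handler without touching the rest of the app -- session_set_save_handler() takes your own callbacks. A rough sketch against MySQL (the table name, columns, and credentials are made up for illustration, and there's no error handling):

    <?php
    // store_sessions_in_mysql.php -- include this before session_start()
    // assumes a table: CREATE TABLE sessions (id VARCHAR(64) PRIMARY KEY,
    //                                         data TEXT, updated INT);
    mysql_connect('dbhost', 'webuser', 'secret');
    mysql_select_db('webapp');

    function sess_open($path, $name) { return true; }
    function sess_close()            { return true; }

    function sess_read($id) {
        $res = mysql_query("SELECT data FROM sessions WHERE id = '"
                           . mysql_real_escape_string($id) . "'");
        $row = mysql_fetch_row($res);
        return $row ? $row[0] : '';
    }

    function sess_write($id, $data) {
        mysql_query("REPLACE INTO sessions (id, data, updated) VALUES ('"
                    . mysql_real_escape_string($id) . "', '"
                    . mysql_real_escape_string($data) . "', " . time() . ")");
        return true;
    }

    function sess_destroy($id) {
        mysql_query("DELETE FROM sessions WHERE id = '"
                    . mysql_real_escape_string($id) . "'");
        return true;
    }

    function sess_gc($maxlifetime) {
        mysql_query("DELETE FROM sessions WHERE updated < " . (time() - $maxlifetime));
        return true;
    }

    session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                             'sess_write', 'sess_destroy', 'sess_gc');
    session_start();
    ?>

Because every web server talks to the same database, it no longer matters which backend a given request lands on.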
Internet-facing web sites use a number of techniques. I imagine that big companies with $$$ mostly use specialized commercial appliances from vendors such as Cisco that transparently balance HTTP traffic. As I mentioned above, I think this can be done with Squid, but it probably isn't as fast as a commercial solution on specialized hardware.
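On the LVS persistence question above: the director can pin a client IP to one real server for a configurable window via the -p flag to ipvsadm, which sidesteps the session problem as long as the client's address doesn't change. A rough sketch in NAT mode, with placeholder addresses:

    # virtual service on the director, persistent for 600 seconds per client IP
    ipvsadm -A -t 192.0.2.1:80 -s wlc -p 600
    # two real servers behind it (NAT / masquerading mode)
    ipvsadm -a -t 192.0.2.1:80 -r 10.0.0.11:80 -m
    ipvsadm -a -t 192.0.2.1:80 -r 10.0.0.12:80 -m

Persistence by client IP breaks down behind large proxies/NAT (many users show up as one address), which is why shared session storage is still the more robust fix.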