Linux - Virtualization and Cloud
This forum is for the discussion of all topics relating to Linux virtualization and Linux cloud platforms. Xen, KVM, OpenVZ, VirtualBox, VMware, Linux-VServer, and all other Linux virtualization platforms are welcome, as are OpenStack, CloudStack, ownCloud, Cloud Foundry, Eucalyptus, Nimbus, OpenNebula, and all other Linux cloud platforms. Note that questions relating solely to non-Linux OSes should be asked in the General forum.
We need a large cluster of virtual machines running on both VMware ESXi and KVM. I have about 20 Intel-based servers, each with roughly 500 GB of disk. I don't have a SAN and have no budget. What is the best way to create shared storage, using some of the servers, to be used by the hypervisors (the remaining servers) for virtual machine images and to enable live migrations (vMotion)?
I have looked into AoE, NFS, GFS, etc., but have not yet started testing any of these options.
Does anyone out there have a similar setup and a tested solution?
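Of the options mentioned, NFS is the simplest to stand up for a test: as long as every hypervisor mounts the same path, live migration can work. A minimal sketch, assuming a hypothetical storage node `storage01` and the placeholder paths shown (adjust for your network):

```shell
# On a storage node (hypothetical hostname "storage01"): export a
# directory for VM images over NFS. Subnet and paths are placeholders.
mkdir -p /srv/vmstore
echo '/srv/vmstore 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# On each KVM hypervisor: mount the export at the same path everywhere,
# so all hosts see identical image files (a prerequisite for live
# migration).
mkdir -p /var/lib/libvirt/images/shared
mount -t nfs storage01:/srv/vmstore /var/lib/libvirt/images/shared
```

This gives shared storage from commodity servers without a SAN, at the cost of the storage node being a single point of failure.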
Thanks! Great article on clustermonkey. Yes, I have definitely been looking into GlusterFS, it sounds like the perfect solution for us. We will be testing it out soon.
Has anyone actually used GlusterFS as shared storage for running virtual machines? I'm curious whether good performance is achievable with a 1 GbE network and eight or so servers. I'm thinking of using the above servers as nodes with disks configured as RAID 0, since fault tolerance can be handled by GlusterFS replication.
I'm also considering using the Gluster native client on the hypervisors for better performance. Are there any compatibility issues between VMware ESXi and the native client?
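For anyone testing the same idea, the setup above can be sketched roughly as follows. All hostnames (`node1`..`node8`) and brick paths are hypothetical placeholders, not from this thread:

```shell
# From node1, add the other nodes to the trusted pool
# (repeat for node3..node8).
gluster peer probe node2

# Create a replicated volume: "replica 2" keeps two copies of each
# file, so a failed RAID 0 node does not lose VM images.
gluster volume create vmstore replica 2 \
  node1:/data/brick node2:/data/brick \
  node3:/data/brick node4:/data/brick \
  node5:/data/brick node6:/data/brick \
  node7:/data/brick node8:/data/brick
gluster volume start vmstore

# KVM hypervisors can mount the volume with the native FUSE client:
mount -t glusterfs node1:/vmstore /var/lib/libvirt/images/shared
```

Note that, as far as I know, ESXi has no Gluster FUSE client, so the ESXi hosts would have to reach the volume through an NFS export rather than the native client.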