Old 08-24-2015, 08:07 AM   #16
dyasny
Member
 
Registered: Dec 2007
Location: Canada
Distribution: RHEL,Fedora
Posts: 995

Rep: Reputation: 115

Quote:
Originally Posted by biosboy4 View Post
Why are there 2 hosts? Is that not an insane waste of resources?
When you're talking about hundreds of hosts, it makes more sense to have a dedicated controller cluster rather than keeping the controller as a VM. For smaller deployments, though, the engine can run as a VM inside the setup it manages; that takes an extra step to set up: http://www.ovirt.org/Hosted_Engine_Howto
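For reference, the hosted-engine deployment itself boils down to a couple of commands on the first host. This is only a sketch: the release RPM URL below is an example and changes with each oVirt version, so check ovirt.org for the current one.

Code:
# Install the oVirt release repo and the hosted-engine setup tool
# (the release RPM URL is version-dependent -- treat it as a placeholder)
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
yum install ovirt-hosted-engine-setup

# Interactive deployment: creates the engine VM on this host and walks
# you through the storage, network and engine VM questions
hosted-engine --deploy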

Quote:
Why am I having yum problems with CentOS Minimal? It's as if there is no internet at all. (It's installed on a VirtualBox VM.)
Are you able to go online at all, e.g. can you ping 8.8.8.8?
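If not, on a CentOS minimal install the usual culprit is that the NIC isn't brought up at boot (the installer writes ONBOOT=no into the ifcfg file by default). A quick check, assuming the interface is eth0 (it may be enp0s3 or similar on CentOS 7; check ip link for the real name):

Code:
# See which interfaces exist and whether they have an address
ip addr

# Bring the interface up for this session (interface name is an example)
ifup eth0

# Make it persistent across reboots
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0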

Quote:
and I would like to use internal hard drives for storage if possible. If not, I can build a SAN or something.
The classic datacentre is a bunch of hosts and controllers plus a SAN. If you prefer to use the servers' internal disks instead, you can either go with local storage (and lose some of the nicer features, like quick live migration), or wait for hyperconverged setup support and then use Gluster when it arrives. Shouldn't be long. For details, subscribe and email users@ovirt.org.
 
Old 08-24-2015, 08:34 AM   #17
biosboy4
Member
 
Registered: Aug 2015
Distribution: Debian, SUSE, NXOS
Posts: 242

Original Poster
Rep: Reputation: 38
Quote:
Originally Posted by dyasny View Post
Are you able to go online at all, e.g. can you ping 8.8.8.8?
Network is unreachable.


Ok, so you mean keeping a physical cluster just for the controller? I think that's overboard for my personal project but I can see why that's necessary for larger deployments.

Man this is so cool. Thanks for your help.

Edit: The size of my deployment will be 2 physical hosts with a steady stream of data moving between them so that they're 99.99% identical at all times. Everything else will be VMs. I will set up another host to act as the controller if needed, but then I need HA for the controller host too. Sigh... this could get expensive lol.

Last edited by biosboy4; 08-24-2015 at 08:54 AM.
 
Old 08-24-2015, 10:20 AM   #18
dyasny
Member
 
Registered: Dec 2007
Location: Canada
Distribution: RHEL,Fedora
Posts: 995

Rep: Reputation: 115
Proper datacenters are expensive. But for two hosts, you can easily go with a hosted engine, and if you really have to use the internal storage, you could do something silly, like:
1. set up DRBD replication between the two hosts
2. export the DRBD-backed partition/LV via iSCSI from both nodes
3. discover the iSCSI share using both IPs, so multipath detects both paths and automatically fails I/O over if one host goes down (see the sketch below)

You will lose half the disk space, since you're replicating, and you probably won't be able to scale this solution out or up, but it's an idea good enough for a testbed.
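Very roughly, the moving parts look like this. A sketch only: the hostnames, IPs and device names are made up, ACLs/authentication are omitted, and you'd still need something (pacemaker, say) to drive the DRBD failover itself.

Code:
# /etc/drbd.d/r0.res -- same file on both hosts (names and IPs are examples)
resource r0 {
    device    /dev/drbd0;
    disk      /dev/vg0/vmstore;   # the LV you dedicate to VM storage
    meta-disk internal;
    on hostA { address 192.168.100.1:7789; }
    on hostB { address 192.168.100.2:7789; }
}

# On both hosts: initialise and bring up the resource
drbdadm create-md r0
drbdadm up r0
# On ONE host only, the first time: force it primary
drbdadm primary --force r0

# Export /dev/drbd0 as an iSCSI LUN with LIO/targetcli
targetcli /backstores/block create name=vmstore dev=/dev/drbd0
targetcli /iscsi create iqn.2015-08.lab.example:vmstore
targetcli /iscsi/iqn.2015-08.lab.example:vmstore/tpg1/luns create /backstores/block/vmstore

# On the initiators: discover through both portal IPs so multipath
# sees two paths to the same LUN
iscsiadm -m discovery -t sendtargets -p 192.168.100.1
iscsiadm -m discovery -t sendtargets -p 192.168.100.2
iscsiadm -m node --login
multipath -ll   # should now list both paths

Keep in mind that with an active/passive DRBD pair only the primary node can actually serve the device, so the second path only becomes live after a failover.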
 
Old 08-25-2015, 04:32 AM   #19
Slax-Dude
Member
 
Registered: Mar 2006
Location: Valadares, V.N.Gaia, Portugal
Distribution: Slackware
Posts: 528

Rep: Reputation: 272
Quote:
Originally Posted by dyasny View Post
2. export the DRBD-backed partition/LV via iSCSI from both nodes
3. discover the iSCSI share using both IPs, so multipath detects both paths and automatically fails I/O over if one host goes down
Does this mean the VMs will use the iSCSI multipath target in active-passive mode or active-active?
If active-active, does iSCSI multipath talk to DRBD to avoid data corruption?
 
Old 08-25-2015, 08:40 AM   #20
dyasny
Member
 
Registered: Dec 2007
Location: Canada
Distribution: RHEL,Fedora
Posts: 995

Rep: Reputation: 115
I wouldn't touch active/active DRBD with a stick.

The idea is to have an active/passive backend, so that if DRBD fails over, iSCSI still has at least one usable path left. Of course, for a demo setup all this is unnecessary and HA isn't a requirement, but the OP asked for it.

BTW, instead of iSCSI, NFS can be exported from the DRBD-backed volume. That will also work, but it requires a manual failover before the secondary IP becomes usable.
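Something along these lines (a rough sketch; the paths and the floating IP are examples, and the failover block is what you'd run by hand on the surviving node):

Code:
# On the active (DRBD primary) node: mount and export the volume
mount /dev/drbd0 /srv/vmstore
echo '/srv/vmstore *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
ip addr add 192.168.100.10/24 dev eth0   # floating IP the clients mount from

# Manual failover, run on the surviving node if the primary dies
drbdadm primary r0
mount /dev/drbd0 /srv/vmstore
exportfs -ra
ip addr add 192.168.100.10/24 dev eth0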
 
Old 08-25-2015, 12:30 PM   #21
Slax-Dude
Member
 
Registered: Mar 2006
Location: Valadares, V.N.Gaia, Portugal
Distribution: Slackware
Posts: 528

Rep: Reputation: 272
Quote:
Originally Posted by dyasny View Post
I wouldn't touch active/active DRBD with a stick.
What would you use for a clustered filesystem?
I use AoE a lot, but now that Coraid went TITSUP I might consider iSCSI (you can't do RAID5 with DRBD, only RAID1).

My smallest system is a 3-host cluster: I export one of the drives of each host using AoE, then on each host I set up a RAID5 with one local drive and 2 AoE drives (one from each of the other hosts), then slap OCFS2 on the RAID5.
All 3 hosts are active and have r/w access to that volume, where I keep the .xml files and the virtual HD images for all VMs (each host runs 4 or 5 VMs).
Any one of the hosts can die and the other two take care of whatever VMs it was running, seamlessly. The gist of the commands is sketched below.
I guess I could use iSCSI instead of AoE, but I wanted to know what you use.
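In case it helps, the per-host commands boil down to roughly this (a sketch with example device names and shelf/slot numbers; the o2cb cluster configuration OCFS2 needs is omitted, and note that plain md RAID is not cluster-aware, so the concurrent assembly is the delicate part of this design):

Code:
# Export one local drive over AoE (each host uses a unique shelf number)
modprobe aoe
vbladed 1 1 eth0 /dev/sdb        # host1 exports /dev/sdb as e1.1

# Discover the other hosts' exports
aoe-discover
aoe-stat                         # remote drives show up as /dev/etherd/eX.Y

# RAID5 across one local and two remote drives (example device names)
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdb /dev/etherd/e2.1 /dev/etherd/e3.1

# Cluster filesystem on top, sized for 3 nodes
mkfs.ocfs2 -N 3 -L vmstore /dev/md0
mount /dev/md0 /srv/vmstore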

@biosboy4, sorry if I'm hijacking your thread, but I think this might help you decide what kind of setup you want to go with.
 
Old 08-25-2015, 01:27 PM   #22
dyasny
Member
 
Registered: Dec 2007
Location: Canada
Distribution: RHEL,Fedora
Posts: 995

Rep: Reputation: 115
Quote:
Originally Posted by Slax-Dude View Post
What would you use for a clustered filesystem?
I use AoE a lot, but now that Coraid went TITSUP I might consider iSCSI (you can't do RAID5 with DRBD, only RAID1).
I would rather avoid clustered filesystems like the plague. If you absolutely must have shared storage, NFS makes much more sense, because you don't need to deal with locking, failovers, STONITH and other such wonders; it just works. AoE was dead on arrival: iSCSI simply made more sense and had an easy on-ramp for the enterprise folks, who never dealt much with IDE anyway.

Quote:
My smallest system is a 3-host cluster: I export one of the drives of each host using AoE, then on each host I set up a RAID5 with one local drive and 2 AoE drives (one from each of the other hosts), then slap OCFS2 on the RAID5.
All 3 hosts are active and have r/w access to that volume, where I keep the .xml files and the virtual HD images for all VMs (each host runs 4 or 5 VMs).
Any one of the hosts can die and the other two take care of whatever VMs it was running, seamlessly.
I guess I could use iSCSI instead of AoE, but I wanted to know what you use.
The things people will do, just to avoid using GlusterFS and Ceph...
 
Old 08-26-2015, 05:15 AM   #23
Slax-Dude
Member
 
Registered: Mar 2006
Location: Valadares, V.N.Gaia, Portugal
Distribution: Slackware
Posts: 528

Rep: Reputation: 272
Quote:
Originally Posted by dyasny View Post
I would rather avoid clustered filesystems like the plague. If you absolutely must have shared storage, NFS makes much more sense, because you don't need to deal with locking, failovers, STONITH and other such wonders; it just works.
So... you just use a SAN and hide the inner workings of shared storage?
Quote:
Originally Posted by dyasny View Post
AoE was dead on arrival: iSCSI simply made more sense and had an easy on-ramp for the enterprise folks, who never dealt much with IDE anyway.
Gotcha.
iSCSI it is, then. Thanks.
Quote:
Originally Posted by dyasny View Post
The things people will do, just to avoid using GlusterFS and Ceph...
Yeah, some people don't use Red Hat-ish distros and have no "next, next, finish" GUI available to set up things like GlusterFS and Ceph.
My setup "looks" complex, until you try to make GlusterFS and Ceph work on Slackware.
 
Old 08-26-2015, 08:57 AM   #24
dyasny
Member
 
Registered: Dec 2007
Location: Canada
Distribution: RHEL,Fedora
Posts: 995

Rep: Reputation: 115
Quote:
Originally Posted by Slax-Dude View Post
So... you just use a SAN and hide the inner workings of shared storage?
I mostly use RHEV for VMs, and RHEV doesn't use a clustered FS; instead it uses LVM with its own access management system. That lets RHEV run clusters of hundreds of hosts without running into the typical SCSI-3 persistent reservation bottlenecks.

Quote:
Yeah, some people don't use Red Hat-ish distros and have no "next, next, finish" GUI available to set up things like GlusterFS and Ceph.
My setup "looks" complex, until you try to make GlusterFS and Ceph work on Slackware.
Well, I typically need things to just work; I don't have the time to sit around compiling and tweaking every knob. For that, out-of-the-box distros make more sense. Slackware is great for learning the inner workings of a system, but running a cluster of servers on it, especially a cluster that needs to always be up to date and change dynamically, is a nightmare.
 
  

