Linux - Containers
This forum is for the discussion of all topics relating to Linux containers. Docker, LXC, LXD, runC, containerd, CoreOS, Kubernetes, Mesos, rkt, and all other Linux container platforms are welcome.
Has anyone tried placing some of your key network services into a container? i.e. LDAP server, DNS server, DHCP Server, Mail server, or Wiki server. Seems like using Docker containers for these services could be beneficial. Thoughts?
I have (BIND, ftp/sftp, mail, each in its own container or two), but using OpenVZ containers rather than Docker. Packing that many services securely onto one iron box (or two for failover) is nowhere near as efficient with any other technology I have used.
In my view, some of these services, such as DNS or DHCP, are "services of the host," and therefore usually exist outside of the container structure.
Server programs, on the other hand, might reside in a container simply as a means of tightly controlling what they can see and can access.
But ... it has to make sense. Containers aren't magic.
Actually, while the software to PROVIDE them runs on a host, these are NETWORK services. The network does not really care where they run, only that they work.
There are two ways the software can run that involve containers:
1. You can run the software in an LXC style process container for security and isolation
2. You can use a server container to run and isolate the software as if you were using full virtualization.
Each has certain advantages, but I prefer #2 for the Disaster Recovery and High Availability advantages.
I like to say that, "a container is a very-sophisticated pair of Rose-Colored Glasses."
The software "in" the container is actually using a portion of the resources of the host, and is being run directly by that host's operating system. But it cannot see it clearly. Instead, it sees what it wants to see – and, what we want it to see.
Services like DNS and DHCP are often run outside of containers for the same reason that "these services, on your home network, are provided by something 'outside of' your computer(s)." They probably need to see the real world as it actually is. "Putting blinkers on 'em" probably wouldn't make much sense.
All great points. I'm thinking of running a VM to host several Docker images. With this solution I'm leaving all the fault tolerance and backups to the hypervisor, and using the containers to create (ideally) more isolated and secure network services. This could also make deploying upgrades to these services faster, with an easy roll-back option. Thoughts?
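One way to get that easy roll-back is to pin explicit image tags rather than `latest`. A minimal sketch of the idea (the service name, image name, tags, and paths below are all hypothetical placeholders, not a recommendation of any particular image):

```yaml
# docker-compose.yml (illustrative sketch only)
services:
  wiki:
    image: example/wiki:2.4.1   # upgrade by bumping this tag
    ports:
      - "8080:80"
    volumes:
      - wiki-data:/var/lib/wiki  # state lives in a named volume, outside
                                 # the image, so rolling back is just
                                 # editing the tag back to 2.4.0 and
                                 # re-running `docker compose up -d`
volumes:
  wiki-data:
```

Because the image is immutable and the data is kept in a volume, the hypervisor can still snapshot the whole VM while the container layer handles fast per-service upgrades and roll-backs.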
Not a new plan, good shops have been doing this for years. The biggest reason is not speed of deployment for services like these, but rather the speed of failover in HA solutions, the backup and recovery (DR) options, and maximizing the use of resources (use of the host) to maximize ROI (Return on Investment).
How is container failover better than VM failover? I've always run my hypervisors in redundant clusters, and the failover has always been seamless. DR with VMs is pretty easy too. I'm trying to understand what containers provide that is better than a VM, and whether containers can offer better security for my network services.
You should always bear in mind that "containers," vs. "virtual-machine monitors," are entirely different technologies, each with their own advantages and disadvantages – and, fundamental characteristics.
With containers, all of the processes actually are "running on the same Linux host." They just don't know it. With virtualization, they're running in an environment which literally undertakes to create an entire machine. (Virtualization relies very heavily on CPU-architecture features provided by modern chips.)
The advantage of containers is that they provide "isolation" – of a certain sort – at much less cost. Processes perceive only the files that we allow them to see, and a user-id/group-id/permissions structure that, to them, appears to be "real." The processes which run in a container are ordinary user-land processes, even if they think that they are running as root.
But – and this is the key(!) point – we're actually accomplishing this feat by means of, shall we say, "a chroot-jail, on steroids." Everything that the containerized process perceives is actually a very-carefully constructed illusion. Almost nothing that "the containerized application thinks is true," really is true. We are performing the entire trick within the auspices of an operating system that is directly running on the (virtual?) "real hardware." We're not actually emulating the whole environment: instead, we're very-tightly controlling what the process (thinks that it ...) sees, and of course, what it can do. (And we're exploiting bulletproof, hardware-provided, features to help us do so.)
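You can actually see the machinery behind those "glasses" on any Linux box: each process's namespace memberships are exposed under `/proc`. A quick look, using nothing but a plain shell (no container runtime needed):

```shell
# Every process belongs to a set of kernel namespaces; these symlinks
# identify which instance of each namespace this shell is in.
ls -l /proc/self/ns
# A container runtime starts its processes with *different* instances
# of the mnt, pid, net, uts, ipc, and user namespaces than the host,
# which is exactly how the carefully constructed illusion is assembled.
```

Two processes in the same container share these namespace instances; a process in a different container (or on the host) points at different ones.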
Virtual-machine technology, on the other hand, relies heavily on fairly exotic hardware support from the CPU, which puts itself into a specially-constructed operating mode. Therefore, it is truly capable of running any operating system. Containers, by contrast, are a purely software-constructed environment that is peculiar to Linux, and cannot support an environment other than their own. (Other operating systems, such as Windows, today provide similar contrivances, each one peculiar to itself.)
"With containers, we really are 'pulling the wool down over your eyes!'"
"But it works!"
Last edited by sundialsvcs; 11-08-2017 at 08:52 PM.
... Containers, on the other hand, are purely a software-constructed environment that is peculiar to Linux, and cannot support an environment other than their own. (Other operating systems, such as Windows, today provide similar contrivances, each one peculiar to itself.)
I cannot totally agree, but that is a pretty good high-level description. Actually, containerization is a specific and limited form of virtualization, but it runs under one kernel and is therefore limited to those things that kernel can do. Full virtualization runs additional kernels as sub-processes of the hypervisor (Node 0) kernel. There are many similarities. In full virtualization the host is telling the guest a few more lies, and the process separation is even greater because of that guest kernel. One high-level difference is that with containers, since they share a kernel, you can achieve great density. In other words, the same hardware can run many more containers than full virtual guests.
The other issue I have is the claim that they are specific to Linux; they are not. Windows can be tweaked to perform the same kind of containerization, and the people that originated Virtuozzo did exactly that. Microsoft changed the licensing to remove any financial advantage to doing it in Windows, around 2007 I believe. The Parallels products, I believe, once used that technology.
Quote:
Originally Posted by wpeckham
Actually, containerization is a specific and limited form of virtualization, but run under a kernel and therefore limited to those things that kernel can do.
As if the kernel is limited in the kind of things it can do!
Quote:
The other issue I have is that they are specific to Linux, they are not.
Correct, BSD jails, Solaris Zones, AIX WPARs and HP-UX Resource Partitions are other examples of such technology.
I've worked with virtualization, primarily VMware and KVM, for the past ten years, so I understand that technology pretty well. I've just started looking at Docker. Thus far I'm not seeing any real advantage to using Docker to "containerize" basic network services, though I do see where it is really great for application developers. Does anyone here see an advantage to, say, running my LDAP or DNS server in a container versus a virtual machine? Are there perhaps some security advantages I'm not seeing?
If you run basic services and you feel happier with VMs, go with them. We use both VMs and Docker: in general, services that are not elastic go in VMs, while services that need to grow dynamically – our HTTP servers, I think – run in Docker. We also used VMware in the past and it was fine, if you have the money to pay for the licenses.
It is difficult to say which is best; I think they are complementary technologies, and depending on your use case one is better than the other. In your case, for an LDAP and a DNS server, I would go with the VM, unless your DNS server has to be a high-performance server.
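For reference, if you do decide to containerize DNS, a common pattern is to keep the config and zone data on the host and bind-mount them read-only into the container, so upgrades only replace the image. A sketch only – the image name, tag, and paths here are assumptions, so check your chosen image's documentation:

```yaml
# docker-compose.yml fragment (illustrative; image/tag/paths assumed)
services:
  dns:
    image: internetsystemsconsortium/bind9:9.18  # assumed image and tag
    ports:
      - "53:53/udp"
      - "53:53/tcp"
    volumes:
      - ./named.conf:/etc/bind/named.conf:ro  # config stays on the host
      - ./zones:/var/lib/bind:ro              # zone data stays on the host
    restart: unless-stopped
```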
Virtualization uses CPU-specific features which create the basis of a robust virtual-machine environment, calling-out to the hypervisor when a VM attempts to do certain things and when its time-slice ends.
Containers use Linux features to create total isolation between what are actually Linux processes running directly on the host, and to furnish them with the illusions that they expect.
The essential advantage of containers is that they are very lightweight, because they do, in fact, run directly in the host environment. (They just can't take off their rose-colored glasses or get out of their comfy padded room.) It is also trivially-easy to make a new one, or to get rid of it, because "a container basically consists of a set of rules which are applied by the supervisor."
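That "set of rules applied by the host" nature is easy to demonstrate with a user namespace, one of the kernel features container runtimes build on. A one-line sketch – note that it may fail on kernels or distributions where unprivileged user namespaces are disabled:

```shell
# Start a new user namespace and map the current user to uid 0 inside
# it: the child believes it is root, yet on the host it remains an
# ordinary unprivileged process -- the rose-colored glasses at work.
unshare --user --map-root-user id -u   # prints 0 inside the namespace
```

No hypervisor, no emulation: just a rule the kernel applies when that one process asks "who am I?".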
So, containers are a great invention if your particular use-case is compatible with both their features and their limitations. When I'm running an installation on a commercial cloud-server, I strongly prefer to have everything in my hands, and not to be too dependent upon the influences of a hypervisor that I can't directly control. I'd rather have their VMware running one beefy virtual machine that hosts the majority of my entire environment, using files and databases that are local to it. (That being said, I might then have another, separate virtual machine running database-replication and such.)
Last edited by sundialsvcs; 11-22-2017 at 09:55 AM.