I am aware that Docker containers share the host OS kernel. Is it possible to run two different container environments on a single host OS/machine?
Yes, this is possible. In fact, some enterprise solutions take advantage of it. Rancher, for example, provides a platform for deploying Kubernetes environments. The underlying operating system for the nodes is typically Rancher's own OS, RancherOS, which runs two instances of the Docker daemon: one for userland and one for system apps. RancherOS is unique in that it runs all essential system services as containers on the host. So when you connect to a node, you can run system-docker ps and see the state of all the system services, whereas docker ps will only show your userland containers.
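As a minimal sketch of what that looks like on a RancherOS node (assuming the default RancherOS setup; the output of each command depends on your node):

# system services managed by the system Docker daemon (console, syslog, ntp, ...)
sudo system-docker ps

# your own workloads, managed by the userland Docker daemon
docker ps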
Here is more information on this solution: https://rancher.com/docs/os/v1.2/en/system-services/adding-system-services/
As for doing so yourself, this is also possible and somewhat simple. Here is an example of someone doing so: https://www.jujens.eu/posts/en/2018/Feb/25/multiple-docker/
Alternatively, if you didn't want to modify your personal workstation, you can also run docker within a docker container using a project like this: https://github.com/jpetazzo/dind
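If you try the Docker-in-Docker route, a minimal sketch looks like this (using the official docker:dind image rather than the repository linked above; the container name inner-docker is just an example, and the exact flags can vary between image versions):

# start an inner Docker daemon inside a privileged container
docker run --privileged --name inner-docker -d docker:dind

# containers started against the inner daemon are invisible to the host's docker ps
docker exec -it inner-docker docker ps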
Let me know if I can help you with anything else. :)
Related
I've read that you shouldn't ssh into a Docker container. But why? I'd like to use a Docker container as a replacement for a normal VM. What are the disadvantages? I know that this will create a lot of layers, but I could flatten my container on a regular basis.
Can I use the container as a regular vm and what is the "worst case" that can happen?
Docker containers are optimized around running single processes. Virtual machines are optimized around running entire operating systems.
At a technical level you generally can run something that looks like a full VM inside a Docker container, but it takes a lot of manual setup. For instance, a typical systemd setup wants to manage several host devices and kernel-level configuration options, so your choices for running systemd are either (a) let it manage the host and possibly conflict with the host's systemd, or (b) manually figure out which unit files can't run and disable them. All of the prebuilt Docker images run only single services (just MySQL, just Nginx, just a Python runtime, ...), so you would also be giving up that ecosystem.
A VM certainly gives up some amount of efficiency by virtualizing hardware devices and running multiple OS kernels, but if you really want to run a VM, it's not a huge performance loss; just run a VM if that's the model you want to use.
No, I wouldn't use it as a replacement for a VM: a Docker container is built around a single entrypoint and one main process. You can publish multiple ports, but running and supervising several services inside one container, the way you would on a regular virtual machine, works against the model Docker is designed around.
I know that containers are a form of isolation between the app and the host (the managed running process). I also know that container images are basically the package for the runtime environment (hopefully I got that correct). What's confusing to me is when they say that a Docker image doesn't retain state. So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart? Why would I use a database in a Docker container?
It's also difficult for me to grasp LXC. On another question page I see:
LinuX Containers (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (LXC host).
What does that exactly mean? Does it mean I can have multiple versions of Linux running on the same host as long as the host support LXC? What else is there to it?
LXC and Docker are quite different, even though both are container technologies.
There are two types of containers:
1. Application containers: their main purpose is to package an application and its dependencies. These are Docker containers (lightweight containers). They run as processes on your host, they don't need an OS image or a boot sequence, and they come and go in a matter of seconds. Running multiple processes/services inside a single Docker container is possible, but it is laborious and discouraged. Resources (CPU, disk, memory, RAM) are shared with the host.
2. System containers: these are fatter containers. They need an OS image to launch, but they are still not as heavy as virtual machines; they are very similar to VMs but differ a bit in architecture.
For example, with Ubuntu as the host machine and LXC installed and configured on it, you can run a CentOS container, an Ubuntu container (with a different version), a RHEL, a Fedora, or any other Linux flavour on top of the Ubuntu host. You can also run multiple processes inside an LXC container. Resource sharing happens here as well.
So if you have a large application running in one LXC container that needs more resources, and another application in a second LXC container that needs fewer, the container with the smaller requirement shares its unused resources with the container that needs more.
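As a rough sketch of what that looks like on an Ubuntu host (the container name centos-test is just an example, and the template options can differ slightly between LXC versions):

# create a CentOS 7 container from the download template
sudo lxc-create -t download -n centos-test -- --dist centos --release 7 --arch amd64

# start it and get a shell inside it
sudo lxc-start -n centos-test
sudo lxc-attach -n centos-test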
Answering Your Question:
So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart?
You wouldn't bake the data into a database Docker image (this is not recommended).
You create/run a container from the image and attach/mount the data to it.
So when you stop/restart the container, the data is never lost as long as you attach it to a volume, because the volume lives somewhere other than the container's writable layer (it may be an NFS server or the host itself).
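A minimal sketch with the official postgres image (the volume name pgdata, the container name my-postgres, and the password are just examples):

# create a named volume and run PostgreSQL with its data directory on it
docker volume create pgdata
docker run -d --name my-postgres -e POSTGRES_PASSWORD=example -v pgdata:/var/lib/postgresql/data postgres

# removing the container leaves the volume (and your data) intact
docker rm -f my-postgres
docker run -d --name my-postgres -e POSTGRES_PASSWORD=example -v pgdata:/var/lib/postgresql/data postgres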
Does it mean I can have multiple versions of Linux running on the same host as long as the host support LXC? What else is there to it?
Yes, you can do this. We run LXC containers in production.
I have a couple of Docker swarm questions (Sorry for not splitting them up but they are all closely related):
Do all instances in a swarm have to run on different machines or can they all run on the same? (if having limited amount of hardware and just wanting to try swarm mode)
Do I have to run swarm mode to be able to communicate between instances?
What is the key difference between swarm mode and just running a number of containers as regular?
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
If using http communication between containers on same machine, will it be roughly similarly as fast as named pipes?
Is there any built in support for a message bus or similar in Docker?
Is there support for any consensus protocol in Docker?
Are there any GUI's for designing, managing, testing and/or debugging Docker swarms?
Can a container list other containers, stop/restart some and start new ones? (to be able to function as a manager for other containers)
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
Background: What I'm trying to figure out is how I should go about building a microservice mesh using Docker. The containers will be running .NET Core. I'm not too keen on relying too heavily on Docker specifically, since it may not be the preferred tech in a couple of years. What can/should I do with Docker, and what can/should I do inside the containers? That's what I'm trying to figure out.
I've copied your questions and tried to answer them.
Do all instances in a swarm have to run on different machines or can they all run on the same? (if having limited amount of hardware and just wanting to try swarm mode)
You can have only one machine in a swarm and run multiple tasks of the same service on it; in other words, the scale of a service can be larger than the number of actual machines. I have one testing swarm with a single machine and one with three, and they work the same way.
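A minimal single-node sketch (the service name web and the image are just examples):

# turn the current machine into a one-node swarm
docker swarm init

# run three replicas of a service on that single node
docker service create --name web --replicas 3 -p 8080:80 nginx
docker service ps web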
Do I have to run swarm mode to be able to communicate between instances?
You have to run Docker in swarm mode in order to create a service; please see this link.
What is the key difference between swarm mode and just running a number of containers as regular?
The key difference, afaik, is that when a task goes down, Docker automatically starts another task to replace it. You can also easily scale your services, which means you can have multiple tasks just by scaling a service up or down. With a plain container, when it goes down you have to start another one manually.
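For example, assuming a service named web as above:

# scale to five replicas; the swarm starts or removes tasks as needed
docker service scale web=5

# if a task's container dies, the swarm schedules a replacement automatically
docker service ps web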
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
I've currently only tested with a couple of wildfly servers in a swarm, which are on the same network. I'm not sure about others, but would love to find out. I've only read about RabbitMQ, but can't seem to find the link atm.
If using http communication between containers on same machine, will it be roughly similarly as fast as named pipes?
I can't say.
Is there any built in support for a message bus or similar in Docker?
I can't say.
Are there any GUI's for designing, managing, testing and/or debugging Docker swarms?
I've tested Rancher and Portainer (portainer.io); for a list of them I found this link.
Can a container list other containers, stop/restart some and start new ones?
I'm not sure why you would want to do that, but I guess it's possible; see this link.
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
I can't say.
#namokarm did a great job, and I'm filling in the gaps:
Benefits of Swarm over docker run or docker-compose.
All communication between containers has to be TCP/UDP, etc. You could force two containers to run only on a single machine and then bind-mount their socket so they skip the network, but that would be a bit of an anti-pattern. Swarm is designed for everything to be distributed and TCP/UDP.
In a few cases, such as PHP-FPM + Nginx, I recommend bundling both in the same container (against Docker best practices, but trust me, it's easier than separate containers). This ensures they scale together (a 1-to-1 relationship) and stay fast, since they use local sockets to communicate. I only recommend this for a few setups like this, the other being ColdFusion + Nginx, because they are two parts of the same tool that produce an HTTP response. I don't recommend bundling images together in nearly all other cases, but I'm open to ideas :)
Rancher is no longer supporting Swarm. Portainer and SwarmPit are GUI options.
Yes, a container running something like Portainer/SwarmPit, or any container controlling the Docker socket through a bind-mount or TCP, can control the whole Swarm. This is how all Docker management tools work :)
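A minimal sketch of the bind-mount approach with Portainer (the container and volume names are just examples; newer Portainer versions expose HTTPS on 9443 rather than 9000, so check its docs):

# give the management container access to the local Docker engine via its socket
docker run -d --name portainer -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce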
For reverse proxy, you would run a container-based proxy like Traefik or Docker Flow Proxy, which sets up HAProxy for Docker and Swarm.
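A rough sketch of the Traefik variant on a swarm (the network name proxy is just an example, and the exact Traefik options depend on its version, so treat this as a starting point rather than a complete setup):

# overlay network shared by the proxy and the services it fronts
docker network create --driver overlay proxy

# run Traefik on a manager node with access to the Docker socket
docker service create --name traefik \
  --constraint node.role==manager \
  --network proxy \
  --publish 80:80 \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  traefik:v2.10 \
  --providers.docker=true \
  --providers.docker.swarmMode=true \
  --entrypoints.web.address=:80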
Many of these topics are discussed in my DockerCon talks: https://www.bretfisher.com/dockercon18/
I am quite new to docker and I need some help about distributing my application.
Consider this:
I have a pool of physical machines, each of them running the latest version of docker.
My "Application A" has several containers. To be clear in this definition, an application would be a database running in a container, 4 messaging containers and a master container. All 6 containers need to communicate between each other. The database, the messaging and etc containers would be the "services".
I can also have "Application B", "Application C" and "Application N...", that are slightly different in size and configuration from "Application A". Applications do not communicate between each other and are completely independent.
Requirements:
All applications "A,B,C..N" must use the same pool of physical machines.
Each service of each application must run in a different physical machine, if needed.
You may want to restrict how each service is allocated to each physical machine
I need to create applications "on the fly"
My first thought would be to use a docker-compose to define an application and several dockerfiles to define the services inside it. But if I do that, each application would be running in the same docker engine and therefore, the same physical machine.
I have read that you could deploy a docker compose into a docker swarm. In this case, docker swarm would act as a docker engine. However, I could not find any examples on how to do that and I am not sure of the limitations.
My second thought would be to use swarm mode. I would create a swarm and run services on it. However, I would lose the concept of an "application". There would be a bunch of services thrown into the swarm, and I could not manage how each of them communicates with the others.
So, given this problem:
Is there any assumption or statement I got wrong?
What is the recommended docker tools usage in the scenario?
It is possible to use Docker Compose with Docker Swarm Mode (Docker 1.12), but it is currently not completely compatible with it. Have a look at Docker Stacks and Bundles.
In the next version of Docker (1.13) there will also be the new Compose file format v3, which Docker will be able to deploy directly, without the docker-compose tool. This will make it possible to deploy your Docker Compose file like this:
docker deploy --compose-file docker-compose.yml AppA
This is currently experimental but works quite well with Docker 1.13-rc5. (Docker Releases)
A more detailed explanation of this can be found in this article.
As for your requirement to have them all run on different hosts, this is possible by defining constraints in docker service create (or in Compose file v3) (see Docker Service Create - Constraints). But why do you need them to run on different hosts?
It is possible to limit the CPU and memory usage that each service is able to use with --limit-cpu and --limit-memory.
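A hedged sketch combining both (the node name node1, the label tier=data, and the service name db are just examples):

# label the node(s) that should run the database
docker node update --label-add tier=data node1

# pin the service to labelled nodes and cap its resources
docker service create --name db \
  --constraint node.labels.tier==data \
  --limit-cpu 0.5 \
  --limit-memory 512M \
  postgres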
If you want to play with Docker Swarm Mode, you can create a swarm with Docker Machine on your local host. (Note: do not use the old standalone Docker Swarm.)
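A minimal local sketch with Docker Machine and the VirtualBox driver (machine names are just examples; the worker join token is printed by docker swarm init):

# create two VirtualBox VMs running Docker
docker-machine create --driver virtualbox manager1
docker-machine create --driver virtualbox worker1

# initialise the swarm on the first machine and join the second
docker-machine ssh manager1 "docker swarm init --advertise-addr $(docker-machine ip manager1)"
docker-machine ssh worker1 "docker swarm join --token <worker-token> $(docker-machine ip manager1):2377"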
I installed Docker and Kitematic. I had VirtualBox before that and used many machines on VBox. Docker is working; I can pull containers and do other things like that, following this guide: https://docs.docker.com/mac/started/
I can run a container with:
docker run docker/whalesay cowsay boo
I want to know if there is any way that I can import some of my Vbox machines into docker as a Container locally?
I have OVA and OVF files on my local PC. I don't want to get involved with online containers! Is there any way to accomplish this?
Thank you.
It looks like you have some confusion about the concept of a container.
A container is not a virtual machine.
You can't import virtual machines into Docker. What you can do is build and run a Docker container which eliminates the need for a virtual machine (depending on your use case of course).
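For example, if you can express the application from your VM as a Dockerfile, the workflow is just (the image name myapp is an example):

# build an image from the Dockerfile in the current directory, then run it
docker build -t myapp .
docker run --rm -it myapp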
You can find a good explanation about the difference between a container and a virtual machine here.
TL;DR:
Both virtual machines and containers allow you to run multiple applications on shared hardware.
When using virtual machines, the hardware is shared among all applications, but each application runs on its own operating system.
When using containers, both the hardware AND the operating system are shared, and each application runs in a separate container.
This is in no way an exhaustive explanation regarding Docker containers - there are MANY more advantages to using Docker instead of a virtual machine (portability, consistency, infrastructure-as-code). This is just the main difference between them.