Why does Docker have a daemon?

I recently discovered rkt, a competing container runtime to Docker. It seems that rkt does not need a daemon; for me, running rkt is like running any other command, and it works easily with systemd (or other init systems).
This makes me wonder about the utility of Docker's daemon.
Why does Docker need a daemon? What does the daemon provide that would not be possible without it? Is its only goal to remove the need for an init system like systemd (as can be seen in RancherOS)?

Docker was designed as a client/server application, which gives you remote access to the Docker API. This enables tools like the classic container-based Swarm, which was effectively a reverse proxy in front of a cluster of Docker hosts.
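For example, the same docker client can target a remote daemon over that API just by changing the endpoint (the host name below is a placeholder):
# point the client at a remote daemon instead of the local socket
export DOCKER_HOST=tcp://swarm-manager.example.com:2376
export DOCKER_TLS_VERIFY=1
docker ps   # runs against the remote host's API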
The daemon also provides a place for shared state: it restarts containers according to their restart policy, and it manages networks and volumes that may be shared between multiple containers.
Lastly, with the introduction of swarm mode, the daemon is also the central location for orchestration features that would otherwise run as their own daemons, as they do with tools like Kubernetes.
If you need a daemon-less solution but otherwise like Docker, then consider using runc, which is the runtime that Docker itself uses for each container by default.
This has nothing to do with the init inside the container. If you need that, Docker now includes an optional init that you can enable per container. And you have always had the option to include your own init, like tini, if you need something to clean up zombie processes.
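For example, a quick sketch of that per-container init (the image here is just an example):
# --init makes Docker's bundled tini-style init PID 1 inside the container
docker run --rm --init alpine ps
# without --init, you can bake your own init, e.g. tini, into the image's ENTRYPOINT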

Related

Should we use supervisors to keep processes running in Docker containers?

I'm using Docker to run a Java REST service in a container. If I were outside of a container, I might use a process manager/supervisor to ensure that the Java service restarts if it encounters a strange one-off error. I see some posts about using supervisord inside containers, but they seem focused mostly on running multiple services rather than just keeping one up.
What is the common way of managing services that run in containers? Should I just be using some built-in Docker functionality on the container itself rather than trying to include a process manager?
You should not use a process supervisor inside your Docker container for a single-service container. Using a process supervisor effectively hides the health of your service, making it more difficult to detect when you have a problem.
You should rely on your container orchestration layer (which may be Docker itself, or a higher level tool like Docker Swarm or Kubernetes) to restart the container if the service fails.
With Docker (or Docker Swarm), this means setting a restart policy on the container.
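A minimal sketch of that restart policy (image and service names are placeholders):
# plain Docker: restart the container if the process exits with a non-zero status
docker run -d --restart=on-failure:5 --name rest-api my-java-rest-image
# swarm mode: the equivalent, expressed as a service restart condition
docker service create --name rest-api --restart-condition on-failure my-java-rest-image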

Starting a system service in a Docker container

I am running a Docker container where the application needs to have the autofs service running, but it's not currently run by default. The container already uses supervisord to manage several background processes, so I figure I should add the service to supervisor's program list.
Is there a way to do that without repeating much of the logic in /etc/init.d/autofs? Something like:
[program:autofs]
service = autofs
would be awesome but this syntax doesn't appear to be supported by supervisord.
Should I call systemctl, service, or /etc/init.d/autofs directly?
Do you need the service to run on the host? In that case you may need to add various mount points so that the container can interact with the host's systemd, and manually start the service with systemctl.
An alternative we use on Atomic Host (and that can be used on other systems as well) for managing system services in containers is what we call "system containers": we use systemd to launch and manage a runC container. This way you can specify the dependency on another service directly in the systemd template unit file.
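A rough sketch of such a unit, assuming a runc bundle has already been prepared (paths and names here are illustrative, not taken from Atomic Host):
[Unit]
Description=autofs system container
After=network-online.target

[Service]
# runc reads config.json from the directory given with --bundle
ExecStart=/usr/bin/runc run --bundle /var/lib/containers/autofs autofs
ExecStop=/usr/bin/runc kill autofs SIGTERM
Restart=on-failure

[Install]
WantedBy=multi-user.target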

Bootstrapping the Docker daemon

In the official Kubernetes multinode Docker guide, it is mentioned that you need another Docker instance:
A bootstrap Docker instance which is used to start etcd and flanneld, on which the Kubernetes components depend
So what is a bootstrap instance, and how do you make sure that it keeps running across restarts?
The documentation gives a detailed explanation as to the purpose of the bootstrap instance of Docker:
This guide uses a pattern of running two instances of the Docker daemon: 1) A bootstrap Docker instance which is used to start etcd and flanneld, on which the Kubernetes components depend 2) A main Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers
This pattern is necessary because the flannel daemon is responsible for setting up and managing the network that interconnects all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the main Docker daemon. However, it is still useful to use containers for deployment and management, so we create a simpler bootstrap daemon to achieve this.
In summary, the special bootstrap Docker daemon runs the bits that Kubernetes depends on, freeing up the normal Docker daemon to be managed by Kubernetes. This is a trick that leverages the fact that both etcd and flanneld can be run as containers. The alternative would be to set them up locally as services.
As for ensuring the bootstrap Docker daemon survives a restart, the answer lies in the code. Here is where it is called when running the master.sh script:
https://github.com/kubernetes/kube-deploy/blob/master/docker-multinode/master.sh#L36
https://github.com/kubernetes/kube-deploy/blob/master/docker-multinode/docker-bootstrap.sh#L20
So the code attempts to set up a service for the extra Docker daemon process.
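For reference, a rough sketch of what that service amounts to, written as a systemd unit (socket path, data directory, and flags are illustrative, not copied from the script):
[Unit]
Description=Bootstrap Docker daemon for etcd and flanneld
After=network-online.target

[Service]
# a second daemon, isolated from the main one by its own socket, data root, and pid file
ExecStart=/usr/bin/dockerd --host=unix:///var/run/docker-bootstrap.sock --data-root=/var/lib/docker-bootstrap --exec-root=/var/run/docker-bootstrap --pidfile=/var/run/docker-bootstrap.pid --bridge=none --iptables=false
Restart=always

[Install]
WantedBy=multi-user.target
The etcd and flanneld containers are then started against that socket, e.g. docker -H unix:///var/run/docker-bootstrap.sock run ...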

How is "lxd" different from lxc/docker?

Questions
How does lxd provide Full operating system functionality within containers, not just single processes?
How is it different from lxc/docker + wrappers?
Is it similar to a container that is launched with docker + supervisor/wrapper script to contain multiple processes in one container?
In other words:
What can I do with lxd that I cannot do with some wrappers over lxc and docker?
Why is it available only in Ubuntu if they are making use of mainline kernel features (namespaces and cgroups)?
How does lxd provide Full operating system functionality within containers, not just single processes?
Containers are isolated Linux systems that use kernel capabilities such as cgroups to limit CPU, memory, network, and other resources, without the need to start a full virtual machine.
LXD uses the capabilities provided by liblxc (which comes from LXC), and from this it gets the ability to provide full OS functionality inside a container.
How is it different from lxc/docker + wrappers?
LXD uses liblxc from LXC. Docker is more application focused: it runs only the principal process of your app inside the container (using libcontainer by default now; Docker originally used liblxc for this).
Is it similar to a container that is launched with docker + supervisor/wrapper script to contain multiple processes in one container?
Something similar. The difference between LXD and Docker is that Docker is an application container while LXD is a system container. LXD uses upstart/systemd as the principal process inside the container and by default is ready to act as a full VM-like environment with very light memory/CPU usage. Yes, you can build your Docker image with supervisord/runit, but you have to set that up manually; you can see how it is done at http://phusion.github.io/baseimage-docker/, which does something similar inside a container.
What can I do with lxd that I cannot do with some wrappers over lxc and docker ?
Live migration of containers; using your containers like full virtual machines; precise configuration to dedicate CPU cores, memory, and network I/O to your container; and running your container processes in unprivileged mode by default (root inside your container != root on your host). Docker works in privileged mode by default; only now in Docker 1.10 has unprivileged mode (user namespaces) been implemented, and you need to review (and maybe rewrite) your Dockerfiles because many things will not work in unprivileged mode.
LXD and Docker are different things. LXD gives you a "full OS" in a container, and you can use any deployment tool that works in a VM to deploy applications in LXD. With Docker your application is inside the container, and you need different tools for deploying applications to Docker and for collecting performance metrics. Docker is designed to run on various OS platforms, like Windows; LXD/LXC can only run on Linux, which is the reason Docker no longer uses LXC as part of its stack.
Why is it available only in ubuntu if they are making use of mainline kernel features (namespaces and cgroup )?
LXD has commercial support from Canonical if needed, but you can build LXD on CentOS 7 and Arch Linux (with a patched kernel); see https://github.com/lxc/lxd. Gentoo supports LXD now as well: https://wiki.gentoo.org/wiki/LXD.
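To make the application vs. system distinction concrete, a short sketch (image alias and names are examples):
# LXD: a full system container with systemd/upstart as PID 1
lxc launch ubuntu:16.04 webstack
lxc exec webstack -- systemctl list-units --type=service
# Docker: a single application process as PID 1
docker run -d nginx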
LXD is based on liblxc; its purpose is to control LXC containers with added capabilities such as snapshots and live migration. LXD is tied to LXC, and both are OS centered.
Docker is much more application centered. It was based on LXC at the beginning but is now independent of LXC and can use other backends. Docker focuses only on the application with its libraries and dependencies, not on the OS.
See this for more:
https://www.flockport.com/lxc-vs-lxd-vs-docker-making-sense-of-the-rapidly-evolving-container-ecosystem/
LXD works in conjunction with LXC and is not designed to replace or supplant LXC. Instead, it’s intended to make LXC-based containers easier to use through the addition of a back-end daemon supporting a REST API and a straightforward CLI client that works with both the local daemon and remote daemons via the REST API.
LXD is more like a Docker host.
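A brief sketch of that local/remote usage (the remote name and address are placeholders):
# the lxc client talks to the local LXD daemon by default
lxc list
# register a remote LXD daemon and drive it over the same REST API
lxc remote add buildhost https://buildhost.example.com:8443
lxc launch ubuntu:16.04 buildhost:test1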

Docker Container management in production environment

Maybe I missed something in the Docker documentation, but I'm curious and can't find an answer:
What mechanism is used to restart docker containers if they should error/close/etc?
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which complete an application without using docker compose? (as they say it is not production ready)
What mechanism is used to restart docker containers if they should error/close/etc?
Docker restart policies, as set with the --restart option to docker run. From the docker-run(1) man page:
--restart=""
    Restart policy to apply when a container exits (no, on-failure[:max-retry], always)
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which complete an application without using docker compose?
Well, you can of course use docker-compose if that is the best match for your requirements, even if it is not labelled as "production ready".
You can investigate larger container management solutions like Kubernetes or even OpenStack (although I would not recommend the latter unless you are already familiar with OpenStack).
You could craft individual systemd unit files for each container.
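A bare-bones sketch of such a unit (container name, image, and volume path are placeholders):
[Unit]
Description=my-app container
Requires=docker.service
After=docker.service

[Service]
# remove any stale container, then run in the foreground so systemd can track it
ExecStartPre=-/usr/bin/docker rm -f my-app
ExecStart=/usr/bin/docker run --rm --name my-app -v /srv/my-app/data:/data my-app-image
ExecStop=/usr/bin/docker stop my-app
Restart=always

[Install]
WantedBy=multi-user.target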
