docker-compose autorestart and supervisord autorestart: which to use?

I've seen in some builds the use of supervisor to run the docker-compose up -d command, with the possibility to autostart and/or autorestart.
I'm wondering whether this cohabitation of supervisor and docker-compose works well. Don't the two autorestart options interfere with each other? Also, what is the benefit of using supervisor in place of plain docker-compose, apart from starting things at boot if the server has been shut down?
Please share your experience if you have any with using these two tools.
Thank you

Running multiple single-process containers is almost always better than running a single multiple-process container; avoid supervisord when possible.
Mechanically, the combination should work fine. Supervisord will capture logs and take responsibility for restarting the process in the container. That means docker logs will have no interesting output, and you need to get the file content out of the container. If one of the managed processes fails then supervisord will restart it. The container itself will probably never be restarted, unless supervisord manages to crash somehow.
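For context, a typical supervisord-in-a-container setup looks roughly like the sketch below; the program names and paths are placeholders, not taken from any particular build:
[supervisord]
nodaemon=true
[program:web]
command=/usr/local/bin/run-web
autorestart=true
[program:worker]
command=/usr/local/bin/run-worker
autorestart=true
Each [program:...] entry is restarted by supervisord inside the container; Docker only ever sees the single supervisord process.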
There are several notable disadvantages to using supervisord:
As noted, it swallows logs, so you need a complex file-oriented approach to read them out.
If one of the processes fails then you'll have difficulty seeing that from outside the container.
If you have a code update you have to delete and recreate the container with the new image, which with supervisord means restarting every process.
In a clustered environment like Kubernetes, every process in the supervisord container runs on the same node, and you can't scale individual processes up to handle additional load.
Given the choice of tools you suggest, I'd pretty much always use Compose with its restart: policies, and not use supervisord at all.
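As a minimal sketch of the Compose-only approach (the service and image names are placeholders):
version: "3.8"
services:
  app:
    image: myapp:latest
    restart: unless-stopped
With restart: unless-stopped (or always), the Docker daemon restarts the container if its main process dies and brings it back up when the host reboots, with no supervisord inside the image.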

Related

Running a `docker` container with `detach=False`

In my Golang program, I am currently spawning a Docker container to perform some work. I chose to use a Docker container here since there are a lot of dependencies and OS-related items that will be much simpler to manage via a packaged container image. I am using the Golang Docker API to manage the containers (github.com/docker/docker/client)
One issue I am facing is that if the consumer of my Golang program presses Ctrl-C, the program quits but the Docker container is still running. This causes the work to keep going even though the consumer believes they have stopped the program.
If the Golang program was instead a bash script, I believe that running docker run without the -d flag would cause the container to be stopped as soon as this calling parent is stopped. However, in the Golang docker client at the URL provided previously, I don't see an option to do this. There are two parts here: container_create.go and container_start.go. The structs provided for container_create only contain pre-run based configurations (such as ports to expose, etc.), but there is no mention of background or detached modes. container_start also does not seem to have any options relevant to this.

Why don't Docker containers support sudo or systemd?

During my studies, I came across the fact that Docker containers support neither sudo nor systemd services. Not that I need these tools; I'm just curious about the topic and couldn't find an adequate explanation.
Docker is aimed at being minimal, since there can be many, many containers running at the same time. The idea is to reduce memory and disk usage. Since containers already run as root to begin with unless otherwise specified, there's no need to have sudo. Also, since most containers only ever run one process, there's no need for a service manager like systemd. Even if they did need to run more than one process, there are smaller programs like supervisord.
sudo is unnecessary in Docker. A container generally runs a single process, and if you intend it to run as not-root, you don't generally want it to be able to become root arbitrarily. In a Dockerfile, you can use USER to switch users as many times as you'd like; outside of Docker, you can use docker run -u root or docker exec -u root to get a root shell no matter how the container is configured.
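For example, to get a root shell in a running container for debugging (the container name is a placeholder):
docker exec -u root -it my-container sh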
Mechanically, sudo is bad for non-interactive environments (especially, it's very prone to asking for a user password) and users in Docker aren't usually configured with passwords at all. The most common recipe I see involves echo plain-text-password | passwd user, in a file committed to source control, and also easily retrieved via docker history; this is not good security practice.
systemd is unnecessary in Docker. A container generally runs a single process, so you don't need a process manager. Running systemd instead of the process you're trying to run also means you don't get anything useful from docker logs, can't use Docker restart policies effectively, and generally miss out on the core Docker ecosystem.
systemd also runs against the Unix philosophy of "make each program do one thing well". If you look at the set of things listed out on the systemd home page it sets up a ton of stuff; much of that is system-level things that belong to the host (swap, filesystem mounts, kernel parameters) and other things that you can't run in Docker (console getty processes). This also means you usually can't run systemd in a container without it being --privileged, which in turn means it can interfere with this system-level configuration.
There are some good technical reasons to run a dedicated init process in Docker, but a lightweight single-process init like tini is a better choice.
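A minimal sketch of that approach, assuming an Alpine base image and a placeholder application binary called myapp:
FROM alpine:3.19
RUN apk add --no-cache tini \
 && adduser -D appuser
COPY myapp /usr/local/bin/myapp
USER appuser
# tini runs as PID 1, reaping zombies and forwarding signals to the app
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["myapp"]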
Besides what @Aplet123 mentioned, consider that since containers don't have root access on the host and cannot even see other processes on the system (unless created with the --ipc option), they cannot cause any harm to your system even if all the processes within the container have root access. So there's no need to limit that already-limited environment with non-root users, and when there is only one user, there's no need for sudo.
Also, starting and stopping containers as services can be done by Docker itself, so the Docker daemon (which has itself been started via systemd) is in effect the master systemd for all containers. So there's no need for systemd inside the container either, for example when you want to start your Apache HTTP server.

Ansible commands on docker containers?

Up to now I have set up my ansible-playbook commands to run on AWS EC2 instances.
Can I run regular Ansible modules (lineinfile, apt, pip, etc.) against a container?
Can I add my container IP to the hosts file in a containers group, and does the same code then work? That is, if I change my main.yml file that has
hosts: ec2-group
to
hosts: containers-group
do all the commands still work?
I am a bit of a beginner at this, so please confirm: I am actually thinking of writing docker-compose files from scratch and running the docker-compose commands with Ansible.
You can, but it's not really how Docker is designed to be used.
A Docker container is usually a wrapper around a single process. In the standard setup you create an image that has that application built and packaged, and you can just run it without any further setup. It's not usually interesting to run a bare Linux distribution container (which won't have an application installed) or to run an interactive shell as the main container process. Tutorials like Docker's Build and run your image walk through this sequence.
A corollary to this is that containers don't usually have any local state. In the best case any state a container needs is in an external database; if you can't do that then you store local state in a volume that outlives the container.
Finally, it's extremely routine to delete and recreate containers. You need to do this to change some common options; in a cluster environment like Kubernetes this can happen outside your control. When this happens the new container will restart running its default setup, and it won't know about any manual changes the previous container might have had.
So you don't usually want to try to install software directly in a running container, since that will get lost as soon as the container exits. You can, in principle, get a shell in a container (via docker exec) but this is more of a debugging tool than an administration tool. You could make the only process a container runs be an ssh daemon, but anything you start this way will get lost as soon as the container exits (and I've never seen a recipe that correctly and securely sets up credentials to access it).
I'd recommend learning the standard Dockerfile system and running self-contained Docker images over trying to adapt Ansible to this rather different environment.
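As a sketch of what that looks like in practice (a hypothetical Python application; the image, package, and file names are placeholders), the things you would otherwise do with Ansible's apt and pip modules move into the Dockerfile and happen at build time:
FROM python:3.11-slim
WORKDIR /app
# system and Python dependencies are installed when the image is built, not by Ansible at run time
RUN apt-get update && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
You then docker build -t myapp . once and docker run the resulting image wherever you need it.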

Docker Process Management

I have a deployed application running inside a Docker container; it is, in effect, a websocket client that runs forever. On every deploy I rebuild the container and start it with docker run, using the command set in the Dockerfile.
Now, I've noticed a few times that the process occasionally dies without restarting. When running docker ps, I can see that the container is up, and has been up for 2 weeks, but the process running inside of it has died without the host being any the wiser.
Do I need to go so far as to have a process manager inside of the docker container to manage the containerized process?
EDIT:
Dockerfile: https://github.com/DVG/catpen-edi/blob/master/Dockerfile
We've developed a process-manager tailor-made for Docker containers and have been using it with quite a bit of success to solve exactly the problem you describe. The best starting point is to take a look at chaperone-docker on github. The readme on the first page contains a quick link to a minimal base image as well as a fully configured LAMP stack so you can try it out and see what a fully-configured image would look like. It's open-source and fully documented.
This is a very interesting problem here related to PID 1 and the fact that Docker replaces PID 1 with the command specified in CMD or ENTRYPOINT. What's happening is that the child process isn't automagically adopted by anything if the parent dies and it becomes an orphan (since there is no PID 1 in the sense of a traditional init system like you're used to). Here is some excellent reading to give you a few ideas. You may get some mileage out of their baseimage-docker image, which comes with their simplified init system ("my_init") and will solve some of this problem for you. However, I would strongly caution you against automatically adopting the Phusion mindset for all of your containers, as there exists some ideological friction in that space. I can't recall any discussion on Docker's Github about a potential minimal init system to solve this problem, but I can't imagine it will be a problem forever. Good luck!
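Current versions of Docker also ship a minimal init you can opt into with the --init flag, which runs a tiny init (tini) as PID 1 so orphaned children are reaped and signals are forwarded to your process; for example (the image name is a placeholder):
docker run --init --rm myimage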
If you have two ruby processes it sounds like the child hasn't exited, the application has just stopped working. It's likely the EventMachine reactor is sitting in the background.
Does the EDI app really need to spawn the additional Ruby process? This only adds another layer between Docker and your app. Run the server directly with CMD [ "ruby", "boot.rb" ]. If you find the problem still occurs with a single process then you will need to find what is causing your app to hang.
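As a sketch of that (the base image and file layout are assumptions, not the actual catpen-edi Dockerfile):
FROM ruby:3.2
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
# exec-form CMD makes the ruby process PID 1, so Docker signals and restart policies apply to it directly
CMD ["ruby", "boot.rb"]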
When a process is running as PID 1 in Docker it will need to handle the SIGINT and SIGTERM signals too.
# Trap ^C (SIGINT)
Signal.trap("INT") {
  shut_down   # application-specific cleanup
  exit
}
# Trap `kill` (SIGTERM)
Signal.trap("TERM") {
  shut_down
  exit
}
Docker also has restart policies for when the container does actually die.
docker run --restart=always
no
    Do not automatically restart the container when it exits. This is the default.
on-failure[:max-retries]
    Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.
always
    Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.
unless-stopped
    Always restart the container regardless of the exit status, but do not start it on daemon startup if the container has been put to a stopped state before.
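Once the app is PID 1 (so its death actually stops the container), a restart policy covers the crash case; as a sketch (the image name is a placeholder), with the restart count visible via docker inspect:
docker run -d --restart=on-failure:5 my-edi-image
docker inspect -f '{{.RestartCount}}' <containerIdOrName>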

update running docker container

I have a running docker container with a base image fedora:latest.
I would like to preserve the state of my running applications, but still update a few packages which got security fixes (e.g. gnutls, openssl and friends) since I first deployed the container.
How can I do that without interrupting service or losing the current state?
So optimally I would like to get a bash/csh/dash/sh shell on the running container, or use any fleet magic?
It's important to note that you may run into some issues with the container shutting down.
For example, imagine that you have a Dockerfile for an Apache container which runs Apache in the foreground. Imagine that you attach a shell to your container (via docker exec) and you start updating. You have to apply a fix to Apache and, in the process of updating, Apache restarts. The instant that Apache shuts down, the container will stop. You're going to lose the current state of the applications. This is going to require extremely careful planning and some luck, and some updates will probably not be possible.
The better way to do it is to rebuild the image upon which the container is based with all the appropriate updates, then re-run the container. There will be a (brief) interruption in service. However, in order for you to be able to save the state of your applications, you would need to design the images in such a way that any state information that needs to be preserved is stored in a persistent manner - either in the host file system by mounting a directory or in a data container.
In short, if you're going to lose important information when your container shuts down, then your system is fragile & you're going to run into problems sooner or later. Better to redesign it so that everything that needs to be persistent is saved outside the container.
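As a sketch of that workflow, with placeholder names and paths:
# rebuild the image with the patched packages
docker build -t myapp:latest .
# recreate the container; state kept in the mounted volume survives the swap
docker stop myapp
docker rm myapp
docker run -d --name myapp -v /srv/myapp-data:/var/lib/myapp myapp:latest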
If the docker container has a running bash
docker attach <containerIdOrName>
Otherwise execute a new program in the same container (here: bash)
docker exec -it <containerIdOrName> bash
