Restart docker container 1 if container 2 fails

I have two Docker containers, and when one fails I want to restart the other.
I thought restart_policy might be the right place to look, but it seems a bit sparse on options. Nothing I can find looks like the right kind of thing for restarting containers that depend on each other.
Ideally I want to put the restart instructions in my docker-compose file.
Any ideas?
P.S. I'm doing this because if the container running my web scraper code fails, I need to restart the webdriver container it is using.
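One rough way to do this (not a compose restart_policy option, just a sketch assuming the containers are named scraper and webdriver) is a small host-side watcher that listens for "die" events from the scraper container via docker events and restarts the webdriver container each time one arrives:
#!/bin/sh
# Hypothetical container names - adjust to whatever docker-compose gives you.
SCRAPER=scraper
WEBDRIVER=webdriver
# Stream "die" events for the scraper container and restart the webdriver
# container whenever one is emitted.
docker events --filter "container=${SCRAPER}" --filter "event=die" \
    --format '{{.Actor.Attributes.name}}' |
while read -r name; do
    echo "container ${name} died, restarting ${WEBDRIVER}"
    docker restart "${WEBDRIVER}"
done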

docker compose pull - suppress layer status information to clean up terminal output when there are many layers

Moving from docker-compose to the newer built-in docker compose, the output is much more verbose and becomes a problem when there are a lot of image layers and I'm deploying over an SSH client. If the terminal doesn't have enough vertical lines, the SSH terminal scrolls at an unreadable speed.
The original docker-compose showed the data in a single line per service and looked clean.
With the new docker compose, the output looks fine when there is enough vertical terminal space, but if the user does not have enough screen resolution it pushes the previous output off the screen and out of the terminal buffer, creating many thousands of lines.
One solution would be to pull each service individually with something like
docker compose ps --services | xargs -n 1 docker compose pull
However, I was hoping there was a flag to just suppress the layer output and show a single status line for each service, like the older docker-compose.
I am not looking to make it quiet (-q / --quiet), just not messy.
Reference: https://docs.docker.com/engine/reference/commandline/compose_pull/
The reference docs don't seem to offer any solution here. I was curious whether there is another method (without going back to the older docker-compose).
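One workaround worth trying (a sketch, not an official flag; it leans on the same docker compose ps --services trick as above and on pull --quiet per service) is to print your own one-line status per service and let --quiet hide the per-layer progress:
for svc in $(docker compose ps --services); do
    # One status line per service; --quiet suppresses the per-layer progress bars.
    printf 'Pulling %s ... ' "$svc"
    if docker compose pull --quiet "$svc"; then
        echo done
    else
        echo FAILED
    fi
done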

Rsyslog can't start inside of a docker container

I've got a Docker container running a service, and I need that service to send logs to rsyslog. It's an Ubuntu image running a set of services in the container. However, the rsyslog service cannot start inside this container, and I cannot determine why.
Running service rsyslog start (this image uses upstart, not systemd) returns only the output start: Job failed to start. There is no further information provided, even when I use --verbose.
Furthermore, there are no error logs from this failed startup. Because rsyslog is the service that can't start, it's obviously not running, so nothing is getting logged. I'm not finding anything relevant in Upstart's logs either: /var/log/upstart/ only contains the logs of a few things that successfully started, as well as dmesg.log, which simply contains dmesg: klogctl failed: Operation not permitted. From what I can tell that is due to a Docker limitation that cannot really be worked around, and it's unknown whether it is even related to the issue.
Here's the interesting bit: I have the exact same container running on a different host, and it's not suffering from this issue. Rsyslog is able to start and run in the container just fine on that host, so the cause is obviously some difference between the hosts. But I don't know where to begin with that: there are LOTS of differences between the hosts (the working one is my local Windows system, the failing one is a virtual machine running in a cloud environment), so I wouldn't know where to start working out which differences could cause this issue and which couldn't.
I've exhausted everything I know to check. My only option left is to come to Stack Overflow and ask for ideas.
Two questions here, really:
Is there any way to get more information out of the failure to start? start itself is a binary file, not a script, so I can't open it up and edit it. I'm reliant solely on the output of that command, and it's not logging anything anywhere useful.
What could possibly be different between these two hosts that could cause this issue? Are there any smoking guns or obvious candidates to check?
Regarding the container itself, unfortunately it's a container provided by a third party that I'm simply modifying. I can't really change anything fundamental about it, such as the fact that its entrypoint is /sbin/init (which is a very bad practice for Docker containers, and is the root cause of all of my troubles). This is also causing some issues with the Docker logging driver, which is why I'm stuck using syslog as the logging solution instead.
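For the first question, one avenue that may yield more information (a sketch, assuming you can exec into the container; <container> is a placeholder) is to bypass Upstart and run rsyslogd directly with its own check and debug flags, which normally print the reason it refuses to start. For the second, comparing the security profiles Docker applies on each host is a common smoking gun, since seccomp/AppArmor differences often explain why a container works on one host and not another:
# Validate the rsyslog configuration inside the container (rsyslogd's config-check mode).
docker exec -it <container> rsyslogd -N1
# Run rsyslogd in the foreground with debug output so the failure reason is printed.
docker exec -it <container> rsyslogd -dn
# Run on each host to compare the security options applied to containers.
docker info --format '{{.SecurityOptions}}'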

How can I get "docker-compose scale" to use the latest image for any additional instances created?

In my project, I have a number of micro-services that rely upon each other. I am using Docker Compose to bring everything up in the right order.
During development, when I write new code for a container, the container needs to be restarted so that the new code can be tried out. Thus far I've simply been restarting the whole stack:
docker-compose down && docker-compose up -d
That works fine, but bringing everything down and up again takes ~20 seconds, which will be too long for a live environment. I am therefore looking into various strategies to ensure that micro-services may be rebooted individually with no interruption at all.
My first approach, which nearly works, is to scale the service to be restarted from one instance to two. I then programmatically repoint the reverse proxy (Traefik) to the new instance, and when that is happy, I run docker stop on the old one.
My scale command is the old variety, since I am using Compose 1.8.0. It looks like this:
docker-compose scale missive-storage-backend=2
The only problem is that if there is a new image, Docker Compose does not use it - it stubbornly starts the new instance from the same image hash as the one already running. I've checked docker-compose scale --help and there is nothing in there relating to forcing the use of a new image.
Now I could use an ordinary docker run, but then I'd have to replicate all the options I've set up for this service in my docker-compose.yml, and I don't know if something run outside of the Compose file would be understood as being part of that Compose application (e.g. would it be stopped with a docker-compose down despite having been started manually?).
It's possible also that later versions of Docker Compose may have more options in the scale function (it has been merged with up anyway).
What is the simplest way to get this feature?
(Aside: I appreciate there are a myriad of orchestration tools to do gentle reboots and other wizardry, and I will surely explore that bottomless pit when I have the time available. For now, I feel that writing a few scripts to do some deployment tasks is the quicker win.)
I've fixed this. First I tried upgrading to Compose 1.9, but that didn't seem to offer the changes I needed. I then bumped to 1.13, where scale is deprecated as a separate command and appears as a switch on the up command.
As a test, I have an image called missive-storage, and I add a dummy change to the Dockerfile, so docker ps reports the image of the already running container as d4ebdee0f3e2 (since missive-storage:latest now points to a new image).
The ps line looks like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
45b8023f6ef1 d4ebdee0f3e2 "/usr/local/bin/du..." 4 minutes ago Up 4 minutes app_missive-storage-backend_1
I then issue this command (missive-storage-backend is the DC service name for image missive-storage):
docker-compose up -d --no-recreate --scale missive-storage-backend=2
which results in these containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0bd6577f281a missive-storage "/usr/local/bin/du..." 2 seconds ago Up 2 seconds app_missive-storage-backend_2
45b8023f6ef1 d4ebdee0f3e2 "/usr/local/bin/du..." 4 minutes ago Up 4 minutes app_missive-storage-backend_1
As you can see, this gives me two running containers, one based on the old image, and one based on the new image. From here I can just redirect traffic by sending a configuration change to the front-end proxy, then stop the old container.
Note that --no-recreate is important - without it, Docker Compose seems liable to reboot everything, defeating the object of the exercise.
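Putting it together, the whole rolling bounce looks roughly like the sketch below, reusing the service and container names from the output above; the Traefik reconfiguration step depends on your setup and is only indicated as a comment:
# Rebuild (or pull) the new image for the service.
docker-compose build missive-storage-backend
# Start a second instance from the new image without recreating the old one.
docker-compose up -d --no-recreate --scale missive-storage-backend=2
# ...repoint Traefik at app_missive-storage-backend_2 and wait for it to serve traffic...
# Then retire the old instance.
docker stop app_missive-storage-backend_1
docker rm app_missive-storage-backend_1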

Docker swarm mode load balancing

I've set up a docker swarm mode cluster, with two managers and one worker. This is on Centos 7. They're on machines dkr1, dkr2, dkr3. dkr3 is the worker.
I was upgrading to v1.13 the other day, and wanted zero downtime. But it didn't work exactly as expected. I'm trying to work out the correct way to do it, since this is one of the main goals of having a cluster.
The swarm services are in 'global' mode; that is, one replica per machine. My method for upgrading was to drain the node, stop the daemon, yum upgrade, and start the daemon again. (Note that this wiped out my daemon config settings for ExecStart=...! Be careful if you upgrade.)
Our client/ESB hits dkr2, which does its load-balancing magic over the swarm. dkr2 is the leader; dkr1 is 'reachable'.
I brought down dkr3. No issues. Upgraded docker. Brought it back up. No downtime from bringing down the worker.
Brought down dkr1. No issue at first. Still working when I brought it down. Upgraded docker. Brought it back up.
But during startup, it 404'ed. Once up, it was OK.
Brought down dkr2. I didn't actually record what happened then, sorry.
Anyway, while my app was starting up on dkr1, it 404'ed, since the server hadn't started yet.
Any idea what I might be doing wrong? I would suppose I need a health check of some sort, because the container is obviously ok, but the server isn't responding yet. So that's when I get downtime.
You are correct -- you need to specify a healthcheck to run against your app inside the container in order to make sure it is ready. Your container will not receive traffic until this healthcheck has passed.
A simple curl to an endpoint should suffice. Use the HEALTHCHECK instruction in your Dockerfile to specify a healthcheck to perform.
An example of the healthcheck line in a Dockerfile to check if an endpoint returned 200 OK would be:
HEALTHCHECK CMD curl -f 'http://localhost:8443/somepath' || exit 1
If you can't modify your Dockerfile, then you can also specify your healthcheck manually at deployment time using the compose file healthcheck format.
If that's not possible either and you need to update a running service, you can do a service update and use a combination of the health flags to specify your healthcheck.
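As a sketch of that last option (assuming a service named my_app and the same endpoint as the Dockerfile example above), docker service update accepts --health-* flags that attach a healthcheck to an already deployed service:
# Tasks are recreated with the new healthcheck and only receive traffic once healthy.
docker service update \
  --health-cmd "curl -f http://localhost:8443/somepath || exit 1" \
  --health-interval 10s \
  --health-timeout 5s \
  --health-retries 3 \
  my_app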

Is it possible/sane to develop within a Docker container?

I'm new to Docker and was wondering if it was possible (and a good idea) to develop within a docker container.
I mean: create a container, execute bash, install and configure everything I need, and start developing inside the container.
The container then becomes my main machine (for CLI-related work).
When I'm on the go (or when I buy a new machine), I can just push the container as an image and pull it on my laptop.
This solves the problem of having to keep and synchronize your dotfiles.
I haven't started using Docker yet, so is this realistic, or something to avoid (disk space problems and/or push/pull timing issues)?
Yes, it is a good idea, with the correct set-up. You'll be running code as if it were in a virtual machine.
The Dockerfile configuration for creating a build environment is not polished and will not expand shell variables, so pre-installing applications may be a bit tedious. On the other hand, after building your own image with the users and working environment you need, it won't be necessary to build it again, plus you can mount your own file system with the -v parameter of the run command, so you can have the files you need on both the host and the container. It's versatile.
sudo docker run -t -i -v /home/user_name/Workspace/project:/home/user_name/Workspace/myproject <image-name>
I'll play the contrarian and say it's a bad idea. I've done work where I've tried to keep a container "long running" and have modified it, but then accidentally lost it or deleted it.
In my opinion containers aren't meant to be long running VMs. They are just meant to be instances of an image. Start it, stop it, kill it, start it again.
As Alex mentioned, it's certainly possible, but in my opinion goes against the "Docker" way.
I'd rather use VirtualBox and Vagrant to create VMs to develop in.
A Docker container for development can be very handy. Depending on your stack and preferred IDE, you might want to keep the editing part outside, on the host, and mount the directory with the sources from the host into the container instead, as per Alex's suggestion. If you do so, beware of potential performance issues on Mac OS X with boot2docker.
I would not expect much from a workflow that pushes images around to sync between dev environments. IMHO, keeping Dockerfiles together with the code and syncing via SCM is a more straightforward direction to start with. I also keep supporting Makefiles in the same place to build the image(s) and run the container(s).
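A minimal sketch of that Dockerfile-with-the-code workflow (the image name devenv and the /work mount point are placeholders):
# Build the development image from the Dockerfile kept in the repository.
docker build -t devenv .
# Start a throwaway container with the source tree mounted in from the host.
docker run -it --rm -v "$PWD":/work -w /work devenv bash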
