Docker deployment - one machine - no downtime - docker

I have only one small web project to be run through Docker and only one machine, where I can't use virtualization (and I don't really need it either). I would like to know how I can deploy my application to a VPS with Docker without any downtime.
For now, I am just using a repository and creating a Docker container with docker-compose (including some configuration for production through a specific .yaml file).
I guess the best option would be to use Swarm, but I think that's not possible since I can only use one machine.

Single machine deployments are a great use case for Swarm. You can do "rolling updates" of your services, which makes zero-downtime service updates possible (assuming you're running 2 containers of a service).
Obviously, you won't have hardware- or OS-level fault tolerance, but Swarm is a better solution for production than the docker-compose CLI.
See all my reasons for using Swarm in this case in my GitHub AMA on the subject: Only one host for production environment. What to use: docker-compose or single node swarm?
See my YouTube video on an example of rolling updates.

Here's a simple approach we’ve used in production with just nginx and docker-compose: https://engineering.tines.com/blog/simple-zero-downtime-deploys
Basically, it’s this bash script:
reload_nginx() {
  docker exec nginx /usr/sbin/nginx -s reload
}

zero_downtime_deploy() {
  service_name=tines-app
  # docker ps lists the newest container first; right now the running
  # container is the only (hence last) match for the service name
  old_container_id=$(docker ps -f name=$service_name -q | tail -n1)

  # bring a new container online, running new code
  # (nginx continues routing to the old container only)
  docker-compose up -d --no-deps --scale $service_name=2 --no-recreate $service_name

  # wait for the new container (now the first in the list) to answer on port 3000
  # (--retry-connrefused needs curl 7.52 or newer)
  new_container_id=$(docker ps -f name=$service_name -q | head -n1)
  new_container_ip=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $new_container_id)
  curl --silent --include --retry-connrefused --retry 30 --retry-delay 1 --fail http://$new_container_ip:3000/ || exit 1

  # start routing requests to the new container (as well as the old)
  reload_nginx

  # take the old container offline
  docker stop $old_container_id
  docker rm $old_container_id
  docker-compose up -d --no-deps --scale $service_name=1 --no-recreate $service_name

  # stop routing requests to the old container
  reload_nginx
}


How to wait until `docker start` is finished?

When I run docker start, it seems the container might not be fully started at the time the docker start command returns. Is it so?
Is there a way to wait for the container to be fully started before the command returns? Thanks.
A common technique to make sure a container is fully started (i.e. services running, ports open, etc.) is to wait until a specific string is logged. See this example, Waiting until Docker containers are initialized, dealing with PostgreSQL and Rails.
Edited:
There could be another solution using the HEALTHCHECK of Docker containers. The idea is to configure the container with a health check command that is used to determine whether or not the main service is fully started and running normally.
The specified command runs inside the container and sets the health status to starting, healthy or unhealthy depending on its exit code (0 - container healthy, 1 - container is not healthy). The status of the container can then be retrieved on the host by inspecting the running instance (docker inspect).
Health check options can be configured inside the Dockerfile or when the container is run. Here is a simple example for PostgreSQL:
docker run --name postgres --detach \
  --health-cmd='pg_isready -U postgres' \
  --health-interval='5s' \
  --health-timeout='5s' \
  --health-start-period='20s' \
  postgres:latest && \
# the format prints the status as a quoted JSON string; match the whole line
# so that "unhealthy" is not mistaken for "healthy"
until docker inspect --format '{{json .State.Health.Status}}' postgres | \
  grep -q -x '"healthy"'; do sleep 1; done
In this case the health command is pg_isready. A web service will typically use curl; other containers have their specific commands.
The Docker community provides this kind of configuration for several official images here.
Now, when we restart the container (docker start), it is already configured and we need only the second part:
docker start postgres && \
until docker inspect --format '{{json .State.Health.Status}}' postgres | \
  grep -q -x '"healthy"'; do sleep 1; done
The command will return when the container is marked as healthy.
Hope that helps.
Disclaimer, I'm not an expert in Docker, and will be glad to know by myself whether a better solution exists.
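The two loops above (and the curl poll in the docker-compose deploy script earlier on this page) have a shared weakness: they never give up. If the container settles into "unhealthy", or the image has no HEALTHCHECK at all, they spin forever. Below is an added example, not code from any of the answers, of a wait that distinguishes the three health states and a timeout. It assumes the docker CLI is on PATH; the DOCKER and POLL_INTERVAL variables are hooks so the function can be exercised against a stub without a running daemon, and the container names in the usage comment are illustrative.

```shell
# Added example (not from the original answers): a health-wait that stops on
# "unhealthy" and on timeout instead of looping forever, and that does not
# mistake "unhealthy" for "healthy" the way a bare grep for the word would.
# Assumptions: the docker CLI is on PATH; DOCKER and POLL_INTERVAL exist only
# so the function can be exercised against a stub without a daemon.
DOCKER="${DOCKER:-docker}"
POLL_INTERVAL="${POLL_INTERVAL:-1}"

# container_health NAME
# Prints starting|healthy|unhealthy, or "none" if the image has no HEALTHCHECK
# (then .State.Health is null and there is nothing to wait for).
container_health() {
  "$DOCKER" inspect \
    --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}none{{end}}' \
    "$1"
}

# wait_for_healthy NAME [MAX_POLLS]
# Polls every POLL_INTERVAL seconds, at most MAX_POLLS times (default 60).
# Returns 0 once the container reports healthy, 1 if it reports unhealthy,
# 2 on timeout, 3 if the container has no health check or inspect failed.
wait_for_healthy() {
  name="$1"; max_polls="${2:-60}"; polls=0
  while [ "$polls" -lt "$max_polls" ]; do
    status="$(container_health "$name" 2>/dev/null)" || return 3
    case "$status" in
      healthy)   return 0 ;;
      unhealthy) echo "$name is unhealthy" >&2; return 1 ;;
      none|"")   echo "$name has no HEALTHCHECK" >&2; return 3 ;;
    esac
    polls=$((polls + 1))
    sleep "$POLL_INTERVAL"
  done
  echo "timed out waiting for $name" >&2
  return 2
}

# Usage on a real host, e.g. after 'docker start postgres' (container name is
# the one from the answer above; the timeout is a choice, not a docker default):
#   wait_for_healthy postgres 60 || exit 1
```

In the compose deploy script, this replaces the curl-against-container-IP step when the image carries a HEALTHCHECK: the new container's own check decides readiness, and a failed check aborts the deploy before nginx is reloaded towards it, instead of hanging the deploy.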
The docker system doesn't really know that a container "may not be fully started".
So, unfortunately, there is nothing to do about this in docker.
Usually, the commands used by the creator of the docker image (in the Dockerfile) are supposed to be organized in a way that the container will be usable once the docker start command ends on the image, and that's the best way. However, it's not always the case.
Here is an example:
LocalStack, which is a set of services for local development with AWS, has a docker image, but once it's started, for example, the S3 port is not ready to accept connections yet.
From what I understand, a non-ready-although-exposed port is the typical situation that you refer to.
So, from my experience, in the application that talks to the docker process, the attempt to connect to the server port should be enclosed with retries until it's available.

Start the docker daemon without starting containers that set to restart automatically

The docker daemon isn't starting anymore on my computer (Linux / CentOS 7), and I strongly suspect that a container that is set to auto-restart is to blame in this case. If I start the daemon manually, the last line I see is "Loading containers: start", and then it just hangs.
What I'd like to do is to start the daemon without starting any containers. But I can't find any option to do that. Is there any option in docker to start the daemon without also starting containers set to automatically restart? If not, is there a way to remove the containers manually that doesn't require the docker daemon running?
I wrote this little script to stop all the containers before docker is started. It requires jq to be installed.
# run as root while the daemon is stopped: marks every container as not
# running so the daemon does not try to restart it on startup
for i in /var/lib/docker/containers/*/config.v2.json; do
  # give the rewritten copy the same permissions/ACL as the original
  touch "$i.new" && getfacl -p "$i" | setfacl --set-file=- "$i.new"
  jq -c '.State.Running = false' "$i" > "$i.new" && mv -f "$i.new" "$i"
done
I think we need to verify the storage driver for docker that you are using. Devicemapper is known to have some issues similar to what you are describing. I would suggest moving to overlay2 as the storage driver.
If you are not running this on a prod system, you can try the steps below to see if the daemon comes up or not:
Stop the daemon process
Clean the docker home directory, default is /var/lib/docker/*
You may not be able to remove everything; in that case the safe bet is to stop docker from autostarting (systemctl disable docker) and restart the system
Once the system is up, execute step 2 again and try to restart the daemon. Hopefully everything will come up.

Deploying new versions of an image instantly

I would like to have 3 versions of my container running at any one time (on the same machine). Something like this:
version v7 (stage)
version v6 (live)
version v5 (old)
then I would like to map this to 3 urls:
v7.example.com
v6.example.com
v5.example.com
And also, a 4th url, which refers to the current (or default) version:
www.example.com (which maps to http://v6.mydomain.com)
Presumably, I could take some configuration step that would change the "default" version from v6 to v7. That step should hopefully be instant and atomic.
The idea is that deploying the next version of an app is a distinct step from activating that version (by activate, I mean making that version the default).
Therefore a rollout (or a rollback) would simply be a matter of changing the default version to the next (or previous) version.
Google App Engine supports this kind of pattern and I really like it.
Has anyone set something like this up using Docker? I would appreciate any advice on how to do it. Thanks.
I would do this with a reverse proxy in front of the containers running your webapp.
Example using the jwilder/nginx-proxy image
Let's say your docker host IP address is 11.22.33.44.
Let's say your docker images are:
mywebapp:5 for v5
mywebapp:6 for v6
mywebapp:7 for v7
First, make sure your DNS is set up so that v5.example.com, v6.example.com, v7.example.com and www.example.com all resolve to 11.22.33.44.
Start a jwilder/nginx-proxy on your docker host:
# note: the socket mount lets the proxy container watch (and control) the
# docker daemon, which is root-equivalent on the host; :ro only protects the
# socket file itself, not the API behind it
docker run -d --name reverseproxy -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro -e DEFAULT_HOST=www.example.com jwilder/nginx-proxy
Set v6 as the default one
Start the webapps containers:
docker run -d --name webapp5 -e VIRTUAL_HOST="v5.example.com" mywebapp:5
docker run -d --name webapp6 -e VIRTUAL_HOST="v6.example.com,www.example.com" mywebapp:6
docker run -d --name webapp7 -e VIRTUAL_HOST="v7.example.com" mywebapp:7
The jwilder/nginx-proxy will use the value of the VIRTUAL_HOST environment variable to update its configuration and route the requests to the correct container.
How to make v7 the new default one
First, remove container webapp7 and create a new one with www.example.com added to the VIRTUAL_HOST variable:
docker rm -f webapp7
docker run -d --name webapp7 -e VIRTUAL_HOST="v7.example.com,www.example.com" mywebapp:7
In this state, the reverse proxy will load balance queries for www.example.com to both webapp6 and webapp7 containers.
Finally, remove container webapp6 and eventually recreate it, but without www.example.com in the VIRTUAL_HOST value:
docker rm -f webapp6
docker run -d --name webapp6 -e VIRTUAL_HOST="v6.example.com" mywebapp:6
I thought I would share what I ended up doing. I took Thomasleveil's advice to use nginx. But rather than starting and stopping a whole docker container and nginx just to switch versions, I do this:
Change the port number in the nginx config file (see file below)
Call service nginx reload (which is instant).
server {
    location / {
        proxy_pass http://192.168.1.50:81/;
    }
}
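The "change the port, then reload" step above is the activation step the question asked to be instant and atomic. Editing the file in place with an editor is neither: a reload triggered meanwhile can read a half-written file, and a typo takes www down until someone notices. The following is an added example (not from either answer) of making activation one validated, atomic operation: the new include is written to a temp file, renamed over the old one (rename(2) is atomic within a filesystem), checked with nginx -t, and only then reloaded; if validation fails the previous file is put back. It assumes nginx is on PATH and the include path and ports are whatever host ports the versioned containers publish; NGINX_BIN is a hook so the functions can be exercised without nginx installed.

```shell
# Added example (not from the original answers): make "activate version N" a
# single atomic, validated step with rollback. Assumptions: nginx is on PATH
# and reads the include file; NGINX_BIN is a hook so the function can run
# without nginx (e.g. under a test stub).
NGINX_BIN="${NGINX_BIN:-nginx}"

# render_default_conf PORT
# Prints the include that maps www.example.com to the versioned container
# published on 127.0.0.1:PORT.
render_default_conf() {
  port="$1"
  cat <<EOF
# default version: upstream on 127.0.0.1:${port} (generated, do not hand-edit)
server {
    listen 80;
    server_name www.example.com;
    location / {
        proxy_pass http://127.0.0.1:${port}/;
        proxy_set_header Host \$host;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    }
}
EOF
}

# activate_version CONF_PATH PORT
# 1. reject anything but a plain number for PORT (it ends up in a root-owned
#    config file, so do not let "87; ..." through),
# 2. write the new include to a temp file beside CONF_PATH and rename it over
#    the old file (atomic: nginx never sees a half-written include),
# 3. run 'nginx -t'; on failure restore the previous file, otherwise reload.
activate_version() {
  conf="$1"; port="$2"
  case "$port" in
    ''|*[!0-9]*) echo "port must be numeric: '$port'" >&2; return 2 ;;
  esac
  tmp="$(mktemp "${conf}.XXXXXX")" || return 1
  render_default_conf "$port" > "$tmp"
  chmod 0644 "$tmp"
  if [ -f "$conf" ]; then cp -p "$conf" "${conf}.prev"; fi
  mv -f "$tmp" "$conf"
  if ! "$NGINX_BIN" -t >/dev/null 2>&1; then
    echo "nginx -t rejected the new config, rolling back" >&2
    [ -f "${conf}.prev" ] && mv -f "${conf}.prev" "$conf"
    return 1
  fi
  "$NGINX_BIN" -s reload
}

# current_port CONF_PATH: report which upstream port the include points at,
# for a quick "what is live right now?" check.
current_port() {
  sed -n 's#.*proxy_pass http://127\.0\.0\.1:\([0-9]*\)/;.*#\1#p' "$1"
}

# Usage as root on the host (path and ports are illustrative):
#   activate_version /etc/nginx/conf.d/default-version.conf 87   # www -> v7
#   activate_version /etc/nginx/conf.d/default-version.conf 86   # back to v6
```

Because validation happens after the rename but before the reload, the running nginx keeps serving the old upstream throughout; a rejected config never reaches it, and a rollback is the same command with the previous port.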

How to refresh a container links

I have two Docker containers: one is an nginx frontend and the other is an expressjs application. Nginx is the entry point and it does a proxy to expressjs.
I do:
docker run -d --name 'expressjs' geographica/expressjs
docker run -d --name 'nginx' --link expressjs nginx
After that, when I update the image geographica/expressjs I need to recreate the expressjs container:
docker stop expressjs && docker rm expressjs && docker run -d --name 'expressjs' geographica/expressjs
At this point, I also need to recreate the nginx container. How can I do it without recreating the nginx container?
It's a simplification of our problem, our real server has a nginx frontend and N applications, so each time we update one of the application we need to restart the nginx and stop the service for other applications.
Please, avoid docker-compose solutions. I wouldn't like to have a unique/huge docker-compose file for all the applications.
UPDATED:
I also think that something like this would be useful: https://github.com/docker/docker/issues/7468, having a docker link command to change container links at runtime. Unfortunately, it's still not available in 1.8.2.
This was discussed in issue 6350:
If I explicitly do a docker restart the IP is correctly updated, however I was using "systemctl restart" which does a stop, kill and rm before a run
In that case ("stop - rm - run"), links are not refreshed:
docker does not assume that a container with the same name should be linked to
It doesn't always make sense to keep that "link", after all the new container could be completely unrelated.
My solution and my advice is that:
you look into something a bit more robust like the Ambassador pattern, which is just a fancy way of saying you link to a proxy that you never restart, to keep the docker links active.
(also introduced here)
Another solution is to just docker create, docker start and docker stop instead of docker rm.
Lastly, my actual solution was to use something like SkyDNS or docker-gen to keep a central DNS with all the container names. This last solution is the best for me because it allows me to move containers between hosts, and docker linking can't work like that.
With the next versions of docker, libnetwork will actually be the way to go.
(see "The Container Network Model (CNM)", and "Docker Online Meetup #22: Docker Networking - video")

Docker SSH or Detach/Attach

I have a docker image with all the necessary tools and environment properly set up. However, I am having a hard time running it in the background.
It seems like there are two approaches:
(1) I can run the box as a daemon and attach to it whenever I want to use the box. However, the container exits with code zero right after I run it as a daemon.
$:~/docker/docker_scrapy$ sudo docker run -ti -v ~/docker/docker_scrapy/myvolume:/var/myvolume 3fb9894af1d9 /bin/bash
root@3fc39116a586:/# python -c 'from bs4 import BeautifulSoup'
root@3fc39116a586:/# cd /var/myvolume/
root@3fc39116a586:/var/myvolume#
$:~/docker/docker_scrapy$ sudo docker run -d -v ~/docker/docker_scrapy/myvolume:/var/myvolume 3fb9894af1d9
c5fab6e6ac02a579e3371aa641b18ca67feb93a9f4f4934b6d083157182fe4e1
$:~/docker/docker_scrapy$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Clearly, I can start the box in interactive mode, but when I try to run it as a daemon, it exits with code 0 right after I start it. And I cannot attach to it because I would need to start it first. Does that mean you cannot run an image in daemon mode if it is idle?
(2) Or set it up as an SSH server, so I can ssh in and do the work whenever I want, like Vagrant up/ssh.
In summary:
(1) What did I do wrong with the detach/attach?
(2) Which is the proper way to run docker in the background? daemon/ssh
If you give it another command to run after starting the service that waits for input, then the container will keep running until you attach and exit that command. I usually leave a shell running after the service starts so I can debug things. Here's a simple example:
First let's create a service that runs in the background:
arthur@a:~$ docker run -ti ubuntu bash
root@5dc7f330b947:/# cat <<'EOF' >start-service.sh
> while true
> do
>   echo service is running >> service.log
>   sleep 10
> done
> EOF
root@5dc7f330b947:/# chmod +x start-service.sh
root@5dc7f330b947:/# exit
arthur@a:~$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5dc7f330b947 ubuntu:12.04 bash 50 seconds ago Exited (0) 3 seconds ago jolly_nobel
arthur@a:~$ docker commit 5dc7f330b947 service/example
4c37b69b129287d79a6fe3916e4293f935194966b1de49d125f1cf8d6ab14f6f
Then we can start it (I background it with a & here; in your example the & would not be required). Note it's fine to use both the interactive and detach options.
arthur@a:~$ docker run -ti -d service/example bash -c "./start-service.sh & bash"
b35a5397ea2d29b4085d93ef32270379b09e49118380b0376309bca74fd719d0
arthur@a:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b35a5397ea2d service/example:latest bash -c './start-ser 7 seconds ago Up 7 seconds cranky_wright
Later we can attach and check on the service by looking in its log file:
arthur@a:~$ docker attach b35a5397ea2d
root@b35a5397ea2d:/# cat service.log
service is running
service is running
service is running
root@b35a5397ea2d:/#
I don't recommend running sshd inside the container because it leaves an option for attackers that isn't strictly useful for me.
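The start-service.sh loop in the answer above keeps the container alive, but a plain while/sleep loop as PID 1 ignores SIGTERM, so docker stop waits out its grace period and then kills it, and any log line or cleanup at shutdown is lost. The script below is an added example, not code from the answer: the same heartbeat loop written to be the container's single foreground command, exiting promptly and cleanly on SIGTERM, which removes the need for the trailing interactive bash (docker exec -it <name> sh gives a shell on demand) and for an sshd in the container. The log path, interval and the --run flag are choices made here, not the answer's.

```shell
# Added example (not from the original answer): the heartbeat loop as a proper
# foreground container command that handles SIGTERM. Log path, interval and
# the --run flag are this example's choices, not the page's.

# run_service LOG_FILE [INTERVAL_SECONDS]
# Heartbeat loop meant to be the container's single foreground process (PID 1).
run_service() {
  log="$1"; interval="${2:-10}"; child=""
  # 'docker stop' sends SIGTERM to PID 1 and waits (10 s by default) before
  # SIGKILL; handle it: stop the in-flight sleep, record the stop, exit 0.
  trap 'kill "$child" 2>/dev/null || :; echo "service stopping" >> "$log"; exit 0' TERM INT
  while :; do
    echo "service is running" >> "$log"
    # A foreground 'sleep' would postpone the trap until it finishes; a
    # background sleep plus 'wait' lets the signal interrupt the wait at once.
    sleep "$interval" & child=$!
    wait "$child" || :
  done
}

# As the container command (e.g. CMD ["sh", "/start-service.sh", "--run"]) the
# script runs the loop itself; without --run it only defines the function.
if [ "${1:-}" = "--run" ]; then
  run_service "${2:-/var/log/service.log}" "${3:-10}"
fi
```

With this as the image's CMD the container stays up on its own, docker stop returns as soon as the trap runs instead of after the kill timeout, and debugging is a docker exec away; nothing needs to be attached to, and no extra listening service (sshd) is added to the container's attack surface.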
A lot of questions in there. I would first suggest you go through the docker tutorial to grasp some of the underlying concepts. That said...
That Dockerfile will never run in the background; that's not how docker works. There is no CMD, no ENTRYPOINT, nothing to run.
Docker by default runs one task; when that returns, the container stops. So if all you wanted was an sshd you would run that as your CMD in non-daemon mode (sshd -D).
There are ways to run daemonized apps though:
Using supervisord, as documented on the docker site.
Another alternative is phusion/baseimage.
Phusion/baseimage provides ssh access, but honestly, to do what I need in containers I find nsenter easier to use, especially when paired with the phusion docker-bash tool.
Notice: this answer promotes a tool I've written.
First of all, conceptually, running multiple processes in one container is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/). A more favorable solution is one that involves multiple containers, each running their own process/service. Linking them together would result in a coherent application.
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container, without that container even knowing about ssh. The only requirement is that the container has bash.
The following example would start an SSH server attached to a container with name 'sshd-web-server1'.
# note: mounting the docker socket (and the docker binary) into this container
# gives it full control of the daemon, i.e. root on the host; anyone who can
# log in over port 2222 inherits that
docker run -ti --name sshd-web-server1 -e CONTAINER=web-server1 -p 2222:22 \
  -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker \
  jeroenpeeters/docker-ssh
You connect to the SSH server with your ssh client of choice, just as you normally would.
Be advised: Docker-SSH is currently still under development, but it does work! Please let me know what you think.
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
