Deploying new versions of an image instantly - docker

I would like to have 3 versions of my container running at any one time (on the same machine). Something like this:
version v7 (stage)
version v6 (live)
version v5 (old)
then I would like to map this to 3 urls:
v7.example.com
v6.example.com
v5.example.com
And also, a 4th url, which refers to the current (or default) version:
www.example.com (which maps to http://v6.example.com)
Presumably, I could take some configuration step that would change the "default" version from v6 to v7. That step should hopefully be instant and atomic.
The idea is that deploying the next version of an app is a distinct step from activating that version (by activate, I mean making that version the default).
Therefore a rollout (or a rollback) would simply be a matter of changing the default version to the next (or previous) version.
Google App Engine supports this kind of pattern and I really like it.
Has anyone set something like this up using Docker? I would appreciate any advice on how to do it. Thanks.

I would do this with a reverse proxy in front of the containers running your webapp.
Example using the jwilder/nginx-proxy image
Let's say your docker host IP address is 11.22.33.44.
Let's say your docker images are:
mywebapp:5 for v5
mywebapp:6 for v6
mywebapp:7 for v7
First, make sure your DNS is set up so that v5.example.com, v6.example.com, v7.example.com and www.example.com all resolve to 11.22.33.44.
Start a jwilder/nginx-proxy on your docker host:
docker run -d --name reverseproxy -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro -e DEFAULT_HOST=www.example.com jwilder/nginx-proxy
Set v6 as the default one
Start the webapp containers:
docker run -d --name webapp5 -e VIRTUAL_HOST="v5.example.com" mywebapp:5
docker run -d --name webapp6 -e VIRTUAL_HOST="v6.example.com,www.example.com" mywebapp:6
docker run -d --name webapp7 -e VIRTUAL_HOST="v7.example.com" mywebapp:7
The jwilder/nginx-proxy will use the value of the VIRTUAL_HOST environment variable to update its configuration and route the requests to the correct container.
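To see what the proxy actually generated, you can dump its configuration (a hedged check; /etc/nginx/conf.d/default.conf is the file jwilder/nginx-proxy normally writes to):
docker exec reverseproxy cat /etc/nginx/conf.d/default.conf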
How to make v7 the new default one
First, remove container webapp7 and create a new one with www.example.com added to the VIRTUAL_HOST variable:
docker rm -f webapp7
docker run -d --name webapp7 -e VIRTUAL_HOST="v7.example.com,www.example.com" mywebapp:7
In this state, the reverse proxy will load-balance requests for www.example.com across both the webapp6 and webapp7 containers.
Finally, remove container webapp6 and recreate it without www.example.com in the VIRTUAL_HOST value:
docker rm -f webapp6
docker run -d --name webapp6 -e VIRTUAL_HOST="v6.example.com" mywebapp:6
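If you promote versions regularly, the same steps can be wrapped in a small script (a sketch based only on the conventions above: containers named webapp<N>, images mywebapp:<N>, and the VIRTUAL_HOST variable read by jwilder/nginx-proxy):
#!/bin/bash
# promote.sh NEW OLD -- e.g. `./promote.sh 7 6` makes v7 the default and keeps v6 on v6.example.com
set -e
new=$1
old=$2
docker rm -f webapp$new 2>/dev/null || true
docker run -d --name webapp$new -e VIRTUAL_HOST="v$new.example.com,www.example.com" mywebapp:$new
docker rm -f webapp$old
docker run -d --name webapp$old -e VIRTUAL_HOST="v$old.example.com" mywebapp:$old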

I thought I would share what I ended up doing. I took Thomasleveil's advice to use nginx, but rather than stopping and starting a whole docker container (and nginx) just to switch versions, I do this:
Change the port number in the nginx config file (see file below)
Call service nginx reload (which is instant).
server {
    location / {
        # change this port to point at a different version's container, then reload nginx
        proxy_pass http://192.168.1.50:81/;
    }
}
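The switch itself is easy to script (a hedged sketch; the config path and the new port are placeholders for whatever your setup uses):
# rewrite the upstream port in the nginx config, then reload nginx (effectively instant)
sudo sed -i 's|proxy_pass http://192.168.1.50:[0-9]*/;|proxy_pass http://192.168.1.50:82/;|' /etc/nginx/sites-enabled/default
sudo service nginx reload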

Related

How to run sitespeed.io in an apache/nginx server?

I have recently heard about sitespeed.io and started using it to measure the performance of my site.
I am running it in a docker container on my GCP cloud instance.
The problem is that every time I run the command it stores the result in a particular directory, sitespeed-result, and then I need to copy the whole thing to my local Windows machine to view the index.html file.
Is it possible to serve this from a server like apache? For example, I can run an apache container on my docker host, but how do I map the sitespeed.io result so that it is available at http://my-gcp-instance:80, where my apache container is running on port 80?
sudo docker run -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:13.3.0 https://mywebsite.com
Sorry for posting the question, but I got it working.
sudo docker run -dit --name my-apache -p 8080:80 -v "$(pwd)":/usr/local/apache2/htdocs/ httpd:2.4
$(pwd) is where I am storing the sitespeed results.
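Putting the two commands together (a hedged sketch; sitespeed-result is the default output folder mentioned in the question, and the report path under it is timestamped):
# run sitespeed.io so it writes its results under the current directory
sudo docker run -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:13.3.0 https://mywebsite.com
# serve that same directory with apache on host port 8080
sudo docker run -dit --name my-apache -p 8080:80 -v "$(pwd)":/usr/local/apache2/htdocs/ httpd:2.4
# the report is then reachable under http://my-gcp-instance:8080/sitespeed-result/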

Docker deployment - one machine - no downtime

I have only one small web project to run with Docker, and only one machine, where I can't use virtualization (and I don't really need it either). I would like to know how I can deploy my application to a VPS with Docker without any downtime.
For now, I am just using a repository and creating the docker container with docker-compose (including some production configuration through a specific .yaml file).
I guess the best option would be Swarm, but I think it's not possible since I can only use one machine.
Single-machine deployments are a great use case for Swarm. You can do rolling updates of your services, which makes zero-downtime service updates possible (assuming you're running at least 2 containers of a service).
Obviously, you won't have hardware- or OS-level fault tolerance, but Swarm is a better solution for production than the docker-compose CLI.
See all my reasons for using Swarm in this case in my GitHub AMA on the subject: Only one host for production environment. What to use: docker-compose or single node swarm?
See my YouTube video on an example of rolling updates.
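For reference, a minimal single-node rolling-update sketch (the image name, service name, and ports are placeholders, not from the question):
# one-time setup: turn the single machine into a one-node swarm
docker swarm init
# run two replicas of the service so one can always stay up
docker service create --name webapp --replicas 2 -p 80:3000 myorg/webapp:v6
# roll out a new image one task at a time, waiting 10s between tasks
docker service update --update-parallelism 1 --update-delay 10s --image myorg/webapp:v7 webapp
# roll back to the previous service spec if something goes wrong
docker service rollback webapp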
Here's a simple approach we’ve used in production with just nginx and docker-compose: https://engineering.tines.com/blog/simple-zero-downtime-deploys
Basically, it’s this bash script:
reload_nginx() {
  docker exec nginx /usr/sbin/nginx -s reload
}

zero_downtime_deploy() {
  service_name=tines-app
  old_container_id=$(docker ps -f name=$service_name -q | tail -n1)

  # bring a new container online, running new code
  # (nginx continues routing to the old container only)
  docker-compose up -d --no-deps --scale $service_name=2 --no-recreate $service_name

  # wait for new container to be available
  new_container_id=$(docker ps -f name=$service_name -q | head -n1)
  new_container_ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $new_container_id)
  curl --silent --include --retry-connrefused --retry 30 --retry-delay 1 --fail http://$new_container_ip:3000/ || exit 1

  # start routing requests to the new container (as well as the old)
  reload_nginx

  # take the old container offline
  docker stop $old_container_id
  docker rm $old_container_id
  docker-compose up -d --no-deps --scale $service_name=1 --no-recreate $service_name

  # stop routing requests to the old container
  reload_nginx
}
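A hypothetical way to use it (the service name tines-app comes from the script above; pulling the new image first is an assumption about how the new code arrives):
# fetch the new image for the service, then cut over with the function above
docker-compose pull tines-app
zero_downtime_deploy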

Run shiny app on shiny server installed in docker container from Windows 10 Pro?

I am using Windows 10 Pro with Docker installed. I pulled the rocker/shiny image (docker pull rocker/shiny) on my computer and started it as described in the documentation at https://hub.docker.com/r/rocker/shiny/ using the following command:
docker run -d -p 80:3838 -v C:\\Users\\<My name>\\Documents\\R\\Rprojects\\ShinyHelloWorld\\:/srv/shiny-server/ -v C:\\Users\\<My name>\\Documents\\R\\Rprojects\\ShinyHelloWorld\\:/var/log/shiny-server/ rocker/shiny
The container was created successfully:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f0ee402966b9 rocker/shiny "/usr/bin/shiny-serv…" 2 minutes ago Up 2 minutes 0.0.0.0:80->3838/tcp youthful_banach
I created the ShinyHelloWorld application using RStudio, and the folder on the local host that I mounted into the docker container basically contains one file, app.R, with the default shiny application created by RStudio.
Now the problem is: I can't run this application from my browser using the address http://localhost:3838/ShinyHelloWorld/.
When I use the URL http://localhost:3838 it returns a web page with the single line Index of /, so something is listening.
Did I correctly run shiny server?
I suppose that I am using an incorrect URL in my browser to access the server. How do I do it correctly?
Do I need to install my shiny app on the server somehow?
Is it possible to run shiny server using a token, as with:
http://localhost:8888/?token=44dab68c1bc7b1662041853573f37cfa03f13d029d397816
as described, e.g., in Cook, J.: Docker for Data Science: Building Scalable and Extensible Data Infrastructure Around the Jupyter Notebook Server. Apress, 2017?
How do I find the token if it exists?
Suppose that I want to use a docker-compose.yml and then run docker-compose up. Please help me complete the file below so that it executes the same command as above.
version: "3"
services:
image: rocker/shiny
volumes:
- C:\\Users\\aabor\\Documents\\R\\Rprojects\\ShinyHelloWorld:/srv/shiny-server/
- C:\\Users\\aabor\\Documents\\R\\Rprojects\\ShinyHelloWorld:/var/log/shiny-server/
ports:
- 80:3838
container_name: rocker-shiny-container
Look at the ports column: 0.0.0.0:80->3838/tcp means that port 80 on the host is mapped to port 3838 in the container, so you should try http://localhost first.
I resolved the issue myself. The problem was with the folder path.
This command will create docker container correctly:
docker run -d -p 3838:3838 -v //c/Users/<My Name>/Documents/R/Rprojects:/srv/shiny-server/ -v //c/Users/<My Name>/Documents/R/Rprojects:/var/log/shiny-server/ rocker/shiny
Then, if I use the URL http://localhost:3838/ShinyHelloWorld/ in my browser, the shiny application starts.
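If it still does not respond, a couple of hedged sanity checks (substitute your own container name from docker ps; youthful_banach was the autogenerated name in the output above):
# confirm the app directory is actually mounted inside the container
docker exec youthful_banach ls /srv/shiny-server
# look at shiny-server's output for startup errors
docker logs youthful_banach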

How to refresh container links

I have two containers: one is an nginx frontend and the other is an expressjs application. Nginx is the entry point and proxies requests to expressjs.
I do:
docker run -d --name 'expressjs' geographica/expressjs
docker run -d --name 'nginx' --link expressjs nginx
After that, when I update the image geographica/expressjs, I need to recreate the expressjs container:
docker stop expressjs && docker rm expressjs && docker run -d --name 'expressjs' geographica/expressjs
At this point I also have to recreate the nginx container. How can I avoid recreating the nginx container?
This is a simplification of our problem: our real server has an nginx frontend and N applications, so each time we update one of the applications we have to restart nginx and interrupt service for the other applications.
Please avoid docker-compose solutions; I wouldn't like to have a single, huge docker-compose file for all the applications.
UPDATED:
I also think that something like https://github.com/docker/docker/issues/7468 (a docker link command to change container links at runtime) would be useful. Unfortunately, it is still not available in 1.8.2.
This was discussed in issue 6350:
If I explicitly do a docker restart the IP is correctly updated, however I was using "systemctl restart" which does a stop, kill and rm before a run
In that case ("stop - rm - run"), links are not refreshed:
docker does not assume that a container with the same name should be linked to
It doesn't always make sense to keep that "link", after all the new container could be completely unrelated.
My solution and my advice, is that:
you look into something a bit more robust like the Ambassador pattern that is just a fancy way of saying you link to a proxy that you never restart - to keep the docker links active.
(also introduced here)
Another solution is to just docker create, docker start and docker stop instead of docker rm.
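A minimal sketch of that create/start/stop approach, using the container names from the question (note that it keeps the same container around, so it helps with restarts and config changes, not with rolling out a new image):
# create and start expressjs once, then link nginx to it
docker create --name expressjs geographica/expressjs
docker start expressjs
docker run -d --name nginx --link expressjs nginx
# later: cycle the same container instead of removing it,
# so the link (and the /etc/hosts entry inside nginx) stays valid
docker stop expressjs
docker start expressjs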
Lastly, my actual solution was to use something like SkyDNS or docker-gen to keep a central DNS with all the container names. This last solution is the best for me because it allows me to move containers between hosts, which docker linking cannot do.
With the next versions of docker, libnetwork will actually be the way to go.
(see "The Container Network Model (CNM)", and "Docker Online Meetup #22: Docker Networking - video")

Will the linked docker container get the link back when it goes down and comes up?

I've been wondering: if the linked container shuts down and starts again, does the container that is linked to it restore the --link connection?
Fire up 2 containers:
docker run -d --name fluentd millisami/fluentd
docker run -d --name railsapp --link fluentd:fluentd millisami/rails
Now, if the fluentd container is stopped and restarted, will the railsapp container restore the link and the linked ENV vars automatically?
UPDATE:
As of Docker 1.3 (or probably even an earlier version, I'm not sure), the /etc/hosts file will be updated with the new IP of a linked container if it restarts. This means that if you access it via the name in its /etc/hosts entry, as opposed to the environment variable, your link will still work even after a restart.
Example:
When you start two containers, app_image and mysql, and link them like this:
$ docker run --name mysql mysql:5.6.20
$ docker run -d --link mysql:mysql app_image
you'll get an entry in your /etc/hosts within app_image:
# /etc/hosts example
172.17.0.12    mysql
and that IP will be refreshed if mysql crashes and is restarted.
So don't use environment variables when referring to your linked container:
$ ping $MYSQL_PORT_3306_TCP_ADDR # --> don't use! won't work after the linked container restarts
$ ping mysql # --> instead, access it by the name as in /etc/hosts
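A quick way to verify this behaviour (a hedged sketch; it assumes the app container was started with --name app, which the example above does not do):
# restart the linked container, then check the hosts entry from inside the app container
docker restart mysql
docker exec app cat /etc/hosts    # the mysql line should now show the new IP
docker exec app ping -c 1 mysql   # still reachable by name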
Old answer:
Nope, it won't. In a crashing-containers scenario, links are as good as dead. I think it is pretty much an open problem, i.e., there are many candidate solutions, but none has yet been crowned the standard approach.
You might want to take a look at http://jasonwilder.com/blog/2014/07/15/docker-service-discovery/
