Versioning a docker composition

I usually manage my production deployments with a simple composition of production images, relying exclusively on docker-compose.
I'm not using Kubernetes or other tools since I want to keep things simple for my simple apps (I don't deploy on multiple hosts, manage load balancing, or do CI/CD).
Here is what my production composition looks like:
version: '3'
services:
  php:
    image: ${CONTAINER_REGISTRY_BASE}/php:${VERSION}
    depends_on:
      - db
    env_file:
      - ./api.env
  api:
    image: ${CONTAINER_REGISTRY_BASE}/api:${VERSION}
    depends_on:
      - php
      - db
  db:
    image: mariadb:10.2
  client:
    image: ${CONTAINER_REGISTRY_BASE}/client-prod:${VERSION}
    env_file:
      - ./client.env
  admin:
    image: ${CONTAINER_REGISTRY_BASE}/admin-prod:${VERSION}
    env_file:
      - ./admin.env
I keep one global version for the whole application stack in a .env file; when I bump this version, I simply have to do this:
docker-compose build
docker-compose push
And on the production server (after updating the version):
docker-compose up -d
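For reference, the .env file next to the docker-compose.yml only needs to hold the two variables the composition substitutes; a minimal sketch (the registry URL and version are placeholders):
# .env -- read automatically by docker-compose from the same directory
CONTAINER_REGISTRY_BASE=registry.example.com/myapp
VERSION=1.2.3
Bumping VERSION here and re-running the build/push/up sequence is the whole release procedure.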
As you can imagine, the issue is that I'm shipping the whole stack even if there is a very small modification in one of the services.
I thought about having a separate version for each service, but that seems quite complicated to maintain, as it's hard to keep track of which version is the latest for each one.
Am I looking at this the wrong way? Should I not use docker-compose in production? If so, what should I use instead?
Can someone suggest a simple deployment approach based on a Docker registry?

Short Answer:
You should actually version each service separately, and I'd recommend moving your production host from the plain Docker engine to Swarm.
Long Answer:
If you move to Docker Swarm, you can use the same compose file with a few changes to turn it into a stack file. Deploying it creates a set of services that the Swarm maintains individually.
During your build process, I would recommend assigning each new build a specific version number, and additionally tagging it as latest or dev. Push both the versioned tag and the latest or dev tag to the registry.
The reason for this is twofold:
When you update your stack file, you'll want to specify a discrete version of the image to use, i.e. 1.2.3 or whatever the newest version is for the service you just changed. This is because if you use "latest", there's a good chance the Swarm won't even try to pull the image on deployment. I've always found it's better to avoid that kind of ambiguity in production.
For developers, whenever they're working in their local environment, they can always target the dev or latest image via compose overrides (basically calling two compose files in sequence, the first holding the core definitions, the second holding changes or additions, etc.), as sketched below.
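As a rough sketch of such an override, reusing the ${CONTAINER_REGISTRY_BASE} convention from the question (the file name and the dev tags are illustrative, not from the original setup):
# docker-compose.override.yml -- picked up automatically by docker-compose when
# present, or passed explicitly with -f; it only overrides the fields it declares
version: '3'
services:
  api:
    image: ${CONTAINER_REGISTRY_BASE}/api:dev
  client:
    image: ${CONTAINER_REGISTRY_BASE}/client-prod:dev
Developers would then run:
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d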
Whenever you do a docker stack deploy, it only acts on the services whose definitions have changed. I'd play around with it, but I think it would fit your workflow a lot better.
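For instance (the stack name and tag are illustrative), if only the api image tag was bumped in the stack file, re-deploying updates just that one service and leaves the rest running untouched:
# only the api service's image was changed in the stack file, e.g. to ...:1.2.4
docker stack deploy --compose-file docker-compose.yml myapp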
Side note:
I've never really found a good use case for databases existing in Docker shy of (1) lack of resources or (2) integration testing.

Related

Should I Set docker image version in docker-compose?

Imagine I have a docker-compose.yml that adds mongo as a container. Is it a good thing to set a version on the image name, or should I let it default to latest?
version: '3.8'
services:
  mongo:
    image: mongo:4.0
    ports:
      - "27017:27017"
What are the pros and cons for an application in development and in production releases?
image: mongo:4.0 vs. image: mongo
Including a version number as you've done is good practice. I'd generally use a major-only image tag (mongo:4) or a major+minor tag (mongo:4.4) but not a super-specific version (mongo:4.4.10) unless you have automation to update it routinely.
Generally the Docker Hub images get rebuilt fairly routinely; but, within a given patch line, only the most-recent versions get patches. Say the debian:focal base image gets a security update. As of this writing, the mongo image has 4, 4.4, and 4.4.10 tags, so all of those get rebuilt, but e.g. 4.4.9 won't. So using a too-specific version could mean you don't get important updates.
Conversely, using latest means you just don't care what version you have. Your question mentions mongo:4.0 but mongo:latest is currently version 5.0.5; are there compatibility issues with that major-version upgrade?
The key rules here are:
If you already have some image:tag locally, launching a container will not pull it again, even if it's updated in the repository.
Minor-version tags like mongo:4.4 will continue to get updates as long as they're supported, but you may need to docker-compose pull to pick them up.
Patch-version tags like mongo:4.4.9 will stop getting updates as soon as there's a newer patch version, even if you docker pull mongo:4.4.9.
Using a floating tag like ...:latest or a minor-version tag could mean different systems get different builds of the image, depending on what they have locally. (Your coworker could have a different mongo:latest than you; this is a bigger problem in cluster environments like Kubernetes.)
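In practice, refreshing a pinned minor-version tag on a running compose project looks roughly like this (using the mongo service from the question above):
docker-compose pull mongo   # fetch the current build of the pinned tag, e.g. mongo:4.0
docker-compose up -d mongo  # recreate the container only if the image actually changed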
I think it's a good thing to pin a version on the image name; you can manage images more easily, but you have to be careful to apply updates regularly to avoid security holes.

Is there a way to enforce inter-module dependencies/initialization order?

Using Azure IoT Edge, I have not found any way to guarantee the initialization order of containers/modules in a deployment. Suppose for example, I have 2 modules, A and B. A is a server and B is a client that depends on A. As far as I know, there's no way to guarantee that A starts up before B does.
The Azure IoT Edge deployment templates conform to the Docker Engine API, and I could not find any way to enforce dependencies through that API. As a workaround, I make no assumptions about which containers are running in each container's code. This works, although the overhead of additional code is not ideal, especially considering a tool like docker-compose would make enforcing initialization order rather trivial.
I want to do something like this (src: https://docs.docker.com/compose/compose-file/):
version: "3.7"
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres
As a workaround, and following the example above, in the web container I've been doing things like the following to ensure postgres is up and running before web performs postgres-dependent actions:
import time

postgresIsUp = False
while not postgresIsUp:
    try:
        pingPostgres()          # placeholder: any check that postgres answers
        postgresIsUp = True
    except PingError:           # placeholder: raised while postgres is unreachable
        print("postgres is not yet running")
        time.sleep(1)           # back off briefly before retrying
This is, of course, a contrived example with obvious flaws, but it demonstrates the gist of the workaround.
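For comparison, this is roughly what the docker-compose side of this looks like. Note this is only a sketch: it uses the long depends_on form with a healthcheck, which recent docker-compose supports via the Compose Specification but which is not available in Azure IoT Edge deployments, and the password is a placeholder:
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example        # placeholder; postgres won't start without it
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy      # web starts only after db reports healthy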
No, IoT Edge does not support initializing modules in a specific order.
Please be aware that even if it were possible to start them in a specific order to resolve dependencies, you would still run into problems if one of the modules crashes: it would be restarted by EdgeHub, but you would lose the initialization order.
Mike Yagley (one of the contributors working on IoT Edge) gives an explanation of this issue on GitHub.
StartupOrder: introduced in IoT Edge version 1.0.10; it specifies the order in which the IoT Edge agent should start the modules when first deployed.

How to publish docker-compose.yml itself?

I'd like to make sure that our frontend developers have access to the latest versions of the backend web app and can update it whenever desired, so as to avoid incompatibilities with the API, which is also under development.
I have created a docker-compose.yml file containing two services: one for the backend web application, built with a custom Dockerfile, and a generic postgres image for the database. It all works fine.
I already published the backend webapp image to my private docker registry powered by Nexus repository manager, using the docker-compose push command.
Now I would like to somehow make my docker-compose.yml available, so that all the frontend devs need to do is run it with a simple command.
Is there a way to publish docker-compose.yml to a Docker registry so I can avoid sharing backend sources with the frontend devs?
The traditional solution for sharing docker-compose.yml files has been version control (e.g. GitHub).
Recently, Docker has been working on docker-app, which allows you to share docker-compose.yml files using a registry server. This is a separate install and currently experimental, so I wouldn't base a production environment on it, but it may be useful for developers to try out. You can check out the project here:
https://github.com/docker/app
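Until then, the version-control route is straightforward; a sketch assuming the compose file lives in its own small git repository and the frontend devs have pull access to the Nexus registry (the repository URL and registry host are placeholders):
git clone https://git.example.com/backend-compose.git && cd backend-compose
docker login registry.example.com   # Nexus credentials
docker-compose pull                 # fetch the published backend image
docker-compose up -d                # start the backend and postgres locally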

How to link multiple docker swarm services?

I'm a huge fan of the docker philosophy (or at least I think I am). Even so, I'm still quite novice in the sense that I don't seem to grasp the intended way of using docker.
As I see it currently, there are two ways of using docker.
Create a container with everything I need for the app in it.
For example, I would like something like a Drupal site. I would then put nginx, php, mysql and code into a container. I could run this as a service in swarm mode and scale it as needed. If I need another Drupal site, I would then run a second container/service that holds nginx, php and mysql and (slightly) different code. I would now need 2 images to run a container or service from.
Pros - Easy; everything I need is in a single container.
Cons - Cannot run each container on port 80 (so I'd need a reverse proxy or something). (Not sure, but I could imagine that) server load is higher, since there are multiple containers/services each running nginx, php and mysql.
Create 4 separate containers. 1 nginx container, 1 php container, 1 mysql container and 1 code/data container.
For example, I would like the same Drupal site. I could now run each of them as a separate service and scale them across my servers as the number of code containers (Drupal sites or other sites) increases. I would only need 1 image per container/service instead of a separate image for each site.
Pros - Modular, single responsibility per service (1 for the database, 1 for the webserver, etc.), easy to scale only the part that needs scaling (the database if queries increase, nginx if traffic increases, etc.).
Cons - I don't know how to make this work :).
Personally I would opt to make a setup according to the second option. Have a database container/service, nginx container/service etc. This seems much more flexible to me and makes more sense.
I am struggling, however, with how to make this work. How would I make the nginx service talk to the php service, point the nginx config to the code folder in the data service, etc.? I have read some things about overlay networks, but that doesn't make clear to me how nginx would find php in a separate container/service.
I therefore have 2 (and a half) questions:
How is docker meant to be used (option 1 or 2 above or totally different)?
How can I link services together (make nginx look for php in a different service)?
(half) I know I am a beginner trying to grasp the concept, but setting up a simple webserver and running websites seems like a basic task (at least, it is for me the conventional way), yet I can't seem to find answers online anywhere. Am I totally off track in the way I would like to use docker, or have I not been looking hard enough?
How is docker meant to be used (option 1 or 2 above or totally different)?
It's up to you. I prefer option #2, but I have at times used a mix of option #1 and option #2. It all depends on the use case and which option fits it better. At one of our clients it was necessary to have SSH, Nginx and PHP all in the same container, so we mixed #1 and #2: MySQL and Redis each in their own container, and the app in one container.
How can I link services together (make nginx look for php in a different service)?
Use docker-compose to define your services and docker stack to deploy them. You won't have to worry about wiring up the service names.
version: '3'
services:
  web:
    image: nginx
  db:
    image: mysql
    environment:
      - "MYSQL_ROOT_PASSWORD=root"
Now deploy using
docker stack deploy --compose-file docker-compose.yml myapp
In your nginx container you can reach mysql by using its service name, db. Linking happens automatically on the stack's network, so you need not worry.
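A quick way to see this for yourself (the container ID will differ on your machine; getent is available in the stock nginx image):
# find a running task container of the web service in the myapp stack
docker ps --filter name=myapp_web
# inside it, the service name "db" resolves through Swarm's built-in DNS
docker exec -it <web-container-id> getent hosts db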
I know I am a beginner trying to grasp the concept, but setting up a simple webserver and running websites seems like a basic task (at least, it is for me the conventional way), yet I can't seem to find answers online anywhere. Am I totally off track in the way I would like to use docker, or have I not been looking hard enough?
There are a lot of good resources available in the form of articles; you just need to look.

Rancher development environment

I started to use rancher recently for a project.
Within a few days I set up a standard microservice architecture with 4 basic services (hosted on DigitalOcean), trying to make it as production-ready as possible.
Services:
Api Gateway
GraphQL Api
OAuth2 Server
Frontend
It also includes load balancers, health checks, etc.
I'm amazed at how good it is; as such, I heavily used the features provided by rancher in my configs, for example the DNS convention <service>.<stack>, sidekicks, rancher-compose, etc.
The above services each live in their own repository and have their own Dockerfile, docker-compose.yml and rancher-compose.yml for production, so that they can be deployed independently.
Now that I've convinced myself that rancher will be my new "friend", I need a strategy to run the same application in my local environment and be able to develop my services, just like I would with Vagrant.
I'm wondering what's the best approach to port an application that runs on rancher to a development environment.
I had some ideas on how to tackle this; however, none of them seemed to let me achieve it without reconfiguring all the services for development.
1 - Rancher on local machine
This was the first approach I took: install a rancher server and a rancher client locally and deploy the whole stack just like in production. It seemed the most logical idea to me. However, this wouldn't let me change the code of the services and have it reflected live in the containers. Maybe shared volumes could work, but it doesn't look trivial to me; if you have any ideas, please let me know. For me, this solution is out :(
2 - Docker compose
My second attempt was to use plain docker-compose and shared volumes, omitting the load balancers and all the rancher features :( This might work, but I would need to change the configuration of every service that points at the rancher-specific DNS name <service>.<stack> to use just <service> over the bridge network. That means maintaining 2 different configurations for different environments, which is weird and not fun to do.
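One way to soften that double configuration (a sketch only; the variable and service names are made up, this isn't a rancher feature) is to inject the peer hostname through an environment variable, so the same image works with either DNS scheme:
services:
  frontend:
    image: myorg/frontend
    environment:
      # development: plain service name on the compose bridge network
      - API_HOST=api
      # in the production rancher-compose.yml this would instead be
      # - API_HOST=api.mystack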
3 - Vagrant
As the second solution is already messy (a second docker-compose file and a second configuration for every service), why not just re-create the whole environment in Vagrant (without the rancher features, maybe with Ansible), with a single nginx doing reverse proxying and routing requests between services? However, this also requires quite a lot of work, and it's double the effort yet again :(
Is there any other approach that makes rancher suitable for a development environment in a non-painful way? How have companies that rely on rancher or other platform tools solved this?
Rancher on the local machine is a common pattern. If you run Rancher on a VM, or locally on a Linux box, the subtle change when you launch your stacks is that you add volumes from the host:
services:
  myapp:
    volumes:
      - /Users/myhome/code:/src
    ...
You could now use templating features in the compose files and the Rancher CLI. Something like:
services:
  myapp:
    {{ if dev-local == "true"}}
    volumes:
      - /Users/blah:/src
    {{end}}
    ...
Then you could have an answers file that just has
dev-local="false"
