Better Docker Compose production setup?

I have a really simple web application that consists of these containers:
Frontend website (Nuxt.js - node app)
Backend API (PHP, Symfony)
MySQL
Every container has its own Dockerfile, and I can run them all together with Docker Compose. It's really nice and I like the simplicity.
There is a deploy script on my server. It clones the Git monorepo and runs docker-compose:
DIR=$(dirname "$(readlink -f "$0")")
rm -rf "$DIR/app"
git clone git@bitbucket.org:adam/myproject.git "$DIR/app"
cd "$DIR/app" && \
docker-compose down --remove-orphans && \
docker-compose up --build -d
But this solution is really slow and causes ~3 minutes of downtime. For this project I can accept a few seconds of downtime; it's not fatal. I don't need true zero downtime, but 3 minutes is not acceptable.
The most time-consuming part is the "npm build" inside the containers, and it must run after every change.
What can I do better? Are Swarm or Kubernetes really the only solutions? Can I build containers while the old app is still running, and after the build just stop the old one and start the new one?
Thanks!

If you can structure things so that your images are self-contained, then you can get a fairly short downtime.
I would recommend using a unique tag for your images. A date stamp works well; since you mention you have a monorepo, you can use the commit ID in that repo as your image tag too. In your docker-compose.yml file, use an environment variable for your image names:
version: '3'
services:
  frontend:
    image: myname/frontend:${TAG:-latest}
    ports: [...]
  et: cetera
Do not use volumes: to overwrite the code in your images. Do have your CI system test your images as built, running the exact image you're getting ready to deploy, with no bind mounts or extra artificial test code. The question mentions "npm build" inside containers; run all of these build steps during the docker build phase and specify them in your Dockerfile, so you don't need to run them at deploy time.
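As a minimal sketch of that idea for the Nuxt frontend (the base images, port, and npm scripts are my assumptions, not from the question), a multi-stage Dockerfile keeps all the npm work at image-build time:
# Hypothetical multi-stage sketch: all npm steps happen during
# `docker build`, so deploy time only starts the container.
FROM node:14 AS build
WORKDIR /app
# Dependency manifests first, so `npm ci` stays cached across code-only changes
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
# The slow build step is paid once per image, not at deploy time
RUN npm run build
FROM node:14-slim
WORKDIR /app
COPY --from=build /app ./
EXPOSE 3000
CMD ["npm", "start"]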
When you have a new commit in your repo, build new images. This can happen on a separate system, and it can happen in parallel with your running system. If you use a unique tag per image, it's more obvious that you're building a new image that's different from the running one. (In principle you could use a single ...:latest tag, but I wouldn't recommend it.)
# Choose a tag; let's pick something based on a timestamp
export TAG=20200117.01
# Build the images
docker-compose build
# Push the images to a repository
# (Recommended; required if you're building somewhere
# other than the deployment system)
docker-compose push
Now you're at a point where you've built new images, but you're still running containers based on the old images. You can tell Docker Compose to update things now. If you docker-compose pull the images up front (or if you built them on the same system), this just consists of stopping the existing containers and starting new ones. This is the only downtime point.
# Name the tag you want to deploy (same as above)
export TAG=20200117.01
# Pre-pull the images
docker-compose pull
# ==> During every step up to this point the existing system
# ==> is running undisturbed
# Ask Compose to replace the existing containers
# ==> This command is the only one that has any downtime
docker-compose up -d
(Why is the unique tag important? Say a mistake happens, and build 20200117.02 has a critical bug. It's very easy to set the tag back to the earlier 20200117.01 and re-run the deploy, rolling back the deployed system without doing a git revert and rebuilding the code. If you're looking at cluster managers like Kubernetes, a changed tag value is a signal to a Kubernetes Deployment object that something has been updated, which triggers an automatic redeployment.)
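As a concrete sketch of that rollback, reusing the deploy commands above with the older tag:
# Point the deploy back at the previous known-good build
export TAG=20200117.01
docker-compose pull
docker-compose up -d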

The only real problem was running docker-compose down before docker-compose build. I deleted the down command and the downtime is a few seconds now. I thought build automatically shut down the running containers before building; I don't know why. Thanks Noé for the idea! I'm an idiot.

While I do think that switching to Kubernetes (or maybe Docker Swarm, which I don't have experience with) would be the best option, yes, you can build your Docker images while the old containers are still running and only then restart.
You just need to run the docker-compose build command first. See below:
DIR=$(dirname "$(readlink -f "$0")")
rm -rf "$DIR/app"
git clone git@bitbucket.org:adam/myproject.git "$DIR/app"
cd "$DIR/app" && \
docker-compose build && \
docker-compose down --remove-orphans && \
docker-compose up -d

This long downtime can come from multiple things:
Your application ignores the stop signal, and docker-compose waits for the containers to terminate before killing them. Check that your containers exit promptly instead of waiting for the kill signal (see the sketch after this list).
Your Dockerfile is badly ordered. Docker has a built-in cache for every step, but if an earlier step changes it has to redo every later step. Look carefully at where you copy files; it's often this that breaks the cache.
Run docker-compose build before taking the containers down. Be careful with mounted volumes: if Docker can't get the build context, it will fail.
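On the stop-signal point, the usual culprit is the shell form of CMD/ENTRYPOINT; a tiny sketch (the server file name is just an example):
# Exec form: node is PID 1 and receives SIGTERM directly, so the
# container stops immediately
CMD ["node", "server.js"]
# Shell form (avoid): /bin/sh becomes PID 1 and may not forward SIGTERM,
# so compose waits out the 10-second timeout and then SIGKILLs
# CMD node server.js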

Related

How to test a Dockerfile with minimal overhead

I'm trying to learn how to write a Dockerfile. Currently my strategy is as follows:
Guess what commands are correct to write based on the documentation.
Run sudo docker-compose up --build -d to build a docker container
Wait ~5 minutes for my anaconda packages to install
Find that I made a mistake on step 15, and go back to step 1.
Is there a way to interactively enter the commands for a Dockerfile, or to cache the first 14 successful steps so I don't need to rebuild the entire image? I saw something about docker exec, but it seems that's only for running containers. I also want to use the same syntax as in the Dockerfile (i.e. ENTRYPOINT and ENV), since I'm not sure what the bash equivalent is, or whether it exists.
You can run docker-compose without the --build flag so that you don't have to rebuild the image every time, although since you are testing the Dockerfile itself I don't know if you have many options here. Docker should cache builds automatically, but only the steps that haven't changed since the last build. There is no way to build an image interactively; Docker doesn't work like that. Lastly, docker exec is just for running commands inside a container that was created from the build.
Some references for you: Docker cache, Dockerfile best practices.
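For the anaconda case specifically, ordering the Dockerfile so the package install comes before the frequently-edited code lets the cache absorb the slow step. A hypothetical sketch (the base image and file names are assumptions):
FROM continuumio/miniconda3
# Dependency list first: this layer and the slow install below stay
# cached as long as requirements.txt is unchanged
COPY requirements.txt /tmp/requirements.txt
RUN conda install -y --file /tmp/requirements.txt
# Frequently-changing code last, so edits only invalidate the final layers
WORKDIR /app
COPY . .
ENTRYPOINT ["python", "main.py"]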

How to make changes to docker-compose.yml and Drupal 9 project files correctly if containers are already running?

I installed Docker and Docker Compose.
I downloaded the latest release of a Docker-based Drupal stack
(there are php, mariadb, apache images, etc.) and put it in my project
folder /var/www/html/mydrupaldocker.
Next, I made the settings in the .env and docker-compose.yml files and ran the containers with the command:
docker-compose up -d
After running the images from this folder, as well as adding the unzipped Drupal 9 folder to my project folder, I started installing Drupal 9 in the browser.
And I have questions about two possible situations:
Situation №1:
I made mistakes in the docker-compose.yml file: I have commented-out code that is responsible for a few of the images, so those containers were not started. I also want to move the project to another place on the computer (not critical, but desirable).
I can do:
docker-compose stop
docker-compose rm
Fix everything that I need, and run again:
docker-compose up -d
Is it right to do it this way? Or do I need to do something else?
Situation №2:
Everything is set up well, all the necessary containers are running, and the Drupal 9 site is installed in the container. Then I created a sub-theme, added content, wrote code in php, js, and css files, etc.
How do I commit the changes now? What commands do I need to write in the terminal? For example, in a technology such as Git, this is done with the commands:
git add .
git commit -m "first"
How is it done in Docker? Perhaps there will be a situation when I need to roll back a container to an earlier version.
Okay, let's go case by case.
Situation No.1
Whenever you make changes to docker-compose.yml, it's fine to restart the services/images so they reflect the new changes. It could be as minor as a simple port switch from 80 to 8080. Hence, you can just do docker-compose stop && docker-compose up -d, and the Docker CLI will restart the containers with the new changes.
You don't really need to remove the containers/services unless you have used a custom Dockerfile and made changes to it. Your approach below would still give the same result; it just has the extra step of removing the containers even though no changes were made to the actual Docker images.
I can do: docker-compose stop, docker-compose rm. Fix everything that I need. And run again: docker-compose up -d
Situation No.2
Here you commit your entire project to Git, along with your Dockerfile and docker-compose.yml file, from your host machine, not from the container. There's no rocket science here.
You won't be committing your code to Git via the containers. The containers are only for deploying and testing your code. You commit just the configuration files, i.e. the Dockerfile (if a custom one is used, of course) and the docker-compose.yml file, along with your source code. The result is that any developer collaborating with you on the team can just pull the project and run docker-compose up -d, and the same containers/services running on your machine will be up and running on their machine.
Regarding how to roll back to an old version of the Docker services: you can just roll back to a previous commit and the docker-compose.yml will be reverted. Then you can simply do:
docker-compose down && docker-compose up -d
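A hypothetical sketch of that rollback flow (the commit hash is a placeholder):
# Restore the compose file, Dockerfile, and source from a known-good commit
git checkout abc1234 -- .
# Recreate the containers from the reverted configuration
docker-compose down && docker-compose up -d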

Rapidly modifying Python app in K8S pod for debugging

Background
I have a large Python service that runs on a desktop PC, and I need to have it run as part of a K8S deployment. I expect that I will have to make several small changes to make the service run in a deployment/pod before it will work.
Problem
So far, if I encounter an issue in the Python code, it takes a while to update the code, and get it deployed for another round of testing. For example, I have to:
Modify my Python code.
Rebuild the Docker container (which includes my Python service).
scp the Docker image over to the Docker Registry server.
docker load the image, update tags, and push it to the Registry back-end DB.
Manually kill off currently-running pods so the deployment restarts all pods with the new Docker image.
This involves a lot of lead time each time I need to debug a minor issue. Ideally, I'd prefer being able to just modify the copy of my Python code already running on a pod, but I can't kill it (since the Python service is the default app that is launched, with PID=1), and K8S doesn't support restarting a pod (to my knowledge). Alternatively, if I kill/start another pod, it won't have my local changes from the pod I was previously working on (which is by design, of course, but doesn't help with my debugging efforts).
Question
Is there a better/faster way to rapidly deploy (experimental/debug) changes to the container I'm testing, without having to spend several minutes recreating container images and re-deploying/tagging/pushing them? If I could find and mount (read-write) the Docker image, that might help, as I could edit the data within it directly (i.e. make new Python changes) and just kill pods so the deployment re-creates them.
There are two main options: one is to use a tool that reduces or automates that flow; the other is to develop locally with something like Minikube.
For the first, there are a million and a half tools, but Skaffold is probably the most common one.
For the second, you do something like ( eval $(minikube docker-env) && docker build -t myimagename . ), which builds the image directly in the Minikube Docker environment, so you skip steps 3 and 4 in your list entirely. You can combine this with a tool that detects the image change and either restarts your pods or updates the deployment (which restarts the pods).
Also, FWIW, using scp and docker load is very nonstandard; generally those steps would be combined into a docker push.
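A sketch of that Minikube inner loop (the deployment name is an assumption, and it relies on the pod's imagePullPolicy being Never or IfNotPresent so the locally built image is used):
# Point the docker CLI at Minikube's Docker daemon
eval $(minikube docker-env)
# Build the image directly inside Minikube (no scp / docker load needed)
docker build -t myimagename .
# Recreate the pods so they pick up the newly built image
kubectl rollout restart deployment/my-python-service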
I think your pain point is that the container image depends on the Python code. You can exclude the source code from the Docker image build phase.
In my experience, I create a Docker image that only includes the Python package dependencies, and use a volume to map the source code directory into the container, so I don't need to rebuild the image unless dependencies are added or removed.
Example
I don't have much experience with k8s, but I believe it must work more or less the same as docker run.
Dockerfile
FROM python:3.7-stretch
COPY ./python/requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
ENTRYPOINT ["bash"]
Docker container
scp your code to the server, and map your host source path to the container source path like this:
docker run -it -d -v /path/to/your/python/source:/path/to/your/server/source --name python-service your-image-name
With volume mapping, your container no longer depends on the source code; you can easily change your source code without rebuilding your image.
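In Kubernetes, the closest analogue of that -v flag would be a hostPath volume; a hypothetical sketch (the names and paths are placeholders, and hostPath only works when the source is present on the node running the pod):
apiVersion: v1
kind: Pod
metadata:
  name: python-service
spec:
  containers:
    - name: app
      image: your-image-name
      volumeMounts:
        - name: src
          mountPath: /path/to/your/server/source
  volumes:
    - name: src
      hostPath:
        path: /path/to/your/python/source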

How to rebuild and update a container without downtime with docker-compose?

I enjoy using docker-compose a lot.
E.g. on my server, when I want to update my app with minor changes, I only need to run git pull origin master && docker-compose restart, and it works perfectly.
But sometimes I need to rebuild (e.g. I added an npm dependency and need to run npm install again).
In this case, I do docker-compose build --no-cache && docker-compose restart.
I would expect this to:
create a new instance of my container
stop the existing container (after the new one has finished building)
start the new one
optionally remove the old one, though this could be done manually
But in practice it seems to restart the old one again.
Is this the expected behavior?
How can I handle a rebuild and start the new container after it is built?
Maybe I missed a specific command? Or would it make sense to have one?
From the manual for docker-compose restart:
If you make changes to your docker-compose.yml configuration, these changes will not be reflected after running this command.
You should be able to do:
docker-compose up -d --no-deps --build <service_name>
The --no-deps flag avoids starting linked services.
The problem is that restart will restart your current containers, which is not what you want.
As an example, I just did this:
change the Dockerfile for one of the images
call docker-compose build to build the images
call docker-compose down [1] and docker-compose up
docker-compose restart will NOT work here;
using docker-compose start instead also does not work.
To be honest, I'm not completely sure you need to do a down first, but that should be easy to check. [1] The bottom line is that you need to call up. You will see the containers of unchanged images restarting, but for the changed image you'll see recreating.
The advantage of this over just calling up --build is that you can watch the build process finish before you restart.
[1] From the comments: down is not needed; you can just call up --build. down has some downsides, including possibly being destructive to your (volume) data.
Use the --build flag with the up command, along with the -d flag to run your containers in the background:
docker-compose up -d --build
This will rebuild all images defined in your compose file, then restart any containers whose images have changed.
-d assumes that you don't want to keep everything running in your shell foreground. This makes it act more like restart, but it's not required.
Don't manage your application environment directly. Use a deployment tool like Rancher or Kubernetes. With one of these you will be able to upgrade your dockerized application without any downtime, and even downgrade it should you need to.
Running Rancher is as easy as running another Docker container, as this tool is available on Docker Hub.
You can use Swarm. Initialize the swarm first with the docker swarm init command and use a healthcheck in docker-compose.yml.
Then run the command below:
docker stack deploy -c docker-compose.yml project_name
instead of
docker-compose up -d
When the docker-compose.yml file is updated, only run this command again:
docker stack deploy -c docker-compose.yml project_name
Docker Swarm will create a new version of the services and stop the old version after that.
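A minimal compose sketch of such a healthcheck plus a start-first update policy (the service name, image, and URL are assumptions; curl must exist in the image):
version: "3.8"
services:
  web:
    image: myname/frontend:latest
    healthcheck:
      # Swarm only shifts traffic to a new task once this passes
      test: ["CMD", "curl", "-f", "http://localhost:3000/"]
      interval: 10s
      timeout: 5s
      retries: 3
    deploy:
      update_config:
        # Start the new task before stopping the old one
        order: start-first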
Though the accepted answer does rebuild the container before starting the new one as a replacement, which is OK for simple use cases, the container will still be down while the new container initializes. If that takes quite long, it can be an issue.
I managed to achieve rolling updates with docker-compose (along with an nginx reverse proxy), and detailed how I built that in this GitHub issue: https://github.com/docker/compose/issues/1786#issuecomment-579794865
Hope it can help!
Run the following commands:
docker-compose pull
docker-compose up -d --no-deps --build <service_name>
As the top-rated answer mentioned,
docker-compose up -d --no-deps --build <service_name>
will rebuild and restart a single service without taking down the whole compose project.
I just wanted to add to the top answer in case anyone is unsure how to update a single image without restarting the rest.
Another way:
docker-compose restart in your case could be replaced with docker-compose up -d --force-recreate; see https://docs.docker.com/compose/reference/up/
Running docker-compose up while the stack is already running will recreate any containers whose configuration has changed.
That's the easiest way, and it will only affect the containers whose configuration changed.
root@docker:~# docker-compose up
traefik is up-to-date
nginx is up-to-date
Recreating php ... done

How to get docker-compose to always re-create containers from fresh images?

My Docker images are built on a Jenkins CI server and pushed to our private Docker registry. My goal is to provision environments with docker-compose that always start from the originally built state of the images.
I am currently using docker-compose 1.3.2 as well as 1.4.0 on different machines, but we also used older versions previously.
I have always used the docker-compose pull && docker-compose up -d commands to fetch the fresh images from the registry and start them. I believe my preferred behaviour worked as expected up to a certain point in time, but since then docker-compose up has started to re-run previously stopped containers instead of starting the originally built images every time.
Is there a way to get rid of this behaviour? Could it be wired into the docker-compose.yml configuration file, so that it doesn't depend on not forgetting something on the command line at every invocation?
P.S. Besides finding a way to achieve my goal, I would also love to know a bit more about the background of this behaviour. I think the basic idea of Docker is to build an immutable infrastructure. The current behaviour of docker-compose just seems to plainly clash with this approach... or am I missing some points here?
docker-compose up --force-recreate is one option, but if you're using it for CI, I would start the build with docker-compose rm -f to stop and remove the containers and volumes (and then follow it with pull and up).
This is what I use:
docker-compose rm -f
docker-compose pull
docker-compose up --build -d
# Run some tests
./tests
docker-compose stop -t 1
The reason containers are recreated is to preserve any data volumes that might be in use (and it also happens to make up a lot faster).
If you're doing CI you don't want that, so just removing everything should get you what you want.
Update: use up --build, which was added in docker-compose 1.7.
The only solution that worked for me was the --no-cache flag:
docker-compose build --no-cache
This will automatically pull a fresh image from the repo. It also won't use the cached version that was prebuilt with any parameters you used before.
Per the current official documentation, there is a shortcut that stops and removes containers, networks, volumes, and images created by up (and it works even if they are already stopped or partially removed):
docker-compose down
Then, if you have new changes in your images or Dockerfiles, use:
docker-compose build --no-cache
Finally: docker-compose up
In one command: docker-compose down && docker-compose build --no-cache && docker-compose up
docker-compose up --build # still uses the image cache
OR
docker-compose build --no-cache # never uses the cache
You can pass --force-recreate to docker-compose up, which should use fresh containers.
I think the reasoning behind reusing containers is to preserve any changes during development. Note that Compose does something similar with volumes, which also persist across container recreation (a recreated container will attach to its predecessor's volumes). This can be helpful, for example, if you have a Redis container used as a cache and you don't want to lose the cache each time you make a small change. At other times it's just confusing.
I don't believe there is any way you can force this from the Compose file.
Arguably it does clash with immutable-infrastructure principles. The counter-argument is probably that you don't use Compose in production (yet). Also, I'm not sure I agree that immutable infrastructure is the basic idea of Docker, although it's certainly a good use case/selling point.
docker-compose up --build --force-recreate
I reclaimed 3.5 GB of space on an Ubuntu AWS instance this way.
Clean Docker:
docker stop $(docker ps -qa) && docker system prune -af --volumes
Build again:
docker build .
docker-compose build
docker-compose up
Also, if the compose file has several services and we only want to force-build one of them:
docker-compose build --no-cache <service>
Together with --force-recreate, you might want to consider using this flag too:
-V, --renew-anon-volumes    Recreate anonymous volumes instead of retrieving data from the previous containers.
I'm not sure from which version this flag is available, so check your docker-compose up --help to see whether you have it.
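Combined, that looks like:
docker-compose up -d --force-recreate --renew-anon-volumes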
docker-compose build
If there is something new, it will be rebuilt.
