Save and later restore Docker stack state - docker

I'm using docker stack deploy -c docker-compose.yml somestack to deploy to a Docker swarm. However, I can later scale it with docker service scale somestack_someservice=5 (or whatever), so docker-compose.yml no longer reflects the running system. My question is: is there any way to save off the current configuration of the stack, and then later reapply it, similar to how I originally created it (with docker-compose.yml)?

There is no direct way to generate a docker-compose.yml file, although you can use the
docker service inspect --pretty <service-name> command to obtain the full configuration in text/JSON format.
With some effort you could reconstruct a docker-compose.yml from that output.
Also, think of the lack of this feature as an advantage: if you want to make some adjustments, make them in docker-compose.yml first and then call docker stack deploy to apply them, so the file keeps reflecting the deployed state.
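As a rough sketch of that workflow (somestack and somestack_someservice are the names from the question above):
docker service inspect --pretty somestack_someservice   # human-readable summary of the current service spec
docker service inspect somestack_someservice            # full JSON, including the current replica count
# transcribe the values you care about (e.g. deploy.replicas) into docker-compose.yml, then re-apply:
docker stack deploy -c docker-compose.yml somestack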

Related

How to restart a docker stack container by cron? [duplicate]

Does anyone know if there is a way to have docker swarm restart one service that is part of a stack without restarting the whole stack?
Doing docker stack deploy again is, for me, the way to go to update services. As in Francois' answer, and also in my own experience, doing so updates only the services that need to be updated.
But sometimes, when testing things, it is easier to restart only a single service. In my case, I had to clear the volume and update the service to start it fresh. I'm not sure if there is a downside to the method I'll describe; I tested it on my development stack and it worked great for me.
Get the ID of the service you want to tear down, then use docker service update --force <id> to force an update of the service, which effectively re-deploys it:
$ docker stack services <stack_name>
ID NAME ...
3xrdy2c7pfm3 stack-name_api ...
$ docker service update --force 3xrdy2c7pfm3
The --force flag will force the service to update, causing it to restart.
Scale to 0 and back up:
docker service scale myservice=0
docker service scale myservice=10
Looking at the docker stack deploy documentation:
"Create and update a stack from a compose or a dab file on the swarm."
From this blog article: docker stack deploy works in a similar way to docker-compose. It's idempotent. If the stack is already deployed, docker stack deploy will restart only those services whose digest or tag has been updated:
From my experience, when I deploy the same stack again with one service changing, only the updated service will be restarted.
BUT... there seem to be some limitations to which changes are taken into account (some report bugs with image tags), so give it a try and see if it works as expected.
You can also use docker service update if you want to be sure that only the targeted service is updated with your changes.
You can also refer to this similar SO QA.
As per the example in the documentation for rolling updates:
$ docker service update --image redis:3.0.7 redis
However, that only works if your image is already on the local machines. If not then you need to use --with-registry-auth to send registry authentication details to the swarm agents. See details in the docker service update documentation.
$ docker service update --with-registry-auth --image redis:3.0.7 redis
To restart a single service (with rolling restart to avoid downtime in case the service has multiple replicas) in already configured, existing stack, you can do:
docker service update --force stack_service_name
I don't recommend running the same command again (in another shell) until this one completes (because otherwise the rolling restart isn't guaranteed; it might restart all replicas of that service).
The docker service update command also checks for a newer version of the image:tag you are trying to use.
If your registry requires auth, also pass --with-registry-auth argument, like so:
docker service update --force --with-registry-auth stack_service_name
If you don't pass this argument, the service will still be restarted, but the check won't be made and the service will keep using the old container image without pulling the new one first, which might be what you want.
In case you also want to switch to a different image tag (or a completely different image), you can do that from here too, but remember to also change the tag in your docker-stack.yml, or your next docker stack deploy will revert it back to the version defined there:
docker service update --with-registry-auth --force --image nginx:edge stack_service_name
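If you are unsure which image a running service currently uses (so you can keep docker-stack.yml in sync), one way to check is to query the service spec; the format path below assumes the current docker service inspect JSON layout:
docker service inspect --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}' stack_service_name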
Remove it:
docker stack rm stack_name
Redeploy it:
docker stack deploy -c docker-compose.yml stack_name

Should I create a docker container or docker start a stopped container?

From the Docker philosophy's point of view, which is more advisable:
create a container every time we need to use a certain environment and then remove it after use (docker run <image> every time); or
create a container for a specific environment (docker run <image>), stop it when it is not needed, and start it again whenever it is needed (docker start <container>)?
If you docker rm the old container and docker run a new one, you will always get a clean filesystem that starts from exactly what's in the original image (plus any volume mounts). You will also fairly routinely need to delete and recreate a container to change basic options: if you need to change a port mapping or an environment variable, or if you need to update the image to have a newer version of the software, you'll be forced to delete the container.
This is enough reason for me to make my standard process be to always delete and recreate the container.
# docker build -t the-image . # can be done first if needed
docker stop the-container # so it can cleanly shut down and be removed
docker rm the-container
docker run --name the-container ... the-image
Other orchestrators like Docker Compose and Kubernetes are also set up to automatically delete and recreate the container (or Kubernetes pod) if there's a change; their standard workflows do not generally involve restarting containers in-place.
I almost never use docker start. In a Compose-based workflow I generally use only docker-compose up -d, letting it restart things if needed, and docker-compose down when I need the CPU/memory resources the container stack was using, but not in routine work.
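For reference, that Compose workflow looks roughly like this (assuming a docker-compose.yml in the current directory):
docker-compose up -d     # (re)create and start only the services whose configuration or image changed
docker-compose down      # stop and remove the containers and the default network, freeing their resources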
I'm talking with regard to my experience in the industry, so take my answer with a grain of salt, because there might be no hard evidence or references to back it up.
Here's the answer:
TL;DR:
In short, you never need docker stop and docker start, because that approach is unreliable and you might lose the container and all the data inside it if no proper action is taken beforehand.
Long answer:
You should only work with images and not containers. Whenever you need some specific data, or you need an image with some customization, you had better use docker save so that you have an image for future use.
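A minimal sketch of that image-centric approach (the container and image names are just placeholders):
docker commit my-container my-image:snapshot     # capture the container's current filesystem as an image
docker save -o my-image.tar my-image:snapshot    # write the image to a tarball you can archive or copy
docker load -i my-image.tar                      # re-import it later, or on another host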
If you're just testing things out on your local machine, or in your dev virtual machine on a remote host, you're free to use whichever you like; I personally use each approach in different scenarios.
But if you're talking about a production environment, you'd better use some orchestration tool; it could be something as simple and easy to work with as docker-compose or Docker swarm, or even Kubernetes in more complex environments.
You had better not take the second approach (docker run, docker stop & docker start) in those environments, because at any moment you might lose that container, and if you are solely dependent on that specific container or its data, you're gonna have a bad weekend.

Docker backup container with startup parameters

I've been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker start with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar and load it on a different system and do a docker start, it has lost all of its settings.
The procedure described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which will create docker-compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with Borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker-compose file from a running container, so that I can redeploy a container 1:1 to another server and just have to run docker run mycontainer for it to have the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
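As a rough illustration (my_container is a placeholder name), you can pull individual settings out of the inspect output with a Go-template format string and transcribe them into a compose file by hand:
docker container inspect --format '{{.Config.Image}}' my_container            # image the container was created from
docker container inspect --format '{{json .Config.Env}}' my_container         # environment variables
docker container inspect --format '{{json .HostConfig.Binds}}' my_container   # bind-mounted volumes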
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy, and allows the configuration of the container to be documented as a configuration file tracked in version control, rather than relying on error-prone, hand-entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. In case you really want to go down the road of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of volumes anyway; for those, I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
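A rough sketch of that export/import round trip (my-container and my-image:imported are placeholder names; note again that volume contents are not included):
docker export -o my-container.tar my-container      # snapshot of the container's filesystem
docker import my-container.tar my-image:imported    # create a new image from that snapshot on the target host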

Any reasons to not use Docker Swarm (instead of Docker-Compose) on a single node?

There's Docker Swarm (now built into Docker) and Docker-Compose. People seem to use Docker-Compose when running containers on a single node only. However, Docker-Compose doesn't support any of the deploy config values (see https://docs.docker.com/compose/compose-file/#deploy), which include mem_limit and cpus, and these seem nice/important to be able to set.
So maybe I should use Docker Swarm, even though I'm deploying on a single node only? Also, the installation instructions would then be simpler for other people to follow (they won't need to install Docker-Compose).
But maybe there are reasons why I should not use Swarm on a single node?
I'm posting an answer below, but I'm not sure if it's correct.
Edit: Please note that this is not an opinion based question. If you have a look at the answer below, you'll see that there are "have-to" and "cannot-do" facts about this.
For development, use Docker-Compose, because only Docker-Compose is able to read your Dockerfiles and build images for you; Docker Stack needs pre-built images. Also, with Docker-Compose you can easily stop and start single containers, with docker-compose kill ... and ... start .... This is useful during development (in my experience), for example to see how the app server reacts if you kill the database; in that case you don't want Swarm to auto-restart the database immediately.
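As a quick sketch of that kind of test (the service name db here is hypothetical):
docker-compose kill db     # simulate the database going down and watch how the app server reacts
docker-compose start db    # bring the database back when you're done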
In production, use Docker Swarm (unless: see below), so you can configure memory limits. Docker-Compose has less functionality than Docker Swarm (no memory or CPU limits, for example) and doesn't have anything that Swarm does not have (right?). So there is no reason to use Compose in production (except maybe if you already know how Compose works and don't want to spend time reading about the new Swarm commands).
Docker Swarm doesn't, however, support .env files like Docker-Compose does. So you cannot have e.g. IMAGE_VERSION=1.2.3 in an .env file and then write image: name:${IMAGE_VERSION} in the docker-compose.yml file. See https://github.com/moby/moby/issues/29133; instead you'll need to set the environment variables "manually": IMAGE_VERSION=SOMETHING docker stack up ... (this actually made me stick with Docker-Compose, plus the fact that I didn't quickly find out how to view a container's logs via Swarm; Swarm seemed more complicated.)
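One workaround people use (assuming bash and a docker-compose binary installed alongside the Docker CLI) is to let docker-compose resolve the variables and feed the rendered file to docker stack deploy via process substitution; the stack name somestack below is a placeholder:
docker stack deploy -c <(docker-compose config) somestack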
In addition to @KajMagnus' answer, I should note that Docker Swarm still doesn't support Linux capabilities the way Docker [Compose] does. You can learn about this issue and dive into the Docker community discussions here.

docker service update vs docker stack deploy with existing stack

I have a question about using Docker swarm mode commands to update existing services after having deployed a set of services with docker stack deploy.
As far as I understand, every service is pinned to the SHA256 digest of the image at the time of creation, so if you rebuild and push an image (with the same tag) and then run docker service update, the service image is not updated (even though the SHA256 is different). On the contrary, if you run docker stack deploy again, all the services are updated with the new images.
I managed to update the service image also by using docker service update --image repository/image:tag <service>. Is this the normal behavior of these commands, or is there something I didn't understand?
I'm using Docker 17.03.1-ce
Docker stack deploy documentation says:
"Create and update a stack from a compose or a dab file on the swarm. This command has to be run targeting a manager node."
So the behaviour you described is as expected.
The docker service update documentation is not so clear, but as you found yourself, it only picks up the rebuilt image when you pass --image repository/image:tag <service>, so the flag is necessary to update the image.
So you have two ways to accomplish what you want: run docker stack deploy again with the updated compose file, or run docker service update --image on the specific service.
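A brief sketch of both options (the stack, service and image names below are placeholders):
docker stack deploy -c docker-compose.yml mystack                      # re-reads the compose file and updates every changed service
docker service update --image repository/image:tag mystack_myservice   # updates only this one service to the newly pushed tag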
It is normal and expected behavior for docker stack deploy to update images of existing services to whatever hash the specified tag is linked.
If no tag is present, latest is assumed, which can be problematic at times, since the latest tag is not well understood by most people and thus leads to some unexpected results.
