docker compose down specific profile - docker

When I try to docker compose down a specific profile, it stops and removes all containers.
I want to remove only the containers that are in the referenced profile.
docker compose --profile elk down # Let's say I have some services in the elk profile
In the above example I want to bring down only the services that are tagged with the elk profile.
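A workaround that avoids taking the whole project down is to stop and remove the profile's services explicitly by name (the service names below are placeholders for whatever actually sits in the elk profile):
docker compose --profile elk stop elasticsearch logstash kibana
docker compose --profile elk rm -f elasticsearch logstash kibana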

Same issue here (not really an answer). Alternatively, it would be great to have docker compose --profile foo up --remove-orphans or something similar working as well.
There was a similar issue about it but it literally just got closed due to inactivity:
https://github.com/docker/compose/issues/8432

We have been experiencing the same issues. I followed the thread on the bug report; it looks like it has not been solved yet.
We moved from Compose to Swarm (multiple one-node manager clusters instead of only using Compose). And since we now use stacks, we don't need profiles anymore.
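For reference, the single-node Swarm setup described above amounts to something like the following (the stack name is a placeholder; Swarm-only options go under deploy: in the compose file):
docker swarm init
docker stack deploy -c docker-compose.yml mystack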

Related

Docker Compose "Ghost Containers"

I am using docker-compose to deploy an application combining a number of different images.
Using Docker version 18.09.2, build 6247962
Docker-compose 1.117
Primarily, I have
ZooKeeper
Kafka
MYSQLDb
I noticed a strange problem where I could not start my application with docker-compose up because a port was already assigned. I then checked docker stats and saw that there were three containers named:
"test_ZooKeeper.1slehgaior"
"test_Kafka.kgjdorgsr"
"test_MYSQLDB.kgjdorgsr"
I have tried killing the containers, removing them, and pruning the system. Whenever I kill one of these containers, it instantly restarts and I cannot for the life of me determine where they are being created from!
Please help :)
If you look into your docker-compose.yaml, I'm pretty sure you'll find a restart: always somewhere. If you want to correctly shut down a running container managed by docker-compose, one way is to use docker-compose down from the directory where your YAML file sits.
More information on the subject:
https://docs.docker.com/config/containers/start-containers-automatically/
Otherwise, you might try stopping a single running container instead of killing it, which, if I remember correctly, tells Docker not to restart it again, while a killed container looks to the service like it just crashed. Not too sure about the last part, though.
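For illustration, a minimal compose file with such a restart policy might look like this (the service name and image are just placeholders); running docker-compose down in the same directory stops and removes the container so the policy no longer re-spawns it:
version: "3"
services:
  zookeeper:
    image: zookeeper:3.4    # placeholder image
    restart: always         # restarts the container whenever it exits, until docker-compose down removes it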

Docker error: Cannot start service ...: network 7808732465bd529e6f20e4071115218b2826f198f8cb10c3899de527c3b637e6 not found

When starting a docker container (not developed by me), docker says a network has not been found.
Does this mean the problem is within the container itself (so only the developer can fix it), or is it possible to change some network configuration to fix this?
I'm assuming you're using docker-compose and seeing this error. I'd recommend
docker-compose up --force-recreate <name>
That should recreate the containers as well as supporting services such as the network in question (it will likely create a new network).
Shut down properly first, then restart:
docker-compose down
docker-compose up
I was facing a similar issue and this worked for me:
Run docker container ls -a and remove the offending container with docker container rm ca877071ac10 (replace with your container ID).
The problem was that some old container instances had not been removed. Once all the old terminated instances are removed, you can start the containers with the docker-compose file.
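If there are several such leftover containers and none of the stopped ones are still needed, a shortcut is to prune them all at once:
docker container prune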
This can be caused by an old service that has not been killed. First add the --remove-orphans flag when bringing down your containers to remove any undead services still running, then bring the containers back up:
docker-compose down --remove-orphans
docker-compose up
This is based on this answer.
In my case, the steps that produced the error were:
The server restarted, and the containers from a docker-compose stack remained stopped.
A network prune was run, so the networks associated with the stack's containers were deleted.
Running docker-compose --project-name "my-project" up -d failed with the error described in this topic.
Solved simply by adding --force-recreate, like this:
docker-compose --project-name "my-project" up -d --force-recreate
This presumably works because the containers are recreated and linked to a freshly recreated network (the old one having been pruned, as described in the preconditions).
Apparently a VPN was causing this. Turning off the VPN and resetting Docker to factory settings solved the problem on two computers in our company. A third, personal computer that did not have a VPN never showed the problem.
Amongst other things, docker system prune will remove 'all networks not used by at least one container', allowing them to be recreated on the next docker-compose up.
More precisely, docker network prune can also be used.
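Putting the answers above together, a typical recovery sequence would be (the -f flag only skips the confirmation prompt):
docker-compose down
docker network prune -f
docker-compose up -d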

Does the Docker message: "Ignoring unsupported options: restart" mean the restart policy is ignored?

Using docker stack deploy, I can see the following message:
Ignoring unsupported options: restart
Does it mean that restart policies are not in place?
Do they have to be specified outside the compose file?
You can see this message for example with the
Joomla compose file available at the bottom of that page.
To start the compose file:
sudo docker swarm init
sudo docker stack deploy -c stackjoomla.yml joomla
A Compose YAML file is used both by the docker-compose tool, for local (single-host) dev and test scenarios, and by Swarm Stacks, for production multi-host concerns.
There are many settings in the Compose file which only work in one tool or the other (docker-compose up vs. docker stack deploy) because some settings are specific to dev and others specific to production clusters. It's OK that they are there, and you'll see warnings in either tool when there are settings included that the specific tool will ignore. This is commonly seen for build: settings (which are docker-compose only) and deploy: settings (which are Swarm Stacks only).
The whole goal here is a single file you can use in both tools, and the relevant sections of the compose file are used in that scenario, while the rest are ignored.
All of this can be referenced for the individual setting in the compose file documentation. If you're often working in Compose YAML, I recommend always having a tab open to this page; I've referenced it almost daily for years, as the spec keeps changing (we're on 3.4+ now).
docker-compose does not restart containers by default, but it can if you set the single-setting restart: as documented here. But that setting doesn't work for Swarm Stacks. It will show up as a warning in a docker stack deploy to remind you that the setting will not take effect in a Swarm Stack.
Swarm Stacks use the restart_policy: under the deploy: setting, which gives finer control with multiple sub-settings. Like all Stack's, the defaults don't have to be specified in the compose file, and you'll see their default settings documented on that docs page.
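As a sketch (the image and the values are placeholders), the two settings look like this side by side: docker-compose honours restart: and ignores the deploy: block, while docker stack deploy does the opposite:
version: "3.4"
services:
  web:
    image: nginx:alpine       # placeholder image
    restart: always           # used by docker-compose, ignored by docker stack deploy
    deploy:
      restart_policy:         # used by docker stack deploy, ignored by docker-compose
        condition: on-failure
        delay: 5s
        max_attempts: 3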
There is a list on that page of the settings that won't work in a Swarm Stack, but it looks incomplete as the restart: setting should be there too. I'll submit a PR to fix that.
Also, in the Joomla example you pointed us to, that README seems out of date as well, as it includes links: in the compose example, which are deprecated as of Compose file version 2 and not needed anymore (because all containers on a custom virtual network can reach each other now).
If you docker-compose up your application on a Docker host in standalone mode, all that Compose will do is start containers. It will not monitor the state of these containers once they are created.
So it is up to you to ensure that your application will still work if a container dies. You can do this by setting a restart-policy.
If you deploy an application into a Docker swarm with docker stack deploy, things are different.
A stack is created that consists of service specifications.
Docker swarm then makes sure that for each service in the stack, at all times the specified number of instances is running. If a container fails, swarm will always spawn a new instance in order to match the service specification again. In this context, a restart-policy does not make any sense and the corresponding setting in the compose file is ignored.
If you want to stop the containers of your application in swarm mode, you either have to undeploy the whole stack with docker stack rm <stack-name> or scale the service to zero with docker service scale <service-name>=0.

Docker backup container with startup parameters

I have been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker start with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar, load it on a different system and do a docker start, it has lost all of its settings.
The procedure as described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which would create docker-compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with Borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker-compose file from a running container, so that I can redeploy a container 1:1 to another server and just have to run docker run mycontainer and it will have the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
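For example, to pull out just the environment variables and the volume bindings (Go-template field paths as commonly used; adjust if your Docker version structures them differently):
docker container inspect --format '{{ .Config.Env }}' $container_id
docker container inspect --format '{{ .HostConfig.Binds }}' $container_id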
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than as error-prone, hand-entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. In case you really want to go down the route of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of volumes anyway; for those, I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
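A minimal sketch of that export/import route, with placeholder names, would be the following; note again that volumes are not included and have to be backed up separately, for example by tar-ing them from a helper container:
docker export my_container > my_container.tar
docker import my_container.tar myimage:backup
# volumes are not part of the export; back them up separately, for example:
docker run --rm -v myvolume:/data -v "$(pwd)":/backup alpine tar czf /backup/myvolume.tgz -C /data .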

Any reasons to not use Docker Swarm (instead of Docker-Compose) on a single node?

There's Docker Swarm (now built into Docker) and Docker-Compose. People seem to use Docker-Compose when running containers on a single node only. However, Docker-Compose doesn't support any of the deploy config values, see https://docs.docker.com/compose/compose-file/#deploy, which include mem_limit and cpus, which seem nice/important to be able to set.
So maybe I should therefore use Docker Swarm, even though I'm deploying on a single node only? Also, the installation instructions will then be simpler for other people to follow (they won't need to install Docker-Compose).
But maybe there are reasons why I should not use Swarm on a single node?
I'm posting an answer below, but I'm not sure if it's correct.
Edit: Please note that this is not an opinion based question. If you have a look at the answer below, you'll see that there are "have-to" and "cannot-do" facts about this.
For development, use Docker-Compose. Because only Docker-Compose is able to read your Dockerfiles and build images for you. Docker Stack instead needs pre-built images. Also, with Docker-Compose, you can easily start and stop single containers, with docker-compose kill ... and ... start .... This is useful, during development (in my experience). For example, to see how the app server reacts if you kill the database. Then you don't want Swarm to auto-restart the database directly.
In production, use Docker Swarm (unless: see below), so you can configure mem limits. Docker-Compose has less functionality than Docker Swarm (no mem or cpu limits, for example) and doesn't have anything that Swarm does not have (right?). So there's no reason to use Compose in production. (Except maybe if you know how Compose works already and don't want to spend time reading about the new Swarm commands.)
Docker Swarm doesn't, however, support .env files like Docker-Compose does. So you cannot have e.g. IMAGE_VERSION=1.2.3 in an .env file and then have image: name:${IMAGE_VERSION} in the docker-compose.yml file. See https://github.com/moby/moby/issues/29133; instead you'll need to set env vars "manually": IMAGE_VERSION=SOMETHING docker stack up ... (This actually made me stick with Docker-Compose, plus the fact that I didn't quickly figure out how to view a container's logs via Swarm; Swarm seemed more complicated.)
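One workaround people use (a sketch that assumes a simple .env file containing plain KEY=VALUE lines without spaces) is to export the variables into the shell before deploying; the stack name below is a placeholder:
export $(grep -v '^#' .env | xargs)
docker stack deploy -c docker-compose.yml mystack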
In addition to KajMagnus's answer, I should note that Docker Swarm still doesn't support Linux capabilities the way Docker [Compose] does. You can learn about this issue and dive into the Docker community discussions here.
