--always-recreate-deps is described as:
Recreate dependent containers. Incompatible with --no-recreate.
--build is described as:
Build images before starting containers.
What is the difference between "Recreate dependent containers" and "Build images before starting containers"?
When a Dockerfile changes I use docker compose up --build. Do I need to also use --always-recreate-deps?
What are the use cases for --always-recreate-deps when we already have --build and --force-recreate?
--always-recreate-deps: This option tells Docker Compose to always recreate the dependencies of a service, even if they haven't changed. This means that if a service depends on another service, and the other service's image hasn't been updated, Docker Compose will still recreate that service when the up command is run with the --always-recreate-deps option.
--build: This option tells Docker Compose to build the images for all services defined in the docker-compose.yml file before starting the containers. This is useful if you have made changes to your services and want to ensure that the images are rebuilt and the containers are running the latest version of your code.
In summary, the --always-recreate-deps option ensures that all dependent services are recreated even if they haven't changed, whereas the --build option ensures that the images are rebuilt and the containers are running the latest version of your code.
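As a rough sketch of the difference (the web and db service names and the postgres image below are made-up examples, not from the question), suppose web depends on db:

services:
  web:
    build: .
    depends_on:
      - db
  db:
    image: postgres:15

docker compose up --build                           (rebuilds web's image and recreates web if something changed; db is left alone unless its image or config changed)
docker compose up --build --always-recreate-deps    (additionally recreates db, even though nothing about it changed)

In short, --build controls whether images get rebuilt, while --always-recreate-deps controls whether unchanged dependency containers get replaced anyway.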
Related
TLDR: When using docker compose, I can simply recreate a container by changing its configuration and/or image in the docker-compose.yml file along with running docker-compose up. Is there any generic equivalent for recreating a container (to apply changes) which was created by a bare docker create/run command?
Elaborating a bit:
The associated docker compose documentation states:
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes).
I'm having trouble understanding which underlying steps are actually performed during this recreation, as e.g. the docker (without compose) documentation doesn't really seem to use the recreate term at all.
Is it safe to simply run docker container rm xy and then docker container create/run (along with passing the full and modified configuration)? Or is docker compose actually doing more under the hood?
I already found answers about applying specific configuration changes like e.g. this one about port mappings, but I'm still wondering whether there is a more general answer to this.
I'm having trouble understanding which underlying steps are actually performed during this recreation, as e.g. the docker (without compose) documentation doesn't really seem to use the recreate term at all.
docker-compose is a high-level tool; it performs in a single operation what would require multiple commands using the docker CLI. When docker-compose says, "docker-compose up picks up the changes by stopping and recreating the containers", it means it is doing the equivalent of:
docker stop <somecontainer>
docker rm <somecontainer>
docker run ...
(Where ... represents whatever configuration is implied by the service definition in your docker-compose.yaml).
Let's say it recognizes a change in container1; it then does (not literally these commands, since it works via the API):
docker compose rm -fs container1
docker compose create (--build) container1
docker compose start container1
which is roughly equivalent to (depending on your Compose config):
docker rm -f projectname_container1
(docker build --flags)
docker create --allDozensOfAttributes projectname_container1
docker start projectname_container1
docker network connect (--flags) projectname_networkname projectname_container1
and maybe more.
So I would advise using the docker compose commands for single services instead of the docker CLI where suitable.
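For example, a single changed service can usually be rebuilt and recreated in one step (container1 here is the service name from the example above; treat this as a sketch and check the flags against your compose version):

docker compose up -d --build --force-recreate --no-deps container1

--no-deps keeps compose from also starting the services it depends on.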
The issue is that the original run command's variables and settings are not exposed through the docker CLI in a directly reusable form. It may be possible by way of connecting directly to the docker socket, parsing the values, and then stopping/removing the container and recreating it.
This would be prone to all kinds of errors and would require lots of debugging to get these values.
What I do is simply store my docker commands in a shell script. You can just save the command you need to run into a text file, name it something.sh, make it executable (chmod +x), then run it. Then when you stop/delete the container, you can just rerun the shell script.
Another thing you could do is replace the docker command with a function (in something like your ~/.bashrc) that stores the arguments to a text file and rechecks that text file when passed an argument (like "recreate" followed by a name). However, I'm more a fan of keeping docker containers in their own shell scripts as it's more portable.
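A minimal sketch of the shell-script approach (the container name, image, port and volume below are placeholders):

#!/bin/sh
# run-myapp.sh - recreate the container from scratch with its full configuration
docker rm -f myapp 2>/dev/null      # remove the old container if it exists
docker run -d \
  --name myapp \
  -p 8080:80 \
  -v myapp-data:/data \
  mynamespace/myapp:latest

Make it executable with chmod +x run-myapp.sh; whenever the image or configuration changes, edit the script and rerun it to recreate the container.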
Trying to determine if calling
docker-compose down
docker-compose build
docker-compose up
is the same as:
docker-compose build
docker-compose up
I have looked and can't find anything specific. I know docker-compose down removes containers and networks, and that docker-compose build creates the services. So I am not sure if down is an unnecessary extra step or not.
docker-compose build creates the images, which are used by docker-compose up to create containers. It's during the docker-compose up step that networks are created as well. Multiple containers (which are, effectively, running instances of the image) can be created from one image.
If no changes are made to the files in the build environment, or the steps in the Dockerfile, running a build will not create a new image.
Regardless, building the images while containers are running should not affect the running containers: the newly built image takes over the tag while the previously built image becomes untagged (see for yourself with docker image ls), and the running container will still be running off of the old image.
To answer your question then, possibly: if the steps in the Dockerfile changed, or the files in the build environment changed, etc., before you call build, then the image will be rebuilt and the new container created by docker-compose up will run from the new image.
Otherwise, if none of those changed, calling build will do nothing, and up will also do nothing (provided the containers are still running).
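A quick way to see this in practice (a sketch, assuming the default image tagging):

docker-compose build
docker image ls         (if something changed, the rebuilt image now holds the tag and the previous one is listed as <none>)
docker-compose up -d    (only containers whose image or configuration changed are recreated; the rest keep running untouched)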
I'm using Docker Desktop under Windows 10.
I use multiple docker-compose files for multiple projects,
and I switch between projects, stopping/starting some docker-compose.yml files depending on my needs.
When I leave my workstation, I put Windows 10 into standby or shut it down, but when I come back, I always find 2 docker compose projects active.
I don't know why they are already started, or where Docker Desktop retrieves the docker containers to start.
I want to come back to my workstation with zero containers running. What should I check?
It's very likely happening because of the restart or restart_policy setting declared in your docker-compose.yml file (depending on which version of the Compose file format you are using). More information can be found in the official documentation: look for the restart_policy section (or the restart option for the older format) to find out which values are available and what they mean. The docker run reference likewise describes the restart argument you can pass on the command line.
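As a minimal sketch of what to look for (the service name and image are hypothetical):

services:
  web:
    image: nginx
    restart: unless-stopped

A value of unless-stopped or always brings the container back when the Docker engine restarts (after standby or reboot), which is likely what you are seeing; removing the line or setting restart: "no" (the default) keeps everything down until you start it yourself. If some containers were started with plain docker run instead, you can check them with docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' <container> and change the policy with docker update --restart=no <container>.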
I have .NET, SQL database, and Kafka docker images, and I use a docker compose yml file to run them together.
I noticed that the down and up commands do not create a fresh environment.
docker-compose -f dc-all-sql.yml down
then:
docker-compose -f dc-all-sql.yml up
I managed to get a fresh environment only by using the Docker Desktop 'reset to factory defaults' option.
Is my understanding of these commands wrong?
Basically I want a fresh environment: when I bring the system up, new docker images should be downloaded.
By default, the only things removed by down are:
Containers for services defined in the Compose file
Networks defined in the networks section of the Compose file
The default network, if one is used
So the images are not removed; you could use this option along with down:
Options:
--rmi type Remove images. Type must be one of:
'all': Remove all images used by any service.
'local': Remove only images that don't have a
custom tag set by the `image` field.
See the docker-compose down documentation for the full list of options.
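For example, to get as close to a fresh environment as down/up can provide (a sketch; note that --volumes also wipes named volumes, i.e. your SQL data, and the file name is taken from the question):

docker-compose -f dc-all-sql.yml down --rmi all --volumes
docker-compose -f dc-all-sql.yml pull
docker-compose -f dc-all-sql.yml up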
We are trying to upgrade a docker container to the latest image.
Here is the process I am trying to follow.
Let's say I have already pulled a docker image having version 1.1
Create a container with image 1.1
Now we have fixed some issues in image 1.1 and uploaded it as 1.2
After that I want to update the container running on 1.1 to 1.2
Below are the steps I thought I would follow.
Pull the latest image
Inspect the docker container to get all the info (ports, mapped volumes, etc.)
Stop the current container
Remove the current container
Create a container with the values from step 2, using the latest image.
The problem I am facing is that I don't know how to use the output of the docker inspect command when creating the container.
What you should have done in the first place:
In production environments, with lots of containers, you will lose track of docker run commands. In order to keep up with the complexity, use docker-compose.
First you need to install docker-compose. Refer to the official documents for that.
Then create a yaml file describing your environment. You can specify more than one container (for apps that require multiple services, for example nginx, php-fpm, and mysql).
Having done all that, when you want to upgrade containers to newer versions, you just change the version in the yaml file and do a docker-compose down and docker-compose up.
Refer to compose documentation for more info.
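A minimal sketch of that workflow (service name, image and port are placeholders):

version: "3"
services:
  app:
    image: myregistry/myapp:1.1    # edit this tag to 1.2 once the new image is pushed
    ports:
      - "8080:80"

docker-compose pull
docker-compose down
docker-compose up -d

Compose then recreates the container from the 1.2 image while keeping the rest of the declared configuration (ports, volumes, networks) exactly as written in the file.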
What to do now:
Start by reading the docker inspect output. Then gather these facts (a sketch of how to query each one follows the list):
Ports published (host-to-container mapping)
Networks used (names, drivers)
Volumes mounted (bind/volume, driver, path)
Possible runtime command arguments
Possible environment variables
Restart policy
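For example, each of these can be pulled out of docker inspect with a Go-template format string (mycontainer is a placeholder name):

docker inspect --format '{{json .HostConfig.PortBindings}}' mycontainer     (published ports)
docker inspect --format '{{json .NetworkSettings.Networks}}' mycontainer    (networks)
docker inspect --format '{{json .Mounts}}' mycontainer                      (volumes and bind mounts)
docker inspect --format '{{json .Config.Cmd}}' mycontainer                  (runtime command arguments)
docker inspect --format '{{json .Config.Env}}' mycontainer                  (environment variables)
docker inspect --format '{{json .HostConfig.RestartPolicy}}' mycontainer    (restart policy)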
Then try to create a docker-compose yaml file with those facts on a test machine, and test your setup.
When confident enough, roll it out in production and keep the latest compose yaml for later reference.