Trying to determine if calling
docker-compose down
docker-compose build
docker-compose up
is the same as:
docker-compose build
docker-compose up
I have looked and can't find anything specific. I know that docker-compose down removes containers and networks, and that docker-compose build creates the services. So I am not sure whether down is an unnecessary extra step or not.
docker-compose build creates the images, which are then used by docker-compose up to create containers; it is during the docker-compose up step that networks are created as well. Multiple containers (each effectively a running instance of an image) can be created from one image.
If no changes are made to the files in the build environment, or the steps in the Dockerfile, running a build will not create a new image.
Regardless, building the images while containers are running should not affect running containers, since the newly-built image will be named differently (see for yourself, with docker image ls) and the running container will still be running off of the old image.
To answer your question, then: possibly. If the steps in the Dockerfile changed, or the files in the build environment changed, etc. before you call build, then the image will be rebuilt, and the new container created by docker-compose up will run from the new image.
Otherwise, if none of those changed, calling build will do nothing, and up will also do nothing (provided the containers are still running).
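You can see this behaviour for yourself. A quick sketch, assuming a project directory with a docker-compose.yml and a Dockerfile in it:

```shell
# Build and start; note the IMAGE ID your service's container uses.
docker-compose build
docker-compose up -d
docker image ls

# With no changes to the Dockerfile or build context, a second
# build is a no-op: every layer is a cache hit, same IMAGE ID.
docker-compose build

# After editing a file in the build context, build produces a new
# image ID, but the running container still uses the old image
# until you run `up` again, which recreates it from the new image.
docker-compose build
docker-compose up -d
```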
Related
--always-recreate-deps is described as:
Recreate dependent containers. Incompatible with --no-recreate.
--build is described as:
Build images before starting containers.
What is the difference between "Recreate dependent containers" and "Build images before starting containers"?
When a Dockerfile changes, I use docker compose up --build. Do I also need to use --always-recreate-deps?
What are use cases for the --always-recreate-deps while we already have --build and --force-recreate?
--always-recreate-deps: This option tells Docker Compose to always recreate the dependencies of a service, even if they haven't changed. This means that if a service depends on another service, and the other service's image hasn't been updated, Docker Compose will still recreate that service when the up command is run with the --always-recreate-deps option.
--build: This option tells Docker Compose to build the images for all services defined in the docker-compose.yml file before starting the containers. This is useful if you have made changes to your services and want to ensure that the images are rebuilt and the containers are running the latest version of your code.
In summary, the --always-recreate-deps option ensures that all dependent services are recreated even if they haven't changed, whereas the --build option ensures that the images are rebuilt and the containers are running the latest version of your code.
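As an illustration of what "dependent containers" means here (service names are hypothetical), suppose app depends on db:

```yaml
# docker-compose.yml (hypothetical example)
services:
  app:
    build: .
    depends_on:
      - db        # db is a dependency of app
  db:
    image: postgres:15
```

With this file, docker compose up --build app rebuilds app's image and recreates the app container, but leaves a running db container alone if its image and configuration are unchanged. Adding --always-recreate-deps forces db to be recreated as well, even though nothing about it changed, which is useful when you want a dependency restarted fresh (for example, to reset its state).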
I'm having difficulties understanding docker. No matter how many tutorials I watch or guides I read, to me docker-compose seems like being able to define multiple Dockerfiles, i.e. multiple containers. I can define environment variables in both, as well as ports, commands, and base images.
I read in other questions/discussions that Dockerfile defines how to build an image, and docker-compose is how to run an image, but I don't understand that. I can build docker containers without having to have a Dockerfile.
It's mainly for local development though. Does Dockerfile have an important role when deploying to AWS for example (where it's probably coming out of the box for example for EC2)?
So is the reason I can work locally with docker-compose only that the base image is my computer (taking care of the task the Dockerfile is supposed to do)?
Think about how you'd run some program, without Docker involved. Usually it's two steps:
Install it using a package manager like apt-get or brew, or build it from source
Run it, without needing any of its source code locally
In plain Docker without Compose, similarly, you have the same two steps:
docker pull a prebuilt image with the software, or docker build it from source
docker run it, without needing any of its source code locally
I'd aim to have a Dockerfile that creates an immutable copy of your image, with all of its source code and library dependencies as part of the image. The ideal is that you can docker run your image without -v options to inject source code or providing the command at the docker run command line.
The reality is that there are a lot of moving parts: you probably need to docker network create a network to get containers to communicate with each other, and use docker run -e environment variables to specify host names and database credentials, and launch multiple containers together, and so on. And that's where Compose comes in: instead of running a series of very long docker commands, you can put all of the details you need in a docker-compose.yml file, check it in, and run docker-compose up to get all of those parts put together.
So, do:
Use Compose to start multiple containers together
Use Compose to write down complex runtime options like port mappings or environment variables with host names and credentials
Use Compose to build your image and start a container from it with a single command
Build your application code into your image, with a standard CMD in your Dockerfile to run it.
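A minimal sketch of that division of labour, for a hypothetical Node.js app (file contents are illustrative). The Dockerfile describes how to build an immutable image:

```dockerfile
# Dockerfile: how to *build* the image
FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "server.js"]
```

while the Compose file describes how to *run* it, wiring in the runtime details:

```yaml
# docker-compose.yml: runtime wiring (ports, env vars, dependencies)
services:
  app:
    build: .            # image built from the Dockerfile above
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db       # hostname of the db service on the Compose network
  db:
    image: postgres:15
```

Note there are no -v mounts injecting source code and no command override: the image is self-contained, and Compose only supplies the environment it runs in.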
After I make any code change, my typical development steps are:
Change code and build the jar file
Build docker image that consumes the jar file
Kill current "docker-compose up" command.
Run "docker-compose up" again.
My docker-compose file has five services. On step 3, all the containers go down. Ideally, I just need to re-run my app container.
I am looking for a way to do the following in a batch script:
Bring down the app container. The other four containers should continue to run.
Build a new docker image
Force docker-compose to recreate my app container and start it.
For step 1, it looks like I can use "docker-compose kill myappname".
Step 2 is also straightforward.
I am wondering how I can accomplish step 3. Thanks.
You don't need to stop the current container explicitly.
If your image has changed, Docker Compose will recognize that and recreate your container when you run the up command again.
So, rebuild your image and run docker compose up.
If you use Compose to build the image, add the --build flag to have it rebuild your image, after which the container is also recreated.
You can also add the name of a specific service to your up command, i.e. docker compose up -d --build app.
If the image hasn't changed, you can add the --force-recreate flag.
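Putting that together, the whole inner loop can be a short script (assuming the service is named app in your docker-compose.yml, and a Maven build for the jar):

```shell
# Rebuild the jar first (build tool depends on your project):
mvn package

# Rebuild app's image and recreate only the app container;
# the other four services keep running untouched.
docker compose up -d --build app
```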
We are trying to upgrade a docker container to the latest image.
Here is the process I am trying to follow.
Let's say I have already pulled a docker image with version 1.1.
Create a container with image 1.1.
Now we have fixed some issue in image 1.1 and uploaded it as 1.2.
After that I wanted to update the container running on 1.1 to 1.2.
Below are the steps I thought I would follow.
Pull the latest image
Inspect the docker container to get all the info (ports, mapped volumes etc.)
Stop the current container
Remove the current container
Create a container with the values from step 2, using the latest image
The problem I am facing is that I don't know how to use the output of the "docker inspect" command while creating the container.
What you should have done in the first place:
In production environments with lots of containers, you will lose track of docker run commands. To keep up with the complexity, use docker-compose.
First you need to install docker-compose; refer to the official documentation for that.
Then create a yaml file describing your environment. You can specify more than one container (for apps that require multiple services, for example nginx, php-fpm and mysql).
Having done all that, when you want to upgrade containers to newer versions, you just change the version in the yaml file and do a docker-compose down and docker-compose up.
Refer to the Compose documentation for more info.
What to do now:
Start by reading the docker inspect output, then gather the facts:
Ports published (host-to-container mapping)
Networks used (names, drivers)
Volumes mounted (bind/volume, driver, path)
Possible runtime command arguments
Possible environment variables
Restart policy
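Each of those facts can be pulled out of docker inspect directly with Go template filters, rather than reading the whole JSON dump (the container name mycontainer is a placeholder):

```shell
# Published ports (host-to-container mapping)
docker inspect --format '{{json .HostConfig.PortBindings}}' mycontainer

# Networks the container is attached to
docker inspect --format '{{json .NetworkSettings.Networks}}' mycontainer

# Mounts (bind mounts and named volumes)
docker inspect --format '{{json .Mounts}}' mycontainer

# Command, environment variables, and restart policy
docker inspect --format '{{json .Config.Cmd}}' mycontainer
docker inspect --format '{{json .Config.Env}}' mycontainer
docker inspect --format '{{json .HostConfig.RestartPolicy}}' mycontainer
```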
Then try to create a docker-compose yaml file with those facts on a test machine, and test your setup.
When confident enough, roll it out in production and keep the latest compose yaml for later reference.
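As a sketch, those inspected facts map onto Compose keys roughly like this (all names and values here are placeholders):

```yaml
# docker-compose.yml reconstructed from docker inspect facts
services:
  myapp:
    image: myapp:1.2          # bump this tag to upgrade
    ports:
      - "8080:80"             # from HostConfig.PortBindings
    volumes:
      - appdata:/var/lib/app  # from Mounts
    environment:
      - APP_ENV=production    # from Config.Env
    restart: unless-stopped   # from HostConfig.RestartPolicy
    networks:
      - backend               # from NetworkSettings.Networks

volumes:
  appdata:

networks:
  backend:
```

Once this file reproduces the old container's configuration, future upgrades are just an image-tag edit followed by docker-compose down and docker-compose up.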
So I am using gitlab-ci to deploy my websites in docker containers. Because the gitlab-ci docker runner doesn't seem to do what I want, I am using the shell executor and letting it run docker-compose up -d. Here comes the problem.
I have 2 volumes in my docker container: ./:/var/www/html/ (the content of my git repo, i.e. files I want to replace on build) and a mount that is "inside" the first one, /srv/data:/var/www/html/software/permdata (a persistent mount on my server).
When the gitlab-ci runner starts, it tries to remove all files while the container is running, but because of this mount-in-mount it gets a "device busy" error and aborts. So I have to manually stop and remove the container before I can run my build (which kind of defeats the point of build automation).
Options I thought about to fix this problem:
stop and remove the container before gitlab-ci-multi-runner starts (seems not possible)
add the git data to my docker container and only mount my permdata (seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile)
Option 2 would be ideal because then it would also sort out my issues with permissions on the files.
Maybe someone has gone through the same problem and could give me some advice.
seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile
That's correct. The Compose file is not meant to replace the Dockerfile, it's meant to run multiple images for an application or project.
You can modify the Dockerfile to copy in the git files.
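A sketch of that change, with paths taken from the mounts described in the question (the base image is an assumption):

```dockerfile
# Dockerfile: bake the repo contents into the image
# instead of bind-mounting ./ over /var/www/html
FROM php:8.2-apache
COPY . /var/www/html/
# fix ownership so the web server can read the files,
# which also addresses the file-permission issues mentioned
RUN chown -R www-data:www-data /var/www/html
```

The Compose file then only mounts the persistent data (volumes: - /srv/data:/var/www/html/software/permdata), so the runner never has to delete files underneath an active bind mount.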