Pull image with docker-compose run

Here's a typical docker-compose file. I use it both for building the image (docker-compose build) and for running my tests (docker-compose run test).
version: '2'
services:
  test:
    links:
      - web
    command: "mvn clean verify"
  web:
    image: my_repo/my_image:tag
    build: .
When I use the run command, docker-compose tries to build the image before running the test.
Is there any way to force it to pull the existing image instead of trying to build a new one?

You can use the pull command before run; it pulls all the newer images from the registry:
docker-compose pull
docker-compose run
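If you want a single entry point, a tiny wrapper sketch around those two commands (the test service name comes from the compose file above):
#!/bin/sh
docker-compose pull          # refresh published images (e.g. my_repo/my_image:tag) from the registry
docker-compose run test      # run the tests against the pulled image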

Both of your solutions work fine.
I was just expecting there to be something like
'docker run test --pull' or 'docker run test --build' to force the pull/build.
Thanks!

It's normal that it builds the web image before creating the test container, because there's a link between them (test depends on web). If you don't want to build it each time you run docker-compose up, start by building your web image:
docker build -t web .
then update your docker-compose.yml with the new image:
version: '2'
services:
  test:
    links:
      - web
    command: "mvn clean verify"
  web:
    image: web
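A minimal sketch of the resulting workflow with the names above (this assumes the locally built web image is what the tests should run against):
docker build -t web .      # build the web image once, outside of Compose
docker-compose run test    # Compose reuses the local "web" image instead of building it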

Related

`docker-compose run --rm` does not remove depends_on services

I have a docker-compose.yml file like this:
version: "3.9"
services:
  db-test:
    image: mariadb:10.6
    profiles:
      - test
  be:
    build:
      context: .
      dockerfile: test.Dockerfile
    depends_on:
      - db-test
    volumes:
      - gradle-cache:/home/gradle/.gradle
    profiles:
      - test
# volumes
volumes:
  gradle-cache:
    driver: local
This is for testing the image inside a specific environment. It is a one-time task, so I use docker-compose run --profile test --rm be.
This runs perfectly fine.
My concern is that, after the run is over, the dependency service db-test is still running. The be service's container is automatically deleted when using the --rm option.
Is there a way to clean up everything, i.e. to delete the containers for both the be and db-test services after the run is complete, like docker-compose down does?
I think my answer is late for you, but it can be pretty useful for others on the Internet.
You run your command with the profile option: docker-compose run --profile test --rm be. So when you want to delete all containers related to that profile, you need to run docker-compose down with the profile option too: docker-compose --profile test down.
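Put together, a minimal sketch of the one-shot workflow (service and profile names are taken from the question; -v is only needed if you also want to drop the gradle-cache volume):
docker-compose run --profile test --rm be   # one-off test run; --rm removes the be container
docker-compose --profile test down          # stops and removes db-test and the network
# docker-compose --profile test down -v     # add -v to also remove the gradle-cache volume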

docker-compose build --parallel - run command before build specific image

My docker-compose.yml file is below.
I want to build those images in parallel, so I am running the command docker-compose build --parallel.
BUT I want to run a command before the images of service2 & service3 are built, while service1 is building in parallel.
When that command finishes, those builds should join the parallel building process.
version: '3.4'
services:
  service1:
    image: "company/service1:${TAG}"
    build:
      context: ./folder/service1/
      dockerfile: Dockerfile
  service2:
    image: "company/service2:${TAG}"
    build:
      context: ./folder/service2/
      dockerfile: Dockerfile
  service3:
    image: "company/service3:${TAG}"
    build:
      context: ./folder/service3
      dockerfile: Dockerfile
Compose doesn't really have any sort of workflow handling like this, especially around building images. It's assumed that building an image only depends on the local source tree and nothing else. Compose also doesn't have any ability to run non-Docker commands or launch temporary containers as part of the up workflow.
The good news is that re-running a build is very quick if nothing has changed. So with the workflow you've described, you might separately build the first image, run the command, and then rebuild everything; rebuilding the first image will take almost no time and you won't produce a new image.
#!/bin/sh
# Build the one image that needs special handling
docker-compose build service1
# Run the command
the_command
# Rebuild everything in parallel (service1 will be a no-op)
docker-compose build --parallel
If you can run the preparatory step in a Dockerfile RUN command that might be easier to manage. If that needs software that isn't ordinarily part of your image, you could use a multi-stage build to do it in effectively a temporary image.
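For illustration, a minimal multi-stage sketch of that idea (the stage name, tool, and paths are hypothetical, not taken from the project above):
# throwaway stage: run the preparatory command with tools the final image doesn't need
FROM alpine:3.18 AS prep
RUN apk add --no-cache curl
RUN curl -fsSL -o /tmp/artifact https://example.com/artifact   # hypothetical preparatory step

# final stage: copy only the prepared output
FROM company/service1-base:latest                              # hypothetical base image
COPY --from=prep /tmp/artifact /app/artifact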

Docker: How to update your container when your code changes

I am trying to use Docker for local development. The problem is that when I make a change to my code, I have to run the following commands to see the updates locally:
docker-compose down
docker images # Copy the name of the image
docker rmi <IMAGE_NAME>
docker-compose up -d
That's quite a mouthful, and takes a while. (Possibly I could make it into a bash script, but do you think that is a good idea?)
My real question is: Is there a command that I can use (even manually each time) that will update the image & container? Or do I have to go through the entire workflow above every time I make a change in my code?
Just for reference, here is my Dockerfile and docker-compose.yml.
Dockerfile
FROM node:12.18.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 4000
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: web
    restart: always
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
Even though there are multiple good answers to this question, I think they missed the point, as the OP is asking about the local dev environment. The command I usually use in this situation is:
docker-compose up -d --build
If there aren't any errors in the Dockerfile, it should rebuild all the images before bringing up the stack. It can be put in a shell script if needed.
#!/bin/bash
sudo docker-compose up -d --build
If you need to tear down the whole stack, you can have another script:
#!/bin/bash
sudo docker-compose down -v
The -v flag removes all the volumes so you can have a fresh start.
NOTE: In some cases, sudo might not be needed to run the command.
When a Docker image is built, the artifacts are already copied into it, and no new change is reflected until you rebuild the image.
But
If it is only for local development, then you can leverage volume sharing to update the code inside the container at runtime. The idea is to share your app/repo directory on the host machine with /usr/src/app (as per your Dockerfile); with this approach your code (and new changes) will appear on both the host and in the running container.
Also, you will need to restart the server on every change, and for this you can run your app using nodemon (as it watches for changes in the code and restarts the server).
Changes required in docker-compose.yml:
services:
  web:
    ...
    container_name: web
    ...
    volumes:
      - /path/in/host/machine:/usr/src/app
    ...
    ...
    ports:
      - "3000:3000"
    depends_on:
      - mongo
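One way to wire in the restart-on-change part is to override the service command so it runs through nodemon; a sketch, assuming nodemon is available in the image and the entry point is src/index.js (both are assumptions, not from the original project):
services:
  web:
    ...
    command: npx nodemon --legacy-watch src/index.js   # hypothetical entry point; --legacy-watch polls, which is more reliable on mounted volumes
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules   # anonymous volume so the host mount doesn't hide the image's node_modules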
You may use Docker Swarm as an orchestration tool to apply rolling updates. Check Apply rolling updates to a service.
Basically you issue docker-compose up once (maybe via a shell script), and once you have your containers running you can create a Jenkinsfile or configure a CI/CD pipeline to pull the updated image and apply it to the running service with docker service update --image <NEW_IMAGE> <SERVICE>.
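For example, a sketch with hypothetical image and service names:
docker service update --image my_repo/web:new-tag web   # rolling update of the "web" service to the new image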

What is the difference between `docker-compose build` and `docker build`?

What is the difference between docker-compose build and docker build?
Suppose there is a docker-compose.yml file in a dockerized project's path:
docker-compose build
And
docker build
docker-compose can be considered a wrapper around the docker CLI (in fact it is another implementation in Python, as said in the comments) in order to save time and avoid 500-character-long command lines (and also to start multiple containers at the same time). It uses a file called docker-compose.yml in order to retrieve parameters.
You can find the reference for the docker-compose file format here.
So basically docker-compose build will read your docker-compose.yml, look for all services containing the build: statement and run a docker build for each one.
Each build: can specify a Dockerfile, a context and args to pass to docker.
To conclude with an example docker-compose.yml file:
version: '3.2'
services:
  database:
    image: mariadb
    restart: always
    volumes:
      - ./.data/sql:/var/lib/mysql
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    depends_on:
      - database
When calling docker-compose build, only the web target will need an image to be built. The docker build command would look like:
docker build -t web_myproject -f Dockerfile-alpine ./web
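As a further sketch, build args (the APP_VERSION name is hypothetical) can be declared under build: and map to --build-arg in the plain docker build call:
# docker-compose.yml
#   build:
#     context: ./web
#     dockerfile: Dockerfile-alpine
#     args:
#       APP_VERSION: "1.2.3"
# equivalent docker build invocation:
docker build -t web_myproject -f Dockerfile-alpine --build-arg APP_VERSION=1.2.3 ./web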
docker-compose build will build the services in the docker-compose.yml file.
https://docs.docker.com/compose/reference/build/
docker build will build the image defined by Dockerfile.
https://docs.docker.com/engine/reference/commandline/build/
Basically, docker-compose is a better way to use docker than just a docker command.
If the question here is whether the docker-compose build command will build a single zip-like bundle containing multiple images, which would otherwise have been built separately with the usual Dockerfile, then that thinking is wrong.
docker-compose build will build individual images, by going through each individual service entry in docker-compose.yml.
With the docker images command, we can see all the individual images being saved as well.
The real magic is docker-compose up.
It basically creates a network of interconnected containers that can talk to each other using the service/container name like a hostname.
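For instance, a quick way to see that name-based resolution, reusing the database/web services from the compose file above (this assumes ping is installed in the web image):
docker-compose exec web ping -c 1 database   # "database" resolves to the database container on the Compose network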
Adding to the first answer...
You can give the image name and container name under the service definition.
e.g. for the service called 'web' in the docker-compose example below, you can give the image name and container name explicitly, so that Docker does not have to use the defaults.
Otherwise the image name that Docker will use will be the concatenation of the folder (directory) name and the service name, e.g. myprojectdir_web.
So it is better to explicitly set the desired image name that will be generated when the build command is executed.
e.g.
image: mywebserviceImage
container_name: my-webServiceImage-Container
Example docker-compose.yml file:
version: '3.2'
services:
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    image: mywebserviceImage
    container_name: my-webServiceImage-Container
    depends_on:
      - database
A few additional words about the difference between docker build and docker-compose build.
Both have an option for building images using an existing image as a cache of layers:
with docker build, the option is --cache-from <image>
with docker-compose, there is a cache_from key in the build section.
Unfortunately, up until now, at this level, images made by one are not usable by the other as a cache of layers (the IDs are not compatible).
However, docker-compose v1.25.0 (2019-11-18) introduces an experimental feature, COMPOSE_DOCKER_CLI_BUILD, so that docker-compose uses the native docker builder (therefore, images made by docker build can be used as a cache of layers for docker-compose build).
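A minimal sketch of both caching options (image names are placeholders; pairing COMPOSE_DOCKER_CLI_BUILD with DOCKER_BUILDKIT is a common but optional addition):
# plain docker build side
docker build --cache-from my_repo/web:latest -t my_repo/web:new ./web
# docker-compose side: cache_from under build in docker-compose.yml
#   build:
#     context: ./web
#     cache_from:
#       - my_repo/web:latest
export COMPOSE_DOCKER_CLI_BUILD=1   # use the native docker builder (compose >= 1.25.0)
export DOCKER_BUILDKIT=1
docker-compose build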

Docker-compose run service if another service status is 0 (success)

I am pretty new to Docker and Docker compose.
I want to use docker compose to test my project and publish it if the tests pass. If the tests fail, it should not publish the app at all.
Here is my docker-compose.yml
version: '3'
services:
  mongodb:
    image: mongo
  test:
    build:
      context: .
      dockerfile: Dockerfile.tests
    links:
      - mongodb
  publish:
    build:
      context: .
      dockerfile: Dockerfile.publish
    ?? # I want to say here that publish step is dependent to test.
After that, in my testAndPublish.sh file, I would like to say:
docker-compose up
if [ $? = 0 ]; then # If all the services succeed
....
else
....
fi
So if the test or publish steps fail, I am not going to push it.
How can I build step-like processes in docker-compose?
Thanks.
I think you're trying to do everything with docker-compose, which is the wrong way around.
When it comes to CI (e.g. Travis or CircleCI) I always make my workflow as follows:
let's say you have a web node and a database node
In travis.yml or circle.yml, at the install step, I always put things like docker-compose run web npm install and others
at the test step I would put docker-compose run web npm test or something similar like docker-compose run web my-test-script.sh; that way you'll know that the tests run in the declared Docker environment, and if they fail this step fails and the whole test step in the CI fails, which is desired
at the deploy step I would run some deploy.sh script which builds the image from the Dockerfile (the one that web uses) and pushes it, for example, to Docker Hub.
This way your CI test routine still depends on the specific Docker environment, but the deploy push (which doesn't need Docker) is kept separate from the application, which makes it more convenient imho.
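Spelled out as plain commands, a sketch of those three steps (the script names and npm scripts are carried over from the bullets above):
docker-compose run web npm install      # install step: dependencies installed inside the Compose environment
docker-compose run web npm test         # test step: a non-zero exit code fails the CI job
./deploy.sh                             # deploy step: build the web image and push it, e.g. to Docker Hub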
