Docker compose force pull when build is specified

Is it possible to skip the build step even if it's specified? For example, with the following file, only the redis image is pulled after running the docker-compose pull command:
version: '3.4'
services:
  myservice:
    image: ${PRIVATE_REGISTRY-}my-service
    build:
      context: .
      dockerfile: MyService/Dockerfile
    ports: ["5555:80"]
  redis:
    image: redis
    restart: ${RESTART_POLICY-no}
If I remove the build: section from the file, then the other image is pulled as well.

No, generally this is not possible, as Docker wouldn't know what to do then. If you find yourself in a situation where your image doesn't have to be built every time, consider building it from the CLI only when it's needed. In general I would advise working with images and named versions. This version may be "latest", for example in CI. A suggested approach:
DEV: your approach is fine for frequently changing containers
STAGE: when moving from dev to stage, tag the images and give them a version. Only use named versions in higher environments
PROD: same as stage, possibly with a different version naming strategy
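For illustration, a minimal sketch of handling the two steps separately from the CLI (service names taken from the file above). Both docker-compose pull and docker-compose build accept a list of services, so you can pull only the registry-backed service and rebuild the local one only when needed:
docker-compose pull redis
docker-compose build myservice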

Related

Docker Compose - Is it possible to update all changed images with a single command?

I have a compose.yml like this one:
version: '3.7'
services:
  nginx:
    restart: unless-stopped
    image: ghcr.io/user/frontend:latest
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    restart: unless-stopped
    image: ghcr.io/user/backend:latest
    entrypoint: /home/app/web/wsgi-entrypoint.sh
    expose:
      - 8000
We have two images stored on GitHub: frontend and backend.
My goal is the following: when an image has been updated on the GitHub Docker registry, I'd like to automatically update the image on the server and launch the new one in place of the old one via docker-compose.
For example: I have a running compose made up of frontend and backend, but I have just pushed a new image: ghcr.io/user/frontend:latest.
Now, I want a single command which updates only the images that have changed (in this case ghcr.io/user/frontend:latest), so that when I reload the frontend webpage I see the changes.
My attempt is the following:
docker-compose up -d --build
But the system says:
compose-backend_1 is up-to-date
compose-nginx_1 is up-to-date
which is not true!
So, the working procedure I use is a bit manual:
docker pull ghcr.io/user/frontend:latest
I see in the console: Status: Downloaded newer image,
which is the proof that a new image has been downloaded.
Then, if I relaunch the same command the console displays: Status: Image is up to date for ghcr.io/user/frontend:latest
Finally:
docker-compose up -d --build
says: Recreating compose-nginx_1 ... done
I suppose the command docker-compose up -d --build ALONE does not look for new images and so does not update images that have changed.
So, is there a SINGLE specific command to fix this?
This should be achievable by running docker-compose pull, and then docker-compose up -d.
Or, shorter: docker-compose up -d --pull always
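Combined into one shell line (using the compose file from the question):
docker-compose pull && docker-compose up -d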
You can use variable substitution in many places in a docker-compose.yml file, in particular in image:. If you give every build a unique tag, then you can supply the tag as an environment variable, and it will work the way you describe.
Let's say the two images use the same tagging scheme (they don't necessarily need to). You could update the Compose file to say:
version: '3.8'
services:
  nginx:
    restart: unless-stopped
    image: ghcr.io/user/frontend:${TAG:-latest} # <--
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    restart: unless-stopped
    image: ghcr.io/user/backend:${TAG:-latest} # <--
Notice the $TAG reference at the end of the image: lines. If TAG isn't set in the environment, it will use latest, but if it is, it will use that exact build.
Now, if I run:
TAG=20221020 docker-compose up -d
For both containers, Compose will notice that they're running an older build, automatically pull the updated image from GitHub, and restart both containers against the newer image.
This brings the mild complication of your continuous-deployment system needing to know the current image tag. In exchange, though, you get the ability to roll back very easily: if today's build turns out to have a critical bug, you can run the exact same command with a different tag to redeploy yesterday's build. A setup like this is also necessary if you're considering migrating to Kubernetes, since Kubernetes depends on the text of the image: string changing to trigger a redeployment.
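For example, rolling back to the previous day's build (the tag value here is hypothetical) would just be:
TAG=20221019 docker-compose up -d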

Rename official postgres image

I am using the official Postgres 12 image, which I'm pulling inside the docker-compose.yml. Everything is working fine.
services:
  db:
    container_name: db
    image: postgres:12
    volumes:
      - ...
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=...
Now, when I run docker-compose up, the image shows up under the postgres:12 name.
My question is: is there a way to rename the image inside docker-compose.yml? I know there is a command for that, but I'd like everything to be inside the file if possible.
Thanks!
In a Compose file, there's no direct way to run docker tag or any other command that modifies some existing resource.
If you're trying to optionally point Compose at a local mirror of Docker Hub, you can take advantage of knowing the default repository is docker.io and use an optional environment variable:
image: ${REGISTRY:-docker.io}/postgres:latest
REGISTRY=docker-mirror.example.com docker-compose up
Another possible approach is to build a trivial image that doesn't actually modify the base postgres image at all:
build:
  context: .
  dockerfile: Dockerfile.postgres
image: local-images.example.com/my-project/postgres

# Dockerfile.postgres
FROM postgres:latest
# End of file
There's not really any benefit to doing this beyond the cosmetic appearance in the docker images output. Making it clear that you're using a standard Docker Hub image could even be slightly preferable: its behavior is better understood than something you built locally, and if you have multiple projects running at once they can more obviously share the same single image.
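For completeness, the out-of-band command alluded to above is docker tag; a minimal sketch (the new name is made up):
docker pull postgres:12
docker tag postgres:12 my-project/postgres:12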

How to have a URL as image name in a docker-compose file

I'm trying to write a docker-compose file that will build and push a versioned (1.0, 1.1, ...) and latest build of my image to my local v2 Docker registry. However, when I run docker-compose build, I get the following error:
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
I found a lot of people complaining about this error for many different reasons. In my case it has nothing to do with permissions or with whether or not the Docker service is running; I narrowed it down to my image name having a URL in it (the URL of my local registry). I know that because if I name my image normally (like '/app:latest'), the command runs fine. So how can I have a URL as the image name?
Here is what I'm trying to do (docker-compose.yaml):
version: "3.8"
x-marvin-backend: &default-marvin-backend
container_name: marvin_backend
build: ./marvin-api
image: "http://my_registry_url:5000/marvin/backend:latest"
ports:
- "3000:3000"
networks:
- backend
x-marvin-frontend: &default-marvin-frontend
container_name: marvin_frontend
image: http://my_registry_url:5000/marvin/frontend:latest
build:
context: ./marvin-front
args:
- REACT_APP_SERVICES_HOST=http://marvin_backend:3000/
ports:
- "80:80"
networks:
- backend
depends_on:
- backend
services:
backend: *default-marvin-backend
backend_versioned:
<< : *default-marvin-backend
image: http://my_registry_url:5000/marvin/backend:1.0
frontend: *default-marvin-frontend
frontend_versioned:
<< : *default-marvin-frontend
image: http://my_registry_url:5000/marvin/frontend:1.0
networks:
backend:
I'm new to Docker in general. My main goal here is to have a simple, preferably one-command workflow (e.g. docker-compose build) that will build and tag both my front-end and back-end images, so that I can just execute docker-compose push to push those newly created images to my registry running on AWS. I also want to be able to overwrite the latest version of those images in the registry while adding a versioned image for backup purposes, in case I want to revisit any of those versions in the future.
Then in the AWS EC2 machine I have another docker-compose.yaml file that just fetches the latest versions of both images and run their containers.
So to summarize, I would develop the application on my local machine, then add the new version manually to the versioned services in the local docker-compose.yaml file, then run docker-compose build followed by docker-compose push; then ssh into my AWS machine and run docker-compose up to fetch the latest and newly updated images and run them.
This could later evolve into a CI/CD pipeline, but right now I'm taking baby steps and trying to get my image name to have a URL in it.
Thank you.
Edit
I tried using a .env file with REGISTRY=http://my_registry_url:5000/marvin and then using image: "${REGISTRY}/frontend:latest" or image: "$${REGISTRY}/frontend:latest", but that didn't work either.
Just remove the http:// part from your images.
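For example, the image references from the question would become (registry host as in the question):
image: my_registry_url:5000/marvin/backend:latest
image: my_registry_url:5000/marvin/frontend:latest
An image reference is registry-host[:port]/path:tag, not a URL, so no http:// scheme is allowed in it.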

What is the difference between `docker-compose build` and `docker build`?

What is the difference between docker-compose build and docker build?
Suppose there is a docker-compose.yml file in a dockerized project's path:
docker-compose build
And
docker build
docker-compose can be considered a wrapper around the docker CLI (in fact it is a separate implementation in Python, as noted in the comments) that exists to save time and avoid 500-character-long command lines (and also to start multiple containers at the same time). It uses a file called docker-compose.yml to retrieve its parameters.
You can find the reference for the docker-compose file format here.
So basically, docker-compose build will read your docker-compose.yml, look for all services containing the build: statement, and run a docker build for each one.
Each build: can specify a Dockerfile, a context, and args to pass to docker build.
To conclude, an example docker-compose.yml file:
version: '3.2'
services:
  database:
    image: mariadb
    restart: always
    volumes:
      - ./.data/sql:/var/lib/mysql
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    depends_on:
      - database
When calling docker-compose build, only the web service needs an image to be built. The equivalent docker build command (assuming the project directory is called myproject) would look like:
docker build -t myproject_web -f web/Dockerfile-alpine ./web
docker-compose build will build the services in the docker-compose.yml file.
https://docs.docker.com/compose/reference/build/
docker build will build the image defined by a Dockerfile.
https://docs.docker.com/engine/reference/commandline/build/
Basically, docker-compose is a more convenient way to use Docker than a plain docker command.
If you expect docker-compose build to produce some kind of bundle containing multiple images, which would otherwise have been built separately with the usual Dockerfiles, then that thinking is wrong.
docker-compose build builds the individual images by going through each service entry in docker-compose.yml.
With the docker images command, you can see all the individual images being saved as well.
The real magic is docker-compose up.
It basically creates a network of interconnected containers that can talk to each other using the service name, similar to a hostname.
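For example, with the compose file above, the web container can reach the database service by name (a quick check, assuming ping is available inside the image):
docker-compose exec web ping database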
Adding to the first answer...
You can give the image name and container name under the service definition.
E.g. for the service called web in the docker-compose example below, you can give the image name and container name explicitly, so that Docker does not have to use the defaults.
Otherwise the image name that Docker will use is the concatenation of the folder (directory) name and the service name, e.g. myprojectdir_web.
So it is better to explicitly put the desired image name that will be generated when the docker build command is executed (note that image repository names must be lowercase).
e.g.
image: mywebserviceimage
container_name: my-webServiceImage-Container
example docker-compose.yml file:
version: '3.2'
services:
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    image: mywebserviceimage
    container_name: my-webServiceImage-Container
    depends_on:
      - database
A few additional words about the difference between docker build and docker-compose build.
Both have an option for building images using an existing image as a cache of layers.
With docker build, the option is --cache-from <image>.
With docker-compose, there is a cache_from key in the build section.
Unfortunately, at this level, images made by one are not compatible with the other as a layer cache (the image IDs are not compatible).
However, docker-compose v1.25.0 (2019-11-18) introduced an experimental feature, COMPOSE_DOCKER_CLI_BUILD, that makes docker-compose use the native docker builder (so images made by docker build can be used as a layer cache for docker-compose build).
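A minimal sketch of the two options (the image name myapp is made up):
docker build --cache-from myapp:latest -t myapp:latest .
and the Compose-file equivalent:
build:
  context: .
  cache_from:
    - myapp:latest
With v1.25.0 or newer, enabling the native builder is a single environment variable:
COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build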

How to make docker-compose build image once and repeatedly use it to run containers?

I am trying to run multiple containers built from the same image. My problem is that when I define my docker-compose.yml this way:
version: '3'
services:
  crossbar-target:
    container_name: "app-crossbar-target"
    build:
      context: ../../crossbar
  crossbar-source-domain1:
    container_name: "app-crossbar-source-domain1"
    build:
      context: ../../crossbar
  crossbar-source-domain2:
    container_name: "app-crossbar-source-domain2"
    build:
      context: ../../crossbar
I get the three containers I want, but I get three images as well.
In case I have hundreds of containers, I don't like the idea of having hundreds of images as well; that would make my local image repository completely unusable and unreadable.
Thinking about it and searching for a solution, I tried defining the crossbar image by itself and then reusing it:
version: '3'
services:
  app-crossbar:
    build:
      context: ../../crossbar
  crossbar-target:
    container_name: "app-crossbar-target"
    image: project_name/app-crossbar
  crossbar-source-domain1:
    container_name: "app-crossbar-source-domain1"
    image: project_name/app-crossbar
  crossbar-source-domain2:
    container_name: "app-crossbar-source-domain2"
    image: project_name/app-crossbar
Now I have only one image for all the containers, but I get an app-crossbar container as well, which doesn't suit my needs either.
Is there any way to make docker-compose manage my images efficiently, building only the images I need and then using them to run any number of containers? Or do I have to manage my images separately in some other routine? I like calling:
docker-compose build
to rebuild all needed images, and I like the idea that all my Docker logic is held in one place.
Build the crossbar image first and tag it properly. Then you can spawn multiple services using that image. The latest tag here is only an example; I would use a unique version number.
version: '3'
services:
  app-crossbar:
    image: crossbar:latest
  crossbar-target:
    image: crossbar:latest
  crossbar-source-domain1:
    image: crossbar:latest
  crossbar-source-domain2:
    image: crossbar:latest
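The manual build-and-tag step would then look something like this (reusing the build context path from the question):
docker build -t crossbar:latest ../../crossbar
docker-compose up -d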
A huge advantage of pre-building images (and possibly storing them in a Docker image repo) is that you can revert to older versions when needed. In addition, you can run tests with the pre-built images if needed.
It's pretty standard to have a CI setup where, for example, Jenkins builds a new image of your crossbar project (when you push a branch), properly tags it and pushes the image to your private (or public) image repo.
I'd say that even if you could build the image with Compose, that is still not the right way to do it for prod. It might be acceptable if you extend your image to add a layer with config files. By pre-building images, your pipeline is also compatible with Swarm and other clustered environments.
If this is overkill for your needs, just build locally (and TAG the images properly). The tagging part is important so you don't have to battle the Docker cache and you can still find older versions if a revert is needed.
(In a dev environment you can be a bit more careless and just map directories and files from the host into the containers. No one wants to rebuild for every single change.)
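A sketch of that dev-mode shortcut (the paths here are hypothetical):
services:
  crossbar-target:
    image: crossbar:latest
    volumes:
      # map host sources/config into the container instead of rebuilding
      - ../../crossbar/config:/app/config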
