Docker-compose: why does the number appended to services not increment?

Please help me understand the number that is suffixed to services spun up by docker-compose.
From docker-compose image named: "prefix_%s_1" instead of "%s" and Why docker container name has an random number at the end?, I formed the understanding that the number is an integer that increments with each instance of that service.
I think that understanding is incorrect because, in the following experiment, I try to spin up multiple instances of a detached service, yet the suffixed number is always "_1":
# Dockerfile
FROM alpine:3.15 as base
# docker-compose.yml
version: '3.6'
services:
  dummy:
    build:
      context: .
$ docker-compose up --build --detach
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Building dummy
[+] Building 0.3s (5/5) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 43B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.15 0.0s
=> CACHED [1/1] FROM docker.io/library/alpine:3.15 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:35e223a20dbce8c0b81d3257f8cad0c7b2b35d8e18eadfec7eeb7de86a472e7b 0.0s
=> => naming to docker.io/library/docker-compose_dummy 0.0s
Successfully built 35e223a20dbce8c0b81d3257f8cad0c7b2b35d8e18eadfec7eeb7de86a472e7b
Recreating docker-compose_dummy_1 ... done
$ docker-compose up --detach
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Starting docker-compose_dummy_1 ... done
$ docker-compose up --detach
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Starting docker-compose_dummy_1 ... done
$
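For what it's worth, a later question on this page notes that the suffixed number does increment when several replicas of one service are requested via --scale. A minimal sketch, assuming the dummy service above (output elided):
# Request three replicas of the same service; Compose then numbers the
# containers docker-compose_dummy_1 through _3 instead of reusing _1.
docker-compose up --detach --scale dummy=3
docker-compose ps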

Related

Why does Docker check node versions if they are already installed

I have built my custom frontend image:
FROM node:16-alpine3.16
WORKDIR /usr/src/app
COPY . .
EXPOSE 4200
CMD ["npm", "run", "start"]
Then I run it with docker compose:
docker compose up frontend-app --build
The image runs and works as expected.
Then I run it again without the --build flag:
docker compose up frontend-app
The image runs and works as expected.
But when I disable Wi-Fi (internet access) and run the previous command again with the --build flag, it shows me an error:
=> ERROR [internal] load metadata for docker.io/library/node:14.15.5-alpine3.10 0.1s
------
> [internal] load metadata for docker.io/library/node:14.15.5-alpine3.10:
------
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to do request: Head "https://registry-1.docker.io/v2/library/node/manifests/14.15.5-alpine3.10": Failed to lookup host: registry-1.docker.io
What is the point of always checking for node updates if the image is already downloaded, and was even pulled separately from the Docker image?
The next command shows that this image already exists locally:
docker pull node:14.15.5-alpine3.10
14.15.5-alpine3.10: Pulling from library/node
b038bcb63e9c: Already exists
2ad96160a6c4: Already exists
694a34677dcf: Already exists
253b9b23d1bc: Already exists
Digest: sha256:fd87531f9bf187273c77ad3ddd5067110ef983f998fc2ea1b9932950df78bd8c
Status: Downloaded newer image for node:14.15.5-alpine3.10
docker.io/library/node:14.15.5-alpine3.10
With --build you are building the image again; why use it if you already have the image?
Just use docker-compose down and docker-compose up.
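A related workaround appears in the last question on this page: while still online, pull the base image and retag it under a purely local name, so the FROM line no longer points at the registry. A sketch; the local-node tag is a hypothetical name:
# While online: pull the base image and give it a local-only tag.
docker pull node:14.15.5-alpine3.10
docker tag node:14.15.5-alpine3.10 local-node:14.15.5-alpine3.10
# Then reference the local tag in the Dockerfile:
# FROM local-node:14.15.5-alpine3.10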

Docker compose stops multiple services

I hope I didn't miss anything simple from the manual.
The structure is:
/home/user
  /foo1/bar1/docker/
    Dockerfile
    docker-compose.yml
  /foo2/bar2/docker/
    Dockerfile
    docker-compose.yml
docker-compose.yml
version: '3.9'
services:
  foo1-bar1:
    build:
      context: .
      args:
        DOCKER_SERVER_ROOT_DIR: ${DOCKER_SERVER_ROOT_DIR}
      dockerfile: Dockerfile
    image: foo1-bar1:v1
    container_name: foo1-bar1-v1
The same is set up for foo2-bar2.
I run both of them successfully:
cd /foo1/bar1/docker/
docker-compose up -d
[+] Running 1/1
⠿ Container foo1-bar1-v1 Started
cd /foo2/bar2/docker/
docker-compose up -d
[+] Running 1/1
⠿ Container foo2-bar2-v1 Started
The question is, why does it stop both of them when I try to stop only one? The service names, container names, and image names are all different...
user@vm:~/foo1/bar1/docker$ docker-compose stop
[+] Running 2/2
⠿ Container foo1-bar1-v1 Stopped
⠿ Container foo2-bar2-v2 Stopped
docker-compose has the concept of projects. Run docker-compose --help and you will see:
--project-directory string Specify an alternate working directory
(default: the path of the, first specified, Compose file)
-p, --project-name string Project name
So in your case, both your services belong to the same project named docker.
You can actually run docker-compose -p docker ps and you will see both your services.
You can also override this by specifying your own project name independent of the directory name.
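For example, a sketch with arbitrary project names:
cd ~/foo1/bar1/docker
docker-compose -p foo1 up -d   # project name is now foo1, not docker
cd ~/foo2/bar2/docker
docker-compose -p foo2 up -d   # a separate project with its own default network
docker-compose -p foo2 stop    # stops only the containers of project foo2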
My version of docker-compose (Docker Compose version v2.10.2 on macOS) does warn me that there are orphan containers in this project when I replicate your setup. It also doesn't automatically stop the "orphan" services, and it gives a warning that the network could not be removed either.
Another interesting fact: both services run on the same network (docker_default) only because the project name (the folder name) is the same.
I hope this explains it.
You have to specify the service to stop; otherwise it will stop all services:
docker compose stop [OPTIONS] [SERVICE...]
Here: docker-compose stop foo1-bar1

Docker compose with Dockerfile creates multiple images instead of one

I have a docker-compose.yml and Dockerfile in the same folder. Running docker compose build should result in one image and one container, but somehow I'm left with two images and two containers. The same docker-compose.yml and Dockerfile on my desktop, however, result in one image. What is happening here?
docker-compose.yml
version: '3.3'
services:
  nginx-proxy:
    build: .
    restart: always
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /etc/nginx/certs/domain.nl.crt:/etc/letsencrypt/live/domain.nl/cert.pem:ro
      - /etc/nginx/certs/domain.nl.key:/etc/letsencrypt/live/domain.nl/privkey.pem:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
Dockerfile:
FROM nginxproxy/nginx-proxy
EXPOSE 80
EXPOSE 443
The output of docker compose build looks completely different on the server and on my desktop too.
Running docker compose build on the Linux server:
Sending build context to Docker daemon 404B
Step 1/3 : FROM nginxproxy/nginx-proxy
latest: Pulling from nginxproxy/nginx-proxy
f7a1c6dad281: Pull complete
4d3e1d15534c: Pull complete
9ebb164bd1d8: Pull complete
59baa8b00c3c: Pull complete
a41ae70ab6b4: Pull complete
e3908122b958: Pull complete
3016ffcb703f: Pull complete
d0b58d19b229: Pull complete
e75e1b46ae51: Pull complete
a1b8f07fa83d: Pull complete
b1ff9eda0cc4: Pull complete
d334f8d44841: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:5e0be3b1bb035301c5bb4edbe0dee5bc0f133d26866e3165f1912acb126e15d4
Status: Downloaded newer image for nginxproxy/nginx-proxy:latest
---> 65d9e0769695
Step 2/3 : EXPOSE 80
---> Running in 4bdc00f75332
---> 0ca795ce85fb
Step 3/3 : EXPOSE 443
---> Running in 3530eba2d3cf
---> 9440ccab1790
Successfully built 9440ccab1790
Successfully tagged nginx-proxy_nginx-proxy:latest
Result:
docker image ls -a
REPOSITORY                TAG       IMAGE ID       CREATED          SIZE
<none>                    <none>    0ca795ce85fb   46 minutes ago   156MB
nginx-proxy_nginx-proxy   latest    9440ccab1790   46 minutes ago   156MB
nginxproxy/nginx-proxy    latest    65d9e0769695   4 days ago       156MB
docker container ls -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS    PORTS   NAMES
3530eba2d3cf   0ca795ce85fb   "/app/docker-entrypo…"   25 seconds ago   Created           affectionate_vaughan
4bdc00f75332   65d9e0769695   "/app/docker-entrypo…"   25 seconds ago   Created           fervent_varahamihira
Running docker compose build on the desktop:
[+] Building 2.4s (5/5) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 31B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/nginxproxy/nginx-proxy:latest 2.1s
=> CACHED [1/1] FROM docker.io/nginxproxy/nginx-proxy@sha256:5e0be3b1bb035301c5bb4edbe0dee5bc0f133d26866e3165f1912acb126e15d4 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:f66ed6ec69a319ce3a15b1f1948720bdd8174d13299513cc162a812616874793 0.0s
=> => naming to docker.io/library/nginx-proxy_nginx-proxy
Result:
docker image ls -a
REPOSITORY                TAG      IMAGE ID       CREATED      SIZE
nginx-proxy_nginx-proxy   latest   f66ed6ec69a3   4 days ago   156MB
docker container ls -a
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
You're looking at two different build tools. The classic docker build performs each step using a container that gets committed into a dangling image. For some of those steps, the container isn't even run, it's just created. These are visible in the container and image listings. Deleting them may delete your build cache, which will force a rebuild of the entire image; and while they report the size of all their layers, those layers are shared with the final image, so deleting the dangling images often won't save much space (maybe a few kB for some JSON metadata). Because of that, people would leave them around.
The other build is using BuildKit, which runs directly on containerd and runc, so you don't see the build artifacts in the docker container and image lists. This is the preferred builder and is enabled by default on newer versions of Docker.
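If you want to compare the two builders directly, the DOCKER_BUILDKIT environment variable selects between them; a sketch using the compose-built image name from above:
# Classic builder: reproduces the dangling intermediate images seen on the server.
DOCKER_BUILDKIT=0 docker build -t nginx-proxy_nginx-proxy .
# BuildKit: no intermediate containers or images show up in the listings.
DOCKER_BUILDKIT=1 docker build -t nginx-proxy_nginx-proxy .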

How to know the names of all containers spun up by `docker-compose up`?

I'm tasked with creating a utility that spins up docker containers -- by calling docker-compose up --detach -- then checks the exit-codes of all those containers.
I'm starting by doing manually what I think the utility will do, and using very simple Dockerfile and docker-compose.yml examples:
My Dockerfile and docker-compose.yml are as follows:
# Dockerfile
FROM alpine:3.15 as base
# docker-compose.yml
version: '3.6'
services:
  dummy:
    build:
      context: .
    entrypoint: ["sh", "-c", "exit 42"]
$ docker-compose up --build --detach
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Building dummy
[+] Building 0.3s (5/5) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 43B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.15 0.0s
=> CACHED [1/1] FROM docker.io/library/alpine:3.15 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:35e223a20dbce8c0b81d3257f8cad0c7b2b35d8e18eadfec7eeb7de86a472e7b 0.0s
=> => naming to docker.io/library/docker-compose_dummy 0.0s
Successfully built 35e223a20dbce8c0b81d3257f8cad0c7b2b35d8e18eadfec7eeb7de86a472e7b
Starting docker-compose_dummy_1 ... done
$
$ docker inspect docker-compose_dummy_1 --format='{{.State.ExitCode}}'
42
$
Question: is there a way to know the names of all the containers spun up by docker-compose up, as would seem to be required for the subsequent docker inspect?
Note: it is within my utility's ability to set environment variables, but it cannot create or modify a .env or the docker-compose.yml.
From recent reading (docker-compose image named: "prefix_%s_1" instead of "%s"), I understand that the "prefix" to the service is controllable with --project-name and defaults to basename $PWD.
And from Docker-compose: why does the number appended to services not increment?, I understand that the suffixed number increments with --scale.
But I'm having a hard time putting this all together: given an immutable docker-compose.yml, Dockerfile, and .env, is there a way I can determine the names of all containers spun up by docker-compose up --detach, which I can then use with docker inspect ... --format='{{.State.ExitCode}}' to determine each one's exit code?
(I'm most unsure about that suffixed integer. I don't think my utility will be using the --scale argument, but I'm not sure whether there are other reasons the suffixed number could be anything other than "_1", so I'm dubious about assuming that the suffix will be "_1".)
Have you already tried this?
docker-compose ps -a
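Building on that, a sketch of the whole check, assuming your Compose version's ps supports -q (print only container IDs) alongside -a:
# Iterate over every container in the project, including exited ones,
# and print each one's name and exit code.
for id in $(docker-compose ps -q -a); do
  docker inspect "$id" --format='{{.Name}} exited with code {{.State.ExitCode}}'
done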

How to disable loading metadata while executing Docker build?

Windows 10, DockerDesktop v3.5.2, Docker v20.10.7
Whenever my computer is disconnected from the Internet
the command docker build -t my_image . produces output like the following.
The Dockerfile contains the line FROM some_public_image:123
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 187B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/some_public_image:123 0.0s
------
> [internal] load metadata for docker.io/library/some_public_image:123:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition:
failed to do request: Head https://registry-1.docker.io/v2/library/some_public_image/manifests/123:
dial tcp: lookup registry-1.docker.io on 192.168.65.5:53: no such host
Sometimes it causes a build failure: I'm not sure, but I guess this happens when there was no network availability between the Docker daemon's start and the execution of docker build.
I suppose it tries to sync the pulled image with the repository version (i.e. check the image integrity), but I would like to control this network activity and turn this "load metadata" step off (saying I'm sure that all pulled images are fine), even when the connection is stable.
How to disable loading metadata while executing docker build command?
I have this problem too, and via Google I found other people asking about it with no good solution.
My workaround:
When I'm still on a good Internet connection, I docker pull some_public_image:123.
Then I docker tag some_public_image:123 this_is_stupid_but_i_retagged_some_public_image.
In my Dockerfile I use FROM this_is_stupid_but_i_retagged_some_public_image.
I wouldn't merge this hack to main, but it let me be productive on a long trip.
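Another workaround, consistent with the classic-vs-BuildKit answer earlier on this page: disable BuildKit for that one build, since the classic builder resolves FROM against the local image store (unless --pull is given) and performs no "load metadata" request. A sketch, assuming the base image is already pulled:
# Classic builder: skips the registry HEAD request and uses the local base image.
DOCKER_BUILDKIT=0 docker build -t my_image .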
