I'm trying to pass docker-compose secrets to a Dockerfile, a feature that has been supported since docker-compose v2.5.0. For some odd reason, the secret I'm passing isn't recognized.
I loosely followed the example in How to use file from home directory in docker compose secret?
Here are the files in the directory I'm testing it out in:
.
├── docker-compose.working.yml
├── docker-compose.yml
├── Dockerfile
└── secret
Their contents:
secret
cool
docker-compose.yml
services:
  notworking:
    build: .
    secrets:
      - mysecret

secrets:
  mysecret:
    file: ./secret
Dockerfile
FROM busybox
RUN --mount=type=secret,required=true,id=mysecret cat /run/secrets/mysecret
Running the command docker-compose up yields an error about not being able to find the mysecret secret I defined.
Sending build context to Docker daemon 369B
STEP 1/6: FROM busybox
Resolving %!q(<nil>) to docker.io (enforced by caller)
Trying to pull docker.io/library/busybox:latest...
Getting image source signatures
Copying blob sha256:f5b7ce95afea5d39690afc4c206ee1bf3e3e956dcc8d1ccd05c6613a39c4e4f8
Copying config sha256:ff4a8eb070e12018233797e865841d877a7835c4c6d5cfc52e5481995da6b2f7
Writing manifest to image destination
Storing signatures
STEP 2/6: RUN --mount=type=secret,required=true,id=mysecret cat /run/secrets/mysecret
1 error occurred:
* Status: building at STEP "RUN --mount=type=secret,required=true,id=mysecret cat /run/secrets/mysecret": resolving mountpoints for container "b84f93ec384894b22ab1fba365f2d8a206e686882a19f6a3781a129a14fcb969": secret required but no secret with id mysecret found
, Code: 1
What's odd though is that my other contrived docker-compose.working.yml just works™, even though it doesn't point to a local Dockerfile.
docker-compose.working.yml
services:
  working:
    image: busybox
    command: cat /run/secrets/mysecret
    secrets:
      - mysecret

secrets:
  mysecret:
    file: ./secret
When I run docker-compose -f docker-compose.working.yml up, I get what I expect:
[+] Running 1/0
⠿ Container webster-parser-working-1 Created 0.0s
Attaching to webster-parser-working-1
webster-parser-working-1 | cool
webster-parser-working-1 exited with code 0
Some extra info:
$ docker version
Docker version 20.10.19, build d85ef84533
$ docker-compose --version
Docker Compose version 2.12.0
FYI, I'm also using Podman under the hood, though I doubt it's the cause behind why it's not working.
Does anyone know why it isn't working?
I've gotten this working with slight changes to your docker compose:
version: '3.8'

services:
  worksnow:
    build:
      context: .
      secrets:
        - mysecret
    entrypoint: cat /run/secrets/mysecret
    secrets:
      - mysecret

secrets:
  mysecret:
    file: ./secret
$ docker compose up
[+] Running 1/1
⠿ Container docker-compose-secrets-worksnow-1 Recreated 0.1s
Attaching to docker-compose-secrets-worksnow-1
docker-compose-secrets-worksnow-1 | cool
docker-compose-secrets-worksnow-1 exited with code 0
It seems like the trouble is that the secret is needed during the build in order for Docker to successfully interpret the RUN statement. Once you actually run the container, of course, it also needs the secret to be available then in order to access it.
RUN is a container build step, so (confusingly) it's not going to be executed when the container is actually run. That's why I needed to add an entrypoint to get the output to show up.
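As an aside (my addition, not part of the original answer), the same build-time secret can be passed without Compose at all. A minimal sketch, assuming a Docker version whose BuildKit frontend supports RUN --mount:

DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=./secret .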
In case you're wondering if including the secrets in the build step is somehow storing the secret in the image, it's not. We can test this using Google's container-diff.
$ container-diff diff --type=file daemon://busybox daemon://docker-compose-worksnow
-----File-----
These entries have been added to busybox:
FILE          SIZE
/proc         0
/run          0
/run/secrets  0
/sys          0
These entries have been deleted from busybox: None
These entries have been changed between busybox and docker-compose-worksnow: None
Related
I am trying to determine why the CloudFormation build of my application fails when it tries to create resources for BackgroundjobsService (create failed in CloudFormation). The main differences from the other services I have built are that it exposes no ports and uses an ubuntu image instead of a php-apache image.
Dockerfile (I made it super simple; it basically does nothing):
# Pulling Ubuntu image
FROM ubuntu:20.04
docker-compose.yml
services:
  background_jobs:
    image: 000.dkr.ecr.us-east-1.amazonaws.com/company/job-scheduler
    restart: always
    env_file: ../.env.${ENV}
    build:
      context: "."
How I deploy (I verified the .env files exist in the parent directory of job-scheduler):
cd job-scheduler
ENV=dev docker --context default compose build
docker push 000.dkr.ecr.us-east-1.amazonaws.com/company/job-scheduler:latest
ENV=dev docker --context tcetra-dev compose up
I don't know how to find any sort of error logs, but the task definition gets created and all my env vars are in there.
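One way to dig up the error (a hedged sketch; the cluster name job-scheduler is an assumption, and the task ARN placeholder must be filled in from the first command's output) is to query the stopped task's failure reason with the AWS CLI:

aws ecs list-tasks --cluster job-scheduler --desired-status STOPPED
aws ecs describe-tasks --cluster job-scheduler --tasks <task-arn> --query 'tasks[].stoppedReason'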
I'm having trouble demonstrating that data I generate on a shared volume is persistent, and I can't figure out why. I have a very simple docker-compose file:
version: "3.9"
# Define network
networks:
sorcernet:
name: sorcer_net
# Define services
services:
preclean:
container_name: cleaner
build:
context: .
dockerfile: DEESfile
image: dees
networks:
- sorcernet
volumes:
- pgdata:/usr/share/appdata
#command: python run dees.py
process:
container_name: processor
build:
context: .
dockerfile: OASISfile
image: oasis
networks:
- sorcernet
volumes:
- pgdata:/usr/share/appdata
volumes:
pgdata:
name: pgdata
Running the docker-compose file to keep the containers running in the background:
vscode ➜ /com.docker.devenvironments.code $ docker compose up -d
[+] Running 4/4
⠿ Network sorcer_net Created
⠿ Volume "pgdata" Created
⠿ Container processor Started
⠿ Container cleaner Started
Both images are built:
vscode ➜ /com.docker.devenvironments.code $ docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
oasis        latest   e2399b9954c8   9 seconds ago    1.09GB
dees         latest   af09040befd5   31 seconds ago   1.08GB
and the volume shows up as expected:
vscode ➜ /com.docker.devenvironments.code $ docker volume ls
DRIVER    VOLUME NAME
local     pgdata
Running the docker container, I navigate to the volume folder. There's nothing in the folder -- this is expected.
vscode ➜ /com.docker.devenvironments.code $ docker run -it oasis
[root@049dac037802 opt]# cd /usr/share/appdata/
[root@049dac037802 appdata]# ls
[root@049dac037802 appdata]#
Since there's nothing in the folder, I create a file called "dog.txt" and recheck the folder contents. The file is there. I exit the container.
[root@049dac037802 appdata]# touch dog.txt
[root@049dac037802 appdata]# ls
dog.txt
[root@049dac037802 appdata]# exit
exit
To check the persistence of the data, I re-run the container, but the file I wrote is no longer there.
vscode ➜ /com.docker.devenvironments.code $ docker run -it oasis
[root@1787d76a54b9 opt]# cd /usr/share/appdata/
[root@1787d76a54b9 appdata]# ls
[root@1787d76a54b9 appdata]#
What gives? I've tried defining the volume as persistent, and I know each of the images has the folder /usr/share/appdata.
If you want to check the persistence of the data in the containers defined in your docker-compose file, the --volumes-from flag is the way to go.
When you run
docker run -it oasis
This newly created container uses the same image, but it doesn't know anything about the volumes defined in the compose file.
In order to link the volume to the new container run this
docker run -it --volumes-from $CONTAINER_NAME_CREATED_FROM_COMPOSE oasis
Now this container shares the volume pgdata.
You can go ahead and create files at /usr/share/appdata and validate their persistence.
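As an alternative check (my addition, not from the answer above): since the compose file gives the volume the fixed name pgdata, you can also mount it by name in a fresh container. The file name cat.txt below is just an example:

docker run -it -v pgdata:/usr/share/appdata oasis
# inside the container: touch /usr/share/appdata/cat.txt, then exit;
# re-running the same docker run command, ls /usr/share/appdata should still show cat.txt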
This question builds off of the question asked here: How to prevent docker-compose building the same image multiple times?
In version 1 of docker-compose, if you have multiple services that depend on the same Dockerfile, you can prevent the image from being built twice by specifying the build key once and referring to the image in dependent services:
version: '2'

services:
  abc:
    image: myimage
    command: abc
    build:
      context: .
      dockerfile: Dockerfile

  xyz:
    image: myimage
    depends_on:
      - abc
    command: xyz
The above code runs properly after disabling version 2 of docker-compose:
docker-compose disable-v2
docker-compose up
However, if you run it with docker-compose v2.3.3, it gives the following error:
[+] Running 0/2
⠿ abc Error 1.3s
⠿ xyz Error 1.3s
Error response from daemon: pull access denied for myimage, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
What is the proper way to have multiple services use one dockerfile in docker-compose version 2?
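One possible workaround (a sketch based on my own assumption, not something confirmed in this thread) is to tell Compose v2 never to pull the shared image, so the dependent service falls back to the image the first service just built:

services:
  abc:
    image: myimage
    command: abc
    build:
      context: .
      dockerfile: Dockerfile
  xyz:
    image: myimage
    pull_policy: never   # assumption: reuse the image built by abc instead of pulling
    depends_on:
      - abc
    command: xyz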
Here is a simplified version of my docker-compose.yml (it's the volume in buggy-service that does not behave as I expect):
version: '3.4'

services:
  local-db:
    image: postgres:9.6
    environment:
      - DB_NAME=${DB_NAME}
      # other env vars (not important)
    ports:
      - 5432:5432
    volumes:
      - ~/.docker-volumes/${DB_NAME}/postgresql/data:/var/lib/postgresql/data
      - postgresql:/docker-entrypoint-initdb.d

  buggy-service:
    build:
      context: .
      dockerfile: Dockerfile.test
      target: buggy-image
      args:
        # bunch of args (not important)
    volumes:
      - /Users/me/temp:/temp

volumes:
  postgresql:
    driver_opts:
      type: none
      device: /Users/me/postgresql
      o: bind
If I do docker-compose -f docker-compose.yml up -d local-db, a container for it starts up automatically and I find that /Users/me/postgresql on the host machine (Mac OSX) binds correctly to /docker-entrypoint-initdb.d with content synced.
However, if I do docker-compose -f docker-compose.yml up --build -d buggy-service, a container does not start up automatically.
Question: How do I get buggy-service to behave like local-db, i.e., start up automatically with the required volume mounted?
Here's the stripped down version of Dockerfile.test referenced by buggy-service:
FROM microsoft/dotnet:2.1-sdk-alpine AS buggy-image
# Bunch of ARG definitions (not important)
VOLUME /temp
# other stuff (not important)
ENTRYPOINT ["/bin/bash"]
# Other FROMs
Edit 1
A bit more info about what I’m trying to achieve...
The buggy-container I’m trying to get working uses .NET Core as the base image. Its purpose is to run dotnet test and generate coverage reports, which can then be consumed on the host, which may be either a local dev machine or a build server (in this case, Bitbucket Pipelines).
... followed by docker run -dit --name buggy-container buggy-image
This command creates a new container, not based on anything in the compose yml file. Without a volume specification, it will only get an anonymous volume since you've defined the volume in the Dockerfile (I tend to recommend against defining a volume there). You can see the anonymous volumes with a docker volume ls command, they'll be the ones with a long unique id and no reference to what they belong to.
To define a host volume from docker run, you need the -v flag:
docker run -dit -v /Users/me/temp:/temp --name buggy-container buggy-image
From your now changed question, you have a new issue. Your container specifies a single command to run in the entrypoint:
ENTRYPOINT ["/bin/bash"]
When bash runs, it reads input from stdin. When that input ends, like when you run a container with no input attached, bash will exit. When the process your container runs exits, the container exits. From the details available, I can't tell you what that command should be, but a good starting point is to look at other images on docker hub that perform a similar task that you're trying to run, and look at the Dockerfile they use (many hub images point back to a GitHub repo with the full source).
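To make that concrete, here is a minimal sketch of what the final stage could look like if the goal is to run the test suite and write results into the mounted /temp. The project layout and the --results-directory choice are assumptions on my part, not something from the original post:

FROM microsoft/dotnet:2.1-sdk-alpine AS buggy-image
WORKDIR /app
# project layout assumed: sources copied in and restored at build time
COPY . .
RUN dotnet restore
# run the tests at container start, writing results into the mounted path
ENTRYPOINT ["dotnet", "test", "--results-directory", "/temp"]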
I am trying to come up with a CI system where I validate the Dockerfile and docker-compose.yaml files that are used to build our images.
I found Google's container-structure-test, which can be used to verify the structure of Docker images after they are built. This works if the Docker images are built from a Dockerfile.
Is there a way to verify the images together with all the configuration that docker-compose adds on top of them?
EDIT:
Maybe I didn't put all my details into the question.
Let's say I have a docker-compose file with the following structure:
version: "3"
services:
image-a:
build:
context: .
dockerfile: Dockerfile-a
image-b:
build:
context: .
dockerfile: Dockerfile-b
ports:
- '8983:8983'
volumes:
- '${DEV_ENV_ROOT}/solr/cores:/var/data/solr'
- '${DEV_ENV_SOLR_ROOT}/nginx:/var/lib/nginx'
Now that the images would be built from Dockerfile-a and Dockerfile-b, there are configurations (the ports and volumes) made on top of image-b. How can I validate those configurations without starting a container from image-b? Would that even be possible?
Assuming you have the following docker-compose.yml file:
version: "3"
services:
image-a:
build:
context: .
dockerfile: Dockerfile-a
image-b:
build:
context: .
dockerfile: Dockerfile-b
Build your images by running the command docker-compose --project-name foo build. This will make all the image names start with the prefix foo_. So you would end up with the following image names:
foo_image-a
foo_image-b
The trick is to use a unique id (such as your CI job id) instead of foo so you can identify the very images that were just built.
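For example, in a GitLab-style pipeline (the CI_JOB_ID variable is an assumption; substitute whatever unique id your CI exposes):

docker-compose --project-name "ci-${CI_JOB_ID}" build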
Now that you know the names of your images, you can use:
container-structure-test test --image foo_image-a --config config.yaml
container-structure-test test --image foo_image-b --config config.yaml
If you are writing some kind of generic job that does not know the docker-compose service names, you can use the following command to get the list of images starting with that foo_ prefix:
docker image list --filter "reference=foo_*"
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
foo_image-a   latest   0c5e1cf8c1dc   16 minutes ago   4.15MB
foo_image-b   latest   d4e384157afb   16 minutes ago   4.15MB
and if you want a script to iterate over this result, add the --quiet option to obtain just the image ids:
docker image list --filter "reference=foo_*" --quiet
0c5e1cf8c1dc
d4e384157afb
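Putting it together, a minimal sketch of the iteration, assuming a single config.yaml applies to every image:

# test every image built under the job-specific prefix
for image in $(docker image list --filter "reference=foo_*" --quiet); do
  container-structure-test test --image "$image" --config config.yaml
done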