There are some questions about using secrets with docker-compose without swarm mode, but when I tried to follow some of them, I never managed to read the secrets inside the running container.
Approach #1
docker-compose.yml:
version: "3.8"
services:
server:
image: alpine:latest
secrets:
- sec-str
environment:
- TE_STR=${sec-str}
command: tail -F .
secrets:
sec-str:
file: ./secret.s
secret.s:
sec-str="A!Bit#complicated-String^%"
Outcome:
/ # echo $TE_STR
str
Approach #2
Only change is made here, in secret.s:
"A!Bit#complicated-String^%"
Outcome:
/ # echo $TE_STR
str
Approach #3
TE_STR=${sec-str} replaced with TE_STR=$sec-str.
Outcome:
/ # echo $TE_STR
-str
I'm running out of ideas for now. Any clues?
Secrets are still files inside the container.
You can find yours at:
/run/secrets/sec-str
If you need it as an environment variable, the usual pattern is the *_FILE convention:
environment:
- TE_STR_FILE=/run/secrets/sec-str
Entrypoints that support this convention will then set TE_STR to the contents of your secret. (Incidentally, the str and -str outputs you saw are compose interpolation at work: ${sec-str} means "the value of sec, or the literal str if sec is unset", and $sec-str is the empty $sec followed by the literal -str.)
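Note that a plain image like alpine does not read *_FILE variables by itself; that convention is implemented by the entrypoint scripts of many official images (postgres, mysql, and others). For any other image you can add a small entrypoint yourself. A minimal sketch, reusing the TE_STR names from above:

#!/bin/sh
# entrypoint.sh: if TE_STR_FILE points at a readable file,
# load its contents into TE_STR before starting the main process
if [ -n "$TE_STR_FILE" ] && [ -r "$TE_STR_FILE" ]; then
    TE_STR="$(cat "$TE_STR_FILE")"
    export TE_STR
fi
exec "$@"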
Related
I recently started using BuildKit to hide some env vars, and it worked great in prod via GitHub Actions!
My Dockerfile now is something like this:
# syntax=docker/dockerfile:1.2
...
RUN --mount=type=secret,id=my_secret,uid=1000 \
MY_SECRET=$(cat /run/secrets/my_secret) \
&& export MY_SECRET
And the front end (my build command) was something like this:
DOCKER_BUILDKIT=1 docker build \
--secret id=my_secret,env="MY_SECRET"
And when I run this on my Github actions, it works perfectly.
But now the problem is when I try to build it locally: a docker-compose build fails. Of course it does, because I'm not passing in any secret, so my backend (the Dockerfile) won't be able to read it from /run/secrets/.
What I've tried to do, so far, to accomplish the local build using docker-compose build:
1. Working with Docker secrets:
I basically tried doing:
$ docker swarm init
$ echo "my_secret_value" docker secret create my_secret -
I thought that saving a secret would fix the problem, but it didn't work. I still got the same error message:
cat: can't open '/run/secrets/my_secret': No such file or directory
I also tried passing the secret in my docker-compose file like the following, but that didn't work either:
version: '3'
services:
app:
build:
context: "."
args:
- "MY_SECRET"
secrets:
- my_secret
secrets:
my_secret:
external: true
I also tried storing the secret in a local file, but that didn't work either; same error:
version: '3'
services:
app:
build:
context: "."
args:
- "MY_SECRET"
secrets:
- my_secret
secrets:
my_secret:
file: ./my_secret.txt
Following another answer, I also tried something like this:
args:
- secret=id=my_secret,src=./my_secret.txt
But still got the same error:
cat: can't open '/run/secrets/my_secret': No such file or directory
What am I doing wrong to successfully perform a docker-compose build?
I'm aware that I can easily use two Dockerfiles, a Dockerfile to build in local and a Dockerfile to build in prod but I just want to use Buildkit as it is, by only modifying my docker-compose.yml file.
Does anyone have an idea what I'm missing to be able to build locally, reading from /run/secrets/?
Support for this was recently implemented in Compose v2. See the pull requests below.
https://github.com/docker/compose/pull/9386
https://github.com/compose-spec/compose-spec/pull/238
The provided example looks like this:
services:
frontend:
build:
context: .
secrets:
- server-certificate
secrets:
server-certificate:
file: ./server.cert
So you are close, but you have to add the secrets key under the build key.
Also keep in mind that you have to use docker compose instead of docker-compose in order to use v2, which is built into the Docker client.
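Applied to your file, a sketch (reusing the my_secret id and my_secret.txt from your attempts) would be:

services:
  app:
    build:
      context: "."
      secrets:
        - my_secret
secrets:
  my_secret:
    file: ./my_secret.txt

With that in place, the RUN --mount=type=secret,id=my_secret line in your Dockerfile should find the file at /run/secrets/my_secret at build time.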
I see lots of questions around setting/changing the COMPOSE_PROJECT_NAME or PROJECT_NAME using ENV variables.
I'm fine with the default project name, but I would like to reference it in my compose file.
version: "3.7"
services:
app:
build: DockerFile
container_name: app
volumes:
- ./:/var/app
networks:
- the-net
npm:
image: ${project_name}_app
volumes:
- ./:/var/app
depends_on:
- app
entrypoint: [ 'npm' ]
networks:
- the-net
npm here is arbitrary; hopefully the fact that it could be run as its own container, or in other ways, does not distract from the question.
Is it possible to reference the project name without setting it manually first?
Unfortunately it is not possible.
As alluded to, you can create a .env file and populate it with COMPOSE_PROJECT_NAME=my_name, but the config option does not present itself in your environment by default.
Unfortunately, the env substitution in docker-compose is fairly limited, meaning we cannot take the available PWD env variable and greedy-match on it at all:
$ cd ~
$ pwd
/home/tqid
$ echo "Base Dir: ${PWD##*/}"
Base Dir: tqid
When we use this reference, compose has issues:
$ docker-compose up -d
ERROR: Invalid interpolation format for "image" option in service "demo": "${PWD##*/}"
It's probably better to be explicit anyway: COMPOSE_PROJECT_NAME defaults to your directory name, so if someone clones the project into a differently named folder it gets out of whack. Including the .env file in source control provides a reusable and consistent place to reference the name.
https://docs.docker.com/compose/reference/envvars/#compose_project_name
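A minimal sketch of that .env approach (my_name is just an example value):

# .env, committed alongside docker-compose.yml
COMPOSE_PROJECT_NAME=my_name

Compose reads the .env file for variable interpolation as well, so the npm service from the question can then reference it:

npm:
  image: ${COMPOSE_PROJECT_NAME}_app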
Using the same image as another container was what I was after ... reuse the image and change the entrypoint.
Specify the same build: options for both containers.
This seems inefficient, in that it will trigger the build sequence twice and docker images will list both of them. However, the way Docker's layer caching works, if identical RUN commands are run on identical input images, the resulting layer will simply be reused, and the two final images will have the same image ID; they will literally be the same image with two names.
The context I've run into this the most is with a Python application where the same code base is used for a Django or Flask Web server, plus a Celery worker. The Docker-level setup is fairly language-independent, though: specify the same build: for both containers, and override the command: for the container(s) that need to do a non-default task.
version: '3.8'
services:
app:
build: .
ports: ['3000:3000']
environment:
REDIS_HOST: redis
worker:
build: . # <-- same as app
command: npm run worker # <-- overrides Dockerfile CMD
environment:
REDIS_HOST: redis
redis:
image: redis
It is also valid to specify build: and image: together in the docker-compose.yml file; this specifies the name of the image that will be built. It's frequently useful to explicitly specify this because you will need to point at a specific Docker Hub or other registry location to push the built image. If you do this, then you'll know the image name and don't need to depend on the context name.
version: '3.8'
services:
app:
build: .
image: registry.example.com/my/app:${TAG:-latest}
worker:
image: registry.example.com/my/app:${TAG:-latest}
command: npm run worker
You will need to manually docker-compose build in this setup. Compose's workflow doesn't have a way to specify that one container's build must run before a different container can start.
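In practice the workflow with the second file looks something like:

$ docker-compose build        # builds the image once and tags it
$ docker-compose up -d        # worker starts from that same image with its own command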
I am trying to setup a Docker-based Jenkins instance. Essentially, I run the jenkins/jenkins:lts image as a container and mount a data volume to persist the data Jenkins will create.
Now, what I would like to do is share the host's ssh keys with this Jenkins instance. It's probably due to my limited Docker knowledge, but my problem is that I don't know how to mount additional files/directories into my volume, and Jenkins requires that I put the ssh keys within /var/jenkins_home/.ssh.
I tried naively creating the directories in Dockerfile and then mounting them with docker-compose. It failed, as you might expect, since the volume is the one containing Jenkins' home directory data, not the Jenkins container itself.
I have the following docker-compose.yml (not working, for the reasons mentioned above):
version: '3.1'
services:
jenkins:
restart: always
build: ./jenkins
environment:
VIRTUAL_HOST: ${NGINX_VIRTUAL_HOST}
VIRTUAL_PORT: 8080
JAVA_OPTS: -Djenkins.install.runSetupWizard=false
TZ: America/New_York
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- jenkins_data:/var/jenkins_home
networks:
- web
- proxy
healthcheck:
test: ["CMD", "curl --fail http://${NGINX_VIRTUAL_HOST}/ || exit 1"]
interval: 1m
timeout: 10s
retries: 3
secrets:
- host_ssh_key
volumes:
jenkins_data:
networks:
web:
driver: bridge
proxy:
external:
name: nginx-proxy
secrets:
host_ssh_key:
file: ~/.ssh/id_rsa
My question is: is there any way I could get this secret into my data volume?
I know this is a fairly old thread, but a lot of people get stuck on this, including me, and the answer is simply not true. You can indeed use secrets with docker-compose without using Swarm, provided it's a local machine or the secrets file is mounted on the host. I'm not saying this is secure or desirable, just that it can be done. One of the best explanations of the several ways this is possible is this blog:
Using Docker Secrets during Development
Below is an example of the relevant parts of a docker compose file used to add an API key to a Spring application. The key is then available at /run/secrets/captcha-api-key inside the Docker container. Docker compose "fakes" it by literally binding the file as a mount, which can then be accessed in whatever way. It's not secure, in that the file is still there, visible to anyone with access to /run/secrets, but it's definitely doable as a workaround. Great for dev servers, but I would not do it in production.
version: '3.6'
services:
myapp:
image: mmyapp
restart: always
secrets:
- captcha-api-key
secrets:
captcha-api-key:
file: ./captcha_api_key.txt
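Inside the container the secret can then be read like any other file, e.g. (service name taken from the example above):

$ docker-compose exec myapp cat /run/secrets/captcha-api-key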
EDIT: Besides that, one can simply run a one-node swarm, which costs only a tiny bit more in resources, and use secrets the way they are intended. Provided the images are already built, docker stack deploy -c mydocker-composefile.yml mystackname will do mostly the same as the old docker-compose up did. Note though that the yml file must be written against version 3 or higher of the compose file specification.
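For completeness, that route boils down to something like this (the stack name is arbitrary):

$ docker swarm init
$ docker stack deploy -c mydocker-composefile.yml mystackname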
Here is a short but concise write-up on compose vs swarm: The Difference Between Docker Compose And Docker Stack
Mount the secret with an explicit target, as given below, and try:
secrets:
- source: host_ssh_key
target: /var/jenkins_home/.ssh/id_rsa
mode: 0600
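For context, a sketch of how this long syntax slots into the compose file from the question; note that not every field (e.g. mode) is honoured identically outside swarm mode:

services:
  jenkins:
    # ... the rest of the service as in the question ...
    secrets:
      - source: host_ssh_key
        target: /var/jenkins_home/.ssh/id_rsa
        mode: 0600
secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa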
It can't be done. Secrets will only work with docker swarm; docker-compose is unable to use secrets.
More details in this GitHub issue.
docker stack deploy isn't respecting the extra_hosts parameter in my compose file. When I do a simple docker-compose up, the entry is created in /etc/hosts; however, when I do docker stack deploy --compose-file docker-compose.yml myapp, it ignores extra_hosts. Any insights?
Below is the docker-compose.yml:
version: '3'
services:
web:
image: user-service
deploy:
labels:
- the label
build:
context: ./
environment:
DATABASE_URL: jdbc:postgresql://dbhost:5432/postgres
ports:
- 9002:9002
extra_hosts:
- "dbhost: ${DB_HOST}"
networks:
- wellness_swarm
env_file:
- .env
networks:
wellness_swarm:
external:
name: wellness_swarm
The docker-compose config command also displays the compose file properly.
This may not be a direct answer to the question, as it doesn't use env variables, but I found that the extra_hosts block in the compose file is ignored in swarm mode if entered in the format above.
i.e. this works for me and puts entries in /etc/hosts in the container:
extra_hosts:
retisdev: 10.48.161.44
retistesting: 10.48.161.44
whereas when entered in the other format it gets ignored when deploying as a service
extra_hosts:
- "retisdev=10.48.161.44"
- "retistesting=10.48.161.44"
I think it's an ordering issue. The ${} variable you've got in the compose file is substituted during the YAML processing, before the service definition is created. Then stack deploy processes the .env file as envvars for the running container, but the YAML variable is needed first...
To fix that, you should use the docker-compose config command first, to process the YAML, and then use the output of that to send to the stack deploy.
docker-compose config will show you the output you're likely wanting.
Then do a pipe to get a one-liner.
docker-compose config | docker stack deploy -c - myapp
Note: Ideally you wouldn't use the extra_hosts, but rather put the envvar directly in the connection string. Your way seems like unnecessary complexity and isn't the usual way I see a connection string created.
e.g.
version: '3'
services:
web:
image: user-service
deploy:
labels:
- the label
build:
context: ./
environment:
DATABASE_URL: jdbc:postgresql://${DB_HOST}:5432/postgres
ports:
- 9002:9002
networks:
- wellness_swarm
env_file:
- .env
networks:
wellness_swarm:
external:
name: wellness_swarm
From https://github.com/moby/moby/issues/29133 it seems like this is by design: the compose command takes into consideration the environment variables mentioned in the .env file, whereas the deploy command ignores them. :( Why is that so? Pretty lame reasons!
I have been working in a Docker environment for PHP development and finally got it working as I need. This environment relies on docker-compose, and the config looks like:
version: '2'
services:
php-apache:
env_file:
- dev_variables.env
image: reynierpm/php55-dev
build:
context: .
args:
- PUID=1000
- PGID=1000
ports:
- "80:80"
extra_hosts:
- "dockerhost:xxx.xxx.xxx.xxx"
volumes:
- ~/var/www:/var/www
There are some configurations, like extra_hosts and env_file, that are giving me a headache. Why? Because I don't know whether the image will work under such circumstances.
Let's say:
I have run docker-compose up -d and the image reynierpm/php55-dev with tag latest has been built
I have everything working as it should be because I am setting the proper values on the docker-compose.yml file
I have logged in to my account and pushed the image to the repository: docker push reynierpm/php55-dev
What happens if tomorrow you clone the repository and run docker-compose up, but change the docker-compose.yml file to fit your settings? How does the image behave in this case? I mean, does it make sense to create/upload the image to Docker Hub if any time I run docker-compose up it will be built again due to the changes in the config file?
Maybe I am getting this completely wrong and some magic happens behind the scenes, but I need to know if I am doing this right.
If people clone your git repository and do a docker-compose up -d, it will in fact build a new image. If you only want people to use your image from Docker Hub, drop the build section from docker-compose.yml and publish the compose file on your Docker Hub page. Below you can see the proposed docker-compose.yml.
Just paste this in your page:
version: '2'
services:
php-apache:
image: reynierpm/php55-dev
ports:
- "80:80"
environment:
DOCKERHOST: 'yourhostip'
PHP_ERROR_REPORTING: 'E_ALL & ~E_DEPRECATED & ~E_NOTICE'
volumes:
- ~/var/www:/var/www
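Anyone using that published file would then simply pull and run it, without building anything:

$ docker-compose pull
$ docker-compose up -d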
If your env_file just has a couple of variables, it is better to set them directly, as in the environment section above. It is also better to replace extra_hosts with an environment variable, and change your php.ini, or wherever you use the extra host, to use that variable:
.....
xdebug.remote_host = ${DOCKERHOST}
.....
You can in your Dockerfile define a default value for this variable:
ENV DOCKERHOST=localhost
Hope it helps.