How to use secrets when building with docker-compose locally

I recently started using BuildKit to hide some env vars, and it worked great in prod via GitHub Actions!
My Dockerfile now is something like this:
# syntax=docker/dockerfile:1.2
...
RUN --mount=type=secret,id=my_secret,uid=1000 \
    MY_SECRET=$(cat /run/secrets/my_secret) \
    && export MY_SECRET
And my build command was something like this:
DOCKER_BUILDKIT=1 docker build \
--secret id=my_secret,env="MY_SECRET"
And when I run this in my GitHub Actions, it works perfectly.
But now the problem is when I try to build locally. A docker-compose build fails, of course, because I'm not passing in any secret, so my backend (Dockerfile) can't read it from /run/secrets/.
What I've tried to do, so far, to accomplish the local build using docker-compose build:
1. Working with Docker secrets:
I basically tried doing:
$ docker swarm init
$ echo "my_secret_value" | docker secret create my_secret -
I thought that saving the secret would fix the problem, but it didn't. I still got the same error message:
cat: can't open '/run/secrets/my_secret': No such file or directory
I also tried passing the secret in my docker-compose file like the following, but that didn't work either:
version: '3'
services:
  app:
    build:
      context: "."
      args:
        - "MY_SECRET"
    secrets:
      - my_secret
secrets:
  my_secret:
    external: true
I also tried storing the secret in a local file, but that didn't work either; same error:
version: '3'
services:
  app:
    build:
      context: "."
      args:
        - "MY_SECRET"
    secrets:
      - my_secret
secrets:
  my_secret:
    file: ./my_secret.txt
I also tried, following another answer, something like this:
args:
  - secret=id=my_secret,src=./my_secret.txt
But still got the same error:
cat: can't open '/run/secrets/my_secret': No such file or directory
What am I doing wrong to successfully perform a docker-compose build?
I'm aware that I could easily use two Dockerfiles, one to build locally and one to build in prod, but I just want to use BuildKit as it is, modifying only my docker-compose.yml file.
Does anyone have an idea what I'm missing to be able to build locally, reading from /run/secrets/?

Support for this was recently implemented in Compose v2. See the pull requests below.
https://github.com/docker/compose/pull/9386
https://github.com/compose-spec/compose-spec/pull/238
The provided example looks like this:
services:
  frontend:
    build:
      context: .
      secrets:
        - server-certificate
secrets:
  server-certificate:
    file: ./server.cert
So you are close, but you have to add the secrets key under the build key.
Also keep in mind that you have to use docker compose instead of docker-compose in order to get v2, which is built into the Docker CLI.
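Putting the pieces together for the question's case, a minimal sketch combining the answer's layout with the secret from the question (the file name my_secret.txt is an assumption):

```yaml
# docker-compose.yml - sketch for Compose v2 build secrets; the secret is
# only mounted during the build, not in the final image
services:
  app:
    build:
      context: .
      secrets:
        - my_secret
secrets:
  my_secret:
    file: ./my_secret.txt
```

docker compose build then mounts the file at /run/secrets/my_secret for the RUN --mount=type=secret step, the same way the --secret flag does for docker build.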

Related

docker-compose secrets without swarm mode: how to import their values?

There are some questions about using secrets with docker-compose without swarm mode, but when I tried to follow some of them, I never managed to read the secrets inside the running container.
Approach #1
docker-compose.yml:
version: "3.8"
services:
  server:
    image: alpine:latest
    secrets:
      - sec-str
    environment:
      - TE_STR=${sec-str}
    command: tail -F .
secrets:
  sec-str:
    file: ./secret.s
secret.s:
sec-str="A!Bit#complicated-String^%"
Outcome:
/ # echo $TE_STR
str
Approach #2
The only change is here, in secret.s:
"A!Bit#complicated-String^%"
Outcome:
/ # echo $TE_STR
str
Approach #3
TE_STR=${sec-str} replaced with TE_STR=$sec-str.
Outcome:
/ # echo $TE_STR
-str
Running out of ideas for now. Any clues from you?
Secrets are still files inside the container.
You can find yours at:
/run/secrets/sec-str
If you need it as an environment variable, do as follows:
environment:
  - TE_STR_FILE=/run/secrets/sec-str
Images that implement the *_FILE convention (many official images do) will then set TE_STR to the contents of your secret; for other images you have to read the file yourself, for example in the entrypoint.
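For images that don't implement the *_FILE convention themselves, a small entrypoint helper can resolve such variables. A sketch, with illustrative function and file names:

```shell
# file_env VAR: if VAR_FILE is set, export VAR with the contents of that file,
# mirroring the *_FILE convention used by many official images
file_env() {
  var="$1"
  # POSIX sh has no indirect expansion, so use eval to read ${VAR_FILE}
  eval "file_path=\${${var}_FILE:-}"
  if [ -n "$file_path" ] && [ -r "$file_path" ]; then
    eval "export $var=\"\$(cat \"\$file_path\")\""
  fi
}

# simulate a mounted secret, then resolve it into the environment
echo 'A!Bit#complicated-String^%' > /tmp/sec-str
export TE_STR_FILE=/tmp/sec-str
file_env TE_STR
echo "$TE_STR"   # prints A!Bit#complicated-String^%
```

Calling file_env for each expected variable at the top of the entrypoint keeps the secret out of the compose file while still exposing it as a plain env var to the application.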

Can you use the current project_name in a docker compose file?

I see lots of questions around setting/changing the COMPOSE_PROJECT_NAME or PROJECT_NAME using ENV variables.
I'm fine with the default project name, but I would like to reference it in my compose file.
version: "3.7"
services:
  app:
    build: DockerFile
    container_name: app
    volumes:
      - ./:/var/app
    networks:
      - the-net
  npm:
    image: ${project_name}_app
    volumes:
      - ./:/var/app
    depends_on:
      - app
    entrypoint: [ 'npm' ]
    networks:
      - the-net
npm here is arbitrary; hopefully the fact that it could be run as its own container, or in other ways, does not distract from the question.
Is it possible to reference the project name without setting it manually first?
Unfortunately it is not possible.
As alluded to, you can create a .env file and populate it with COMPOSE_PROJECT_NAME=my_name, but the config option is not exposed in your environment by default.
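For example, with the project name pinned in a .env file, it can then be referenced through ordinary variable substitution (my_name is a placeholder):

```
# .env, checked in next to docker-compose.yml
COMPOSE_PROJECT_NAME=my_name
```

Compose reads .env automatically, so image: ${COMPOSE_PROJECT_NAME}_app in the compose file resolves to my_name_app.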
Unfortunately, env substitution in docker-compose is fairly limited, meaning we cannot use the available PWD env variable and greedy-match it:
$ cd ~
$ pwd
/home/tqid
$ echo "Base Dir: ${PWD##*/}"
Base Dir: tqid
When we use this reference, compose has issues:
$ docker-compose up -d
ERROR: Invalid interpolation format for "image" option in service "demo": "${PWD##*/}"
It's probably better to be explicit anyway. COMPOSE_PROJECT_NAME is based on your directory name, so if someone clones the project into a differently named folder it gets out of whack. Including the .env file in source control provides a reusable and consistent place to reference the name.
https://docs.docker.com/compose/reference/envvars/#compose_project_name
Using the same image as another container was what I was after: reuse the image and change the entrypoint.
Specify the same build: options for both containers.
This seems inefficient, in that it will trigger the build sequence twice and docker images will list both results. However, because of the way Docker's layer caching works, if identical RUN commands are run on identical input images, the resulting layer is simply reused, and the two final images will have the same image ID; they will literally be the same image with two names.
The context I've run into this the most is with a Python application where the same code base is used for a Django or Flask Web server, plus a Celery worker. The Docker-level setup is fairly language-independent, though: specify the same build: for both containers, and override the command: for the container(s) that need to do a non-default task.
version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']
    environment:
      REDIS_HOST: redis
  worker:
    build: .               # <-- same as app
    command: npm run worker # <-- overrides Dockerfile CMD
    environment:
      REDIS_HOST: redis
  redis:
    image: redis
It is also valid to specify build: and image: together in the docker-compose.yml file; this specifies the name of the image that will be built. It's frequently useful to explicitly specify this because you will need to point at a specific Docker Hub or other registry location to push the built image. If you do this, then you'll know the image name and don't need to depend on the context name.
version: '3.8'
services:
  app:
    build: .
    image: registry.example.com/my/app:${TAG:-latest}
  worker:
    image: registry.example.com/my/app:${TAG:-latest}
    command: npm run worker
You will need to manually docker-compose build in this setup. Compose's workflow doesn't have a way to specify that one container's build must run before a different container can start.

How do I create this using Docker

Download the repository to your local machine and unzip the directory. Enter the directory (you may rename it first) in a command-line environment, then use the following commands to download the Rails Docker image and build it.
I have downloaded and unzipped the repository file. What should I do next?
docker-compose run web rails new . --force --no-deps --database=postgresql
docker-compose build
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
You need to define a docker-compose.yml (or docker-compose.yaml) file. According to the documentation:
Using Compose is basically a three-step process:
1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
3. Run docker-compose up and Compose starts and runs your entire app.
A docker-compose.yml looks like this:
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
More on the official website.

docker compose - ignore build context path

I have docker-compose.yml file with build context property specified like this:
version: '3'
services:
  my-service:
    container_name: my-service
    image: my-service
    build:
      context: foo
    ports:
      - 8088:8088
  # other services
When I run docker-compose up locally, the build context exists and everything works fine. However, my CI server is configured to use the same docker-compose.yml file, but there is no build context (images are copied as .tar archives via SSH and then loaded via docker load).
Now I've got an error:
ERROR: build path /foo either does not exist, is not accessible, or is
not a valid URL.
So I've tried to find a way to suppress looking for this build context when running docker-compose up (I don't want to build images because they are already up to date), but docker-compose up --no-build does not work. Any ideas?
I posted your issue as a feature request on the docker-compose repository. Let's see how it progresses:
https://github.com/docker/compose/issues/7674
Meanwhile, you will have to work around this by modifying the CI script that does the docker-compose up --no-build so that it also does the mkdir -p ... that you need.
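For instance, the CI script could create the (empty) context directory just before bringing the stack up; the name foo matches the compose file above, while the archive name is made up:

```shell
# create the empty build-context dir so compose's path check passes
mkdir -p foo
# then load the prebuilt image and start without building, e.g.:
#   docker load -i images.tar
#   docker-compose up -d --no-build
```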
docker-compose.override.yml is a good solution in this case. You can override only the build block, and this is not hard to maintain as two independent files.
docker-compose.override.yml:
version: '3'
services:
  my-service:
    build:
      context: foo
docker-compose.yml
version: '3'
services:
  my-service:
    container_name: my-service
    image: my-service
    ports:
      - 8088:8088
See https://docs.docker.com/compose/extends/
I had the same problem. My solution, until docker-compose config provides a way to skip the directory-exists check, is to automatically create the directories that docker-compose config expects.
Here is a one-liner that does this:
egrep ' (context:|build)' < docker-compose.yml | sed -E 's/\s+\S+:\s*//' | xargs mkdir -p .
It's ugly, but I couldn't figure out another way. (The extends way mentioned by dngnezdi is not a solution, as I needed an automatic method.)
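To see what the one-liner does, you can run it against a throwaway compose file (the context paths foo and bar/baz are made up):

```shell
tmp=$(mktemp -d)
cd "$tmp"
cat > docker-compose.yml <<'EOF'
version: '3'
services:
  my-service:
    build:
      context: foo
  helper:
    build: bar/baz
EOF
# grep the build/context lines, strip the "key:" prefix, mkdir what's left
egrep ' (context:|build)' < docker-compose.yml | sed -E 's/\s+\S+:\s*//' | xargs mkdir -p .
ls -d foo bar/baz   # both context directories now exist
```

Note that \s and \S in sed -E are GNU extensions, so this relies on GNU sed.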

Building and uploading images to Docker Hub, how to from Docker Compose?

I have been working in a Docker environment for PHP development and finally got it working as I need. This environment relies on docker-compose, and the config looks like:
version: '2'
services:
  php-apache:
    env_file:
      - dev_variables.env
    image: reynierpm/php55-dev
    build:
      context: .
      args:
        - PUID=1000
        - PGID=1000
    ports:
      - "80:80"
    extra_hosts:
      - "dockerhost:xxx.xxx.xxx.xxx"
    volumes:
      - ~/var/www:/var/www
There are some configurations, like extra_hosts and env_file, that are giving me a headache. Why? Because I don't know if the image will work under such circumstances.
Let's say:
I have run docker-compose up -d and the image reynierpm/php55-dev with tag latest has been built
I have everything working as it should because I am setting the proper values in the docker-compose.yml file
I have logged in to my account and pushed the image to the repository: docker push reynierpm/php55-dev
What happens if tomorrow you clone the repository and try to run docker-compose up, changing the docker-compose.yml file to fit your settings? How does the image behave in that case? I mean, does it make sense to create/upload the image to Docker Hub if any time I run docker-compose up it will be built again due to changes in the config file?
Maybe I am doing this completely wrong and some magic happens behind the scenes, but I need to know if I am doing this right.
If people clone your git repository and do docker-compose up -d, it will in fact build a new image. If you only want people to use your image from Docker Hub, drop the build section of docker-compose.yml and publish the compose file on your Docker Hub page. Below is the proposed docker-compose.yml.
Just paste this on your page:
version: '2'
services:
  php-apache:
    image: reynierpm/php55-dev
    ports:
      - "80:80"
    environment:
      DOCKERHOST: 'yourhostip'
      PHP_ERROR_REPORTING: 'E_ALL & ~E_DEPRECATED & ~E_NOTICE'
    volumes:
      - ~/var/www:/var/www
If your env_file has just a couple of variables, it is better to set them directly in the Dockerfile. It is also better to replace extra_hosts with an environment variable, and change your php.ini (or wherever you use the extra host) to use that variable:
.....
xdebug.remote_host = ${DOCKERHOST}
.....
In your Dockerfile you can define a default value for this variable:
ENV DOCKERHOST=localhost
