Docker-compose secrets won't load DEBUG env - docker

I am using the following code in a docker-compose.yml file, and I start the stack with "docker stack deploy -c docker-compose.yml default" in the Windows command prompt.
Swarm is active.
For the love of god (got to be honest, I've started my studies in this field not so long ago) I cannot figure out why the DEBUG env is always set to the literal string "/run/secrets/debug" and not the actual value of the secret.
I've checked the live container, and it does contain the debug file in /run/secrets; if I run "cat debug" I get the secret value back.
Can anyone help me?
Code below:
version: '3.9'
services:
  portfolio:
    image: kcisijohnny/portfolio
    build: portfolio
    ports:
      - '5000:5000'
    networks:
      - portfolio_network
    environment:
      DEBUG: /run/secrets/debug
    secrets:
      - debug
secrets:
  debug:
    external: true
    name: debug
I've tried every variation I found online in the docker-compose.yml:
DEBUG: cat /run/secrets/debug
DEBUG: cat "/run/secrets/debug"
DEBUG=$(cat /run/secrets/debug)
DEBUG=$$(cat /run/secrets/debug)
None of these worked.
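For context on why none of these can work: Compose never runs a shell on environment: values, so whatever you write there is passed to the container as a literal string, and $(...) substitution only happens inside a shell. A common workaround is to resolve the secret in the container's entrypoint. A minimal sketch, assuming the image can be started through a wrapper script like this:
#!/bin/sh
# entrypoint.sh (hypothetical wrapper): read the secret file into DEBUG,
# export it, then hand off to the original command.
if [ -f /run/secrets/debug ]; then
    DEBUG="$(cat /run/secrets/debug)"
    export DEBUG
fi
exec "$@"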

Related

docker-compose secrets without swarm mode: how to import their values?

There are some questions about using secrets with docker-compose without swarm mode, but when trying to follow some of them, I never managed to read the secrets inside the running container.
Approach #1
docker-compose.yml:
version: "3.8"
services:
server:
image: alpine:latest
secrets:
- sec-str
environment:
- TE_STR=${sec-str}
command: tail -F .
secrets:
sec-str:
file: ./secret.s
secret.s:
sec-str="A!Bit#complicated-String^%"
Outcome:
/ # echo $TE_STR
str
Approach #2
The only change made is here, in secret.s:
"A!Bit#complicated-String^%"
Outcome:
/ # echo $TE_STR
str
Approach #3
TE_STR=${sec-str} replaced with TE_STR=$sec-str.
Outcome:
/ # echo $TE_STR
-str
Running out of ideas for now. Any clues from you?
Secrets are still files inside the container, not environment variables. You can find yours at:
/run/secrets/sec-str
(That also explains your outcomes: Compose treats ${sec-str} as variable substitution meaning "variable sec, defaulting to the literal str", which yields str, while $sec-str expands the empty $sec and leaves -str.)
If you need it as an environment variable, do as follows:
environment:
  - TE_STR_FILE=/run/secrets/sec-str
Images whose entrypoints implement the _FILE convention will then set TE_STR to the contents of your secret.
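The _FILE convention is implemented by the image's entrypoint, not by Compose itself, so plain images like alpine:latest won't honor it on their own. A rough sketch of what such an entrypoint fragment typically looks like (illustrative only, not taken from any particular image):
#!/bin/sh
# Hypothetical entrypoint fragment implementing the *_FILE convention:
# if TE_STR_FILE points at a readable file, load its contents into TE_STR.
if [ -n "${TE_STR_FILE:-}" ] && [ -f "$TE_STR_FILE" ]; then
    TE_STR="$(cat "$TE_STR_FILE")"
    export TE_STR
fi
exec "$@"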

passing multiple .yml files to docker-compose

Docker noob here.
I have two files, docker-compose.build.yml and docker-compose.up.yml, in my docker folder. Following are the contents of both files:
docker-compose.build.yml
version: "3"
services:
base:
build:
context: ../
dockerfile: ./docker/Dockerfile.base
args:
DEBUG: "true"
image: ottertune-base
labels:
NAME: "ottertune-base"
web:
build:
context: ../
dockerfile: ./docker/Dockerfile.web
image: ottertune-web
depends_on:
- base
labels:
NAME: "ottertune-web"
volumes:
- ../server:/app
driver:
build:
context: ../
dockerfile: ./docker/Dockerfile.driver
image: ottertune-driver
depends_on:
- base
labels:
NAME: "ottertune-driver"
docker-compose.up.yml
version: "3"
services:
web:
image: ottertune-web
container_name: web
expose:
- "8000"
ports:
- "8000:8000"
links:
- backend
- rabbitmq
depends_on:
- backend
- rabbitmq
environment:
DEBUG: 'true'
ADMIN_PASSWORD: 'changeme'
BACKEND: 'postgresql'
DB_NAME: 'ottertune'
DB_USER: 'postgres'
DB_PASSWORD: 'ottertune'
DB_HOST: 'backend'
DB_PORT: '5432'
DB_OPTS: '{}'
MAX_DB_CONN_ATTEMPTS: 30
RABBITMQ_HOST: 'rabbitmq'
working_dir: /app/website
entrypoint: ./start.sh
labels:
NAME: "ottertune-web"
networks:
- ottertune-net
driver:
image: ottertune-driver
container_name: driver
depends_on:
- web
environment:
DEBUG: 'true'
working_dir: /app/driver
labels:
NAME: "ottertune-driver"
networks:
- ottertune-net
rabbitmq:
image: "rabbitmq:3-management"
container_name: rabbitmq
restart: always
hostname: "rabbitmq"
environment:
RABBITMQ_DEFAULT_USER: "guest"
RABBITMQ_DEFAULT_PASS: "guest"
RABBITMQ_DEFAULT_VHOST: "/"
expose:
- "15672"
- "5672"
ports:
- "15673:15672"
- "5673:5672"
labels:
NAME: "rabbitmq"
networks:
- ottertune-net
backend:
container_name: backend
restart: always
image: postgres:9.6
environment:
POSTGRES_USER: 'postgres'
POSTGRES_PASSWORD: 'ottertune'
POSTGRES_DB: 'ottertune'
expose:
- "5432"
ports:
- "5432:5432"
labels:
NAME: "ottertune-backend"
networks:
- ottertune-net
networks:
ottertune-net:
driver: bridge
Nothing wrong with the Dockerfiles; I just have a few doubts about this approach.
What purpose does having multiple files serve instead of just one docker-compose.yml?
How does docker-compose work when used with multiple files?
When I do docker-compose -f docker-compose.build.yml build --no-cache
Building base
Step 1/1 : FROM ubuntu:18.04
---> 775349758637
[Warning] One or more build-args [DEBUG] were not consumed
Successfully built 775349758637
Successfully tagged ottertune-base:latest
Building web
Step 1/1 : FROM ottertune-base
---> 775349758637
Successfully built 775349758637
Successfully tagged ottertune-web:latest
Building driver
Step 1/1 : FROM ottertune-base
---> 775349758637
Successfully built 775349758637
Successfully tagged ottertune-driver:latest
and then docker-compose up I get the error
rabbitmq is up-to-date
backend is up-to-date
Starting web ... error
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:346:
starting container process caused "exec: \"./start.sh\": stat ./start.sh: no such file or
directory": unknown
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:346:
starting container process caused "exec: \"./start.sh\": stat ./start.sh: no such file or
directory": unknown
ERROR: Encountered errors while bringing up the project.
This entrypoint start.sh is defined in the docker-compose.up.yml file, which I didn't pass as an argument to docker-compose build.
So why is docker-compose up trying to run this entrypoint from a YAML file that wasn't even passed during build? Really confused by this, and I didn't find much about it on Google or Stack Overflow.
If you docker-compose -f a.yml -f b.yml ..., Docker Compose merges the two YAML files. If you look at the two files you've posted, one has all of the run-time settings (ports:, environment:, ...), and if you happened to have the images already it would be enough to run the application. The second only has build-time settings (build:), but requires the source tree checked out locally to be able to run.
You probably need to specify both files on every docker-compose invocation:
docker-compose -f docker-compose.build.yml -f docker-compose.up.yml up --build
It does seem like the author of these files intended for them to be run separately:
docker-compose -f docker-compose.build.yml build
docker-compose -f docker-compose.up.yml up
but note that some of the run-time options in the build file, like volumes: to hide the application built into the image, will never take effect.
(You should be able to delete a large number of settings in the "up" YAML file that either duplicate what's in the image or that Docker Compose can provide for you: container_name:, expose:, links:, working_dir:, entrypoint:, networks:, and (probably) labels: are all unnecessary and can be deleted.)
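For illustration, the web service from the "up" file could shrink to something like this (a sketch; it assumes Dockerfile.web already sets the working directory and start command, which is what makes working_dir: and entrypoint: redundant, and the environment list is shortened here):
version: "3"
services:
  web:
    image: ottertune-web
    ports:
      - "8000:8000"
    depends_on:
      - backend
      - rabbitmq
    environment:
      DEBUG: 'true'
      DB_HOST: 'backend'
      RABBITMQ_HOST: 'rabbitmq'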
What purpose does having multiple files serve instead of just one docker-compose.yml?
You can share configuration across environments. For example, I keep the common configuration such as the network and server in a docker-compose.yml. I keep my development environment specifics such as a server with automatic reload and debugging enabled in a docker-compose.override.yml. I keep the production-specific configs in a docker-compose.prod.yml. Then I can run docker-compose up --build for my development environment (Docker Compose uses docker-compose.yml and docker-compose.override.yml by default). And I can run my prod environment with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --build. You can read about this in the dedicated docs page.
How does docker-compose work when used with multiple files?
It takes the first file as the base file, and adds or replaces configs from subsequent files to the base file. See the relevant docs.
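A tiny, hypothetical example of the merge behaviour, given these two files:
# base.yml
services:
  web:
    image: myapp
    environment:
      DEBUG: 'false'
# override.yml
services:
  web:
    environment:
      DEBUG: 'true'
    ports:
      - "8000:8000"
docker-compose -f base.yml -f override.yml config renders a single web service with image: myapp, DEBUG: 'true' (later files win for the same key), and the ports: added on top.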
When I do docker-compose -f docker-compose.build.yml build --no-cache ...
As for your last question, I can't really tell from what I've seen. But unlike Dockerfiles, which need two commands (docker build and docker run), docker-compose only needs one. So when you do docker-compose up, it looks for a file named docker-compose.yml (and also docker-compose.override.yml if it's present).

Keycloak 8: User with username 'admin' already added

I cannot start the Keycloak container using Ansible and docker-compose. I'm getting the error: User with username 'admin' already added to '/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json'
I have 3 ansible jobs:
Create network:
- name: Create an internal network
  docker_network:
    name: internal
Setup postgres:
- name: "Install Postgres"
docker_compose:
project_name: posgressdb
restarted: true
pull: yes
definition:
version: '2'
services:
postgres:
image: postgres:12.1
container_name: postgres
restart: always
env_file:
- /etc/app/db.env
networks:
- internal
volumes:
- postgres-data:/var/lib/postgresql/data
- /etc/app/createdb.sh:/docker-entrypoint-initdb.d/init-app-db.sh
ports:
- "5432:5432"
volumes:
postgres-data:
networks:
internal:
external:
name: internal
Create keycloak container:
- name: Install keycloak
  docker_compose:
    project_name: appauth
    restarted: true
    pull: yes
    definition:
      version: '2'
      services:
        keycloak:
          image: jboss/keycloak:8.0.1
          container_name: keycloak
          restart: always
          environment:
            - DB_VENDOR=POSTGRES
            - DB_ADDR=postgres
            - DB_PORT=5432
            - DB_SCHEMA=public
            - DB_DATABASE=keycloak
            - DB_USER=keycloak
            - DB_PASSWORD=keycloak
            - KEYCLOAK_USER=admin
            - KEYCLOAK_PASSWORD=admin
          networks:
            - internal
      networks:
        internal:
          external:
            name: internal
Does anyone have any idea why I get this error?
EDIT
If I downgrade keycloak to version 7 it starts normally!
Just to clarify the other answers: I had the same issue, and what helped for me was:
1. Stop all containers.
2. Comment out the two relevant lines:
version: "3"
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    environment:
      # KEYCLOAK_USER: admin
      # KEYCLOAK_PASSWORD: pass
    ...
3. Start all containers and wait until the keycloak container has successfully started.
4. Stop all containers again.
5. Comment the two lines from above back in:
version: "3"
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: pass
    ...
6. Start all containers.
This time (and subsequent times) it worked. Keycloak was running and the admin user was registered and working as expected.
This happens when Keycloak is interrupted during boot. After this, the command which attempts to add the admin user starts to fail. In Keycloak 7 this wasn't fatal, but in 8.0.1 this line was added to /opt/jboss/tools/docker-entrypoint.sh, which aborts the entire startup script:
set -eou pipefail
Related issue: https://issues.redhat.com/browse/KEYCLOAK-12896
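In other words, the boot sequence roughly amounts to this (a simplified illustration, not the actual entrypoint code):
#!/bin/bash
set -eou pipefail
# add-user-keycloak.sh exits non-zero when the user already exists in
# keycloak-add-user.json; under `set -e` that single failure aborts the
# whole script, so the server below never starts.
/opt/jboss/keycloak/bin/add-user-keycloak.sh -u "$KEYCLOAK_USER" -p "$KEYCLOAK_PASSWORD"
exec /opt/jboss/keycloak/bin/standalone.sh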
The reason commenting out the KEYCLOAK_USER works is it forces a recreation of the container. The same can be accomplished with:
docker rm -f keycloak
docker compose up keycloak
I had the same issue. After commenting out the KEYCLOAK_USER env variables in docker-compose and updating the stack, the container started again.
docker_compose:
  project_name: appauth
  restarted: true
  pull: yes
  definition:
    version: '2'
    services:
      keycloak:
        image: jboss/keycloak:8.0.1
        container_name: keycloak
        restart: always
        environment:
          - DB_VENDOR=POSTGRES
          - DB_ADDR=postgres
          - DB_PORT=5432
          - DB_SCHEMA=public
          - DB_DATABASE=keycloak
          - DB_USER=keycloak
          - DB_PASSWORD=keycloak
          #- KEYCLOAK_USER=admin
          #- KEYCLOAK_PASSWORD=admin
        networks:
          - internal
    networks:
      internal:
        external:
          name: internal
According to my findings, the best way to set this default user is NOT by adding it via environment variables, but via the following command:
docker exec <CONTAINER> /opt/jboss/keycloak/bin/add-user-keycloak.sh -u <USERNAME> -p <PASSWORD>
As per the official documentation.
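Note that add-user-keycloak.sh only writes the user into keycloak-add-user.json; the server picks it up on the next boot, so (assuming the container is named keycloak, as in the compose files above) you would follow it with a restart:
docker exec keycloak /opt/jboss/keycloak/bin/add-user-keycloak.sh -u admin -p changeme
docker restart keycloak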
I use Keycloak 12, where I still see this problem when the startup is interrupted. I found that removing the file keycloak-add-user.json and restarting the container works.
The idea is to integrate this logic into container startup, so I developed a simple custom entrypoint script.
#!/bin/bash
set -e

echo "executing the custom entry point script"

FILE=/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json
if [ -f "$FILE" ]; then
    echo "keycloak-add-user.json exists, hence deleting it"
    rm "$FILE"
fi

echo "executing the entry point script from original image"
source "/opt/jboss/tools/docker-entrypoint.sh"
I then rebuilt the Keycloak image during the initial deployment, with the entrypoint in the Dockerfile adapted accordingly.
ARG DEFAULT_IMAGE_BASEURL_APPS
FROM "${DEFAULT_IMAGE_BASEURL_APPS}/jboss/keycloak:12.0.1"
COPY custom-entrypoint.sh /opt/jboss/tools/custom-entrypoint.sh
ENTRYPOINT [ "/opt/jboss/tools/custom-entrypoint.sh" ]
As our deployment is on-premise, access to the development team is not that easy. All our first-line support could do was restart the server where we deployed, hence the idea for this workaround.
The way I got past this was to replace set -eou pipefail with # set -eou pipefail within the container filesystems.
Logged in as root on my Docker host, I edited each of the files returned by this search:
find /var/lib/docker/overlay2 | grep /opt/jboss/tools/docker-entrypoint.sh
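If there are several matches, the edit can be scripted; a sketch, assuming GNU sed on the Docker host (it patches files under Docker's own storage, so proceed with care):
for f in $(find /var/lib/docker/overlay2 -path '*/opt/jboss/tools/docker-entrypoint.sh'); do
    sed -i 's/^set -eou pipefail/# set -eou pipefail/' "$f"
done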
Thomas's solution is good, but stopping and restarting all containers is impractical for me because my docker-compose file has 7 services.
I resolved the issue in two steps.
First I commented out these two lines like the other fellows did:
#- KEYCLOAK_USER=admin
#- KEYCLOAK_PASSWORD=admin
Then, in a new terminal, I ran this command and it worked:
docker-compose up keycloak
(keycloak is the service name)
For other users with this problem where none of the previous answers have helped: check your connection to the database. This error usually appears if Keycloak cannot connect to the database.
Tested with Keycloak 8 in Docker.
I have tried the solution by Thomas, but it sometimes works and sometimes does not.
The issue is that Keycloak does not find the required db on boot, so it gets interrupted, as Zmey mentions. Have you tried adding depends_on: - postgres to the second ansible job?
Having the same issue but with docker-compose, I first started the postgres container on its own to create the necessary dbs (manual step) with docker-compose up postgres, and then I booted the entire setup with docker-compose up.
This was happening to me when I shut down the Keycloak containers in Portainer and tried to get them up and running again.
I can prevent the error by also removing the container after I've shut it down (both in Portainer) and then running docker-compose up. Make sure not to remove any volumes attached to your containers, or you may lose data.
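On the command line, the equivalent recovery would look roughly like this (assuming the container is named keycloak):
docker stop keycloak
docker rm keycloak        # removes only the container; named volumes survive
docker-compose up -d keycloak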
In case you want to add the user before the server starts, or want it to look like a classic migration, build a custom image with the admin parameters passed in:
FROM quay.io/keycloak/keycloak:latest
ARG ADMIN_USERNAME
ARG ADMIN_PASSWORD
RUN /opt/jboss/keycloak/bin/add-user-keycloak.sh -u $ADMIN_USERNAME -p $ADMIN_PASSWORD
docker-compose:
auth_service:
  build:
    context: .
    dockerfile: Dockerfile
    args:
      ADMIN_USERNAME: ${KEYCLOAK_USERNAME}
      ADMIN_PASSWORD: ${KEYCLOAK_PASSWORD}
(do not add KEYCLOAK_USERNAME/KEYCLOAK_PASSWORD to the environment section)
I was facing this issue with Keycloak jboss/keycloak:11.0.3 running in Docker, error:
User with username 'admin' already added to '/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json'
Additional info: it was running with PostgreSQL v13.2, also in Docker. I had created schemas for other resources but not for Keycloak, so in my case the solution was to run the create schema command in postgres:
CREATE SCHEMA IF NOT EXISTS keycloak AUTHORIZATION postgres;
NOTE: Hope this helps; none of the other solutions shared in this post solved my issue.
You can also stop the containers and simply remove the associated volumes.
If you don't know which volume is associated with your keycloak container, run:
docker-compose down
for vol in $(docker volume ls --format '{{.Name}}'); do
    # note: this iterates over ALL volumes on the host, not just keycloak's
    docker volume rm $vol
done
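If the volumes are declared in your compose file, docker-compose down -v is a more targeted alternative: it removes only the named volumes declared in that file (plus anonymous volumes attached to its containers), rather than every volume on the host:
docker-compose down -v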

Trying to connect: "neither an image nor a build context specified."

Running docker-compose commands like build and up works.
However, when I try to connect Docker with VS Code, I get this error:
The Compose file is invalid because:
Service your-service-name-here has neither an image nor a build context specified. At least one must be provided.
This is the compose file:
version: '3'
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
  web:
    build: .
    command: bin/rails server --port 3000 --binding 0.0.0.0
    ports:
      - "3000:3000"
    links:
      - db
    volumes:
      - .:/myapp
Look at your devcontainer.json in the .devcontainer folder. Mine had an autogenerated docker-compose.yml from a previous experiment, and it was a partially filled template which could not work, hence the error message.
I found this by looking carefully at the command VS Code was trying to execute (the -f argument).
Cleaning up the .json config file solved the issue.
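For reference, a Compose-based devcontainer.json only needs a few entries; a minimal sketch (the service and folder names here are assumptions for this Rails project):
{
  "name": "myapp",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "web",
  "workspaceFolder": "/myapp"
}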
Most probably you should specify the build context explicitly:
build:
  context: . # if you stay at the project root

Docker Deploy stack extra hosts ignored

docker stack deploy isn't respecting the extra_hosts parameter in my compose file. When I do a simple docker-compose up the entry is created in /etc/hosts, however when I do docker stack deploy --compose-file docker-compose.yml myapp it ignores extra_hosts. Any insights?
Below is the docker-compose.yml:
version: '3'
services:
  web:
    image: user-service
    deploy:
      labels:
        - the label
    build:
      context: ./
    environment:
      DATABASE_URL: jdbc:postgresql://dbhost:5432/postgres
    ports:
      - 9002:9002
    extra_hosts:
      - "dbhost: ${DB_HOST}"
    networks:
      - wellness_swarm
    env_file:
      - .env
networks:
  wellness_swarm:
    external:
      name: wellness_swarm
docker-compose config also displays the compose file properly.
This may not be a direct answer to the question, as it doesn't use env variables, but what I found was that the extra_hosts block in the compose file was ignored in swarm mode when entered in the format above.
I.e. this works for me and puts entries in /etc/hosts in the container:
extra_hosts:
  retisdev: 10.48.161.44
  retistesting: 10.48.161.44
whereas when entered in the other format it gets ignored when deploying as a service:
extra_hosts:
  - "retisdev=10.48.161.44"
  - "retistesting=10.48.161.44"
I think it's an ordering issue. The ${} variable you've got in the compose file is substituted during YAML processing, before the service definition is created. Then stack deploy processes the .env file into envvars for the running container, but the YAML variable is needed first...
To fix that, you should use the docker-compose config command first, to process the YAML, and then use the output of that to send to the stack deploy.
docker-compose config will show you the output you're likely wanting.
Then do a pipe to get a one-liner.
docker-compose config | docker stack deploy -c - myapp
Note: Ideally you wouldn't use the extra_hosts, but rather put the envvar directly in the connection string. Your way seems like unnecessary complexity and isn't the usual way I see a connection string created.
e.g.
version: '3'
services:
  web:
    image: user-service
    deploy:
      labels:
        - the label
    build:
      context: ./
    environment:
      DATABASE_URL: jdbc:postgresql://${DB_HOST}:5432/postgres
    ports:
      - 9002:9002
    networks:
      - wellness_swarm
    env_file:
      - .env
networks:
  wellness_swarm:
    external:
      name: wellness_swarm
As I see from https://github.com/moby/moby/issues/29133, it seems to be by design: the compose command takes the environment variables in the .env file into consideration, however the deploy command ignores them :( why is that so, pretty lame reasons!
