Docker Compose volume doesn't save the state of the Keycloak container

docker-compose.yml
services:
idprovider-app:
container_name: idprovider-app
build:
dockerfile: Dockerfile
context: .
environment:
KEYCLOAK_USER: admin
KEYCLOAK_PASSWORD: admin
volumes:
- keycloak-data-volume:/var/lib/keycloak/data
ports:
- "8090:8090"
- "8443:8443"
volumes:
keycloak-data-volume:
external: true
Dockerfile
FROM jboss/keycloak:7.0.1
EXPOSE 8080
EXPOSE 8443
docker inspect "container"
"Mounts": [
{
"Type": "volume",
"Name": "keycloak-data-volume",
"Source": "/mnt/sda1/var/lib/docker/volumes/keycloak-data-volume/_data",
"Destination": "/var/lib/keycloak/data",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
}
],
docker volume inspect keycloak-data-volume
[
{
"CreatedAt": "2019-12-10T19:31:55Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/mnt/sda1/var/lib/docker/volumes/keycloak-data-volume/_data",
"Name": "keycloak-data-volume",
"Options": {},
"Scope": "local"
}
]
There are no errors, but it doesn't save state. I have no idea what's wrong. I'm running it on Windows 10.

If you're using the default database location, you can try this option with docker-compose:
keycloak:
image: quay.io/keycloak/keycloak:14.0.0
container_name: keycloak
environment:
KEYCLOAK_USER: admin
KEYCLOAK_PASSWORD: admin
ports:
- "8082:8080"
restart: always
volumes:
- .local/keycloak/:/opt/jboss/keycloak/standalone/data/
I found a similar answer using plain docker: https://stackoverflow.com/a/60554189/6916890
docker run --volume /root/keycloak/data/:/opt/jboss/keycloak/standalone/data/
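The linked answer shows only the --volume flag; here is a minimal sketch of a full command under the same assumptions (the image tag and host path are illustrative, not from the original answer):
# Bind-mount a host directory over the H2 data dir (pre-17 image layout)
docker run -p 8080:8080 \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin \
  --volume /root/keycloak/data/:/opt/jboss/keycloak/standalone/data/ \
  quay.io/keycloak/keycloak:14.0.0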

If you're using the Docker setup described in https://www.keycloak.org/getting-started/getting-started-docker and are looking for a way to persist data even if the container is killed, you can use Docker volumes and mount the /opt/keycloak/data/ folder from the container to a directory on your local machine.
The only change you need to make to the docker command from the getting-started doc is to add a volume mount option:
-v /<path-in-your-local-machine>/keycloak-data/:/opt/keycloak/data/
So the final docker run command, with an example local directory, would look like:
docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin \
-v /Users/amit/workspace/keycloak/keycloak-data/:/opt/keycloak/data/ \
quay.io/keycloak/keycloak:19.0.3 start-dev
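A quick way to confirm the data actually persists (a hedged check reusing the example path above): create a realm in the admin console, then recreate the container and verify the realm survives:
# the H2 files should now exist on the host
ls /Users/amit/workspace/keycloak/keycloak-data/h2
# remove the container and run the same docker run command again;
# the realm created earlier should still be there
docker ps            # note the container id
docker rm -f <container-id>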

Which database are you using with it? I think you need to bind the database volume as well to save the state.
For example, for PostgreSQL:
services:
postgres:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
POSTGRES_DB: keycloak
POSTGRES_USER: keycloak
POSTGRES_PASSWORD: password
Or for MySQL:
services:
mysql:
image: mysql:5.7
volumes:
- mysql_data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: keycloak
MYSQL_USER: keycloak
MYSQL_PASSWORD: password
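Note that both snippets reference named volumes (postgres_data, mysql_data) that Compose also expects to be declared at the top level; a minimal sketch of the missing declaration:
volumes:
  postgres_data:
  mysql_data: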

You must specify the database connection details in the environment variables.
If you use a separate service for the PostgreSQL instance, you must set the DB_ADDR environment variable in your Keycloak service.
services:
idprovider-app:
container_name: idprovider-app
build:
dockerfile: Dockerfile
context: .
environment:
DB_VENDOR: POSTGRES
# Specify hostname of the database (eg: hostname or hostname:port)
DB_ADDR: hostname:5432
DB_DATABASE: keycloak
DB_USER: keycloak
DB_SCHEMA: public
DB_PASSWORD: password
KEYCLOAK_USER: admin
KEYCLOAK_PASSWORD: admin
volumes:
- keycloak-data-volume:/var/lib/keycloak/data
ports:
- "8090:8090"
- "8443:8443"
volumes:
keycloak-data-volume:
external: true
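For completeness, a hedged sketch of the companion PostgreSQL service this configuration expects (service name, credentials, and volume name are illustrative assumptions; with both services in the same Compose file, DB_ADDR can simply be the service name, e.g. DB_ADDR: postgres):
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
    volumes:
      # persist the database itself, otherwise Keycloak state is still lost
      - postgres_data:/var/lib/postgresql/data
volumes:
  postgres_data: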

My two cents: this worked for me with the persistent volume pointing to /opt/keycloak/data/h2, with Keycloak Docker version 19.0.1:
-v /<path-in-your-local-machine>/keycloak-data/:/opt/keycloak/data/h2

Update for version >= 17.0
To complement lazylead's answer: for Keycloak version >= 17.0.0, you need to use /opt/keycloak/data/ instead of /opt/jboss/keycloak/standalone/data/.
https://stackoverflow.com/a/60554189/5424025
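A hedged Compose equivalent for the newer Quarkus-based images (the tag and host path are illustrative assumptions):
services:
  keycloak:
    image: quay.io/keycloak/keycloak:19.0.3
    command: start-dev
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
    ports:
      - "8080:8080"
    volumes:
      # for version >= 17.0.0, data lives under /opt/keycloak/data
      - ./keycloak-data:/opt/keycloak/data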

Related

How to deal with more than one `network_mode` in a VSCode Remote dev container?

I would like to have an application, a database, and a Redis service running in a dev container, where I'd be able to access the database and Redis from inside the container, from the application, and from Windows. This is what currently works just as I wanted for my application and database:
.devcontainer.json:
{
"name": "Node.js, TypeScript, PostgreSQL & Redis",
"dockerComposeFile": "docker-compose.yml",
"service": "akira",
"workspaceFolder": "/workspace",
"settings": {
"typescript.tsdk": "node_modules/typescript/lib",
"sqltools.connections": [
{
"name": "Container database",
"driver": "PostgreSQL",
"previewLimit": 50,
"server": "database",
"port": 5432,
"database": "akira",
"username": "ailuropoda",
"password": "melanoleuca"
}
],
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.fixAll": true
}
},
"extensions": [
"aaron-bond.better-comments",
"dbaeumer.vscode-eslint",
"esbenp.prettier-vscode",
"mtxr.sqltools",
"mtxr.sqltools-driver-pg",
"redhat.vscode-yaml"
],
"forwardPorts": [5432],
"postCreateCommand": "npm install",
"remoteUser": "node"
}
docker-compose.yml:
version: "3.8"
services:
akira:
build:
context: .
dockerfile: Dockerfile
command: sleep infinity
env_file: .env
volumes:
- ..:/workspace:cached
database:
image: postgres:latest
restart: unless-stopped
environment:
POSTGRES_USER: ailuropoda
POSTGRES_DB: akira
POSTGRES_PASSWORD: melanoleuca
ports:
- 5432:5432
volumes:
- pgdata:/var/lib/postgresql/data
redis:
image: redis:alpine
tty: true
ports:
- 6379:6379
volumes:
pgdata:
Dockerfile:
ARG VARIANT="16-bullseye"
FROM mcr.microsoft.com/vscode/devcontainers/typescript-node:0-${VARIANT}
As you can see, I already tried to achieve this using networks, but without success. My question is: how can I add Redis to my services while still being able to connect to Redis and the database from inside the application and from Windows?
Switch all non-dev containers to network_mode: service:akira
version: '3.8'
services:
akira:
build:
context: .
dockerfile: Dockerfile
volumes:
- ../..:/workspace:cached
command: sleep infinity
postgresql:
image: postgres:14.1
network_mode: service:akira
restart: unless-stopped
volumes:
- ../docker/volumes/postgresql:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: pornapp
redis:
image: redis
network_mode: service:akira
restart: unless-stopped
volumes:
- ../docker/volumes/redis:/data
It seems this was the original configuration:
https://github.com/microsoft/vscode-dev-containers/pull/523
But it was reverted because, if you rebuild the dev container while other services are running, the port forwarding will break:
https://github.com/microsoft/vscode-dev-containers/issues/537
If you're using Docker on WSL, I found that I often can't connect when the process is listening on ::1, but explicitly binding the port to 127.0.0.1 makes the service accessible from Windows.
So something like
ports:
- 127.0.0.1:5432:5432
might work
Delete all of the network_mode: settings. Compose will then attach every service to the project's default network, and you'll be able to communicate between containers using their Compose service names as host names, as described in Networking in Compose in the Docker documentation.
version: "3.8"
services:
akira:
build: .
env_file: .env
environment:
PGHOST: database
database:
image: postgres:latest
...
In SO questions I frequently see attempts to use network_mode: to make other things appear as localhost. That host name is incredibly context-sensitive: if you asked my laptop, one of the Stack Overflow HTTP servers, your application container, or your database container who localhost is, they'd each independently say "well, I am, of course", but each referring to a different network context. network_mode: service:... sounds like you're trying to make the other container be localhost; in practice it's extremely unusual to use it.
You may need to change your application code to make settings like the database location configurable depending on where the code is running, and environment variables are an easy way to set this in Docker. For this particular example I've used the $PGHOST variable that the standard PostgreSQL client libraries use; in a TypeScript/Node context you may need to change your code to refer to process.env.SOME_HOSTNAME instead of 'localhost'.

How to link Keycloak docker image to MariaDB docker

I have a MariaDB Docker image for my application.
I've now pulled the Keycloak image. It uses the default H2 database, but I want to use my existing MariaDB image.
The documentation asks me to create a network etc., but I'm not sure how to do that in the cloud, so I'm looking for a configuration-based solution, i.e. changing the Keycloak image config to link to the MariaDB image. I am not using Docker Compose; I only pulled the image.
https://github.com/keycloak/keycloak-containers/blob/master/server/README.md#environment-variables
I'm not sure where the environment variables go: are they set inside the Keycloak image or on the host machine?
start command: docker run -p 7080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin jboss/keycloak
and I find this highly insecure. Is there a secure way?
Edit:
I opened the CLI from the Docker dashboard and typed
env
but I don't know how to add more env variables like
PROXY_ADDRESS_FORWARDING: 'true'
# PostgreSQL DB settings
DB_VENDOR: postgres
DB_ADDR: 172.17.0.1
DB_PORT: 5432
DB_DATABASE: keycloak
DB_SCHEMA: public
DB_USER: keycloak
DB_PWD: keycloak
(How do I change PROXY_ADDRESS_FORWARDING from false to true?)
I was able to do it like this. You need to define a network and add the database and Keycloak services to that network.
To add the env variables, define them under the environment block.
version: '3.7'
services:
demo_db:
container_name: demo-maria-db
image: mariadb:10.5.8-focal
restart: always
ports:
- 3306:3306
volumes:
- /apps/demo/db:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: mydb
MYSQL_USER: user
MYSQL_PASSWORD: password
networks:
demo_mesh:
aliases:
- demo-db
demo_keycloak:
container_name: demo-keycloak
image: jboss/keycloak:10.0.1
restart: always
ports:
- 8180:8080
environment:
PROXY_ADDRESS_FORWARDING: "true"
DB_VENDOR: mariadb
DB_ADDR: demo-db
DB_DATABASE: keycloak
DB_USER: user
DB_PASSWORD: password
KEYCLOAK_USER: admin
KEYCLOAK_PASSWORD: admin
depends_on:
- demo_db
networks:
- demo_mesh
networks:
demo_mesh: {}
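Since the question mentions not using Docker Compose, the same wiring can also be sketched with plain docker and a user-defined network (the network name, ports, and credentials below are illustrative assumptions; the env vars come from the keycloak-containers README linked above):
# create a shared network so the containers can resolve each other by name
docker network create keycloak-net

# MariaDB, reachable on that network as "demo-db"
docker run -d --name demo-db --network keycloak-net \
  -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=keycloak \
  -e MYSQL_USER=user -e MYSQL_PASSWORD=password \
  mariadb:10.5.8-focal

# Keycloak, pointed at the database via its container name
docker run -d --name demo-keycloak --network keycloak-net -p 8180:8080 \
  -e DB_VENDOR=mariadb -e DB_ADDR=demo-db -e DB_DATABASE=keycloak \
  -e DB_USER=user -e DB_PASSWORD=password \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin \
  jboss/keycloak:10.0.1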

Where is the pgpass file in pgadmin4 docker container when this file is mounted as a volume

I'm using the following image https://hub.docker.com/r/dpage/pgadmin4/ to set up pgAdmin4 on Ubuntu 18-04.
I have mounted a volume containing a pgpass file (whose permissions were also set for the pgadmin user inside the container), as you can see in my Compose file:
version: '3.8'
services:
pgadmin4:
image: dpage/pgadmin4
container_name: pgadmin4
environment:
- PGADMIN_DEFAULT_EMAIL=me@localhost
- PGADMIN_DEFAULT_PASSWORD=******************
- PGADMIN_LISTEN_PORT=5050
- PGADMIN_SERVER_JSON_FILE=servers.json
volumes:
- ./config/servers.json:/pgadmin4/servers.json # <-- this file is picked up correctly
- ./config/pgpass:/pgpass # <-- this one, on the other hand, is not found
ports:
- "5000:5000"
restart: unless-stopped
network_mode: host
But it seems it's not recognized by the pgAdmin web page when I right-click on a server and check its Advanced properties.
And if I manually specify /pgpass in the Password File field (shown with only a slash in the screenshot), it reports an error.
But if I log into the container, I can actually list that file:
/ $ ls -larth /pgpass
-rw------- 1 pgadmin pgadmin 574 Mar 10 22:37 /pgpass
What did I do wrong?
How can I get the pgpass file to be recognized by the application?
I got it working with the following insight.
In servers.json when you specify:
"PassFile": "/pgpass"
This means that / at the start of the path is resolved relative to the user's storage dir, i.e.
pattern:
/var/lib/pgadmin/storage/<USERNAME>_<DOMAIN>/
example:
/var/lib/pgadmin/storage/postgres_acme.com/
Here's a working example that copies everything into the right spot and sets the perms correctly.
pgadmin:
image: dpage/pgadmin4
restart: unless-stopped
environment:
PGADMIN_DEFAULT_EMAIL: postgres@acme.com
PGADMIN_DEFAULT_PASSWORD: postgres
PGADMIN_LISTEN_ADDRESS: '0.0.0.0'
PGADMIN_LISTEN_PORT: 5050
tty: true
ports:
- 5050:5050
volumes:
- ~/data/pgadmin_data:/var/lib/pgadmin
- ./local-cloud/servers.json:/pgadmin4/servers.json # preconfigured servers/connections
- ./local-cloud/pgpass:/pgadmin4/pgpass # passwords for the connections in this file
entrypoint: >
/bin/sh -c "
mkdir -m 700 /var/lib/pgadmin/storage/postgres_acme.com;
chown -R pgadmin:pgadmin /var/lib/pgadmin/storage/postgres_acme.com;
cp -prv /pgadmin4/pgpass /var/lib/pgadmin/storage/postgres_acme.com/;
chmod 600 /var/lib/pgadmin/storage/postgres_acme.com/pgpass;
/entrypoint.sh
"
The following config worked for me:
pgpass
servers.json
docker-compose.yaml
dockerfile_for_pgadmin
pgpass
docker_postgres_db:5432:postgres:postgres:postgres
servers.json
{
"Servers": {
"1": {
"Name": "docker_postgres",
"Group": "docker_postgres_group",
"Host": "docker_postgres_db",
"Port": 5432,
"MaintenanceDB": "postgres",
"Username": "postgres",
"PassFile": "/pgpass",
"SSLMode": "prefer"
}
}
}
docker-compose.yaml
version: "3.9"
services:
docker_postgres_db:
image: postgres
volumes:
- ./postgres_db_data:/var/lib/postgresql/data # mkdir postgres_db_data before docker compose up
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
ports:
- "15432:5432"
pgadmin:
build:
context: .
dockerfile: ./dockerfile_for_pgadmin
environment:
PGADMIN_DEFAULT_EMAIL: pgadmin@pgadmin.com
PGADMIN_DEFAULT_PASSWORD: pgadmin
ports:
- "5050:80"
volumes:
- ./servers.json:/pgadmin4/servers.json # preconfigured servers/connections
dockerfile_for_pgadmin
FROM dpage/pgadmin4
USER pgadmin
RUN mkdir -p /var/lib/pgadmin/storage/pgadmin_pgadmin.com
COPY ./pgpass /var/lib/pgadmin/storage/pgadmin_pgadmin.com/
USER root
RUN chown pgadmin:pgadmin /var/lib/pgadmin/storage/pgadmin_pgadmin.com/pgpass
RUN chmod 0600 /var/lib/pgadmin/storage/pgadmin_pgadmin.com/pgpass
USER pgadmin
ENTRYPOINT ["/entrypoint.sh"]
On pgAdmin 6.2, PassFile points to an absolute path inside the container instead of a path under STORAGE_DIR (/var/lib/pgadmin).
Before the entrypoint runs, you just need to set the owner and permissions of the pgpass file.
docker-compose.yml
pgadmin:
image: dpage/pgadmin4:6.2
entrypoint: >
/bin/sh -c "
cp -f /pgadmin4/pgpass /var/lib/pgadmin/;
chmod 600 /var/lib/pgadmin/pgpass;
chown pgadmin:pgadmin /var/lib/pgadmin/pgpass;
/entrypoint.sh
"
environment:
PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
PGADMIN_CONFIG_SERVER_MODE: "False"
PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED: "False"
volumes:
- ./config/servers.json:/pgadmin4/servers.json
- ./config/pgpass:/pgadmin4/pgpass
ports:
- "${PGADMIN_PORT:-5050}:80"
servers.json
{
"Servers": {
"1": {
"Name": "pgadmin4@pgadmin.org",
"Group": "Servers",
"Host": "postgres",
"Port": 5432,
"MaintenanceDB": "postgres",
"Username": "postgres",
"SSLMode": "prefer",
"PassFile": "/var/lib/pgadmin/pgpass"
}
}
}
pgpass
postgres:5432:postgres:postgres:Welcome01
Update:
Updated the entrypoint in docker-compose.yml and the PassFile in servers.json for a cross-platform working solution.
Update 2:
I created a container image (dcagatay/pwless-pgadmin4) for passwordless pgadmin4.
The problem here seems to be that '/' in the servers.json file does not mean '/' in the filesystem, but something relative to the STORAGE_DIR set in the config. In fact, a separate storage directory is created for each user, so with your user me@localhost you will have to mount ./config/pgpass to /var/lib/pgadmin/storage/me_localhost/pgpass, but still refer to it as /pgpass in your servers.json.
I'm running the latest version of pgadmin4 as of this post (6.11). It took me forever to find out how to set a pgpass file location without storing it in the user's uploads dir (which is insecure, IMO).
Unfortunately, it does not seem to work with an absolute path, e.g. /var/lib/pgadmin/pgpass.
However, what did work was this workaround I found here: https://github.com/rowanruseler/helm-charts/issues/72#issuecomment-1002300143
Basically if you use ../../pgpass, you can traverse the filesystem instead of the default behaviour of looking inside the user's uploads folder.
Example servers.json:
{
"Servers": {
"1": {
"Name": "my-postgres-instance",
"Group": "Servers",
"Host": "postgres",
"Port": 5432,
"MaintenanceDB": "postgres",
"Username": "postgres",
"SSLMode": "prefer",
"PassFile": "../../pgpass"
}
}
}
Also, setting the file permission as 0600 is a critical step - the file can not be world-readable, see https://stackoverflow.com/a/28152568/15198761 for more info.
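On the host side, that typically means tightening the file before mounting it (the path is an illustrative assumption matching the earlier Compose examples):
# pgpass must be readable only by its owner, or it will be ignored
chmod 600 ./config/pgpass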
In a K8s environment, using the official pgadmin image, I use a ConfigMap for the servers.json along with the following command:
command:
- sh
- -c
- |
set -e
cp -f /pgadmin4/pgpass /var/lib/pgadmin/
chown 5050:5050 /var/lib/pgadmin/pgpass
chmod 0600 /var/lib/pgadmin/pgpass
/entrypoint.sh
Using a combination of the above, I was finally able to connect to my postgres instance without needing to enter a password or keep the pgpass file in the user's uploads dir.

Adding a label or name to volume in docker-compose.yml file

I need to find a volume easily by label or name, not by a Docker-assigned id, like:
docker volume ls --filter label=key=value
But if I add a 'container_name' or 'labels' entry to docker-compose.yaml, I don't see any label or name assigned to the volume when I inspect it. Here is the output:
>>> docker volume inspect <volume_id>
[
{
"CreatedAt": "2020-10-28T11:41:51+01:00",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/4dce13df34f4630b34fbf1f853f7b59dbee2e3150a5122fa38d02024c155ec7d/_data",
"Name": "4dce13df34f4630b34fbf1f853f7b59dbee2e3150a5122fa38d02024c155ec7d",
"Options": null,
"Scope": "local"
}
]
I believe it should be possible to filter volumes by label and name.
Here is part of the docker-compose.yml config file for the mongo service:
version: '3.4'
services:
mongodb:
container_name: some_name
image: mongo
labels:
com.docker.compose.project: app-name
restart: always
ports:
- 27017:27017
volumes:
- ./mongo:/data/db
I'm not exactly sure what you're trying to achieve here, but I hope something in my response will be helpful.
You can define a named volume within your docker-compose.yml:
version: '3.4'
services:
mongodb:
container_name: some_name
image: mongo
labels:
com.docker.compose.project: app-name
restart: always
ports:
- 27017:27017
volumes:
- mongo_db:/data/db
volumes:
mongo_db:
You could then use the docker volume inspect command to see some details about this volume.
docker volume inspect mongo_db
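Compose can also attach labels to the named volume itself, which is what docker volume ls --filter label=... matches on; a minimal sketch, with an illustrative label key and value:
volumes:
  mongo_db:
    labels:
      com.example.project: app-name
You could then locate it with docker volume ls --filter label=com.example.project=app-name. Keep in mind that Compose prefixes the real volume name with the project name (for example myproject_mongo_db), which is one more reason filtering by label is convenient.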

Docker container in CircleCI not showing files even though volume appears mounted

This is the docker-compose command and the results:
$ docker-compose -f docker-compose-base.yml -f docker-compose-test.yml run api sh -c 'pwd && ls'
Starting test-db ... done
/usr/src/api
node_modules
I then inspected the most recent container id:
$ docker inspect --format='{{json .Mounts}}' e150beeef85c
[
{
"Type": "bind",
"Source": "/home/circleci/project",
"Destination": "/usr/src/api",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "4f86174ca322af6d15489da91f745861815a02f5b4e9e879ef5375663b9defff",
"Source": "/var/lib/docker/volumes/4f86174ca322af6d15489da91f745861815a02f5b4e9e879ef5375663b9defff/_data",
"Destination": "/usr/src/api/node_modules",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
Which means these files are not appearing:
$ ls /home/circleci/project
Dockerfile docker-compose-base.yml docker-compose-prod.yml migrations nodemon-debug.json package-lock.json src test-db.env tsconfig.build.json tslint.json
README.md docker-compose-dev.yml docker-compose-test.yml nest-cli.json nodemon.json package.json test test.env tsconfig.json
Why could this be?
Update: I should mention that all of this works fine in my local dev environment; it's failing on CircleCI.
When I inspect the differences between the containers, the only major one I see is that my dev environment runs Docker 19 with the overlay2 storage driver, while the failing environment runs Docker 17 with the aufs storage driver.
Update 2: Actual docker-compose files:
# docker-compose-base.yml
version: '3'
services:
api:
build: .
restart: on-failure
container_name: api
# docker-compose-test.yml
version: '3'
networks:
default:
external:
name: lb_lbnet
services:
test-db:
image: postgres:11
container_name: test-db
env_file:
- ./test-db.env # uses POSTGRES_DB and POSTGRES_PASSWORD to create a fresh db with a password when first run
api:
restart: 'no'
env_file:
- test.env
volumes:
- ./:/usr/src/api
- /usr/src/api/node_modules
depends_on:
- test-db
ports:
- 9229:9229
- 3000:3000
command: npm run start:debug
And finally Dockerfile:
FROM node:11
WORKDIR /usr/src/api
COPY package*.json ./
RUN npm install
COPY . .
# not using an execution list here so we get shell variable substitution
CMD npm run start:$NODE_ENV
As @allisongranemann pointed out, CircleCI states:
It is not possible to mount a volume from your job space into a container in Remote Docker (and vice versa).
The original reason I wanted to mount the project directory into Docker was so that, in the development environment, I could change code quickly and run tests without rebuilding the container.
Given this limitation, the solution I went with was to remove the volume mounts from docker-compose-test.yml, as follows:
version: '3'
services:
test-db:
image: postgres:11
container_name: test-db
env_file:
- ./test-db.env # uses POSTGRES_DB and POSTGRES_PASSWORD to create a fresh db with a password when first run
api:
restart: 'no'
env_file:
- test.env
depends_on:
- test-db
ports:
- 9229:9229
- 3000:3000
command: npm run start:debug
I also added docker-compose-test-dev.yml, which adds the volumes for the dev environment:
version: '3'
services:
api:
volumes:
- ./:/usr/src/api
Finally, to run tests on the dev environment, I run:
docker-compose -f docker-compose-base.yml -f docker-compose-test.yml -f docker-compose-test-dev.yml run api npm run test:e2e
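As a small convenience, the -f file list can also be supplied through the COMPOSE_FILE environment variable (entries separated by : on Linux/macOS), which docker-compose reads automatically:
# set once per shell session, then drop the -f flags
export COMPOSE_FILE=docker-compose-base.yml:docker-compose-test.yml:docker-compose-test-dev.yml
docker-compose run api npm run test:e2e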
