This is the docker-compose command and the results:
$ docker-compose -f docker-compose-base.yml -f docker-compose-test.yml run api sh -c 'pwd && ls'
Starting test-db ... done
/usr/src/api
node_modules
I then inspected the most recent container id:
$ docker inspect --format='{{json .Mounts}}' e150beeef85c
[
  {
    "Type": "bind",
    "Source": "/home/circleci/project",
    "Destination": "/usr/src/api",
    "Mode": "rw",
    "RW": true,
    "Propagation": "rprivate"
  },
  {
    "Type": "volume",
    "Name": "4f86174ca322af6d15489da91f745861815a02f5b4e9e879ef5375663b9defff",
    "Source": "/var/lib/docker/volumes/4f86174ca322af6d15489da91f745861815a02f5b4e9e879ef5375663b9defff/_data",
    "Destination": "/usr/src/api/node_modules",
    "Driver": "local",
    "Mode": "",
    "RW": true,
    "Propagation": ""
  }
]
Which means these files from the host are not appearing in the container:
$ ls /home/circleci/project
Dockerfile docker-compose-base.yml docker-compose-prod.yml migrations nodemon-debug.json package-lock.json src test-db.env tsconfig.build.json tslint.json
README.md docker-compose-dev.yml docker-compose-test.yml nest-cli.json nodemon.json package.json test test.env tsconfig.json
Why could this be?
Update: I should mention that all this works fine on my local dev environment. The above is failing on CircleCI.
When I inspect the differences between the two environments, the only major one I see is that my dev environment runs Docker 19 using the overlay2 storage driver, while the failing environment runs Docker 17 using the aufs storage driver.
Update 2: Actual docker-compose files:
# docker-compose-base.yml
version: '3'
services:
  api:
    build: .
    restart: on-failure
    container_name: api
# docker-compose-test.yml
version: '3'
networks:
  default:
    external:
      name: lb_lbnet
services:
  test-db:
    image: postgres:11
    container_name: test-db
    env_file:
      - ./test-db.env # uses POSTGRES_DB and POSTGRES_PASSWORD to create a fresh db with a password when first run
  api:
    restart: 'no'
    env_file:
      - test.env
    volumes:
      - ./:/usr/src/api
      - /usr/src/api/node_modules
    depends_on:
      - test-db
    ports:
      - 9229:9229
      - 3000:3000
    command: npm run start:debug
And finally the Dockerfile:
FROM node:11
WORKDIR /usr/src/api
COPY package*.json ./
RUN npm install
COPY . .
# not using an execution list here so we get shell variable substitution
CMD npm run start:$NODE_ENV
As @allisongranemann pointed out, CircleCI states:
It is not possible to mount a volume from your job space into a container in Remote Docker (and vice versa).
The original reason why I wanted to mount the project directory to docker was that in the development environment, I could change code quickly and run tests without rebuilding the container.
With this limitation, the solution I went with was to remove the volume mounts from docker-compose-test.yml, as follows:
version: '3'
services:
  test-db:
    image: postgres:11
    container_name: test-db
    env_file:
      - ./test-db.env # uses POSTGRES_DB and POSTGRES_PASSWORD to create a fresh db with a password when first run
  api:
    restart: 'no'
    env_file:
      - test.env
    depends_on:
      - test-db
    ports:
      - 9229:9229
      - 3000:3000
    command: npm run start:debug
And I also added docker-compose-test-dev.yml that adds the volumes for the dev environment:
version: '3'
services:
  api:
    volumes:
      - ./:/usr/src/api
Finally, to run tests on the dev environment, I run:
docker-compose -f docker-compose-base.yml -f docker-compose-test.yml -f docker-compose-test-dev.yml run api npm run test:e2e
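On CI, where the bind mount is not available, the tests simply run against the code that COPY . . baked into the image, so the dev override file is omitted:
docker-compose -f docker-compose-base.yml -f docker-compose-test.yml run api npm run test:e2e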
I have created a simple app connected to PostgreSQL and pgAdmin, as well as a web server, with the Docker images running in containers.
My question is how I can make it reload, like with nodemon on a local server, without having to delete and rebuild the container every time.
I have been trying different solutions and methods I have seen around, but I haven't been able to make it work.
I have already tried inserting the command: ["npm", "run", "start:dev"] in the docker-compose file as well...
My files are:
Dockerfile
FROM node:latest
WORKDIR /
COPY package*.json ./
COPY . .
COPY database.json .
COPY .env .
EXPOSE 3000
CMD [ "npm", "run", "watch" ]
docker-compose.yml
version: '3.7'
services:
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=tes
      - POSTGRES_DB=test
    ports:
      - 5432:5432
    logging:
      options:
        max-size: 10m
        max-file: "3"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=test@gmail.com
      - PGADMIN_DEFAULT_PASSWORD=pasword123test
    ports:
      - "5050:80"
  web:
    build: .
    # command: ["npm", "run", "start:dev"]
    links:
      - postgres
    image: prueba
    depends_on:
      - postgres
    ports:
      - '3000:3000'
    env_file:
      - .env
Nodemon.json file:
{
  "watch": ["dist"],
  "ext": ".ts,.js",
  "ignore": [],
  "exec": "ts-node ./dist/server.js"
}
Package.json file:
"scripts": {
  "start:dev": "nodemon",
  "build": "rimraf ./dist && tsc",
  "start": "npm run build && node dist/server.js",
  "watch": "tsc-watch --esModuleInterop src/server.ts --outDir ./dist --onSuccess \"node ./dist/server.js\"",
  "jasmine": "jasmine",
  "test": "npm run build && npm run jasmine",
  "db-test": "set ENV=test&& db-migrate -e test up && npm run test && db-migrate -e test reset",
  "lint": "eslint . --ext .ts",
  "prettier": "prettier --config .prettierrc src/**/*.ts --write",
  "prettierLint": "prettier --config .prettierrc src/**/*.ts --write && eslint . --ext .ts --fix"
},
Thanks
The COPY . . instruction only runs when the image is built, which happens only the first time you run docker compose up (or when you force a rebuild with --build). For the container to be aware of changes, the code on your host machine needs to be synchronized with the code inside the container even after the build is complete, which is what a bind-mounted volume does.
Below I've added the volume mount to the web container in your docker-compose file and uncommented the command that should support hot-reloading. I assumed the source code you want to change lives in a src directory; feel free to update this to match how your source code is organized.
version: '3.7'
services:
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=tes
      - POSTGRES_DB=test
    ports:
      - 5432:5432
    logging:
      options:
        max-size: 10m
        max-file: "3"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=test@gmail.com
      - PGADMIN_DEFAULT_PASSWORD=pasword123test
    ports:
      - "5050:80"
  web:
    build: .
    command: ["npm", "run", "start:dev"]
    links:
      - postgres
    image: prueba
    depends_on:
      - postgres
    ports:
      - '3000:3000'
    env_file:
      - .env
    volumes:
      # <host-path>:<container-path>
      - ./src:/src/
If that isn't clear, here's an article that might help:
https://www.freecodecamp.org/news/how-to-enable-live-reload-on-docker-based-applications/
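One caveat: a bind mount only syncs source files. If you change dependencies in package.json, you still need to rebuild the image, for example:
docker compose up --build web
The --build flag forces the image to be rebuilt before the container starts.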
I have two docker containers connected through a frontendbuild docker volume:
services:
  nginx:
    container_name: nginx
    build:
      context: .
      dockerfile: ./compose/production/nginx_ssltls/Dockerfile
    #restart: unless-stopped
    volumes:
      - ./compose/production/nginx_live:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
      - staticfiles_harcelement:/app/static
      - mediafiles_harcelement:/app/media
      - frontendbuild:/usr/share/nginx/html/build
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    networks:
      - network_app
  react:
    build:
      context: .
      dockerfile: ./compose/production/frontend/Dockerfile
    #restart: always
    volumes:
      - frontendbuild:/app/frontend/build
    networks:
      - network_app
Each time I re-run my build, the volume is not updated with the updated /app/frontend/build folder from the updated react container.
I have found how to update a volume from a folder on my host machine, but this time the build output is created in the Dockerfile, so the files I need to push to the volume are inside the container...
How can I automate this in code?
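For context: Docker populates a named volume from the image's content only when the volume is first created empty; rebuilding the image never refreshes an existing volume. A common workaround (a sketch, not from the original post) is to drop the volume before bringing the stack back up:
docker-compose down            # stop the stack so the volume is no longer in use
docker volume rm frontendbuild # remove the stale build volume
docker-compose up --build -d   # rebuild; the empty volume is repopulated from the new image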
Here is the result of docker volume inspect frontendbuild:
[
  {
    "CreatedAt": "2022-07-07T19:37:23+02:00",
    "Driver": "local",
    "Labels": {
      "com.docker.compose.project": "app-harcelement",
      "com.docker.compose.version": "1.25.0",
      "com.docker.compose.volume": "frontendbuild"
    },
    "Mountpoint": "/var/lib/docker/volumes/frontendbuild/_data",
    "Name": "frontendbuild",
    "Options": null,
    "Scope": "local"
  }
]
Thank you
I'm unable to mount a host directory (on a Raspberry Pi) to a Docker container api_service, even after a chmod -R 777 on the host.
I was able to mount it by running the api_service from the command line with docker run --mount type=bind,src=/data/yarmp-data,target=/data/yarmp-data docker_api_service_1; in that case docker inspect containerId showed the bind in its Mounts section, and the files were indeed visible inside the container. But I'd like to achieve the same with docker-compose.
I tried different syntaxes in the docker-compose.yaml file, never achieving it, each time removing all containers and images, then running docker-compose build and docker-compose up.
What am I missing? Is there a way to trace the mount options at startup of the container?
Should the target directory have been created in the target image before mounting it in docker-compose.yaml?
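One way to trace what was actually mounted (the same technique used earlier in this thread) is to query the container's Mounts section after it starts:
docker inspect --format '{{json .Mounts}}' <container-name-or-id>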
docker-compose.yaml
#Doc: https://github.com/compose-spec/compose-spec/blob/master/spec.md
version: '3.2'
services:
  api_service:
    build: ./api_service
    restart: always
    ports:
      - target: 8080
        published: 8080
    depends_on:
      - postgres_db
    links:
      - postgres_db:yarmp-db-host # database is postgres_db hostname into this api_service
    volumes:
      - type: bind
        #source: $HOST/data/yarmp-data # not working with this version
        source: /data/yarmp-data # host absolute path, not working
        #source: ./mount-test # not working either
        target: /data/yarmp-data
      #- /data/yarmp-data:/data/yarmp-data # not working either
  postgres_db:
    build: ./postgres_db
    restart: always
    ports:
      - target: 5432
        published: 5432
    env_file:
      - postgres_db/pg-db-database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/
postgres_db/Dockerfile
FROM postgres:latest
LABEL maintainer="me@mail.com"
RUN mkdir -p /docker-entrypoint-initdb.d
COPY yarmp-dump.sql /docker-entrypoint-initdb.d/
api_service/Dockerfile
FROM arm32v7/adoptopenjdk
LABEL maintainer="me@mail.com"
RUN apt-get update
RUN apt-get -y install git curl vim
CMD ["/bin/bash"]
#csv files data
RUN mkdir -p /data/yarmp-data #Should I create it or not??
RUN mkdir -p /main-app
WORKDIR /main-app
# JAVA APP DATA
ADD my-api-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar","/main-app/app.jar"]
It seems my entire docker-compose.yaml file was not correct.
As pointed out by @xdhmoore, there was an indentation issue, among others.
I figured it out by:
validating the docker-compose.yaml with docker-compose config
Tabs are NOT permitted by the YAML spec; USE ONLY SPACES FOR INDENTATION.
Note that vim's default configuration file /usr/share/vim/vim81/ftplugin/yaml.vim was right in replacing tabs with spaces...
The indentation of the long syntax had been done in my editor with tabs, where 2 spaces were required. Here is my final docker-compose.yaml:
docker-compose.yaml
version: '3.2'
services:
  api_service:
    build: ./api_service
    restart: always
    ports:
      - target: 8080
        published: 8080 # 2 spaces before 'published'
    depends_on:
      - postgres_db
    links:
      - postgres_db:yarmp-db-host
    volumes:
      - type: bind
        source: /data/yarmp-data # 2 spaces before 'source': same level as the previous '- type:' plus 2 more spaces
        target: /data/yarmp-data # 2 spaces before 'target'
  postgres_db:
    build: ./postgres_db
    restart: always
    ports:
      - target: 5432
        published: 5432 # 2 spaces before 'published'
    env_file:
      - postgres_db/pg-db-database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/
volumes:
  database-data:
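As noted in the list above, validating the file catches these problems before anything runs:
docker-compose -f docker-compose.yaml config
It prints the fully resolved configuration, or an error pointing at the offending line.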
This is based on the YAML in your answer. When I plug it into this yaml to json converter, I get:
{
  "version": "3.2",
  "services": null,
  "api_service": {
    "build": "./api_service",
    "restart": "always",
    "ports": [
      {
        "target": "8080\npublished: 8080"
      }
    ],
    "depends_on": [
      "postgres_db"
    ],
    "links": [
      "postgres_db:yarmp-db-host"
    ],
    "volumes": [
      {
        "type": "bind\nsource: /data/yarmp-data"
      }
    ]
  },
  "postgres_db": {
    "build": "./postgres_db",
    "restart": "always",
    "ports": [
      {
        "target": "5432\npublished: 5432"
      }
    ],
    "env_file": [
      "postgres_db/pg-db-database.env"
    ],
    "volumes": [
      "database-data:/var/lib/postgresql/data/"
    ]
  },
  "volumes": {
    "database-data": null
  }
}
You can see several places where the result is something like "type": "bind\nsource: /data/yarmp-data".
It appears that YAML is interpreting the source line here as the 2nd line of a multiline string. However, if you adjust the indentation to line up with the t in - type, you end up with:
...
"volumes": [
  {
    "type": "bind",
    "source": "/data/yarmp-data",
    "target": "/data/yarmp-data"
  }
]
...
The indentation in YAML is tricky (and it matters), so I've found the above and similar tools helpful to get what I want. It also helps me to think about YAML in terms of lists and objects and strings. Here - creates a new item in a list, and type: bind is a key-value in that item (not in the list). Then source: blarg is also a key-value in the same item, so it makes sense that it should line up with the t in type. Indenting more indicates you are continuing a multiline string, and I think if you indented less (like aligning with -), you would get an error or end up adding a key-value pair to one of the objects higher up the hierarchy.
Anyway, it's certainly confusing. I've found such online tools to be helpful.
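If you prefer an offline check, the same YAML-to-JSON conversion can be done locally; a minimal sketch, assuming Python with PyYAML is installed:
python3 -c 'import yaml, json, sys; print(json.dumps(yaml.safe_load(sys.stdin), indent=2))' < docker-compose.yaml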
I'm using the following image https://hub.docker.com/r/dpage/pgadmin4/ to set up pgAdmin4 on Ubuntu 18.04.
I have mounted a volume containing a pgpass file (which was also chmod'd for the pgadmin user inside the container), as you can see in my compose file:
version: '3.8'
services:
  pgadmin4:
    image: dpage/pgadmin4
    container_name: pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=me@localhost
      - PGADMIN_DEFAULT_PASSWORD=******************
      - PGADMIN_LISTEN_PORT=5050
      - PGADMIN_SERVER_JSON_FILE=servers.json
    volumes:
      - ./config/servers.json:/pgadmin4/servers.json # <-- this file is well taken into account
      - ./config/pgpass:/pgpass # <-- there is no way to find this one on the other hand
    ports:
      - "5000:5000"
    restart: unless-stopped
    network_mode: host
But it seems it is not recognized on the pgAdmin web page when I right-click a server and check its Advanced properties:
And if I manually specify /pgpass in the top greenish box where there's only a slash in the image, it says:
But if I log into the container, I can actually list that file:
/ $ ls -larth /pgpass
-rw------- 1 pgadmin pgadmin 574 Mar 10 22:37 /pgpass
What did I do wrong?
How can I get the pgpass file to be recognized by the application?
I got it working with the following insight.
In servers.json when you specify:
"PassFile": "/pgpass"
It means that / in the path begins in the user's storage dir, i.e.
pattern:
/var/lib/pgadmin/storage/<USERNAME>_<DOMAIN>/
example:
/var/lib/pgadmin/storage/postgres_acme.com/
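To confirm which storage directory pgAdmin actually uses, you can list it from inside the running container (a sketch; pgadmin4 is the container name from the compose file above):
docker exec -it pgadmin4 ls -la /var/lib/pgadmin/storage/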
Here's a working example that copies everything into the right spot and sets the perms correctly.
pgadmin:
  image: dpage/pgadmin4
  restart: unless-stopped
  environment:
    PGADMIN_DEFAULT_EMAIL: postgres@acme.com
    PGADMIN_DEFAULT_PASSWORD: postgres
    PGADMIN_LISTEN_ADDRESS: '0.0.0.0'
    PGADMIN_LISTEN_PORT: 5050
  tty: true
  ports:
    - 5050:5050
  volumes:
    - ~/data/pgadmin_data:/var/lib/pgadmin
    - ./local-cloud/servers.json:/pgadmin4/servers.json # preconfigured servers/connections
    - ./local-cloud/pgpass:/pgadmin4/pgpass # passwords for the connections in this file
  entrypoint: >
    /bin/sh -c "
    mkdir -m 700 /var/lib/pgadmin/storage/postgres_acme.com;
    chown -R pgadmin:pgadmin /var/lib/pgadmin/storage/postgres_acme.com;
    cp -prv /pgadmin4/pgpass /var/lib/pgadmin/storage/postgres_acme.com/;
    chmod 600 /var/lib/pgadmin/storage/postgres_acme.com/pgpass;
    /entrypoint.sh
    "
The following config worked for me:
pgpass
servers.json
docker-compose.yaml
dockerfile_for_pgadmin
pgpass
docker_postgres_db:5432:postgres:postgres:postgres
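Each line of a pgpass file follows the standard libpq format:
hostname:port:database:username:password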
servers.json
{
  "Servers": {
    "1": {
      "Name": "docker_postgres",
      "Group": "docker_postgres_group",
      "Host": "docker_postgres_db",
      "Port": 5432,
      "MaintenanceDB": "postgres",
      "Username": "postgres",
      "PassFile": "/pgpass",
      "SSLMode": "prefer"
    }
  }
}
docker-compose.yaml
version: "3.9"
services:
  docker_postgres_db:
    image: postgres
    volumes:
      - ./postgres_db_data:/var/lib/postgresql/data # mkdir postgres_db_data before docker compose up
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - "15432:5432"
  pgadmin:
    build:
      context: .
      dockerfile: ./dockerfile_for_pgadmin
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin@pgadmin.com
      PGADMIN_DEFAULT_PASSWORD: pgadmin
    ports:
      - "5050:80"
    volumes:
      - ./servers.json:/pgadmin4/servers.json # preconfigured servers/connections
dockerfile_for_pgadmin
FROM dpage/pgadmin4
USER pgadmin
RUN mkdir -p /var/lib/pgadmin/storage/pgadmin_pgadmin.com
COPY ./pgpass /var/lib/pgadmin/storage/pgadmin_pgadmin.com/
USER root
RUN chown pgadmin:pgadmin /var/lib/pgadmin/storage/pgadmin_pgadmin.com/pgpass
RUN chmod 0600 /var/lib/pgadmin/storage/pgadmin_pgadmin.com/pgpass
USER pgadmin
ENTRYPOINT ["/entrypoint.sh"]
On pgAdmin 6.2, PassFile points to an absolute path inside the container, instead of a path under STORAGE_DIR (/var/lib/pgadmin).
Before the entrypoint runs, you just need to set the owner and permissions of the pgpass file.
docker-compose.yml
pgadmin:
  image: dpage/pgadmin4:6.2
  entrypoint: >
    /bin/sh -c "
    cp -f /pgadmin4/pgpass /var/lib/pgadmin/;
    chmod 600 /var/lib/pgadmin/pgpass;
    chown pgadmin:pgadmin /var/lib/pgadmin/pgpass;
    /entrypoint.sh
    "
  environment:
    PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
    PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
    PGADMIN_CONFIG_SERVER_MODE: "False"
    PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED: "False"
  volumes:
    - ./config/servers.json:/pgadmin4/servers.json
    - ./config/pgpass:/pgadmin4/pgpass
  ports:
    - "${PGADMIN_PORT:-5050}:80"
servers.json
{
  "Servers": {
    "1": {
      "Name": "pgadmin4@pgadmin.org",
      "Group": "Servers",
      "Host": "postgres",
      "Port": 5432,
      "MaintenanceDB": "postgres",
      "Username": "postgres",
      "SSLMode": "prefer",
      "PassFile": "/var/lib/pgadmin/pgpass"
    }
  }
}
pgpass
postgres:5432:postgres:postgres:Welcome01
Update:
Updated the entrypoint in docker-compose.yml and PassFile in servers.json for a cross-platform working solution.
Update 2:
I created a container image (dcagatay/pwless-pgadmin4) for passwordless pgadmin4.
The problem here seems to be that '/' in the servers.json file does not mean '/' in the filesystem, but something relative to the STORAGE_DIR set in the config. In fact, a separate storage directory is created for each user, so with your user me@localhost you will have to mount ./config/pgpass to /var/lib/pgadmin/storage/me_localhost/pgpass, but still refer to it as /pgpass in your servers.json.
I'm running the latest version of pgadmin4 as of this post (6.11). It took me forever to find the answer as to how to set a pgpass file location without storing it in the user's uploads dir (insecure IMO).
Unfortunately it does not seem to work using an absolute path e.g. /var/lib/pgadmin/pgpass.
However, what did work was this workaround I found here: https://github.com/rowanruseler/helm-charts/issues/72#issuecomment-1002300143
Basically if you use ../../pgpass, you can traverse the filesystem instead of the default behaviour of looking inside the user's uploads folder.
Example servers.json:
{
  "Servers": {
    "1": {
      "Name": "my-postgres-instance",
      "Group": "Servers",
      "Host": "postgres",
      "Port": 5432,
      "MaintenanceDB": "postgres",
      "Username": "postgres",
      "SSLMode": "prefer",
      "PassFile": "../../pgpass"
    }
  }
}
Also, setting the file permissions to 0600 is a critical step; the file cannot be world-readable. See https://stackoverflow.com/a/28152568/15198761 for more info.
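On the host, that can be done before starting the container, e.g. (assuming the pgpass file lives in ./config as in the compose files above):
chmod 600 ./config/pgpass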
In a K8s environment, using the official pgadmin image, I use a ConfigMap for the servers.json along with the following command:
command:
  - sh
  - -c
  - |
    set -e
    cp -f /pgadmin4/pgpass /var/lib/pgadmin/
    chown 5050:5050 /var/lib/pgadmin/pgpass
    chmod 0600 /var/lib/pgadmin/pgpass
    /entrypoint.sh
Using a combination of the above, I was finally able to connect to my postgres instance without needing to put in a password or keeping the pgpass file in the user's uploads dir.
docker-compose.yml
services:
  idprovider-app:
    container_name: idprovider-app
    build:
      dockerfile: Dockerfile
      context: .
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
    volumes:
      - keycloak-data-volume:/var/lib/keycloak/data
    ports:
      - "8090:8090"
      - "8443:8443"
volumes:
  keycloak-data-volume:
    external: true
dockerfile
FROM jboss/keycloak:7.0.1
EXPOSE 8080
EXPOSE 8443
docker inspect "container"
"Mounts": [
  {
    "Type": "volume",
    "Name": "keycloak-data-volume",
    "Source": "/mnt/sda1/var/lib/docker/volumes/keycloak-data-volume/_data",
    "Destination": "/var/lib/keycloak/data",
    "Driver": "local",
    "Mode": "rw",
    "RW": true,
    "Propagation": ""
  }
],
docker volume inspect keycloak-data-volume
[
  {
    "CreatedAt": "2019-12-10T19:31:55Z",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/keycloak-data-volume/_data",
    "Name": "keycloak-data-volume",
    "Options": {},
    "Scope": "local"
  }
]
There are no errors, but it doesn't save state. I have no idea what's wrong. I run it on Windows 10.
Using the default database location, you may try this option with docker-compose:
keycloak:
  image: quay.io/keycloak/keycloak:14.0.0
  container_name: keycloak
  environment:
    KEYCLOAK_USER: admin
    KEYCLOAK_PASSWORD: admin
  ports:
    - "8082:8080"
  restart: always
  volumes:
    - .local/keycloak/:/opt/jboss/keycloak/standalone/data/
I found a similar answer with plain docker: https://stackoverflow.com/a/60554189/6916890
docker run --volume /root/keycloak/data/:/opt/jboss/keycloak/standalone/data/
In case you are using the Docker setup mentioned in https://www.keycloak.org/getting-started/getting-started-docker and are looking for a way to persist data even if the container is killed, you can use Docker volumes and mount the /opt/keycloak/data/ folder from the container to a directory on your local machine.
The only change you need to make to the docker run command from the getting-started doc is to add the volume mount option:
-v /<path-in-your-local-machine>/keycloak-data/:/opt/keycloak/data/
So the final docker run command, with an example local directory, would look like:
docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin \
-v /Users/amit/workspace/keycloak/keycloak-data/:/opt/keycloak/data/ \
quay.io/keycloak/keycloak:19.0.3 start-dev
Which database are you using with it? I think you need to bind the database volume as well to save the state.
For example, for Postgres:
services:
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
or for MySQL:
services:
  mysql:
    image: mysql:5.7
    volumes:
      - mysql_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: password
You must specify the source of the database in the environment variables.
If you use a separate service for the PostgreSQL instance, you must specify the DB_ADDR environment variable in your service:
services:
  idprovider-app:
    container_name: idprovider-app
    build:
      dockerfile: Dockerfile
      context: .
    environment:
      DB_VENDOR: POSTGRES
      # Specify hostname of the database (eg: hostname or hostname:port)
      DB_ADDR: hostname:5432
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
    volumes:
      - keycloak-data-volume:/var/lib/keycloak/data
    ports:
      - "8090:8090"
      - "8443:8443"
volumes:
  keycloak-data-volume:
    external: true
My 2 cents: this worked for me with the persistent volume pointing to /opt/keycloak/data/h2, with Keycloak Docker version 19.0.1:
-v /<path-in-your-local-machine>/keycloak-data/:/opt/keycloak/data/h2
Update for version >= 17.0
To complement lazylead's answer, you need to use /opt/keycloak/data/ instead of /opt/jboss/keycloak/standalone/data/ for keycloak version >= 17.0.0
https://stackoverflow.com/a/60554189/5424025
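Translated to docker-compose, a minimal sketch based on the docker run command above (Keycloak 19; the host path and admin credentials are placeholders):
keycloak:
  image: quay.io/keycloak/keycloak:19.0.3
  command: start-dev
  environment:
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: admin
  ports:
    - "8080:8080"
  volumes:
    # persist realms, users, and the embedded H2 database across restarts
    - ./keycloak-data:/opt/keycloak/data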