I'm using Docker along with the jboss/keycloak-ha-postgres and postgres images.
I have two developers who want to share the Postgres data, and I'm trying to figure out the best way to do this.
I've already figured out how to persist data locally using the volumes attribute in my docker-compose.yml file:
version: '2'
services:
  db:
    container_name: keycloak-postgres
    image: postgres
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
    volumes_from:
      - data
  keycloak:
    container_name: keycloak
    image: jboss/keycloak-ha-postgres
    depends_on:
      - "db"
    environment:
      POSTGRES_DATABASE: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
      POSTGRES_PORT_5432_TCP_ADDR: postgres
      POSTGRES_PORT_5432_TCP_PORT: 5432
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin123
    links:
      - "db"
    ports:
      - "8080:8080"
  data:
    container_name: keycloak-postgres-db-data
    image: cogniteev/echo
    command: echo 'Data Container for PostgreSQL'
    volumes:
      - /var/lib/postgresql/data
One approach I'm considering is creating my own Docker image of the volume (using a Dockerfile with FROM cogniteev/echo) and hosting it on Docker Hub, then committing and pushing changes to the volume data to Docker Hub. I'd then update my docker-compose.yml file to pull that image instead of cogniteev/echo.
But I'm not sure that this is the best thing to do in this situation.
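For reference, here is a minimal sketch of the data-image idea described above, assuming the seed data sits in a local pgdata/ directory next to the Dockerfile (the directory name is made up for illustration):

# Hypothetical Dockerfile for a shareable data image (paths/names are placeholders)
FROM cogniteev/echo
# Bake the seed data into the image; Docker copies it into the anonymous
# volume the first time a container is created from this image.
COPY pgdata/ /var/lib/postgresql/data/
VOLUME /var/lib/postgresql/data

The image could then be built, tagged, and pushed to Docker Hub, and the data service in docker-compose.yml would point at that tag instead of cogniteev/echo.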
Related
I am using docker-compose for my open-source web application. While publishing the project on GitHub I wanted to make my docker-compose.yaml file easier to understand and adapt. I'm still a beginner with Docker, but the file works as intended; I just want to improve the readability and changeability of the volumes used by the containers. The values /a/large/directory/or/disk:/var/lib/postgresql/data and /another/large/disk/:/something will most likely need to be adapted to the system the user is running my application on. Can I introduce variables for these? How can I make it more obvious that these values need to be changed by the person running my application?
My current docker-compose.yaml file
version: '3'
services:
  postgres:
    image: postgres:latest
    restart: always
    expose:
      - 5432
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'password'
      POSTGRES_DB: 'sample'
    volumes:
      - /a/large/directory/or/disk:/var/lib/postgresql/data
    networks:
      - mynetwork
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: 'db'
      MYSQL_USER: 'user'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    expose:
      - 3306
    ports:
      - 3306:3306
    volumes:
      - ~/data/mysql:/var/lib/mysql
    networks:
      - mynetwork
    depends_on:
      - postgres
  core:
    restart: always
    build: core/
    environment:
      SPRING_APPLICATION_JSON: '{
        "database.postgres.url": "postgres:5432/sample",
        "database.postgres.user": "postgres",
        "database.postgres.password": "password",
        "database.mysql.host": "mysql",
        "database.mysql.user": "root",
        "database.mysql.password": "password"
      }'
    volumes:
      - ~/data/core:/var
      - /another/large/disk/:/something
    networks:
      - mynetwork
    depends_on:
      - mysql
    ports:
      - 8080:8080
  web:
    restart: always
    build: web/
    networks:
      - mynetwork
    depends_on:
      - core
    ports:
      - 3000:3000
networks:
  mynetwork:
    driver: bridge
volumes:
  myvolume:
(I'd also appreciate any other suggestions for improvements to my file!)
Docker Compose supports variable interpolation. But then you need to document those values, and people might expect to just run docker compose up and have it work without any extra setup.
Compose typically isn't used for production deployment, so you usually wouldn't point these mounts at a real production disk. That said, you can simply use directories relative to the compose file itself (./data/app:/mount) rather than a home folder or an absolute path, or use a Docker-managed named volume.
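If you do go the interpolation route, a minimal sketch might look like the excerpt below; the variable names PGDATA_DIR and CORE_DATA_DIR are made up for illustration, and the :- syntax supplies a default so docker compose up still works without extra setup:

# docker-compose.yml (excerpt)
services:
  postgres:
    volumes:
      # falls back to a relative directory if PGDATA_DIR is not set
      - ${PGDATA_DIR:-./data/postgres}:/var/lib/postgresql/data
  core:
    volumes:
      - ${CORE_DATA_DIR:-./data/core}:/something

# .env next to docker-compose.yml, documented for users to adapt:
# PGDATA_DIR=/a/large/directory/or/disk
# CORE_DATA_DIR=/another/large/disk/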
Below is my docker-compose file, in which I specified that the Postgres data should be saved into a folder on the host called volumes/db_data/. But this folder stays empty; instead, the data seems to be saved into the default folder /var/lib/docker/volumes. The folder user_models is also not saved locally in volumes/user_models, and api_server.log is not saved either. So I guess there is something wrong with my understanding of Docker volumes in general.
version: "3.7"
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: xxxxxxxxxxx
      POSTGRES_USER: postgres
      POSTGRES_DB: user-db
    volumes:
      - ./volumes/db_data:/var/lib/postgresql/data
    ports:
      - 5432:5432
  api-server:
    image: api-server
    volumes:
      - ./volumes/user_models:/classification/user_models
    depends_on:
      - db
    ports:
      - 1337:1337
  model-trainer:
    image: webapp-trainer
    volumes:
      - ./volumes/user_models:/user_models
      - ./volumes/api_server.log:/api_server.log
    depends_on:
      - db
  react-client:
    image: webapp-client
    depends_on:
      - api-server
    ports:
      - 3000:3000
Everything else is working fine; it's just that the data in user_models is not saved and the Postgres data ends up in the wrong place. What am I doing wrong?
Edit:
docker-compose up prints this at the start:
WARNING: Service "db" is using volume "/var/lib/postgresql/data" from the previous container. Host mapping "/home/theo/Documents/Programming/cbi-webapp/compose/volumes/db_data"
yet db_data is empty
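The warning indicates that the db container is still using an anonymous volume from a previous run, so the new host mapping is ignored. A minimal way to recreate the container, assuming the old database contents can be discarded:

docker-compose rm -s -v db   # stop and remove the stale db container plus its anonymous volume
docker-compose up            # recreate it with the ./volumes/db_data bind mount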
I have a MariaDB Docker image for my application.
I've now pulled the Keycloak image. It uses the embedded H2 database by default, but I want it to use my existing MariaDB container instead.
The documentation asks me to create a network etc., but I am not sure how to do that in the cloud, so I'm looking for a configuration-based solution, i.e. changing the Keycloak image's configuration so it connects to the MariaDB container. I am not using docker-compose; I only pulled the image.
https://github.com/keycloak/keycloak-containers/blob/master/server/README.md#environment-variables
I'm not sure where these environment variables go: are they set inside the Keycloak image or on the host machine?
My start command: docker run -p 7080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin jboss/keycloak
I find this highly insecure. Is there a more secure way?
Edit:
I opened a CLI from the Docker dashboard and typed
env
but I do not know how to add more environment variables, such as:
PROXY_ADDRESS_FORWARDING: 'true'
# PostgreSQL DB settings
DB_VENDOR: postgres
DB_ADDR: 172.17.0.1
DB_PORT: 5432
DB_DATABASE: keycloak
DB_SCHEMA: public
DB_USER: keycloak
DB_PWD: keycloak
(How do I change PROXY_ADDRESS_FORWARDING from false to true?)
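For what it's worth, environment variables are set at docker run time with -e, or from a file with --env-file; they cannot be changed inside an already-running container. A sketch using the settings listed above (values are placeholders; note that the jboss/keycloak image's README documents DB_PASSWORD rather than DB_PWD):

docker run -p 7080:8080 \
  -e PROXY_ADDRESS_FORWARDING=true \
  -e DB_VENDOR=postgres -e DB_ADDR=172.17.0.1 -e DB_PORT=5432 \
  -e DB_DATABASE=keycloak -e DB_SCHEMA=public \
  -e DB_USER=keycloak -e DB_PASSWORD=keycloak \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin \
  jboss/keycloak

# Or keep credentials out of the shell history by putting the KEY=value
# lines in a file (e.g. keycloak.env) and passing it instead:
docker run -p 7080:8080 --env-file keycloak.env jboss/keycloak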
I was able to do it like this. You need to define a network and add the database and Keycloak services to that network.
To add the environment variables, you define them under the environment block.
version: '3.7'
services:
  demo_db:
    container_name: demo-maria-db
    image: mariadb:10.5.8-focal
    restart: always
    ports:
      - 3306:3306
    volumes:
      - /apps/demo/db:/data/db
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: mydb
      MYSQL_USER: user
      MYSQL_PASSWORD: password
    networks:
      demo_mesh:
        aliases:
          - demo-db
  demo_keycloak:
    container_name: demo-keycloak
    image: jboss/keycloak:10.0.1
    restart: always
    ports:
      - 8180:8080
    environment:
      PROXY_ADDRESS_FORWARDING: "true"
      DB_VENDOR: mariadb
      DB_ADDR: demo-db
      DB_DATABASE: keycloak
      DB_USER: user
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
    depends_on:
      - demo_db
    networks:
      - demo_mesh
networks:
  demo_mesh: {}
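Since the question mentions not using docker-compose, here is a rough plain-docker equivalent of the setup above, creating the network and attaching both containers to it manually (names and credentials are illustrative):

docker network create demo_mesh

docker run -d --name demo-maria-db --network demo_mesh --network-alias demo-db \
  -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=keycloak \
  -e MYSQL_USER=user -e MYSQL_PASSWORD=password \
  mariadb:10.5.8-focal

docker run -d --name demo-keycloak --network demo_mesh -p 8180:8080 \
  -e PROXY_ADDRESS_FORWARDING=true \
  -e DB_VENDOR=mariadb -e DB_ADDR=demo-db -e DB_DATABASE=keycloak \
  -e DB_USER=user -e DB_PASSWORD=password \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin \
  jboss/keycloak:10.0.1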
I'm trying to set up a docker-compose file for running Apache Guacamole.
The compose file has three services: two for Guacamole itself and one for the database. The problem is that the database has to be initialized before the Guacamole container can use it, but the files to initialize the database live inside the Guacamole image. The solution I came up with is this:
version: "3"
services:
  init:
    image: guacamole/guacamole:latest
    command: ["/bin/sh", "-c", "cp /opt/guacamole/postgresql/schema/*.sql /init/" ]
    volumes:
      - dbinit:/init
  database:
    image: postgres:latest
    restart: unless-stopped
    volumes:
      - dbinit:/docker-entrypoint-initdb.d
      - dbdata:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: guac
      POSTGRES_PASSWORD: guac
    depends_on:
      - init
  guacd:
    image: guacamole/guacd:latest
    restart: unless-stopped
  guacamole:
    image: guacamole/guacamole:latest
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      GUACD_HOSTNAME: guacd
      POSTGRES_HOSTNAME: database
      POSTGRES_DATABASE: guac
      POSTGRES_USER: guac
      POSTGRES_PASSWORD: guac
    depends_on:
      - database
      - guacd
volumes:
  dbinit:
  dbdata:
So I have one container whose job is to copy the database initialization files into a volume, and then I mount that volume into the database container. The problem is that this creates a race condition and is ugly. Is there some elegant solution for this? Is it possible to mount the files from the Guacamole image directly into the database container? I would rather avoid shipping an extra SQL file alongside the docker-compose file.
Thanks in advance!
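One way to remove the race condition in the setup above, assuming a Compose version that supports the long depends_on syntax with conditions, is to have the database wait until the init container has finished copying the schema files. A sketch of just the affected services:

services:
  init:
    image: guacamole/guacamole:latest
    command: ["/bin/sh", "-c", "cp /opt/guacamole/postgresql/schema/*.sql /init/"]
    volumes:
      - dbinit:/init
  database:
    image: postgres:latest
    restart: unless-stopped
    volumes:
      - dbinit:/docker-entrypoint-initdb.d
      - dbdata:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: guac
      POSTGRES_PASSWORD: guac
    depends_on:
      init:
        # only start postgres after the copy has completed successfully
        condition: service_completed_successfully
volumes:
  dbinit:
  dbdata: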
I'm trying to import configuration from one Keycloak instance into many different Keycloak instances (each instance is for the same application, just a different section of my CI/CD flow).
I'm running Keycloak through Docker and finding it difficult to import the required JSON file.
To get the actual data I want imported, I went to the required realm and simply clicked the export button with clients etc. selected. This downloaded a file to my browser, which I now want imported when I build my Docker containers.
I've tried a lot of different methods I've found online and nothing seems to be working, so I'd appreciate some help.
The first thing I tried was to import the file through the docker-compose file using the following
KEYCLOAK_IMPORT: /realm-export.json
The next thing I tried was also in my docker-compose file, where I used
command: "-b 0.0.0.0 -Djboss.http.port=8080 -Dkeycloak.migration.action=import -Dkeycloak.import=realm-export.json"
Finally, I tried going into my Dockerfile and running the import as my CMD using the following
CMD ["-b 0.0.0.0", "-Dkeycloak.import=/opt/jboss/keycloak/realm-export.json"]
Below are my current docker-compose file and Dockerfile without the imports added; they might be of some help in answering this question. Thanks in advance.
# Dockerfile
FROM jboss/keycloak:4.8.3.Final
COPY keycloak-metrics-spi-1.0.1-SNAPSHOT.jar keycloak/standalone/deployments
And the Keycloak-related section of my docker-compose file:
postgres:
  image: postgres
  volumes:
    - postgres_data:/var/lib/postgresql/data
  environment:
    POSTGRES_DB: keycl0ak
    POSTGRES_USER: keycl0ak
    POSTGRES_PASSWORD: password
  ports:
    - 5431:5431
keycloak:
  build:
    context: services/keycloak
  environment:
    DB_VENDOR: POSTGRES
    DB_ADDR: postgres
    DB_DATABASE: keycl0ak
    DB_USER: keycl0ak
    DB_PASSWORD: password
    KEYCLOAK_USER: administrat0r
    KEYCLOAK_PASSWORD: asc88a8c0ssssqs
  ports:
    - 8080:8080
  depends_on:
    - postgres
volumes:
  postgres_data:
    driver: local
Explanation
First you need to make the file available inside your container before you can import it into Keycloak. You could place your realm-export.json in a folder next to the docker-compose.yml; let's say we call it imports. Getting it into the container can be achieved using volumes:. Once the file is available in the container, you can use command: as you were before, pointing at the correct file within the container.
File Structure
/your_computer/keycloak_stuff/
|-- docker-compose.yml
|-- imports -> realm-export.json
Docker-Compose
This is how the docker-compose.yml should look with the changes:
postgres:
  image: postgres
  volumes:
    - postgres_data:/var/lib/postgresql/data
  environment:
    POSTGRES_DB: keycl0ak
    POSTGRES_USER: keycl0ak
    POSTGRES_PASSWORD: password
  ports:
    - 5431:5431
keycloak:
  build:
    context: services/keycloak
  volumes:
    - ./imports:/opt/jboss/keycloak/imports
  command:
    - "-b 0.0.0.0 -Dkeycloak.import=/opt/jboss/keycloak/imports/realm-export.json"
  environment:
    DB_VENDOR: POSTGRES
    DB_ADDR: postgres
    DB_DATABASE: keycl0ak
    DB_USER: keycl0ak
    DB_PASSWORD: password
    KEYCLOAK_USER: administrat0r
    KEYCLOAK_PASSWORD: asc88a8c0ssssqs
  ports:
    - 8080:8080
  depends_on:
    - postgres
volumes:
  postgres_data:
    driver: local
To wrap up the answers of #JesusBenito and #raujonas, the docker-compose file could be changed so that you make use of the Keycloak environment variable KEYCLOAK_IMPORT:
keycloak:
  volumes:
    - ./imports:/opt/jboss/keycloak/imports
  # command: not needed anymore
  # - "-b 0.0.0.0 -Dkeycloak.import=/opt/jboss/keycloak/imports/realm-export.json"
  environment:
    KEYCLOAK_IMPORT: /opt/jboss/keycloak/imports/realm-export.json -Dkeycloak.profile.feature.upload_scripts=enabled
    DB_VENDOR: POSTGRES
    DB_ADDR: postgres
    DB_DATABASE: keycl0ak
    DB_USER: keycl0ak
    DB_PASSWORD: password
    KEYCLOAK_USER: administrat0r
    KEYCLOAK_PASSWORD: asc88a8c0ssssqs
This config worked for me:
keycloak:
  image: mihaibob/keycloak:15.0.1
  container_name: keycloak
  ports:
    - "9091:8080"
  volumes:
    - ./src/test/resources/keycloak:/tmp/import
  environment:
    ...
    KEYCLOAK_IMPORT: /tmp/import/global.json
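Whichever variant you use, one way to confirm the import actually ran is to watch the container's startup logs, which should mention the realm being imported:

docker-compose logs -f keycloak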