Keycloak fails to import exported realm running in docker

I started my keycloak service using the command:
docker run -d -p 8180:8080 -e KEYCLOAK_USER=admin -e \
KEYCLOAK_PASSWORD=admin -v $(pwd):/tmp --name kc \
jboss/keycloak:8.0.2
I created a new realm on Keycloak, giving it only a name, nothing else. I exported it by running:
docker exec -it kc keycloak/bin/standalone.sh \
-Djboss.socket.binding.port-offset=100 -Dkeycloak.migration.action=export \
-Dkeycloak.migration.provider=singleFile \
-Dkeycloak.migration.realmName=my_realm \
-Dkeycloak.migration.usersExportStrategy=REALM_FILE \
-Dkeycloak.migration.file=/tmp/my_realm.json
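Since /tmp inside the container is bind-mounted from $(pwd), the exported file should appear on the host right away. A quick sanity check (a sketch, assuming the export command above completed without errors):
# On the host, in the directory that was mounted into the container
ls -l my_realm.json
# Peek at the start of the export to confirm it looks like realm JSON
head -c 300 my_realm.json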
I now have the realm in the my_realm.json file. I then start a new Keycloak using Docker Compose to set up my entire test environment. I build a new Docker image with this Dockerfile:
FROM jboss/keycloak:8.0.2
COPY my_realm.json /tmp/my_realm.json
ENV KEYCLOAK_IMPORT /tmp/my_realm.json
ENV KEYCLOAK_MIGRATION_STRATEGY OVERWRITE_EXISTING
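Before wiring it into Compose, the image can be built and tried on its own to rule out compose-specific problems (a sketch; keycloak-realm-test is just a throwaway tag, and without DB_VENDOR this run uses the embedded H2 database):
docker build -t keycloak-realm-test ./keycloak
docker run --rm -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin keycloak-realm-test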
Docker compose:
version: '3.4'
volumes:
  postgres_kc_data:
    driver: local
services:
  kc_postgresql:
    image: postgres:11.5
    volumes:
      - postgres_kc_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: kc
      POSTGRES_USER: kc
      POSTGRES_DB: kcdb
    ports:
      - 50009:5432
  keycloak:
    build: "./keycloak/" # ref to folder with above DockerFile
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      DB_VENDOR: POSTGRES
      DB_ADDR: kc_postgresql
      DB_DATABASE: kcdb
      DB_SCHEMA: public
      DB_USER: kc
      DB_PASSWORD: kc
    depends_on:
      - kc_postgresql
    ports:
      - 8080:8080
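For reference, this is roughly how I bring the stack up and follow the Keycloak log (a sketch; it assumes the compose file above sits in the current directory):
docker-compose up -d --build
# Follow the keycloak service log to watch the realm import succeed or fail
docker-compose logs -f keycloak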
The log output from running Docker Compose indicates that it is not able to import the realm, and points at client validation. I added no clients, so these are the default ones.
08:39:46,713 WARN [org.keycloak.services] (ServerService Thread Pool -- 67) KC-SERVICES0005: Unable to import realm Demo from file /tmp/my_realm.json.: java.lang.NullPointerException
at org.keycloak.keycloak-services#8.0.2//org.keycloak.url.DefaultHostnameProvider.resolveUri(DefaultHostnameProvider.java:83)
at org.keycloak.keycloak-services#8.0.2//org.keycloak.url.DefaultHostnameProvider.getScheme(DefaultHostnameProvider.java:38)
at org.keycloak.keycloak-server-spi#8.0.2//org.keycloak.models.KeycloakUriInfo.<init>(KeycloakUriInfo.java:46)
at org.keycloak.keycloak-services#8.0.2//org.keycloak.services.DefaultKeycloakContext.getUri(DefaultKeycloakContext.java:79)
at org.keycloak.keycloak-services#8.0.2//org.keycloak.services.util.ResolveRelative.resolveRootUrl(ResolveRelative.java:45)
at org.keycloak.keycloak-services#8.0.2//org.keycloak.validation.DefaultClientValidationProvider.validate(DefaultClientValidationProvider.java:44)
at org.keycloak.keycloak-services#8.0.2//org.keycloak.validation.DefaultClientValidationProvider.validate(DefaultClientValidationProvider.java:37)
at org.keycloak.keycloak-server-spi-private#8.0.2//org.keycloak.validation.ClientValidationUtil.validate(ClientValidationUtil.java:30)
at org.keycloak.keycloak-server-spi-private#8.0.2//org.keycloak.models.utils.RepresentationToModel.createClients(RepresentationToModel.java:1224)
at org.keycloak.keycloak-server-spi-private#8.0.2//org.keycloak.models.utils.RepresentationToModel.importRealm(RepresentationToModel.java:362)
at org.keycloak.keycloak-services#8.0.2//org.keycloak.services.managers.RealmManager.importRealm(RealmManager.java:506)
Any pointers are welcome!

Upgrading to Keycloak 9.0.0 fixes it for me.
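In practice that just means bumping the base image in the Dockerfile above and rebuilding (a sketch; the sed -i form below is the GNU/Linux one, macOS would need sed -i ''):
# keycloak/Dockerfile: change the base image from 8.0.2 to 9.0.0
sed -i 's|jboss/keycloak:8.0.2|jboss/keycloak:9.0.0|' keycloak/Dockerfile
docker-compose build keycloak
docker-compose up -d keycloak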

Related

Restoring backed-up Docker volumes

Due to a hardware problem, I had to replace my small home server with a new one.
Several self-hosted services were running on the server in Docker. I backed up their volumes following the instructions on the official Docker website, those in this YouTube video, and this cheat-sheet. Now, following the same documentation, I am trying to restore the backups, but without success. The first one I'm trying to restore is an Nginx Proxy Manager stack built with Docker Compose, using this docker-compose.yaml file:
version: "3.6"
services:
  app:
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      - "80:80"
      - "443:443"
      - "81:81"
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_NAME: "db_name"
      DB_MYSQL_USER: "db_user"
      DB_MYSQL_PASSWORD: "db_password"
    volumes:
      - data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db
  db:
    image: jc21/mariadb-aria:latest
    restart: always
    environment:
      MYSQL_DATABASE: "db_name"
      MYSQL_ROOT_PASSWORD: "root_password"
      MYSQL_USER: "db_user"
      MYSQL_PASSWORD: "db_password"
    volumes:
      - db:/var/lib/mysql
volumes:
  db:
  data:
After starting the stack with the docker compose up -d command, I'm trying to restore the db and data volumes with:
docker run --rm --volumes-from nginx-proxy-manager-db-1 -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/nginx-proxy-manager_db_20220717-082200.tar --strip 1"
docker run --rm --volumes-from nginx-proxy-manager-app-1 -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/nginx-proxy-manager_data_20220717-082200.tar --strip 1"
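In case it helps, this is how the contents of the archives can be listed, to check where --strip 1 will put the files (same archive name as above):
# Show the stored paths; with --strip 1 the first path component is dropped on extraction,
# so e.g. "var/lib/mysql/..." would land under /lib/mysql/... instead of /var/lib/mysql/...
tar tvf nginx-proxy-manager_db_20220717-082200.tar | head -n 20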
What's wrong?

phpmyadmin docker container can not access mariadb database?

I am trying to set up a quick connection using the following setup.
Copy & paste to recreate the issue:
docker rm -f mariadb && docker run --detach --name mariadb --env MARIADB_USER=user --env MARIADB_PASSWORD=secret --env MARIADB_ROOT_PASSWORD=secret -p 3306:3306 mariadb:latest
docker rm -f phpmyadd && docker run --name phpmyadd -d -e PMA_HOST=host -e PMA_PORT=3306 -p 8080:80 phpmyadmin
docker exec -it mariadb bash
I can log in to the mariadb container and access MariaDB with
mysql -uroot -psecret
I can also access the phpmyadmin container at http://localhost:8080.
However, when I try to log in to MariaDB through phpMyAdmin, I get the following:
It shows that the port is exposed, but I cannot access it with telnet.
Any idea what is missing here?
For the two containers to be able to talk to each other, you would have to set up a docker-compose file instead. Something like this should work:
version: '3.8'
volumes:
  mariadb:
    driver: local
services:
  mariadb:
    image: mariadb:10.6
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: YOUR_ROOT_PASSWORD_HERE
      MYSQL_USER: YOUR_MYSQL_USER_HERE
      MYSQL_PASSWORD: YOUR_USER_PW_HERE
    ports:
      - "40000:3306"
    volumes:
      - mariadb:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin
    restart: always
    ports:
      - "40001:80"
    environment:
      - PMA_HOST=mariadb
      - PMA_PORT=3306
And you would start everything using docker-compose up.
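A quick way to check the result (a sketch, using the ports and credentials from the compose file above; the official mariadb image ships the mysql client):
docker-compose up -d
# phpMyAdmin is now on http://localhost:40001 (log in with the MariaDB user configured above)
# Optional: confirm MariaDB itself accepts connections inside its container
docker-compose exec mariadb mysql -uroot -pYOUR_ROOT_PASSWORD_HERE -e "SELECT VERSION();"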

LDAP authentication doesn't work on Gitea custom image

I'm developing a Docker infrastructure with Ansible and Docker Compose and I have a problem with the authentication via LDAP on my custom image of Gitea.
The error that I get in the Gitea logs when I try to use one of the users from the LDAP is:
Do you think this is a network problem, or a problem with LDAP not finding the user?
The restoration of the LDIF backup works as expected, because it adds the user that I'm trying to log in with:
Also, when I manually create a user in Gitea via the graphical interface, I can see ansible-ldap among the authentication sources.
What can be the solution to this problem?
This is my configuration:
app.ini (of Gitea)
[DEFAULT]
RUN_USER = git
RUN_MODE = prod
...
[database]
PATH = /data/gitea/gitea.db
DB_TYPE = postgres
HOST = db:5432
NAME = gitea
USER = gitea
PASSWD = gitea
LOG_SQL = false
...
Dockerfile
FROM gitea/gitea:1.16.8
RUN apk add sudo
RUN chmod 777 /home
COPY entrypoint /usr/bin/custom_entrypoint
COPY gitea-cli.sh /usr/bin/gitea-cli.sh
ENTRYPOINT /usr/bin/custom_entrypoint
entrypoint
#!/bin/sh
set -e
while ! nc -z $GITEA__database__HOST; do sleep 1; done;
chown -R 1000:1000 /data/gitea/conf
if ! [ -f /data/gitea.initialized ]; then
gitea-cli.sh migrate
gitea-cli.sh admin auth add-ldap --name ansible-ldap --host 127.0.0.1 --port 1389 --security-protocol unencrypted --user-search-base dc=ldap,dc=vcc,dc=unige,dc=it --admin-filter "(objectClass=giteaAdmin)" --user-filter "(&(objectClass=inetOrgPerson)(uid=%s))" --username-attribute uid --firstname-attribute givenName --surname-attribute surname --email-attribute mail --bind-dn cn=admin,dc=ldap,dc=vcc,dc=unige,dc=it --bind-password admin --allow-deactivate-all
touch /data/gitea.initialized
fi
exec /usr/bin/entrypoint
gitea-cli.sh
#!/bin/sh
echo 'Started gitea-cli'
USER=git HOME=/data/git GITEA_WORK_DIR=/var/lib/gitea sudo -E -u git gitea --config /data/gitea/conf/app.ini "$@"
docker-compose.yaml
db:
  image: postgres:14.3
  restart: always
  hostname: db
  environment:
    POSTGRES_DB: gitea
    POSTGRES_USER: gitea
    POSTGRES_PASSWORD: gitea
  ports:
    - 5432:5432
  volumes:
    - /data/postgres:/var/lib/postgresql/data
  networks:
    - vcc
openldap:
  image: bitnami/openldap:2.5
  ports:
    - 1389:1389
    - 1636:1636
  environment:
    BITNAMI_DEBUG: "true"
    LDAP_LOGLEVEL: 4
    LDAP_ADMIN_USERNAME: admin
    LDAP_ADMIN_PASSWORD: admin
    LDAP_ROOT: dc=ldap,dc=vcc,dc=unige,dc=it
    LDAP_CUSTOM_LDIF_DIR: /bitnami/openldap/backup
    LDAP_CUSTOM_SCHEMA_FILE: /bitnami/openldap/schema/schema.ldif
  volumes:
    - /data/openldap/:/bitnami/openldap
  networks:
    - vcc
gitea:
  image: 127.0.0.1:5000/custom_gitea:51
  restart: always
  hostname: git.localdomain
  build: /data/gitea/custom
  ports:
    - 4000:4000
    - 222:22
  environment:
    USER: git
    USER_UID: 1000
    USER_GID: 1000
    GITEA__database__DB_TYPE: postgres
    GITEA__database__HOST: db:5432
    GITEA__database__NAME: gitea
    GITEA__database__USER: gitea
    GITEA__database__PASSWD: gitea
    GITEA__security__INSTALL_LOCK: "true"
    GITEA__security__SECRET_KEY: XQolFkmSxJWhxkZrkrGbPDbVrEwiZshnzPOY
  volumes:
    - /data/gitea:/data
    - /etc/timezone:/etc/timezone:ro
    - /etc/localtime:/etc/localtime:ro
    - /data/gitea/app.ini:/data/gitea/conf/app.ini
  deploy:
    mode: global
  depends_on:
    - db
    - openldap
    - openldap_admin
  networks:
    - vcc
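For what it's worth, a quick way to tell the two hypotheses apart (network vs. LDAP lookup) is to test reachability of the LDAP port from inside the gitea container; nc is already used by the entrypoint, so it should be available (a sketch, using the service names from the compose file above):
# Open a shell in the running gitea container (the container name depends on the deployment)
docker exec -it <gitea_container> sh
# Inside the container: is the LDAP service reachable by its compose service name?
nc -z openldap 1389 && echo "openldap reachable" || echo "openldap NOT reachable"
# 127.0.0.1 here is the gitea container itself, not openldap, so this check should fail
nc -z 127.0.0.1 1389 && echo "reachable" || echo "not reachable"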
The problem was the address 127.0.0.1 passed to --host in the entrypoint file; changing it to openldap (the name of the service in the docker-compose file) fixed it.
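For completeness, the corrected line in the entrypoint, with only --host changed and everything else as in the original command:
gitea-cli.sh admin auth add-ldap --name ansible-ldap \
  --host openldap --port 1389 --security-protocol unencrypted \
  --user-search-base dc=ldap,dc=vcc,dc=unige,dc=it \
  --admin-filter "(objectClass=giteaAdmin)" \
  --user-filter "(&(objectClass=inetOrgPerson)(uid=%s))" \
  --username-attribute uid --firstname-attribute givenName \
  --surname-attribute surname --email-attribute mail \
  --bind-dn cn=admin,dc=ldap,dc=vcc,dc=unige,dc=it --bind-password admin \
  --allow-deactivate-all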

Test the connection between docker containers within docker-composed environment

We are using docker-compose to set up the services for our app:
version: "3"
services:
  db:
    container_name: db
    image: postgres:11.1
    environment:
      POSTGRES_USER: xxx
      POSTGRES_PASSWORD: xxx
      POSTGRES_DB: xxx
      PGPASSWORD: xxx
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./data/dbdump:/dbdump
    networks:
      - zenet
    ports:
      - "5432:5432"
  # The React web application
  web:
    container_name: web
    build:
      context: .
      dockerfile: devenv/web/Dockerfile
    volumes:
      - ./src/client-app:/usr/local/abc
      - /usr/local/abc/node_modules
    networks:
      - zenet
    ports:
      - "3000:3000"
    command: npm run startindocker
  # The Django Rest Framework API
  api:
    container_name: api
    build:
      context: .
      dockerfile: devenv/api/Dockerfile
    environment:
      DJANGO_SETTINGS_MODULE: abc.settings.dev
      PYTHONSTARTUP: /root/pythonstartup.sh
      PYTHONIOENCODING: UTF-8
    volumes:
      - .:/usr/local/borrow-a-boat
      - ./devenv/api/pythonrc.py:/root/pythonstartup.sh
    networks:
      - zenet
    depends_on:
      - "db"
    ports:
      - "9000:9000"
    command:
      python3 /usr/local/borrow-a-boat/src/django/abc/manage.py runserver 0.0.0.0:9000
    tty: true
volumes:
  pgdata:
  customboatdata:
networks:
  zenet:
(sensitive info has been replaced)
My colleagues have the setup running fine. I set up the app, and the volumes and containers are up and running. I can hit the api service at port 9000 fine from the browser and confirm that the db is populated. However, my web service is unable to get the data from the api. How can I confirm that the above assertion is correct and that web really cannot communicate with the api service?
And how can I fix this and get web to receive the data from the api? Apologies for the newbie question.
EDIT:
When I run ping api from within the web container (entered with docker exec -it [containerID] /bin/sh), I am receiving a response of the form:
64 bytes from 172.18.0.4: seq=139 ttl=64 time=0.084 ms
So, clearly, my assertion is incorrect. Why is the web service unable to get a response from the api service? When I load the web app in the browser, I do not see any log output in the api terminal showing that it is being hit.
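Ping only proves the containers can reach each other at the IP level. A closer test is an HTTP request from inside the web container to the api service by name (a sketch; it assumes wget or curl is available in the web image and that the API listens on port 9000 as configured above):
# Inside the web container (docker exec -it web sh):
wget -qO- http://api:9000/ | head -c 300
# or, if curl is installed instead:
curl -sS http://api:9000/ | head -c 300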
EDIT-2 :
As per @runwuf's question and my response, clearly, the web service is able to communicate with the api service. So, something else is wrong. Here are the steps we follow to set up the stack on our systems. I use Linux Mint 19.2, while the team uses Macs. The commands are:
docker kill $(docker_container_names)
docker rm -v $(docker_container_names)
docker volume rm abc_pgdata
docker image rm abc_api
docker image rm abc_web
docker-compose build
docker-compose up -d db api web
ssh abc@abc.com 'pg_dump abc | gzip' | gunzip | docker-compose run --rm db psql --host db --username abc
docker-compose run --rm db psql --host db --username abc -c "update core_photo set image_base = 'sample.jpg'"
docker-compose run --rm db psql --host db --username abc -c "update core_experienceimage set image_base = 'sample.jpg'"
In the end, it was a case of an environment variable not being accessible within the web service. All it took was reading the console logs in the browser, which showed the undefined variable.
The lesson for me: when it comes to problem solving, no matter how new the technology, don't forget to use the tools you are familiar with.
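In the same spirit, the two checks that would have surfaced this quickly are the browser console and a look at what the web container actually has in its environment (a sketch; REACT_APP_API_URL is a hypothetical variable name standing in for whatever the client code reads):
# List the environment the web container was started with
docker-compose exec web env | sort
# Or filter for the variable the client code expects, e.g. REACT_APP_API_URL
docker-compose exec web env | grep -i api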

Wrong order of running docker containers in convox/docker-compose

Here is the compose file that I am using. It consists of one postgres db container and one redis container. On top of them I have a gunicorn/Django web server (docker image: python-3.5). There is one nginx proxy server which is linked to the web container.
version: '2'
services:
  nginx:
    build: ./nginx/
    ports:
      - 80:80
    volumes:
      - /usr/src/app/static
    links:
      - web
  web:
    build: ./web
    ports:
      - 8000:8000
    volumes:
      - /usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
      SECRET_KEY: 5(15ds+i2+%ik6z&!yer+ga9m=e%jcqiz_5wszg)r-z!2--b2d
      DB_NAME: postgres
      DB_USER: postgres
      DB_PASS: postgres
      DB_SERVICE: postgres
      DB_PORT: 5432
    command: /usr/local/bin/gunicorn messanger.wsgi:application -w 2 -b :8000
    links:
      - redis
      - postgres
  postgres:
    image: postgres
    ports:
      - 5432
    volumes:
      - pgdata:/var/lib/postgresql/data/
  redis:
    image: redis
    ports:
      - 6379
    volumes:
      - redisdata:/data
I am facing an issue while starting the docker containers via
convox start -f docker-compose.yml
The problem is that, ideally, the postgres/redis servers should start first, then the web server, and then the nginx server last, according to their linking order. But actually the web server is getting started first, so it fails without the db/cache. See the error logs below:
web │ running: docker run -i --rm --name python-app-web -e DB_NAME=postgres -e DB_USER=postgres -e DB_PASS=postgres -e DB_SERVICE=postgres -e DB_PORT=5432 -e DEBUG=true -e SECRET_KEY=5(15ds+i2+%ik6z&!yer+ga9m=e%jcqiz_5wszg)r-z!2--b2d --add-host redis: -e REDIS_SCHEME=tcp -e REDIS_HOST= -e REDIS_PORT=
-e REDIS_PATH= -e REDIS_USERNAME= -e REDIS_PASSWORD= -e REDIS_URL=tcp://:%0A --add-host postgres: -e POSTGRES_SCHEME=tcp -e POSTGRES_HOST= -e POSTGRES_PORT=
-e POSTGRES_PATH= -e POSTGRES_USERNAME= -e POSTGRES_PASSWORD= -e POSTGRES_URL=tcp://:%0A -p 0:8000 -v /Users/gaurav/.convox/volumes/python-app/web/usr/src/app/static:/usr/src/app/static python-app/web sh -c /usr/local/bin/gunicorn messanger.wsgi:application -w 2 -b :8000
web │ invalid argument "redis:" for --add-host: invalid IP address in add-host: ""
web │ See 'docker run --help'.
postgres │ running: docker run -i --rm --name python-app-postgres -p 5432 -v pgdata:/var/lib/postgresql/data/ python-app/postgres
redis │ running: docker run -i --rm --name python-app-redis -p 6379 -v redisdata:/data python-app/redis
But it works fine when I completely remove the nginx server; in that case the web server is started after postgres/redis.
I am not able to understand the actual error.
The complete code can be found here on GitHub.
[Note] I found something very strange that I never expected: the problem was with the name of the container. If I rename the nginx container to anything like server/webserver/xyz/mynginx/nginxxxx etc., it all works as expected and in order. But it does not work with the name nginx! Strange, isn't it?
Just add the depends_on directive to docker-compose.yml:
version: '2'
services:
  nginx:
    build: ./nginx/
    depends_on:
      - web
    ports:
      - 80:80
    volumes:
      - /usr/src/app/static
  web:
    build: ./web
    depends_on:
      - postgres
      - redis
    ports:
      - 8000:8000
    volumes:
      - /usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
      SECRET_KEY: 5(15ds+i2+%ik6z&!yer+ga9m=e%jcqiz_5wszg)r-z!2--b2d
      DB_NAME: postgres
      DB_USER: postgres
      DB_PASS: postgres
      DB_SERVICE: postgres
      DB_PORT: 5432
    command: /usr/local/bin/gunicorn messanger.wsgi:application -w 2 -b :8000
  postgres:
    image: postgres
    ports:
      - 5432
    volumes:
      - pgdata:/var/lib/postgresql/data/
  redis:
    image: redis
    ports:
      - 6379
    volumes:
      - redisdata:/data
The startup order of the containers will then be correct. But that doesn't mean that when the web app tries to connect to redis, that container will already be able to accept connections. If you want that behaviour, you need something like wait-for-it or a docker-compose healthcheck.
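A minimal sketch of the wait-for idea (the script name is made up, and it assumes nc is installed in the web image; wait-for-it.sh, which only needs bash, is the usual alternative): block the web service's command until postgres and redis actually accept TCP connections, then start gunicorn.
#!/bin/sh
# wait-for-deps.sh (hypothetical): used as the web service's command instead of calling gunicorn directly
until nc -z postgres 5432; do echo "waiting for postgres..."; sleep 1; done
until nc -z redis 6379; do echo "waiting for redis..."; sleep 1; done
exec /usr/local/bin/gunicorn messanger.wsgi:application -w 2 -b :8000
In the compose file, web's command would then point at this script instead of invoking gunicorn directly.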
