Can't get Keycloak working in Docker behind Traefik

I have a domain example.org.
I have docker running there with Traefik as proxy.
Now I want to set up Keycloak and access it at auth.example.org.
This is my config (docker-compose):
keycloak:
  image: quay.io/keycloak/keycloak
  restart: always
  command: start
  environment:
    KC_PROXY_ADDRESS_FORWARDING: true
    KC_HOSTNAME_STRICT: false
    KC_HOSTNAME: auth.example.org
    KC_HOSTNAME_PORT: 443
    KC_HTTP_ENABLED: true
    KC_DB: postgres
    KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak?ssl=allow
    KC_DB_USERNAME: root
    KC_DB_PASSWORD: password
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: password
  labels:
    - "traefik.http.routers.cloud-network-keycloak.rule=Host(`auth.example.org`)"
    - "traefik.http.routers.cloud-network-keycloak.entrypoints=websecure"
    - "traefik.http.routers.cloud-network-keycloak.tls.certresolver=letsencryptresolver"
    - "traefik.http.routers.cloud-network-keycloak.tls=true"
    - "traefik.http.services.cloud-network-keycloak.loadbalancer.server.port=8080"
  depends_on:
    postgres:
      condition: service_healthy
  networks:
    - internal
    - traefik
However, loading the Keycloak admin console on https://auth.example.org/admin/master/console/ throws an error in the browser:
URL: https://auth.example.org/realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=security-admin-console&origin=https%3A%2F%2Fauth.example.org
Status: 403
I have no clue how to resolve this.

In order to get Keycloak responding properly on port 443, I need to remove the KC_HOSTNAME_PORT configuration, leaving me with:
version: "3"
services:
  traefik:
    image: docker.io/traefik
    command:
      - --api.insecure=true
      - --providers.docker
      - --entrypoints.web.address=:80
      - --entrypoints.web-secure.address=:443
    ports:
      - "127.0.0.1:8080:8080"
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  keycloak:
    image: quay.io/keycloak/keycloak
    restart: always
    command: start
    environment:
      KC_PROXY_ADDRESS_FORWARDING: "true"
      KC_HOSTNAME_STRICT: "false"
      KC_HOSTNAME: auth.example.com
      KC_PROXY: edge
      KC_HTTP_ENABLED: "true"
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgres:5432/$POSTGRES_DB?ssl=allow
      KC_DB_USERNAME: $POSTGRES_USER
      KC_DB_PASSWORD: $POSTGRES_PASSWORD
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: password
    labels:
      - "traefik.http.routers.cloud-network-keycloak.rule=Host(`auth.example.com`)"
      - "traefik.http.routers.cloud-network-keycloak.tls=true"
      - "traefik.http.services.cloud-network-keycloak.loadbalancer.server.port=8080"
  postgres:
    image: docker.io/postgres:14
    environment:
      POSTGRES_USER: $POSTGRES_USER
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      POSTGRES_DB: $POSTGRES_DB
This works for me without errors when I connect to it at https://auth.example.com. If I re-introduce the KC_HOSTNAME_PORT setting, I get the same "infinite spinning wheel" that you reported in your question.
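The compose file above does not pin the Keycloak router to a specific entrypoint or certificate resolver. If your Traefik setup terminates TLS on a dedicated entrypoint with an ACME resolver (as the labels in the original question do), the router labels might look like the following sketch; the entrypoint name web-secure comes from the traefik service above, and the resolver name letsencryptresolver is taken from the question and is assumed to be defined in Traefik's static configuration:

  keycloak:
    labels:
      - "traefik.http.routers.cloud-network-keycloak.rule=Host(`auth.example.com`)"
      # Listen only on the TLS entrypoint defined on the traefik service above.
      - "traefik.http.routers.cloud-network-keycloak.entrypoints=web-secure"
      # Assumes a resolver named letsencryptresolver exists in Traefik's static config.
      - "traefik.http.routers.cloud-network-keycloak.tls.certresolver=letsencryptresolver"
      - "traefik.http.routers.cloud-network-keycloak.tls=true"
      - "traefik.http.services.cloud-network-keycloak.loadbalancer.server.port=8080"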

Related

Can't log into pgAdmin 4 with Docker

I feel like it's a highly stupid question, but I just can't log into my pgAdmin.
My compose.yml file is:
services:
  back:
    container_name: blog-django
    build: ./blog-master
    command: gunicorn blog.wsgi:application --bind 0.0.0.0:8000
    expose:
      - 8000
    links:
      - db
    volumes:
      - blog-django:/usr/src/app/
      - blog-static:/usr/src/app/static
      - blog-media:/usr/src/app/media
    env_file: ./.env
    depends_on:
      db:
        condition: service_healthy
  nginx:
    container_name: blog-nginx
    build: ./nginx/
    ports:
      - "1337:80"
    volumes:
      - blog-static:/usr/src/app/static
      - blog-media:/usr/src/app/media
    links:
      - back
    depends_on:
      - back
  db:
    container_name: db
    image: postgres:14
    restart: always
    expose:
      - "5432"
    environment:
      POSTGRES_DB: docker
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U docker"]
      interval: 5s
      timeout: 5s
      retries: 5
  pgadmin:
    image: dpage/pgadmin4
    container_name: pgadmin
    restart: always
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: root
    volumes:
      - pgadmin-data:/var/lib/pgadmin
    depends_on:
      - db
    links:
      - db
  mailhog:
    container_name: mailhog
    image: mailhog/mailhog
    #logging:
    #  driver: 'none' # disable saving logs
    expose:
      - 1025
    ports:
      - 1025:1025 # smtp server
      - 8025:8025 # web ui
volumes:
  blog-django:
  blog-static:
  blog-media:
  pgdata:
  pgadmin-data:
and my .env file is
DEBUG=1
SECRET_KEY='a key'
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=docker
SQL_USER=docker
SQL_PASSWORD=docker
SQL_HOST=db
SQL_PORT=5432
DATABASE=postgres
But with any username/email and password it keeps saying "wrong email/password". If I log in with "admin@admin.com" the error is "Incorrect username or password", and with a username it says "email/username is not valid".
I tried using the admin@mailhog.local email so I could do a password recovery, but again, "Incorrect username or password".

Nextcloud can't create an admin user

Error while trying to create the admin user:
Error while trying to create admin user: Failed to connect to the database: An exception occurred in the driver: SQLSTATE[HY000] [1045] Access denied for user 'nextcloud'@'172.22.0.6' (using password: YES)
docker-compose.yml
version: '3'
volumes:
  nextcloud-data:
  nextcloud-db:
networks:
  nginx_network:
    external: true
services:
  app:
    image: nextcloud
    restart: always
    volumes:
      - nextcloud-data:/var/www/html
    environment:
      - MYSQL_PASSWORD=test
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
    networks:
      - nginx_network
  db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - nextcloud-db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_PASSWORD=test
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    networks:
      - nginx_network
I couldn't find any similar problem with a solution that works for me, and the docker compose seems okay to me.
Solution that worked for me (see the sketch after this list):
1. Changed the database container's name.
2. Deleted all the volumes.
3. Do not set the user to root; a normal user is enough.
(This error also shows up if you mistype the credentials between containers.)
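As a rough illustration of those points, a sketch of an adjusted db service is below. The container name nextcloud-db is a hypothetical stand-in for "changed database container's name", the MYSQL_* values must match the app service exactly, and the old volume has to be removed (for example with docker compose down -v) because the mariadb image only initializes the database and user on the first start with an empty data directory:

  db:
    image: mariadb
    container_name: nextcloud-db   # hypothetical new name for the database container
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      # Delete the old volume first (e.g. docker compose down -v); mariadb only
      # creates the nextcloud user/database when this directory starts out empty.
      - nextcloud-db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=test
      # These three values must match the app service exactly.
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=test
      - MYSQL_DATABASE=nextcloud
    networks:
      - nginx_network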

How can I call my Node.js backend Docker container from my nginx frontend Docker container with Docker Swarm?

I have 3 containers that communicate via Docker Swarm. If I run my application over HTTP and connect as http://domain.com, everything works fine, but if I use HTTPS (https://www.domain.com) my frontend can't communicate with the backend and I get the following error:
Ajax.js:10 POST https://www.domain.com/Init net::ERR_NAME_NOT_RESOLVED
Can someone help me solve my problem and understand the mistake?
Thank you.
Here is my compose file:
version: '3'
services:
  ssl:
    image: danieldent/nginx-ssl-proxy
    restart: always
    environment:
      UPSTREAM: myApp:8086
      SERVERNAME: dominio.com
    ports:
      - 80:80/tcp
      - 443:443/tcp
    depends_on:
      - myApp
    volumes:
      - ./nginxAPP:/etc/letsencrypt
      - ./nginxAPP:/etc/nginx/user.conf.d:ro
  bdd:
    restart: always
    image: postgres:12
    ports:
      - 5432:5432/tcp
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: 12345
      POSTGRES_DB: miBDD
    volumes:
      - ./pgdata:/var/lib/postgresql/data
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 9095:80/tcp
    environment:
      PGADMIN_DEFAULT_EMAIL: user
      PGADMIN_DEFAULT_PASSWORD: 12345
      PROXY_X_FOR_COUNT: 3
      PROXY_X_PROTO_COUNT: 3
      PROXY_X_HOST_COUNT: 3
      PROXY_X_PORT_COUNT: 3
    volumes:
      - ./pgadminAplicattion:/var/lib/pgadmin
  myApp:
    restart: always
    image: appImage
    ports:
      - 8086:8086
    depends_on:
      - bdd
    working_dir: /usr/myApp
    environment:
      CONFIG_PATH: ../configuation
    command: "node server.js"

Jira & Docker & Traefik Setup

I'm a first-time Traefik user and I successfully configured this docker compose setup for Jira with Traefik and a Let's Encrypt cert.
My problem is that Jira must be able to connect to itself. There are some Jira services, such as gadgets, that load their data via JavaScript from Jira's own address over HTTP. This type of service does not work for me. There is a support document that describes this problem and also shows solutions for it, but I don't know how to set this up correctly with Traefik/Docker: https://confluence.atlassian.com/jirakb/how-to-fix-gadget-titles-showing-as-__msg_gadget-813697086.html
Your help would be great. Thanks a lot!
version: '3'
services:
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --docker # Enables the web UI and tells Traefik to listen to docker --api
    ports:
      - "80:80" # The HTTP port
      - "443:443" # The HTTPS port
      - "8081:8080" # The Web UI (enabled by --api)
    hostname: traefik
    restart: unless-stopped
    domainname: ${DOMAINNAME}
    networks:
      - frontend
      - backend
    labels:
      - "traefik.enable=false"
      - "traefik.frontend.rule=Host:traefik.${DOMAINNAME}"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
      - /etc/compose/traefik:/etc/traefik
      - /etc/compose/shared:/shared
  jira:
    image: dchevell/jira-software:${JIRAVERSION}
    ports:
      - 8080:8080
    networks:
      - backend
    restart: unless-stopped
    volumes:
      - /data/files/jira/data:/var/atlassian/application-data/jira
    environment:
      - JVM_MAXIMUM_MEMORY=2048m
      - JVM_MINIMUM_MEMORY=768m
      - CATALINA_CONNECTOR_PROXYNAME=jira.${DOMAINNAME}
      - CATALINA_CONNECTOR_PROXYPORT=443
      - CATALINA_CONNECTOR_SCHEME=https
      - CATALINA_CONNECTOR_SECURE=true
    depends_on:
      - jira-postgresql
    links:
      - "jira-postgresql:database"
    labels:
      - "traefik.enable=true"
      - "traefik.backend=jira"
      - "traefik.frontend.rule=Host:jira.${DOMAINNAME}"
      - "traefik.port=8080"
  jira-postgresql:
    image: postgres:9.6.11-alpine
    networks:
      - backend
    ports:
      - 5432:5432
    restart: unless-stopped
    volumes:
      - /data/index/postgresql/data/:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=jira
      - POSTGRES_USER=jira
      - POSTGRES_DB=jira
    labels:
      - "traefik.enable=false"
  # Portainer
  portainer:
    image: portainer/portainer
    container_name: portainer
    restart: always
    ports:
      - 9000:9000
    command: -H unix:///var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./etc-portainer/data:/data
    environment:
      TZ: ${TZ}
    labels:
      - "traefik.enable=false"
networks:
  frontend:
    external:
      name: frontend
  backend:
    driver: bridge
Here is the configuration I got working with apps over TLS. It's not super intuitive, but it accepts and redirects secure traffic properly. I've got mine using ACME with GoDaddy for certs, and it appears to be functioning properly over HTTPS with a forced redirect.
Forced redirect for reference:
[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
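(For reference, if you are on Traefik v2 rather than v1, the same forced redirect is usually expressed as static-configuration flags on the traefik service instead of an entryPoints TOML block; a minimal sketch, assuming entrypoints named web and websecure:)

  reverse-proxy:
    image: traefik:v2.10
    command:
      - --providers.docker
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # Redirect everything arriving on :80 to the TLS entrypoint.
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --entrypoints.web.http.redirections.entrypoint.scheme=https
    ports:
      - "80:80"
      - "443:443"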
And the compose file that I made to get things deployed properly:
version: '3'
services:
  jira:
    image: dchevell/jira-software:8.1.0
    deploy:
      restart_policy:
        condition: on-failure
      labels:
        - traefik.frontend.rule=Host:jira.mydomain.com
        - traefik.enable=true
        - traefik.port=8080
    ports:
      - "8080"
    networks:
      - traefik-pub
      - jiranet
    environment:
      - CATALINA_CONNECTOR_PROXYNAME=jira.mydomain.com
      - CATALINA_CONNECTOR_PROXYPORT=443
      - CATALINA_CONNECTOR_SCHEME=https
      - CATALINA_CONNECTOR_SECURE=true
  jira-postgresql:
    image: postgres:11.2-alpine
    networks:
      - jiranet
    ports:
      - "5432"
    volumes:
      - jira-postgres-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=supersecret
      - POSTGRES_USER=secret_user
      - POSTGRES_DB=jira_db
    labels:
      - "traefik.enable=false"
volumes:
  jira-postgres-data:
networks:
  traefik-pub:
    external: true
  jiranet:
    driver: overlay
This still required manual configuration of the database. I may one day take the time to build my own Jira dockerfile that accepts the database config up front, but with this one working, I don't see much point in pre-configuring the database connection when it's 20 seconds of extra work vs. rebuilding a dockerfile that I haven't written myself.

Unpredictable behavior of registrator and consul

I have a very simple docker-compose config:
version: '3.5'
services:
  consul:
    image: consul:latest
    hostname: "consul"
    command: "consul agent -server -bootstrap-expect 1 -client=0.0.0.0 -ui -data-dir=/tmp"
    environment:
      SERVICE_53_IGNORE: 'true'
      SERVICE_8301_IGNORE: 'true'
      SERVICE_8302_IGNORE: 'true'
      SERVICE_8600_IGNORE: 'true'
      SERVICE_8300_IGNORE: 'true'
      SERVICE_8400_IGNORE: 'true'
      SERVICE_8500_IGNORE: 'true'
    ports:
      - 8300:8300
      - 8400:8400
      - 8500:8500
      - 8600:8600/udp
    networks:
      - backend
  registrator:
    command: -internal consul://consul:8500
    image: gliderlabs/registrator:master
    depends_on:
      - consul
    links:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - backend
  image_tagger:
    build: image_tagger
    image: image_tagger:latest
    ports:
      - 8000
    networks:
      - backend
  mongo:
    image: mongo
    command: [--auth]
    ports:
      - "27017:27017"
    restart: always
    networks:
      - backend
    volumes:
      - /mnt/data/mongo-data:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: qwerty
  postgres:
    image: postgres:11.1
    # ports:
    #   - "5432:5432"
    networks:
      - backend
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      - ./scripts:/docker-entrypoint-initdb.d
    restart: always
    environment:
      POSTGRES_PASSWORD: qwerty
      POSTGRES_DB: ttt
      SERVICE_5432_NAME: postgres
      SERVICE_5432_ID: postgres
networks:
  backend:
    name: backend
(and some other services)
I also configured dnsmasq on the host to access containers by their internal names.
I have spent a couple of days on this, but I am still not able to make it stable:
1. Very often some services simply don't get registered by registrator (sometimes I get 5 out of 15).
2. Very often containers are registered with the wrong IP address, so the container info shows one (correct) address and Consul shows another (incorrect) one. When I want to reach a service by an address like myservice.service.consul, I end up at the wrong container.
3. Sometimes resolution fails entirely, even when containers are registered with the correct IP.
Do I have some mistakes in my config?
So, at least for now, I was able to fix this by passing the -resync 15 param to registrator. I'm not sure if it's the correct solution, but it works.
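Applied to the compose file above, the registrator service then looks roughly like this (-resync takes a number of seconds between full re-registrations; everything else is unchanged):

  registrator:
    image: gliderlabs/registrator:master
    # -resync 15 makes registrator re-register all running containers with
    # Consul every 15 seconds, so missed or stale registrations get corrected.
    command: -internal -resync 15 consul://consul:8500
    depends_on:
      - consul
    links:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - backend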
