Keycloak: how to change the Administration Console domain? - docker

In Docker mode (https://hub.docker.com/r/jboss/keycloak):
docker run -e KEYCLOAK_USER=<USERNAME> -e KEYCLOAK_PASSWORD=<PASSWORD> jboss/keycloak
Everything works only via localhost:8080.
If we try to access https://custom.com/auth/admin/master/console, we get a white page (access is allowed only from localhost).
How can I change the domain to custom.com? (The documentation only describes editing a file, but that isn't possible in Docker; I need a stateless solution.)
Any ideas that don't involve mounting a file?

You need to add -e PROXY_ADDRESS_FORWARDING=true to reach the Keycloak admin console from https://custom.com (rather than localhost:8080).
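A minimal docker-compose sketch of the same idea, assuming a TLS-terminating reverse proxy in front that sets the X-Forwarded-* headers (the domain, credentials, and service layout below are placeholders, not from the question):

```yaml
# Hypothetical docker-compose.yml: Keycloak behind a reverse proxy
# serving https://custom.com. Adjust names and values to your setup.
version: "3"
services:
  keycloak:
    image: jboss/keycloak
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      # Trust the X-Forwarded-* headers set by the proxy
      PROXY_ADDRESS_FORWARDING: "true"
      # Optionally pin the public base URL (supported by newer
      # jboss/keycloak images)
      KEYCLOAK_FRONTEND_URL: "https://custom.com/auth"
    ports:
      - "8080:8080"
```

Both settings are environment variables, so the container stays stateless: no file needs to be mounted or edited.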

Related

Keycloak - WhoamI request fails with 403

I have tried solving this problem for days now and I can't figure out how.
I have created a docker instance with keycloak using:
docker run -p 8080:8080 --name keycloak --net keycloak-network -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin -e KEYCLOAK_FRONTEND_URL="https://myurl.com/auth" -e KEYCLOAK_LOGLEVEL=DEBUG --env JAVA_OPTS="-Djboss.bind.address=0.0.0.0" jboss/keycloak
When I open https://myurl.com and click on Administration Console, it correctly prompts for the login credentials. But once I click "login" it gives me a white page. In Chrome's debugger it says that the request to /auth/admin/master/console/whoami fails with 403, which is weird, because all requests prior to that one, including the token request, work fine.
However, if I recreate the container with the same command except for the KEYCLOAK_FRONTEND_URL part, and access it via a local address (192.168.1.X), it works just fine.
I enabled logging, and both runs look the same except that the one with the URL set doesn't have the following line (or the lines after it):
DEBUG [org.keycloak.services.resources.admin.AdminConsole] (default task-1) setting up realm access for a master realm user
But this doesn't allow me to use my public address, outside my organization.
What am I missing?
Note: I have replaced the URLs with fake ones for obvious security reasons.
Thanks a lot in advance!!
It has been a while since I asked this, but it might be useful to someone. I finally got to the bottom of the issue: the server/proxy was for some reason stripping the Authorization header, and therefore the request always returned a 403.

How to avoid authentication when connecting to standalone chrome debug container?

I am using selenium/standalone-chrome-debug.
By default, connecting to the container via VNC triggers an authentication prompt, which can be avoided by setting an environment variable as per the documentation:
If you want to run VNC without password authentication you can set the environment variable VNC_NO_PASSWORD=1.
When I start the container with the following command, I'm still prompted for the password:
docker run -d -p 4444:4444 -p 0:5900 -v /dev/shm:/dev/shm -e VNC_NO_PASSWORD=1 selenium/standalone-chrome-debug
As you can see in the following screencast:
- I'm still asked for a password
- Trying to authenticate without a password fails
- When I use the default password (secret), it passes
Question: how do I avoid authentication completely?
Adding
VNC_NO_PASSWORD: 1
to the environment of the relevant service in docker-compose.yml worked for me.
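A sketch of what that compose file might look like, assuming the standard ports from the docker run command above (the service name is a placeholder):

```yaml
# Hypothetical docker-compose.yml equivalent of the docker run command,
# with VNC password authentication disabled.
version: "3"
services:
  chrome:
    image: selenium/standalone-chrome-debug
    environment:
      # Disables the VNC password prompt, per the image documentation
      VNC_NO_PASSWORD: 1
    ports:
      - "4444:4444"   # Selenium WebDriver endpoint
      - "5900:5900"   # VNC
    volumes:
      - /dev/shm:/dev/shm
```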

Unable to access dockerized Neo4j webinterface through dockerized Traefik

I'm trying to access the web interface of a dockerized Neo4j instance with the help of Traefik as a reverse proxy.
I can reach the Neo4j instance's web interface by navigating to myDomain.demo:7479/browser. However, I want to be able to reach it by simply navigating to myDomain.demo/neo4j/myNeo, so I don't have to remember port numbers when running multiple Neo4j instances on the same machine.
Sadly I'm not able to reach the web interface this way; instead I'm shown a blank page that asks me for credentials. I guess this is at least a good sign, since normally when accessing the web interface I have to enter my DB credentials into a GUI mask to connect to my Neo4j database. However, it should look like the normal Neo4j login screen instead of the plain browser popup I'm seeing.
Clearly I can't be the first one trying to access multiple Neo4j instances and their corresponding web interfaces behind a reverse proxy, but I can't wrap my head around how to do it. Here are my setup commands:
Dockerized Traefik-Proxy
docker run --name proxy -p 80:80 -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock \
traefik \
--api --loglevel=debug --entryPoints="Name:http Address::80" \
--docker --docker.endpoint="unix://var/run/docker.sock"
Dockerized Neo4j instance (which works fine without the proxy)
docker run --name myNeo -d --publish=7479:7474 --publish=7701:7687 \
--label traefik.frontend.rule="Host:myDomain.demo;Path:/neo4j/myNeo" \
--label traefik.backend=myNeo \
--label traefik.port=7474 \
neo4j:latest
(For the sake of simplicity I've dropped several volume mounts from the Neo4j docker command.)
I noticed that when manually navigating to myDomain.demo:7479, I get redirected to myDomain.demo:7479/browser. Maybe Traefik can't handle that redirect, and this is why I'm served a blank page without errors?
Thanks in advance.
Three things:
The Path rule is an exact match only. Path:/example will match /example, but not /example/bacon. You probably want to use PathPrefix instead.
If you can't open myDomain.demo:7479/neo4j/myNeo/browser directly, then you will not be able to use subdirectories for routing. You need to configure your application to listen on a subpath: Neo4j needs to know its path so it can generate links etc.
Once you have Neo4j working on a subpath, you can use Traefik to route the custom domain/path to it.
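To illustrate the first point, here is a hedged compose sketch using the same Traefik v1 label syntax as the commands above, with the Path rule swapped for PathPrefix (domain and ports are placeholders carried over from the question):

```yaml
# Hypothetical docker-compose.yml: Traefik v1 routing to Neo4j by
# path prefix. PathPrefix matches /neo4j/myNeo and everything below it,
# unlike Path, which matches only the exact path.
version: "3"
services:
  proxy:
    image: traefik
    command:
      - "--api"
      - "--docker"
      - "--loglevel=debug"
      - "--entryPoints=Name:http Address::80"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  myNeo:
    image: neo4j:latest
    labels:
      traefik.frontend.rule: "Host:myDomain.demo;PathPrefix:/neo4j/myNeo"
      traefik.backend: "myNeo"
      traefik.port: "7474"
```

Note that this only fixes the matching; Neo4j itself must still be configured to serve its browser under the /neo4j/myNeo subpath, or its redirects and asset links will point to the wrong place.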

Set user and password for rabbitmq on docker container

I am trying to create a RabbitMQ docker container with a default user and password, but when I try to log in to the management plugin those credentials don't work.
This is how I create the container:
docker run -d -P --hostname rabbit -p 5009:5672 -p 5010:15672 --name rabbitmq -e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=pass -v /home/desarrollo/rabbitmq/data:/var/lib/rabbitmq rabbitmq:3.6.10-management
What am I doing wrong?
Thanks in advance
The default user is created only if the database does not exist, so the environment variables have no effect if the volume already contains data.
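In compose form, the same setup looks roughly like this (values copied from the docker run command above; the key point is in the comment):

```yaml
# Hypothetical docker-compose.yml equivalent of the docker run command.
# RABBITMQ_DEFAULT_USER/PASS only take effect on first start against an
# empty data volume; if /home/desarrollo/rabbitmq/data already holds a
# database, wipe it (or point at a fresh path) and recreate the container.
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3.6.10-management
    hostname: rabbit
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: pass
    ports:
      - "5009:5672"    # AMQP
      - "5010:15672"   # management UI
    volumes:
      - /home/desarrollo/rabbitmq/data:/var/lib/rabbitmq
```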
I had the same problem when trying to access the management UI in Chrome; Firefox worked fine. The culprit turned out to be a deprecated JS method that Chrome no longer allowed.

How to link docker containers?

I have tried linking my docker containers, but I get an error on access.
My structure is as follows:
Database container (MySQL) - container name is um-mysql
Back-end container (Tomcat) - image name is cz-um-app
Front-end container (Nginx) - image name is cz-um-frontend
Linking the back end to the database container works perfectly:
$ docker run -p 8080:8080 --name backendservices --link um-mysql:um-mysql cz-um-app
The front end is linked to the back end as follows:
$ docker run -p 80:80 --name frontend --link backendservices:backendservices cz-um-frontend
But linking the front end to the back end does not work.
I have a login page; on submit, it accesses the URL http://backendservices:8080/MyApp
In console, it shows error as:
net::ERR_NAME_NOT_RESOLVED
Not sure why linking the back-end container with the database works fine but not the front end with the back end. Do I need to configure some settings in Nginx for this?
The hosts entry is as follows, and I am able to ping backendservices too:
First, you don't need to map 8080:8080 for backendservices: any EXPOSEd port in the backendservices image is visible to any other container linked to it. No host port mapping is needed.
Second, you can check in your front end whether the backend has been registered:
docker exec -it frontend bash
cat /etc/hosts
If it is not there, check docker ps -a to see whether the backend is still running.
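For what it's worth, the same topology can be sketched without --link at all: in a compose file, all services share a user-defined network by default and resolve each other by service name (service names below are taken from the question; everything else is a placeholder):

```yaml
# Hypothetical docker-compose.yml: on the shared default network,
# containers reach each other by service name (e.g. um-mysql,
# backendservices), replacing the legacy --link mechanism.
version: "3"
services:
  um-mysql:
    image: mysql
  backendservices:
    image: cz-um-app
    ports:
      - "8080:8080"
  frontend:
    image: cz-um-frontend
    ports:
      - "80:80"
```

One caveat that applies to both --link and networks: names like backendservices resolve only inside the containers. A login page running in the user's browser executes outside Docker, so a URL such as http://backendservices:8080/MyApp cannot be resolved there; the browser needs a host-reachable address, or Nginx must proxy the request to the backend.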
