Accessing a Keycloak Service inside Docker by its Network Alias

Scenario
I have a secured web service (Jakarta EE + MicroProfile + JWT) running in Open Liberty. As the issuer of the JWT tokens I use Keycloak. For testing and development I want to run both services in Docker, so I wrote a docker-compose file. As a test client I use JUnit with the MicroProfile REST client; this runs outside of Docker.
Problem
I can retrieve the JWT token via localhost on the host, e.g.:
POST /auth/realms/DC/protocol/openid-connect/token HTTP/1.1
Host: localhost:8080
Content-Type: application/x-www-form-urlencoded
realm=DC&grant_type=password&client_id=dc&username=dc_editor&password=******
The problem is that, from the perspective of the web service, localhost isn't the Keycloak server. The JWT token validation against the issuer therefore fails.
Goal
I want to access the Keycloak server from the host via its Docker-internal network alias, e.g. dcAuthServer. The JWT token would then be validated correctly.
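For illustration, the desired token request from the host would be the same as the one above, only with the alias as host name (a sketch of the goal, not something that works yet):
POST /auth/realms/DC/protocol/openid-connect/token HTTP/1.1
Host: dcAuthServer:8080
Content-Type: application/x-www-form-urlencoded
realm=DC&grant_type=password&client_id=dc&username=dc_editor&password=******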
Code
The docker-compose file looks like this:
version: "3.8"
services:
dcWebservice:
environment:
- DC_AUTH_SERVER_HOST=dcAuthServer
- DC_AUTH_SERVER_PORT=8080
- DC_AUTH_SERVER_REALM=DC
image: dc_webservice:latest
ports:
- "9080:9080"
networks:
- dcNetwork
dcAuthServer:
image: dc_keycloak:latest
ports:
- "8080:8080"
networks:
dcNetwork:
aliases:
- dcAuthServer
healthcheck:
test: "curl --fail http://localhost:8080/auth/realms/DC || false"
networks:
dcNetwork:
The DC_AUTH_* environment variables are used in the mpJwt configuration in the server.xml of the Open Liberty server:
<mpJwt id="dcMPJWT" audiences="dc" issuer="http://${DC_AUTH_SERVER_HOST}:${DC_AUTH_SERVER_PORT}/auth/realms/${DC_AUTH_SERVER_REALM}"
jwksUri="http://${DC_AUTH_SERVER_HOST}:${DC_AUTH_SERVER_PORT}/auth/realms/${DC_AUTH_SERVER_REALM}/protocol/openid-connect/certs"/>
The issuer attribute is where I have to put a trusted issuer for the JWT tokens.
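With the values from the docker-compose file above, the issuer resolves to http://dcAuthServer:8080/auth/realms/DC, i.e. exactly the host name under which the test client on the host would also have to reach Keycloak.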
I hope I did not forget important information - just ask!
Thanks in advance
Robert

Related

Traefik with Docker-Compose not working as expected

I am fairly new to using Traefik, so I might be missing something simple, but I have the following docker-compose.yaml:
version: '3.8'
services:
  reverse-proxy:
    container_name: reverse_proxy
    restart: unless-stopped
    image: traefik:v2.0
    command:
      - --entrypoints.web.address=:80
      - --entrypoints.web-secure.address=:443
      - --api.insecure=true
      - --providers.file.directory=/conf/
      - --providers.file.watch=true
      - --providers.docker=true
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./scripts/certificates/conf/:/conf/
      - ./scripts/certificates/ssl/:/certs/
    networks:
      - bnkrl.io
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`traefik.bnkrl.io`)"
      - "traefik.docker.network=bnkrl.io"
  bankroll:
    container_name: bankroll
    build:
      context: .
    ports:
      - "3000"
    volumes:
      - .:/usr/src/app
    command: yarn start
    networks:
      - bnkrl.io
    labels:
      - "traefik.http.routers.bankroll.rule=Host(`bankroll.bnkrl.io`)"
      - "traefik.docker.network=bnkrl.io"
      - "traefik.http.services.bankroll.loadbalancer.server.port=3000"
      - "traefik.http.routers.bankroll-https.rule=Host(`bankroll.bnkrl.io`)"
      - "traefik.http.routers.bankroll-https.tls=true"
networks:
  bnkrl.io:
    external: true
But for some reason the following is happening:
Running curl when ssh'd into my bankroll container gives the following:
/usr/src/app# curl bankroll.bnkrl.io
curl: (7) Failed to connect to bankroll.bnkrl.io port 80: Connection refused
Despite having - "traefik.http.services.bankroll.loadbalancer.server.port=3000" label set up.
I am also unable to hit traefik from my application container:
curl traefik.bnkrl.io
curl: (6) Could not resolve host: traefik.bnkrl.io
This is despite my expectation to be able to do so, since they are both on the same network.
Any help with understanding what I might be doing wrong would be greatly appreciated! My application (bankroll) is a very basic hello-world react app, but I don't think any of the details around that are relevant to the issue I'm facing.
EDIT: I am also not seeing any error logs on the Traefik side of things.
You are using host names that are not declared anywhere and are therefore unreachable.
To reach a container from another container, you need to use the service name. For example, if you connect to bankroll from the reverse-proxy container, it will hit the other service.
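For example, from inside the reverse-proxy container (a sketch, assuming curl is available there and the app listens on port 3000, as declared in the labels):
curl http://bankroll:3000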
If you want to access them from the host machine instead, you have to publish the ports (which you did; that is what the ports sections in your docker-compose file are for) and access them via localhost or your machine's local IP address instead of traefik.bnkrl.io.
If you want to access them via traefik.bnkrl.io, you have to declare this host name and point it to the place where the Docker containers are running.
So either create a DNS record in the bnkrl.io domain pointing to your local machine, or add a hosts file entry on your computer pointing to 127.0.0.1.
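For local development, the hosts file entries could look like this (assuming the containers run on the local machine):
127.0.0.1 traefik.bnkrl.io
127.0.0.1 bankroll.bnkrl.io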
Another note: for SSL you are going to need a valid certificate for the host name. In local development you can use the self-signed certificate provided by Traefik, but you may have to install it on the computer connecting to the service, or allow untrusted certificates in your browser or wherever you are making the requests from (some browsers no longer accept self-signed certificates). For SSL on the Internet you will need to look at something like Let's Encrypt.

Setup of Cyberark Conjur server

I've created a project in Node.js to store and fetch credentials from CyberArk Conjur (using its REST API).
But to test the application I'm stumbling over setting up the Conjur server.
The problem is that the server runs fine within its Docker container, but how do I access it from outside (the host machine)? Port mapping is not working.
Or is there a Conjur server hosted on the Internet for public use?
All I want is to test the API calls.
As of this writing, the Conjur Node.js API is not actively supported. Here are some suggestions for testing the APIs.
Can I see the command you're using to start Docker, or your docker-compose file?
Method 1
If you're using the setup from the Conjur Quickstart Guide, your docker-compose.yml file should look something like:
...
conjur:
  image: cyberark/conjur
  container_name: conjur_server
  command: server
  environment:
    DATABASE_URL: postgres://postgres@database/postgres
    CONJUR_DATA_KEY:
    CONJUR_AUTHENTICATORS:
  depends_on:
    - database
  restart: on-failure
proxy:
  image: nginx:1.13.6-alpine
  container_name: nginx_proxy
  ports:
    - "8443:443"
  volumes:
    - ./conf/:/etc/nginx/conf.d/:ro
    - ./conf/tls/:/etc/nginx/tls/:ro
  depends_on:
    - conjur
    - openssl
  restart: on-failure
...
This means Conjur is running behind an NGINX proxy that handles the SSL, and it does not have a port exposed outside the Docker network it is running on. With this setup you can access the Conjur server at https://localhost:8443 on your local machine.
Note: you will need the SSL cert located in ./conf/tls/. Since this is a demo environment, the certs are made readily available for testing like this.
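A quick smoke test from the host could then look like this (a sketch; point --cacert at the demo certificate under ./conf/tls/, or use -k to skip verification entirely):
curl -k https://localhost:8443/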
Method 2
If you do not care about security and are purely testing the REST API endpoints, you can cut out the SSL and modify the docker-compose.yml to expose the Conjur server's port to your local machine like this:
...
conjur:
  image: cyberark/conjur
  container_name: conjur_server
  command: server
  environment:
    DATABASE_URL: postgres://postgres@database/postgres
    CONJUR_DATA_KEY:
    CONJUR_AUTHENTICATORS:
  ports:
    - "8080:80"
  depends_on:
    - database
  restart: on-failure
Now you'll be able to talk to the Conjur Server on your local machine through http://localhost:8080.
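For example (a sketch, assuming the container is up and healthy):
curl http://localhost:8080/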
For more info, see the Networking in Docker Compose docs.

IdentityServer4 - Different behavior from docker-compose than manual run

I have three different projects in .NET Core 3.1:
STS
Admin UI
Admin API
I have a Dockerfile per project, and they build my images fine.
If I run my containers manually with these commands, everything is OK:
docker run -e "ASPNETCORE_ENVIRONMENT=Development" -e "ASPNETCORE_URLS=http://+:80" -e "DOTNET_USE_POLLING_FILE_WATCHER=1" -p 5001:80 stsidentity:latest
docker run -e "ASPNETCORE_ENVIRONMENT=Development" -e "ASPNETCORE_URLS=http://+:80" -e "DOTNET_USE_POLLING_FILE_WATCHER=1" -p 5003:80 adminapi:latest
docker run -e "ASPNETCORE_ENVIRONMENT=Development" -e "ASPNETCORE_URLS=http://+:80" -e "DOTNET_USE_POLLING_FILE_WATCHER=1" -p 9001:80 admin:latest
My next step was to create a docker-compose file to automate this, but I ran into some issues.
From Visual Studio, when I execute the docker-compose below, the sts and api containers work well.
When I try to access my Admin UI, I get the error "InvalidOperationException: IDX20803: Unable to obtain configuration from: 'http://localhost:5001/.well-known/openid-configuration'."
If I manually copy/paste that URL into my browser, I can access it normally.
I don't understand why the behavior differs between docker-compose and running the containers manually from the same images.
version: "3.4"
services:
admin:
image: ${DOCKER_REGISTRY-}admin:latest
ports:
- "9001:80"
build:
context: .
dockerfile: src/IdentityServer/Admin/Dockerfile
container_name: IS4-admin
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://+:80
- DOTNET_USE_POLLING_FILE_WATCHER=1
depends_on:
- sts.identity
- admin.api
admin.api:
image: ${DOCKER_REGISTRY-}admin-api:latest
ports:
- "5003:80"
build:
context: .
dockerfile: src/IdentityServer/Admin.Api/Dockerfile
container_name: IS4-admin-api
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://+:80
- DOTNET_USE_POLLING_FILE_WATCHER=1
depends_on:
- sts.identity
sts.identity:
image: ${DOCKER_REGISTRY-}sts-identity:latest
ports:
- "5001:80"
build:
context: .
dockerfile: src/IdentityServer/STS.Identity/Dockerfile
container_name: IS4-sts-identity
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://+:80
- DOTNET_USE_POLLING_FILE_WATCHER=1
It's probably something obvious but I don't see it.
EDIT:
I read that this is possibly a problem related to localhost, so I tried to use a Traefik container as a reverse proxy, and in the end I have the same issue.
I'm starting to think that my problem is maybe not related to containers at all, but simply to running the application anywhere other than localhost under IIS Express.
From what I read about IdentityServer4, it could come from a problem with a certificate. I think I put everything on HTTP, and I don't understand why I would have a certificate problem over HTTP.
I succeeded in obtaining a more detailed error message:
IOException: IDX20807: Unable to retrieve document from: 'http://login.traefik.me/.well-known/openid-configuration'. HttpResponseMessage: 'StatusCode: 404, ReasonPhrase: 'Not Found', Version: 1.1, Content: System.Net.Http.HttpConnectionResponseContent, Headers:
{
Date: Wed, 08 Jul 2020 13:21:07 GMT
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Frame-Options: SameOrigin
Referrer-Policy: no-referrer
Content-Security-Policy: script-src 'self' 'unsafe-inline' 'unsafe-eval';style-src 'self' 'unsafe-inline' https://fonts.googleapis.com/ https://fonts.gstatic.com/;font-src 'self' https://fonts.googleapis.com/ https://fonts.gstatic.com/
Content-Length: 0
}', HttpResponseMessage.Content: ''.
Microsoft.IdentityModel.Protocols.HttpDocumentRetriever.GetDocumentAsync(string address, CancellationToken cancel)
InvalidOperationException: IDX20803: Unable to obtain configuration from: 'http://login.traefik.me/.well-known/openid-configuration'.
Microsoft.IdentityModel.Protocols.ConfigurationManager<T>.GetConfigurationAsync(CancellationToken cancel)
Thanks in advance,
Have a nice day
This is exactly the same error as the one I was getting with my Docker Compose solution.
Here is a simplified overview of the solution:
Api
Admin
IdentityServer4
Traefik
I configured Traefik to forward requests on 'api.localhost' to the Api container, requests on 'admin.localhost' to the Admin container, and requests on 'identity.localhost' to the IdentityServer4 container.
Like you, requests to 'api.localhost', 'admin.localhost' and 'identity.localhost' all work fine in a browser.
The problem appears when I try to log in from the Api or the Admin containers, because inter-container communication is not handled by Traefik. Furthermore, a DNS name such as '*.localhost' is resolved inside the container itself, since it is a loopback name, and is therefore not resolved as it should be.
My solution to make it work as expected is the following:
STEP 1: Create a new network in your docker-compose file. This network adds your own subnet, which lets you give your Traefik container a static IP:
networks:
  traefik:
    ipam:
      driver: default
      config:
        - subnet: 172.30.0.0/16 # replace by your desired subnet
STEP 2: Configure the Traefik container's static IP address:
networks:
  default: # required to talk to the other containers
  traefik:
    ipv4_address: 172.30.1.1 # can be any address within the subnet defined at step 1
STEP 3: Add the newly created network to all the containers that rely on IdentityServer4, and do not forget to add the 'default' network too. In addition, we create extra hosts corresponding to the DNS name of our IdentityServer4 container, as configured in Traefik. These additional hosts point to the static IP address of the Traefik container.
admin:
  image: ...
  environment: ...
  labels: ...
  networks:
    traefik:
    default:
  extra_hosts:
    - "identity.localhost:172.30.1.1" # replace by the configured Traefik static address
At this point, if the Admin container tries to resolve the 'identity.localhost' DNS name, it will be directed to Traefik, which in turn will redirect to the IdentityServer4 container: yaaai!
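A quick way to verify this from inside the Admin container (a sketch, assuming these standard tools are present in your image):
getent hosts identity.localhost   # should print the Traefik static IP, e.g. 172.30.1.1
curl -k https://identity.localhost/.well-known/openid-configuration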
If you test it out, however, you will still run into a couple of issues, all related to HTTPS (which I strongly recommend you enable and configure).
Those issues are:
The SSL certificates you have configured in Traefik might not be trusted by your containers. If so, make sure to add the root CA (in the case of self-signed certificates) to the container's '/etc/ssl/certs' directory.
To do so, you have to mount a volume, such as in the following example:
admin:
  image: ...
  environment: ...
  labels: ...
  networks:
    traefik:
    default:
  extra_hosts:
    - "identity.localhost:172.30.1.1"
  volumes:
    - "./localPathToMyCACertificateFiles/:/etc/ssl/certs/"
When the Admin or Api containers try to retrieve the well-known configuration from the IdentityServer, they will probably complain that the configuration endpoints are served over http instead of https. To fix this issue, you need to configure forwarded headers for the IdentityServer4 container and all containers relying on it. Check this out if you don't know how to do so. An important point is to make sure that 'RequireHeaderSymmetry' is set to false.
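As a minimal sketch, the forwarded-headers configuration in an ASP.NET Core Startup.cs typically looks like this (based on the standard Microsoft.AspNetCore.HttpOverrides API; adapt to your own project):
// using Microsoft.AspNetCore.HttpOverrides;

// In ConfigureServices:
services.Configure<ForwardedHeadersOptions>(options =>
{
    // forward the original scheme (https) and client IP from the Traefik proxy
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    options.RequireHeaderSymmetry = false;
    // trust proxies on the Docker networks (fine for a demo; restrict in production)
    options.KnownNetworks.Clear();
    options.KnownProxies.Clear();
});

// In Configure, before the authentication middleware:
app.UseForwardedHeaders();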
This should do the trick.

Minio / Keycloak integration: connection refused

I am trying to connect MinIO with Keycloak, following the instructions provided in this documentation:
https://github.com/minio/minio/blob/master/docs/sts/keycloak.md
What I have done so far is deploy a Docker container for the MinIO server, another one for the MinIO client, and a third one for the Keycloak server.
As you can see in the following snippet, the configuration of the MinIO client container is done correctly, since I can list the buckets available on the MinIO server:
mc ls myminio
[2020-05-14 11:54:59 UTC] 0B bucket1/
[2020-05-06 12:23:01 UTC] 0B bucket2/
The issue arises when I try to configure MinIO as described in step 3 (Configure MinIO) of the documentation. In more detail, the command that I run is this one:
mc admin config set myminio identity_openid config_url="http://localhost:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"
And the error I get is this one:
mc: <ERROR> Cannot set 'identity_openid config_url=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration client_id=account' to server. Get http://localhost:8080/auth/realms/demo/.well-known/openid-configuration: dial tcp 127.0.0.1:8080: connect: connection refused.
When I curl this address, http://localhost:8080/auth/realms/demo/.well-known/openid-configuration, from the MinIO client container, though, I retrieve the JSON file.
It turns out that all I had to do was change the localhost in the config_url to the IP of the Keycloak container (172.17.0.3).
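For reference, the container's IP can be looked up with docker inspect (<keycloak-container> is a placeholder for the container's name or ID):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <keycloak-container>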
This is just a temporary solution that works for now, but I will continue searching for something more robust than hardcoding the IP.
When I figure out the solution, this answer will be updated.
Update
I had to create a docker-compose.yml file like the one below in order to overcome the issue without having to manually insert the IP of the Keycloak container.
version: '2'
services:
  miniod:
    image: minio/minio
    restart: always
    container_name: miniod
    ports:
      - 9000:9000
    volumes:
      - "C:/data:/data"
    environment:
      - "MINIO_ACCESS_KEY=access_key"
      - "MINIO_SECRET_KEY=secret_key"
    command: ["server", "/data"]
    networks:
      - minionw
  mcd:
    image: minio/mc
    container_name: mcd
    networks:
      - minionw
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    container_name: kcd
    restart: always
    ports:
      - 8080:8080
    environment:
      - "KEYCLOAK_USER=admin"
      - "KEYCLOAK_PASSWORD=pass"
    networks:
      - minionw
networks:
  minionw:
    driver: "bridge"
"Connection refused" occurs when a port is not accessible on the hostname or IP we specified.
Try exposing the port using the --expose flag along with the port number you wish to expose when using the Docker CLI. Once it is exposed, you can access it on localhost.

Setting up integration testing environment with KeyCloak in Docker

I'm trying to set up an integration testing environment for one of our Web API projects that is secured with Keycloak. My idea is to create a docker-compose file that connects all the required components, then call the Web API hosted in the container and validate the response.
Here is an example docker-compose file that connects Keycloak and the Web API:
keycloak:
  image: jboss/keycloak:3.4.3.Final
  environment:
    DB_VENDOR: POSTGRES
    KEYCLOAK_USER: admin
    KEYCLOAK_PASSWORD: admin
    POSTGRES_USER: keycloak
    POSTGRES_PASSWORD: keycloak
    POSTGRES_PORT_5432_TCP_ADDR: postgres
    POSTGRES_DATABASE: keycloak
    JDBC_PARAMS: 'connectTimeout=30'
  ports:
    - '18080:8080'
    - '18443:8443'
  networks:
    - integration-test
  depends_on:
    - postgres
test-web-api:
  image: test-web-api
  environment:
    - IDENTITY_SERVER_URL=https://keycloak:18443/auth/realms/myrealm
  networks:
    - integration-test
  ports:
    - "28080:8080"
Now, when I host Keycloak and the Web API in different containers, I can't reach Keycloak from the Web API container via localhost, so I need to use https://keycloak:18443/. But when I try that and request, for example, .well-known/openid-configuration from Keycloak, I get a connection refused error:
root#0e77e9623717:/app# curl https://keycloak:18443/auth/realms/myrealm/.well-known/openid-configuration
curl: (7) Failed to connect to keycloak port 18443: Connection refused
From the documentation I figured out that I need to enable SSL on Keycloak, but the whole process is a bit confusing and it's not very clear what domain to use for the certificate...
If somebody has had any experience with a situation like mine and could share it, that would be great!
It is not clear how you configured the integration-test network and where you are running your integration tests (on the host or in a container), so I can't give an exact answer.
But I'll try. For Keycloak access from the host:
https://<host IP or name>:18443/
From a container on the integration-test network:
https://keycloak:8443/
So try configuring test-web-api with:
IDENTITY_SERVER_URL=https://keycloak:8443/auth/realms/myrealm
and your test-web-api should be able to reach Keycloak.
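A quick way to verify is to repeat the curl from the question inside the test-web-api container, but against the internal port (using -k, since the Keycloak certificate will most likely be self-signed):
curl -k https://keycloak:8443/auth/realms/myrealm/.well-known/openid-configuration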
