Setting up an integration testing environment with Keycloak in Docker

I'm trying to set up an integration testing environment for one of our Web API projects that is secured with Keycloak. My idea is to create a docker-compose file that wires up all the required components, then call the Web API hosted in a container and validate the response.
Here is an example docker-compose file that connects Keycloak and the Web API:
keycloak:
  image: jboss/keycloak:3.4.3.Final
  environment:
    DB_VENDOR: POSTGRES
    KEYCLOAK_USER: admin
    KEYCLOAK_PASSWORD: admin
    POSTGRES_USER: keycloak
    POSTGRES_PASSWORD: keycloak
    POSTGRES_PORT_5432_TCP_ADDR: postgres
    POSTGRES_DATABASE: keycloak
    JDBC_PARAMS: 'connectTimeout=30'
  ports:
    - '18080:8080'
    - '18443:8443'
  networks:
    - integration-test
  depends_on:
    - postgres
test-web-api:
  image: test-web-api
  environment:
    - IDENTITY_SERVER_URL=https://keycloak:18443/auth/realms/myrealm
  networks:
    - integration-test
  ports:
    - "28080:8080"
Now, since Keycloak and the Web API are hosted in different containers, I can't reach Keycloak from the Web API container via localhost; I need to use https://keycloak:18443/ instead. But when I try it, for example fetching .well-known/openid-configuration from Keycloak, I get a connection refused error:
root@0e77e9623717:/app# curl https://keycloak:18443/auth/realms/myrealm/.well-known/openid-configuration
curl: (7) Failed to connect to keycloak port 18443: Connection refused
From the documentation I figured out that I need to enable SSL on Keycloak, but the whole process is a bit confusing and it's not clear what domain to use for the certificate...
If somebody has had experience with a situation like mine and could share it, that would be great!

It is not clear how you configured the integration-test network, or where you are running your integration tests (on the host or in a container), so I can't give an exact answer.
But I'll try. For Keycloak access from the host:
https://<host IP or name>:18443/
From the container in the integration-test network:
https://keycloak:8443/
So try configuring test-web-api with:
IDENTITY_SERVER_URL=https://keycloak:8443/auth/realms/myrealm
and your test-web-api should be able to reach Keycloak. Published ports (18080/18443) only exist on the host; container-to-container traffic inside the network uses the container's own ports (8080/8443).
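A quick way to verify this from inside the network (a minimal check, assuming curl is installed in the test-web-api image and the compose project is up):
docker-compose exec test-web-api curl -k https://keycloak:8443/auth/realms/myrealm/.well-known/openid-configuration
The -k flag skips certificate verification, which you will likely need with Keycloak's self-signed certificate in a test setup.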


Setup of CyberArk Conjur server

I've created a project in Node.js to store and fetch credentials from CyberArk Conjur (using its REST API).
But to test the application I'm struggling to set up the Conjur server.
The problem is that the server runs fine within its Docker container, but how do I access it from outside (the host machine)? Port mapping is not working.
Or is there any Conjur server hosted on the Internet for public use?
All I want is to test API calls.
As of writing this, the Conjur Node.js API is not currently being actively supported. Here are some suggestions for testing the APIs.
Can I see the command you're using to start Docker, or your docker-compose file?
Method 1
If you're using the setup from the Conjur Quickstart Guide, your docker-compose.yml file should look something like:
...
conjur:
  image: cyberark/conjur
  container_name: conjur_server
  command: server
  environment:
    DATABASE_URL: postgres://postgres@database/postgres
    CONJUR_DATA_KEY:
    CONJUR_AUTHENTICATORS:
  depends_on:
    - database
  restart: on-failure
proxy:
  image: nginx:1.13.6-alpine
  container_name: nginx_proxy
  ports:
    - "8443:443"
  volumes:
    - ./conf/:/etc/nginx/conf.d/:ro
    - ./conf/tls/:/etc/nginx/tls/:ro
  depends_on:
    - conjur
    - openssl
  restart: on-failure
...
This means Conjur is running behind an NGINX proxy that handles the SSL, and it does not expose a port outside the Docker network it runs on. With this setup you can access the Conjur server at https://localhost:8443 on your local machine.
Note: You will need the SSL cert located in ./conf/tls/. Since this is a demo environment, these certs are made readily available for testing like this.
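For example, a request from the host might look like this (a sketch: the exact certificate file name inside ./conf/tls/ depends on the quickstart version, so adjust the path accordingly):
curl --cacert ./conf/tls/nginx.crt https://localhost:8443/
Alternatively, curl -k skips certificate verification entirely, which is acceptable for a throwaway demo.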
Method 2
If you do not care about security and are purely testing the REST API endpoints, you could cut out the SSL and modify the docker-compose.yml to expose the Conjur server's port to your local machine, like this:
...
conjur:
  image: cyberark/conjur
  container_name: conjur_server
  command: server
  environment:
    DATABASE_URL: postgres://postgres@database/postgres
    CONJUR_DATA_KEY:
    CONJUR_AUTHENTICATORS:
  ports:
    - "8080:80"
  depends_on:
    - database
  restart: on-failure
Now you'll be able to talk to the Conjur Server on your local machine through http://localhost:8080.
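A quick smoke test from the host (a minimal check; the root path just confirms the server is answering, see the Conjur REST API docs for the real endpoints):
curl http://localhost:8080/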
For more info: Networking in Docker Compose docs

Minio / Keycloak integration: connection refused

I am trying to connect MinIO with Keycloak, following the instructions provided in this documentation:
https://github.com/minio/minio/blob/master/docs/sts/keycloak.md
What I have done so far is deploy one Docker container for the MinIO server, another for the MinIO Client (mc), and a third for the Keycloak server.
As you can see in the following snippet, the MinIO Client container is configured correctly, since I can list the buckets available on the MinIO server:
mc ls myminio
[2020-05-14 11:54:59 UTC] 0B bucket1/
[2020-05-06 12:23:01 UTC] 0B bucket2/
The issue arises when I try to configure MinIO as described in step 3 (Configure MinIO) of the documentation. In more detail, the command that I run is this one:
mc admin config set myminio identity_openid config_url="http://localhost:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"
And the error I get is this one:
mc: <ERROR> Cannot set 'identity_openid config_url=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration client_id=account' to server. Get http://localhost:8080/auth/realms/demo/.well-known/openid-configuration: dial tcp 127.0.0.1:8080: connect: connection refused.
When I curl that address (http://localhost:8080/auth/realms/demo/.well-known/openid-configuration) from the MinIO Client container, though, I retrieve the JSON file.
Turns out, all I had to do was change the localhost in the config_url to the IP of the Keycloak container (172.17.0.3).
This is just a temporary solution that works for now, but I will continue searching for something more concrete than hardcoding the IP.
When I figure out the solution, this answer will be updated.
Update
I had to create a docker-compose.yml file like the one below in order to overcome the issue without having to manually look up the IP of the Keycloak container.
version: '2'
services:
  miniod:
    image: minio/minio
    restart: always
    container_name: miniod
    ports:
      - 9000:9000
    volumes:
      - "C:/data:/data"
    environment:
      - "MINIO_ACCESS_KEY=access_key"
      - "MINIO_SECRET_KEY=secret_key"
    command: ["server", "/data"]
    networks:
      - minionw
  mcd:
    image: minio/mc
    container_name: mcd
    networks:
      - minionw
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    container_name: kcd
    restart: always
    ports:
      - 8080:8080
    environment:
      - "KEYCLOAK_USER=admin"
      - "KEYCLOAK_PASSWORD=pass"
    networks:
      - minionw
networks:
  minionw:
    driver: "bridge"
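With all three services on the minionw network, the Keycloak container is reachable by its service name, so the configuration command from the question becomes (same realm and client as above, only the hostname swapped):
mc admin config set myminio identity_openid config_url="http://kcd:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"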
Connection refused occurs when the port is not reachable at the hostname or IP you specified.
Note that --expose (and expose: in Compose) only documents a port to other containers; to reach it on localhost you have to publish it, with -p/--publish on the docker CLI or a ports: mapping in Compose.
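For example, running the Keycloak image from the compose file above with its port published (a sketch, with the admin credentials taken from that compose file):
docker run -d -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=pass quay.io/keycloak/keycloak:10.0.1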

Accessing a Keycloak service inside Docker by its network alias

Scenario
I have a secured web service (Jakarta EE + MicroProfile + JWT) running in Open Liberty. As issuer of the JWT token I use Keycloak. For testing and development I want to run both services in Docker, so I wrote a docker-compose file. As test client I use JUnit with a MicroProfile REST client, which runs outside of Docker.
Problem
I can retrieve the JWT token via localhost on the host, e.g.:
POST /auth/realms/DC/protocol/openid-connect/token HTTP/1.1
Host: localhost:8080
Content-Type: application/x-www-form-urlencoded
realm=DC&grant_type=password&client_id=dc&username=dc_editor&password=******
The problem is that, from the perspective of the web service, localhost isn't the Keycloak server, so the JWT token validation against the issuer fails.
Goal
I want to access the Keycloak server from the host by its Docker-internal network alias, e.g. dcAuthServer, so that the JWT token is validated correctly.
Code
The docker-compose file looks like this:
version: "3.8"
services:
dcWebservice:
environment:
- DC_AUTH_SERVER_HOST=dcAuthServer
- DC_AUTH_SERVER_PORT=8080
- DC_AUTH_SERVER_REALM=DC
image: dc_webservice:latest
ports:
- "9080:9080"
networks:
- dcNetwork
dcAuthServer:
image: dc_keycloak:latest
ports:
- "8080:8080"
networks:
dcNetwork:
aliases:
- dcAuthServer
healthcheck:
test: "curl --fail http://localhost:8080/auth/realms/DC || false"
networks:
dcNetwork:
The DC_AUTH_* environment variables are used in the mpJwt configuration in the Open Liberty server.xml:
<mpJwt id="dcMPJWT" audiences="dc"
       issuer="http://${DC_AUTH_SERVER_HOST}:${DC_AUTH_SERVER_PORT}/auth/realms/${DC_AUTH_SERVER_REALM}"
       jwksUri="http://${DC_AUTH_SERVER_HOST}:${DC_AUTH_SERVER_PORT}/auth/realms/${DC_AUTH_SERVER_REALM}/protocol/openid-connect/certs"/>
The issuer is where I have to put a trusted issuer for the JWT token.
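For reference, with the compose environment above these variables resolve to:
<mpJwt id="dcMPJWT" audiences="dc"
       issuer="http://dcAuthServer:8080/auth/realms/DC"
       jwksUri="http://dcAuthServer:8080/auth/realms/DC/protocol/openid-connect/certs"/>
Since Keycloak derives the token's iss claim from the host it was requested through, a token fetched via localhost presumably carries iss=http://localhost:8080/auth/realms/DC, which does not match this configured issuer.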
I hope I did not forget important information - just ask!
Thanks in advance
Robert

Problem with ports in docker-compose file

I am trying to link my API with my webapp, but it doesn't seem to work.
I get this error:
[HPM] Error occurred while trying to proxy request /users/me from
localhost:3000 to http://localhost:8080 (ECONNREFUSED)
(https://nodejs.org/api/errors.html#errors_common_system_errors)
When I try to sign in, it doesn't find the users.
Here is the content of my docker-compose.yml file:
version: '3'
services:
  api:
    build: ./web3-2019-api
    ports:
      - "8080:8080"
  webapp:
    build: ./web3-2019-webapp
    ports:
      - "3000:3000"
    links:
      - api
Try connecting via api:8080 instead of localhost.
If you connect via localhost from the webapp, it expects port 8080 to be listening inside the webapp container; but the API runs in another container, so you should connect via api:8080. Even though both run on the same machine, each container has its own network stack, and within the Docker network you address a container by its service name.
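A quick connectivity check from inside the webapp container (a sketch, assuming curl is available in that image):
docker-compose exec webapp curl http://api:8080/users/me
If that succeeds, point the proxy target at http://api:8080 instead of http://localhost:8080.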

How to connect to rabbitmq container from the application server container

I am new to Docker and I am trying to dockerize an application I have written in Golang. It is a simple web server that interacts with RabbitMQ and MongoDB.
It takes the credentials from a TOML file and loads them into a config struct before starting the application server on port 3000. These are the credentials:
mongo_server = "localhost"
database = "collect_db"
rabbitmq_server = "amqp://guest:guest@localhost:5672/"
If it can't connect to these URLs, it fails with an error. The following is my docker-compose.yml:
version: '3'
services:
  rabbitmq:
    image: rabbitmq
    ports:
      - 5672:5672
  mongodb:
    image: mongo
    ports:
      - 27017:27017
  web:
    build: .
    image: palash2504/collect
    container_name: collect_service
    ports:
      - 3000:3000
    depends_on:
      - rabbitmq
      - mongodb
    links: [rabbitmq, mongodb]
But it fails to connect to RabbitMQ at the URL used for local development, i.e. amqp://guest:guest@localhost:5672/.
I realise that the RabbitMQ container might be running at some address other than the one provided in the config file.
I would like to know the correct way to set credentials via the environment so that my app can connect to RabbitMQ.
Also, what approach would be best for initializing connections to external services in my application code? I was thinking about ditching the config.toml file and using os.Getenv to read the URLs for the connections.
Localhost addresses are resolved, well, locally: inside a container they point back at the container itself, not at the other services.
Services can reach each other by using the service name as an address. So from the web container you can target mongodb, for example.
You might give this a shot:
mongo_server = "mongodb"
database = "collect_db"
rabbitmq_server = "amqp://guest:guest@rabbitmq:5672/"
It is advisable to set service target environment variables in the compose file itself:
# docker-compose.yml
# ...other stuff...
  web:
    # ...other stuff...
    environment:
      RABBITMQ_SERVER: rabbitmq
      MONGO_SERVER: mongodb
    depends_on:
      - rabbitmq
      - mongodb
This gives you a single place to make adjustments to the configuration.
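A minimal sketch of the os.Getenv approach in Go, with local-development fallbacks (the variable names match the compose snippet above; the fallback values are assumptions for running outside Docker):
package main

import (
	"fmt"
	"os"
)

// getenv returns the value of the environment variable key,
// or fallback if the variable is unset or empty.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func main() {
	rabbitHost := getenv("RABBITMQ_SERVER", "localhost")
	mongoHost := getenv("MONGO_SERVER", "localhost")

	amqpURL := fmt.Sprintf("amqp://guest:guest@%s:5672/", rabbitHost)
	mongoURL := fmt.Sprintf("mongodb://%s:27017", mongoHost)

	// Pass these URLs to your RabbitMQ/MongoDB clients as before.
	fmt.Println(amqpURL, mongoURL)
}
This way the same binary works both inside the compose network and on a bare host.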
As a side note, it seems to me that links: [rabbitmq, mongodb] can be removed, since services on the same Compose network already resolve each other by name. And I would advise against altering the container name (remove container_name: collect_service) unless it is necessary.
