Migration of a Docker-based Keycloak configuration to Kubernetes

I am developing a microservice-based application. Currently all services run in Docker. One of these services is Keycloak. I need advice on how to migrate my configuration (especially Keycloak) to Kubernetes.
Application
The application consists of frontend code (JavaScript) running in the browser and the following backend components:
frontend: responsible for delivering the frontend code to the user's browser; runs on port 8080 (https)
backend: responsible for delivering business data; runs on port 8085 (https)
keycloak: responsible for authentication/authorization; runs on port 8143 (https)
nginx: works as a reverse proxy for the internal Docker network (i.e. for all services above). The following host-based rules are used (a sketch of such a server block follows this list):
keycloak.external -> keycloak
frontend.external -> frontend
backend.external -> backend
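For illustration only, this is roughly what one of those host-based server blocks in the nginx proxy configuration could look like; the real default.conf is mounted as a secret in the compose file below, so treat the paths and upstream names here as a sketch, not the actual file:
# Illustrative server block for keycloak.external; the frontend.external and
# backend.external blocks look analogous, just with their own certificates,
# ports and upstream service names.
server {
    listen 8143 ssl;
    server_name keycloak.external;

    ssl_certificate     /etc/nginx/certs/keycloak.external.crt;
    ssl_certificate_key /etc/nginx/certs/keycloak.external.key;

    location / {
        # re-encrypt towards the keycloak container on the internal network
        proxy_pass https://keycloak:8143;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}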
The application workflow:
The user authenticates in the frontend. For this purpose the frontend reuses the Keycloak login dialog (Keycloak runs in the backend as one of the components). Afterwards the frontend uses the JSON Web Token issued by Keycloak for authorization against the backend components, in order to retrieve the required information and present it in the browser.
Docker development configuration
Currently I develop all components on my laptop, using containers for all backend components. I have added the following entries to /etc/hosts:
127.0.0.1 keycloak.external
127.0.0.1 frontend.external
127.0.0.1 backend.external
My docker-compose file looks like this:
version: '3.5'
services:
  keycloak:
    image: keycloak
    container_name: keycloak
    secrets:
      - keycloak-server-crt
      - keycloak-server-key
      - source: keycloak-realm-conf
        target: /opt/keycloak/data/import/app-realm.json
    networks:
      default:
        aliases:
          # make keycloak reachable under its external hostname from inside the Docker network
          - keycloak.external
    expose:
      - 8143
    command:
      - "start-dev"
      - "--import-realm"
      - "--http-enabled=false"
      - "--https-port=8143"
      - "--https-client-auth=none"
      - "--hostname-url=https://keycloak.external:8143"
      - "--hostname-strict-backchannel=true"
      - "--hostname-admin-url=https://keycloak.external:8143"
      - "--https-certificate-file=/run/secrets/keycloak-server-crt"
      - "--https-certificate-key-file=/run/secrets/keycloak-server-key"
      - "--proxy=reencrypt"
      - "--hostname-port=8143"
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
  backend:
    image: backend
    container_name: backend
    secrets:
      - source: backend-server-crt
        target: /usr/local/backend/certs/server.crt
      - source: backend-server-key
        target: /usr/local/backend/certs/server.key
    expose:
      - 8085
    environment:
      # keycloak settings
      KEYCLOAK_AUTH_URL: "https://keycloak.external:8143"
      KEYCLOAK_REALM: "APP"
      KEYCLOAK_CLIENT_ID: "backend"
      KEYCLOAK_SECRET: XXXXXX
  frontend:
    image: frontend
    container_name: frontend
    secrets:
      - source: frontend-server-crt
        target: /etc/nginx/certs/server.crt
      - source: frontend-server-key
        target: /etc/nginx/certs/server.key
    expose:
      - 8080
    environment:
      KEYCLOAK_AUTH_URL: "https://keycloak.external:8143"
      KEYCLOAK_REALM: "APP"
      KEYCLOAK_CLIENT_ID: "frontend"
  # reverse proxy publishing the external ports for all services
  nginxproxy:
    image: nginx:latest
    container_name: nginxproxy
    ports:
      - "8143:8143"
      - "8085:8085"
      - "8080:8080"
    secrets:
      - source: nginxproxy-conf
        target: /etc/nginx/conf.d/default.conf
      - source: keycloak-server-crt
        target: /etc/nginx/certs/keycloak.external.crt
      - source: keycloak-server-key
        target: /etc/nginx/certs/keycloak.external.key
      - source: backend-server-crt
        target: /etc/nginx/certs/backend.crt
      - source: backend-server-key
        target: /etc/nginx/certs/backend.key
      - source: frontend-server-crt
        target: /etc/nginx/certs/frontend.crt
      - source: frontend-server-key
        target: /etc/nginx/certs/frontend.key
networks:
  default:
    name: my-network
    driver: bridge
    ipam:
      config:
        - subnet: 172.177.0.0/16
secrets:
  ......
The above configuration relies on the fact that the browser accesses Keycloak via https://keycloak.external:8143 and the backend uses the same URL. To make this possible from inside the internal Docker network, an alias is defined in docker-compose.yaml, i.e.
networks:
  default:
    aliases:
      - keycloak.external
What is the best way to migrate my development configuration to Kubernetes?
I can imagine that the Keycloak URL issue could be solved by using a fixed cluster IP XX.XX.XX.XX in the Keycloak Service and then using hostAliases in the backend pods:
hostAliases:
  - ip: "XX.XX.XX.XX"
    hostnames:
      - "keycloak.external"
Is that correct? It is not really elegant, though, if the number of backend components requiring Keycloak increases.
Update: Well, I have implemented hostAliases for all affected services. It works.
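For illustration, a simplified sketch of that setup; the pinned cluster IP 10.96.100.100 and the object names are placeholders rather than the actual values. A Keycloak Service is pinned to a fixed spec.clusterIP, and every Deployment that needs Keycloak gets a hostAliases entry mapping keycloak.external to that IP:
# Keycloak Service pinned to a fixed cluster IP so that hostAliases entries
# in other pods can reference a stable address.
apiVersion: v1
kind: Service
metadata:
  name: keycloak
spec:
  clusterIP: 10.96.100.100          # placeholder; must lie inside the cluster's service CIDR
  selector:
    app: keycloak
  ports:
    - name: https
      port: 8143
      targetPort: 8143
---
# Backend Deployment: hostAliases adds keycloak.external to the pod's /etc/hosts,
# so the pod resolves the same URL the browser uses (https://keycloak.external:8143).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      hostAliases:
        - ip: "10.96.100.100"       # the pinned Keycloak cluster IP from above
          hostnames:
            - "keycloak.external"
      containers:
        - name: backend
          image: backend
          ports:
            - containerPort: 8085
          env:
            - name: KEYCLOAK_AUTH_URL
              value: "https://keycloak.external:8143"
The drawback remains that every such Deployment has to repeat the hostAliases block, which is exactly the scaling concern mentioned above.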

Related

NetCore Docker Application with connection refused

I have two containers (both .NET Core), a Web Application and a Web API. The Web Application can be accessed from the host machine using http://localhost:51217, however I can't access the Web API using http://localhost:51218; I get connection refused. In order to access the Web API, I had to change the Kestrel URL configuration from ASPNETCORE_URLS=http://localhost to ASPNETCORE_URLS=http://0.0.0.0, so the web server listens on all IPs.
Any clue why localhost works for the Web App but not for the Web API, although both have different port mappings?
See below my docker-compose that works fine; if I change the API to ASPNETCORE_URLS=http://localhost, I get connection refused. The Dockerfiles expose port 80.
version: '3.5'
services:
  documentuploaderAPI:
    image: ${DOCKER_REGISTRY-}documentuploader
    container_name: DocumentUpoaderAPI
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://0.0.0.0
    networks:
      - doc_manager
    ports:
      - "51217:80"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets/:/root/.microsoft/usersecrets
      - ${APPDATA}/ASP.NET/Https/:/root/.aspnet/https/
      - c:\azurite:/root/.unistad/
    build:
      context: .
      dockerfile: DocumentUploader/Dockerfile
  documentmanagerAPP:
    image: ${DOCKER_REGISTRY-}documentmanager
    container_name: DocumentManagerApp
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://localhost;http://localhost
    networks:
      - doc_manager
    ports:
      - "51218:80"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets/:/root/.microsoft/usersecrets
      - ${APPDATA}/ASP.NET/Https/:/root/.aspnet/https/
    build:
      context: .
      dockerfile: Document Manager/Dockerfile
networks:
  doc_manager:
    name: doc_manager
    driver: bridge
Any idea why localhost doesn't work for the API? Any suggestion on how I can trace or sniff the communication from the browser all the way to the web server in the container?
You can find below the Docker networking design, which may help with my question.
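A likely explanation, offered only as a sketch and not a verified diagnosis: inside a container, Kestrel bound to http://localhost listens only on the container's loopback interface, while the traffic Docker forwards from the published host port arrives on the container's bridge-network interface and is therefore refused. Binding to all interfaces avoids that, e.g. (fragment of the service definition; the explicit :80 is an assumption matching the "51218:80" mapping):
# Sketch: bind Kestrel to all interfaces inside the container so connections
# forwarded from the published host port (51218 -> 80) are accepted.
documentmanagerAPP:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=http://0.0.0.0:80   # 0.0.0.0 = all container interfaces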

How to properly call another docker container via axios?

So I'm currently building a Docker setup with a REST API and a separate frontend. My backend consists of Symfony 5.2.6 as the REST API and my frontend is a simple Vue application.
When I try to call my API from the Vue application via localhost or 127.0.0.1, I get a "Connection refused" error. When I try to call the API via the external IP of my server, I run into CORS issues. This is my first setup like this, so I'm kind of at a loss.
This is my docker setup:
version: "3.8"
services:
  # VUE-JS Instance
  client:
    build: client
    restart: always
    logging:
      driver: none
    volumes:
      - ./client:/app
      - /app/node_modules
    environment:
      - CHOKIDAR_USEPOLLING=true
      - NODE_ENV=development
    ports:
      - 8080:8080
  # SERVER
  php:
    build: php-fpm
    restart: always
    ports:
      - "9002:9000"
    volumes:
      - ./server:/var/www/:cached
      - ./logs/symfony:/var/www/var/logs:cached
  # WEBSERVER
  nginx:
    build: nginx
    restart: always
    ports:
      - "80:80"
    volumes_from:
      - php
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./logs/nginx/:/var/log/nginx:cached
So what is the correct way to establish the connection between those two containers?
The client app runs on port 8080 while nginx runs on port 80, so they are different origins and the browser reports a CORS error.
To avoid it, the PHP app has to add a response header:
Access-Control-Allow-Origin: http://localhost:8080 or
Access-Control-Allow-Origin: *.
Another solution is to serve everything under one domain on the same port.
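For the single-origin option, a minimal sketch of what the nginx default.conf could look like; the /api prefix and the Symfony public directory /var/www/public are assumptions about the project layout. nginx becomes the only origin on port 80, passing API routes to PHP-FPM and everything else to the Vue dev server:
server {
    listen 80;
    server_name _;

    # Symfony public directory (assumed path, shared with the php container)
    root /var/www/public;
    index index.php;

    # API routes are handled by Symfony's front controller via PHP-FPM
    location /api {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass php:9000;          # "php" service from the compose file, FPM port 9000
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        internal;
    }

    # Everything else is proxied to the Vue dev server in the "client" service
    location / {
        proxy_pass http://client:8080;
        proxy_set_header Host $host;
    }
}
With this, the Vue code can call fetch('/api/...') relative to its own origin and no CORS headers are needed.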

Local Communication Between Services

I have 2 services inside my Docker cluster. frontend runs on port 8090, and backend runs on port 8000. How can I make frontend call backend via local DNS, like fetch('https://backend.local/')? Because if I use the Docker hostname, I need to specify the port to call the backend. Do I need to run a local DNS server inside my Docker setup?
You have to create a software-defined network (SDN) in Docker; all containers running in that network can then communicate with each other using the container names, or you can define an alias for each and use that. A simple docker-compose file for a backend microservice and a MySQL database can be created using the config below (an aliases variant is sketched after this file).
version: '3.2'
networks:
  testNetwork:
services:
  mysql-dev:
    image: mysql:latest
    container_name: mysql-dev
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=root
    ports:
      - "3306:3306"
    networks:
      - testNetwork
  backend:
    image: backend:1.0
    container_name: backend
    environment:
      - DB_USER=root
      - DB_PASS=root
      - DB_NAME=root
      - DB_HOST=mysql-dev
      - DB_DIALECT=mysql
    ports:
      - "4000:4000"
    working_dir: /backend
    command: npm start
    networks:
      - testNetwork
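If the goal is a name like backend.local rather than the service name, here is a sketch of the alias approach mentioned above; the alias and the frontend image are assumptions for illustration. Note that even with an alias the port is still part of the URL unless the backend listens on 443/80 or a reverse proxy sits in front:
version: '3.2'
services:
  backend:
    image: backend:1.0
    networks:
      testNetwork:
        aliases:
          - backend.local      # extra DNS name on the user-defined network
  frontend:
    image: frontend:1.0        # hypothetical frontend image
    networks:
      - testNetwork
networks:
  testNetwork:
The frontend could then call https://backend.local:8000/ without any extra DNS server, because Docker's embedded DNS resolves the alias inside the network.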

How to use zabbix-web-nginx-mysql with existing nginx container?

I am trying to use Docker on my Debian server. There are several sites using the Django framework. Every project runs in its own container with gunicorn, a single nginx container works as a reverse proxy, and data is stored in a mariadb container. Everything works correctly. It is necessary to add the Zabbix monitoring system to the server. So I use the zabbix-server-mysql image as the Zabbix backend and the zabbix-web-nginx-mysql image as the frontend. The backend runs successfully, but the frontend fails with errors such as "cannot bind to 0.0.0.0:80: port is already allocated", and nginx refuses connections to the domains. As I understand it, zabbix-web-nginx-mysql creates another nginx container, which causes the problems. Is there a right way to use the Zabbix images with an existing nginx container?
I have an nginx reverse proxy installed on the host, which I use to proxy requests into the containers. I have a working configuration for dockerized Zabbix (environment variables omitted).
Port 80 of the Zabbix web application is published on another host port, which is the one nginx points proxy_pass at (see the sketch after the compose file). Here is the configuration:
version: '2'
services:
  zabbix-server4:
    container_name: zabbix-server4
    image: zabbix/zabbix-server-mysql:alpine-4.0.5
    user: root
    networks:
      zbx_net:
        aliases:
          - zabbix-server4
          - zabbix-server4-mysql
        ipv4_address: 172.16.238.5
  zabbix-web4:
    container_name: zabbix-web4
    image: zabbix/zabbix-web-nginx-mysql:alpine-4.0.5
    ports:
      - 127.0.0.1:11011:80
    links:
      - zabbix-server4
    networks:
      zbx_net:
        aliases:
          - zabbix-web4
          - zabbix-web4-nginx-alpine
          - zabbix-web4-nginx-mysql
        ipv4_address: 172.16.238.10
  zabbix-agent4:
    container_name: zabbix-agent4
    image: zabbix/zabbix-agent:alpine-4.0.5
    links:
      - zabbix-server4
    networks:
      zbx_net:
        aliases:
          - zabbix-agent4
        ipv4_address: 172.16.238.15
networks:
  zbx_net:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
          gateway: 172.16.238.1
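The host-side nginx virtual host referenced above is not included in the answer; a minimal sketch of what it might look like (the server_name is a placeholder):
# Host nginx: terminate the public virtual host and forward to the Zabbix web
# container, which is published only on the loopback interface (127.0.0.1:11011).
server {
    listen 80;
    server_name zabbix.example.com;          # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:11011;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}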

Exclude domains from Traefik Let's Encrypt

I am using Traefik as a reverse proxy in a Docker environment. Every dockerized application gets its Traefik configuration as labels, like:
version: '2'
services:
  whoami:
    image: emilevauge/whoami:latest
    labels:
      - "traefik.backend=whoami"
      - "traefik.frontend.rule=Host:internal.domain.com,external.domain.com;PathPrefixStrip:/whoami"
    networks:
      - traefik
    ports:
      - "80"
    restart: always
networks:
  traefik:
    external:
      name: traefik
Applications are accessible via an internal domain (intranet) and an external domain.
Now I am getting "Error creating new order :: too many failed authorizations recently: see https://letsencrypt.org/docs/rate-limits/" from Let's Encrypt, because Traefik tries to obtain a certificate for a domain which is not reachable from outside.
Is there any way to exclude domains from Traefik's Let's Encrypt support?
The docker-compose label traefik.enable=false should disable it:
labels:
  - traefik.enable=false
