I have successfully created a client inside Keycloak using Dynamic Client Registration.
The response body contains:
"registration_client_uri":"https://127.0.0.1:8443/auth/realms...",
This is because Keycloak is installed with Docker and is fronted by Nginx. I want to replace the IP address/port with the actual public hostname.
Where are the docs / configurations for this?
I started keycloak as follows:
docker run -itd --name keycloak \
--restart unless-stopped \
--env-file keycloak.env \
-p 127.0.0.1:8443:8443 \
--network keycloak \
jboss/keycloak:11.0.0 \
-Dkeycloak.profile=preview
And inside keycloak.env, I have set KEYCLOAK_HOSTNAME=example.com
Configure the env variable PROXY_ADDRESS_FORWARDING=true, because Keycloak is running behind an Nginx reverse proxy - https://hub.docker.com/r/jboss/keycloak/
Enabling proxy address forwarding
When running Keycloak behind a proxy, you will need to enable proxy address forwarding.
docker run -e PROXY_ADDRESS_FORWARDING=true jboss/keycloak
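Note that env vars are fixed when a container is created, so a plain restart won't pick up the change. A minimal sketch, adding the variable to keycloak.env and reusing the exact run command from the question:
echo "PROXY_ADDRESS_FORWARDING=true" >> keycloak.env
docker rm -f keycloak
docker run -itd --name keycloak \
--restart unless-stopped \
--env-file keycloak.env \
-p 127.0.0.1:8443:8443 \
--network keycloak \
jboss/keycloak:11.0.0 \
-Dkeycloak.profile=preview
Nginx also has to pass the Host, X-Forwarded-For and X-Forwarded-Proto headers upstream for Keycloak to reconstruct the public URL.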
I have a web app running in a docker container on port 9000. I need to route the traffic to Nginx in another container on the same network to access it at port 80. How do I achieve this? I tried building an Nginx image with my own Nginx.conf, but my Nginx container stops immediately after it runs.
contents of Nginx.conf file
Snippet of containers
You need to bind the containers' internal ports to the host, like this:
application
docker run -d \
--network=random_name \
<image>
nginx
# -p maps <hostPort>:<containerPort>
docker run -d \
--network=random_name \
-p 80:80 \
-p 443:443 \
<image>
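Since the question mentions an Nginx.conf, here is a minimal reverse-proxy sketch. Assumptions not in the original: the app container is named webapp, it serves on port 9000, and both containers sit on the same user-defined network (random_name) so Docker's embedded DNS resolves the name:
cat > nginx.conf <<'EOF'
events {}
http {
  server {
    listen 80;
    location / {
      # "webapp" resolves via Docker's embedded DNS on the shared network
      proxy_pass http://webapp:9000;
    }
  }
}
EOF
docker run -d --name nginx \
--network=random_name \
-p 80:80 \
-v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
nginx
If the container still exits immediately, docker logs nginx usually shows the config error that caused it.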
I need to set up an nginx-proxy container to forward requests to the container with my app. I use the following commands to start the containers:
# app
docker run -d -p 8080:2368 \
--name app \
app
# nginx
docker run -d -p 80:8080 \
--name nginx-proxy \
jwilder/nginx-proxy
But when I try to access port 80 on my server, I get ERR_CONNECTION_REFUSED. It's clear to me that the nginx container is not forwarding the port I want, because I can access the app on server port 8080.
I tried using a network like this:
# network
docker network create -d bridge net
# app
docker run -d -p 8080:2368 \
--name app \
--network net \
app
# nginx
docker run -d -p 80:8080 \
--name nginx-proxy \
--network net \
jwilder/nginx-proxy
But the result seems to be the same.
I need to understand how to make the nginx container proxy requests from server port 80 to my app.
It looks like your app is running on port 2368, which users should not need to reach directly, so the app container's port does not need to be published.
You are correct to create a bridge network and attach both containers to it.
You need to remove the port mapping from the app container and change the port mapping of the nginx-proxy container from 80:8080 to 80:80.
You also need to set up nginx-proxy to proxy requests from port 80 to app:2368.
This way, users hitting port 80 on the host machine running Docker will be proxied to your app.
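A minimal sketch of those changes, reusing the net network from the question. With jwilder/nginx-proxy the routing is declared on the app container via VIRTUAL_HOST/VIRTUAL_PORT and discovered through the Docker socket, per that image's README (mydomain.com is a placeholder):
# app: no -p needed; nginx-proxy reaches it over the shared network
docker run -d --name app \
--network net \
-e VIRTUAL_HOST=mydomain.com \
-e VIRTUAL_PORT=2368 \
app
# nginx-proxy: publish only 80:80 and mount the Docker socket read-only
docker run -d --name nginx-proxy \
--network net \
-p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy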
The VIRTUAL_HOST env var with the domain name on the app container was what was required to let nginx-proxy route requests to the app container. No network setup or port forwarding is needed with this approach. Here is the working setup I came up with:
# app
docker run -d \
--name app \
-e VIRTUAL_HOST=mydomain.com \
app
# nginx
docker run -d -p 80:80 \
--name nginx-proxy \
jwilder/nginx-proxy
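You can verify the routing from the host even before DNS points at the domain, since nginx-proxy picks the backend by the Host header:
curl -H "Host: mydomain.com" http://localhost/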
The docker daemon is running on an Ubuntu machine. I'm trying to start up a zookeeper ensemble in a swarm. The zookeeper nodes themselves can talk to each other. However, from the host machine, I don't seem to be able to access the published ports.
If I start the container with -
docker run \
-p 2181:2181 \
--env ZOO_MY_ID=1 \
--env ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
zookeeper
It works like a charm. On my host machine I can say echo conf | nc localhost 2181 and zookeeper says something back.
However if I do,
docker service create \
-p 2181:2181 \
--env ZOO_MY_ID=1 \
--env ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
zookeeper
and run the same command echo conf | nc localhost 2181,
it just gets stuck. I don't even get a new prompt on my terminal.
This works just as expected in the Docker Playground linked from the official Zookeeper Docker Hub page, so I expect it should work for me too.
But... If I docker exec -it $container sh and then try the command in there, it works again.
Aren't published ports supposed to be accessible even by the host machine for a service?
Is there some trick I'm missing about working with overlay networks?
Try to use docker service create --publish 2181:2181 instead.
I believe the container backing the service is not directly exposed and has to go through the Swarm networking.
Otherwise, inspect your service to check which ports are published: docker service inspect <service_name>
Source: documentation
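For example, to print only the published ports (--format takes a standard Go template):
docker service inspect --format '{{json .Endpoint.Ports}}' <service_name>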
I have a development server with a list of docker containers running. Each of them has an application and an Nginx inside, listening on port 80 with no SSL encryption, serving the application. So if I have 10 dockers, I have 10 Nginx instances (I know Nginx is designed to serve multiple apps; that is not the question here).
I would like to have a single point of entry on the server, which would be an Nginx with a certificate generated by Let's Encrypt, proxying to the plain-HTTP backends.
Is this possible? Listening on port 443 with a Let's Encrypt certificate, and proxying to port 80 of another Nginx?
The goal here is to secure all the connections to my different dockers.
For your information, I was trying to use the valian/docker-nginx-auto-ssl image with the command
docker run -d --name main-nginx \
--restart on-failure \
-p 80:80 -p 443:443 \
-e ALLOWED_DOMAINS=www.scaniat.io,dev.scaniat.io,www.dev.scaniat.io,scaniat.io \
-e SITES='scaniat.io=scaniat-frontend-master;dev.scaniat.io=scaniat-frontend-develop' \
--network custom \
valian/docker-nginx-auto-ssl
with no luck.
I found this docker image for Kafka
https://hub.docker.com/r/spotify/kafka/
and I can easily create a docker container using command documented in the link
docker run -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=`boot2docker ip` --env ADVERTISED_PORT=9092 spotify/kafka
This is good. But I want to configure a multi-node Kafka cluster running on a Docker swarm.
How can I do that?
Edit 28/11/2017:
Kafka added listener.security.protocol.map to their config. This allows you to set different listener addresses and protocols depending on whether you are inside or outside the cluster, and stops Kafka getting confused by any load balancing or IP translation which occurs in Docker. Wurstmeister has a working docker image and example compose file here. I tried this a while back with a few docker-machine nodes set up as a swarm and it seems to work.
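A sketch of that inside/outside listener split as a swarm service, using the env var names from wurstmeister/kafka's connectivity docs. The kafka-net overlay network, the service names and the 9094 outside port are assumptions; {{.Node.Hostname}} is a swarm service template that docker expands per node:
docker service create --name kafka \
--network kafka-net \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_LISTENERS=INSIDE://:9092,OUTSIDE://:9094 \
-e 'KAFKA_ADVERTISED_LISTENERS=INSIDE://kafka:9092,OUTSIDE://{{.Node.Hostname}}:9094' \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT \
-e KAFKA_INTER_BROKER_LISTENER_NAME=INSIDE \
--publish published=9094,target=9094,mode=host \
wurstmeister/kafka
Clients inside the overlay network use kafka:9092; clients outside connect to a node's hostname on 9094.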
tbh though I just attach a Kafka image to the overlay network and run the Kafka console commands whenever I want to interact with it now.
Hope that helps
Old Stuff Below
I have been trying this with docker 1.12 using docker swarm mode
create nodes
docker-machine create -d virtualbox master
docker-machine create -d virtualbox worker
master_config=$(docker-machine config master | tr -d '\"')
worker_config=$(docker-machine config worker | tr -d '\"')
master_ip=$(docker-machine ip master)
docker $master_config swarm init --advertise-addr $master_ip --listen-addr $master_ip:2377
worker_token=$(docker $master_config swarm join-token worker -q)
docker $worker_config swarm join --token $worker_token $master_ip:2377
eval $(docker-machine env master)
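A quick check that both machines actually joined (not in the original, but cheap to run; it reuses the $master_config shorthand from above):
docker $master_config node ls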
create the zookeeper service
docker service create --name zookeeper \
--constraint 'node.role == manager' \
-p 2181:2181 \
wurstmeister/zookeeper
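To confirm placement before adding Kafka (a quick check; the constraint should pin the task to the manager node):
docker service ps zookeeper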
create the kafka service
docker service create --name kafka \
--mode global \
-e 'KAFKA_PORT=9092' \
-e 'KAFKA_ADVERTISED_PORT=9092' \
-e 'KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092' \
-e 'KAFKA_ZOOKEEPER_CONNECT=tasks.zookeeper:2181' \
-e "HOSTNAME_COMMAND=ip r | awk '{ ip[\$3] = \$NF } END { print ( ip[\"eth0\"] ) }'" \
--publish '9092:9092' \
wurstmeister/kafka
Though for some reason this will only work from within the ingress or user-defined overlay network, and the connection to Kafka will break if you try to connect to it from one of the guest machines.
Changing the advertised IP doesn't make things any better...
docker service create --name kafka \
--mode global \
-e 'KAFKA_PORT=9092' \
-e 'KAFKA_ADVERTISED_PORT=9092' \
-e 'KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092' \
-e 'KAFKA_ZOOKEEPER_CONNECT=tasks.zookeeper:2181' \
-e 'KAFKA_LOG_DIRS=/kafka/kafka-logs' \
-e "HOSTNAME_COMMAND=curl 192.168.99.1:5000" \
--publish '9092:9092' \
wurstmeister/kafka
I think the new mesh networking and load balancing in Docker might be interfering with the Kafka connection somehow...
To get the host IP, I have a Flask app running locally which I curl:
from flask import Flask
from flask import request

app = Flask(__name__)

@app.route('/')
def hello_world():
    # return the address of whoever called us, i.e. that node's IP
    return request.remote_addr
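Run it on the Docker host so the machines can reach it. The filename ip.py is an assumption; binding to 0.0.0.0 matters, since Flask's default 127.0.0.1 would be unreachable from the VMs that curl 192.168.99.1:5000:
FLASK_APP=ip.py flask run --host=0.0.0.0 --port=5000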
The previous approach raises some questions:
How do you specify the IDs for the zookeeper nodes?
How do you specify the broker id of the kafka nodes, and the addresses of the zookeeper nodes?
#kafka configs
echo "broker.id=${ID}
advertised.host.name=${NAME}
zookeeper.connect=${ZOOKEEPERS}" >> /opt/kafka/config/server.properties
Everything should be resolvable in the overlay network.
Moreover, in the issue Cannot create a Kafka service and publish ports due to rout mesh network there is a comment advising not to use the ingress network.
I think the best option is to specify your service using a docker-compose file with swarm. I'll edit the answer with an example.
There are 2 concerns to consider: networking and storage.
Since Kafka is a stateful service, until cloud-native storage is figured out, it is advisable to use global deployment mode. That is, each swarm node satisfying the constraints will run one Kafka container.
Another recommendation is to use host mode for the published port.
It's also important to properly set the advertised listeners option so that each Kafka broker knows which host it's running on. Use swarm service templates to provide the real hostname automatically.
Also make sure that the published port is different from the target port.
kafka:
  image: debezium/kafka:0.8
  volumes:
    - ./kafka:/kafka/data
  environment:
    - ZOOKEEPER_CONNECT=zookeeper:2181
    - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
    - KAFKA_MAX_MESSAGE_BYTES=20000000
    - KAFKA_MESSAGE_MAX_BYTES=20000000
    - KAFKA_CLEANUP_POLICY=compact
    - LISTENERS=PLAINTEXT://:9092
    - BROKER_ID=-1
    - ADVERTISED_LISTENERS=PLAINTEXT://{{.Node.Hostname}}:11092
  depends_on:
    - zookeeper
  deploy:
    mode: global
  ports:
    - target: 9092
      published: 11092
      protocol: tcp
      mode: host
  networks:
    - kafka
I can't explain all the options right now, but it's the configuration that works.
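To deploy it, assuming the snippet lives in a docker-compose.yml that also defines the zookeeper service and the kafka network (the stack name is arbitrary):
docker stack deploy -c docker-compose.yml kafka-stack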
Set broker.id=-1 in server.properties to allow Kafka to auto-generate the broker ID. This is helpful in swarm mode.
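You can check which IDs were generated in zookeeper. A sketch, assuming the zookeeper-shell.sh that ships with the Kafka distribution is available inside the broker container; auto-generated IDs start above reserved.broker.max.id (default 1000):
docker exec -it <kafka-container> bin/zookeeper-shell.sh zookeeper:2181 ls /brokers/ids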