Keycloak: Email testing error with Mailhog on localhost - "Connection refused" - docker

I have containers running locally for both Keycloak and Mailhog. I wanted to send some test emails from Keycloak, but I always get the error below. I set up the configuration as outlined at the bottom (localhost:1025) and tried various other things, all with no success unfortunately. I have also found a handful of questions on this topic here on Stack Overflow; unfortunately, the answers given there either did not solve my problem (changing the hostname) or were ones I could not really follow (changing php.ini).
keycloak-custom-keycloak-1 | com.sun.mail.util.MailConnectException: Couldn't connect to host, port: localhost, 1025; timeout 10000;
keycloak-custom-keycloak-1 | nested exception is:
keycloak-custom-keycloak-1 | java.net.ConnectException: Connection refused (Connection refused)
From what I can see, I have configured everything properly and checked that the containers are running with the correct ports, but I still get this error (details below).
My docker-compose.yml file looks like this:
version: '3.3'
services:
  keycloak:
    image: jboss/keycloak #:${KEYCLOAK_VERSION}
    ports:
      - "8080:8080"
    environment:
      - KEYCLOAK_USER=${KEYCLOAK_USER}
      - KEYCLOAK_PASSWORD=${KEYCLOAK_PASSWORD}
      - DB_DATABASE=${KEYCLOAK_DATABASE_NAME}
      - DB_USER=${KEYCLOAK_DATABASE_USER}
      - DB_PASSWORD=${KEYCLOAK_DATABASE_PASSWORD}
      - DB_ADDR=${KEYCLOAK_DATABASE_HOST}
      - DB_VENDOR=${KEYCLOAK_DATABASE_VENDOR}
      - KEYCLOAK_IMPORT=/tmp/realm-export.json
    volumes:
      - ./keycloak/realms/realm-export.json:/tmp/realm-export.json
      - ./keycloak/scripts/disable-theme-cache.cli:/opt/jboss/startup-scripts/disable-theme-cache.cli
      - ./keycloak/themes/gesetzeio:/opt/jboss/keycloak/themes/gesetzeio
    networks:
      internal:
    depends_on:
      - keycloakdb
  keycloakdb:
    image: postgres:${POSTGRES_VERSION}
    ports:
      - "5433:5432"
    environment:
      - POSTGRES_USER=${KEYCLOAK_DATABASE_USER}
      - POSTGRES_PASSWORD=${KEYCLOAK_DATABASE_PASSWORD}
      - POSTGRES_DB=${KEYCLOAK_DATABASE_NAME}
    volumes:
      - keycloak-postgres:/var/lib/postgresql/data
    networks:
      internal:
  mailhog:
    image: mailhog/mailhog:latest
    restart: always
    ports:
      - 1025:1025
      - 8025:8025
volumes:
  keycloak-postgres:
networks:
  internal:
When I start the Mailhog container, I get the following message in the logs:
2022/12/09 16:12:28 Using in-memory storage
2022/12/09 16:12:28 [SMTP] Binding to address: 0.0.0.0:1025
2022/12/09 16:12:28 Serving under http://0.0.0.0:8025/
[HTTP] Binding to address: 0.0.0.0:8025
In Keycloak, I have entered the following test configuration:
[Screenshot: the Email settings in my local Keycloak admin console (host localhost, port 1025)]
Alternatively to the hostname localhost, I have also tried the following options:
- 0.0.0.0
- 127.0.0.1
- the Mailhog container's name, keycloak-custom-mailhog-1 (as suggested here)
None of them worked, however.
What am I doing wrong?

Communication between containers has to use the service names declared in the docker-compose file.
This means you need to use the hostname mailhog, not 127.0.0.1 (and 0.0.0.0 can never work: it is only meaningful when binding, where it means "all available IP addresses of this host", though some tools substitute 127.0.0.1 when told to connect to 0.0.0.0).
However, to actually reach Mailhog you also need it to be up and running, so add mailhog to the depends_on list of keycloak. In addition, the mailhog container has to be on the same virtual network as keycloak, so give mailhog a networks entry with the value internal; the relevant changes are sketched below.
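A minimal, untested sketch adapting the compose file from the question (all other settings stay unchanged):
mailhog:
  image: mailhog/mailhog:latest
  restart: always
  ports:
    - 1025:1025
    - 8025:8025
  networks:
    internal:
keycloak:
  # image, ports, environment and volumes as in the question
  networks:
    internal:
  depends_on:
    - keycloakdb
    - mailhog
With this in place, the Keycloak email settings should use host mailhog and port 1025.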

Related

Traefik with Docker-Compose not working as expected

I am fairly new to using traefik, so I might be totally missing something simple, but I have the following docker-compose.yaml:
version: '3.8'
services:
  reverse-proxy:
    container_name: reverse_proxy
    restart: unless-stopped
    image: traefik:v2.0
    command:
      - --entrypoints.web.address=:80
      - --entrypoints.web-secure.address=:443
      - --api.insecure=true
      - --providers.file.directory=/conf/
      - --providers.file.watch=true
      - --providers.docker=true
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./scripts/certificates/conf/:/conf/
      - ./scripts/certificates/ssl/:/certs/
    networks:
      - bnkrl.io
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`traefik.bnkrl.io`)"
      - "traefik.docker.network=bnkrl.io"
  bankroll:
    container_name: bankroll
    build:
      context: .
    ports:
      - "3000"
    volumes:
      - .:/usr/src/app
    command: yarn start
    networks:
      - bnkrl.io
    labels:
      - "traefik.http.routers.bankroll.rule=Host(`bankroll.bnkrl.io`)"
      - "traefik.docker.network=bnkrl.io"
      - "traefik.http.services.bankroll.loadbalancer.server.port=3000"
      - "traefik.http.routers.bankroll-https.rule=Host(`bankroll.bnkrl.io`)"
      - "traefik.http.routers.bankroll-https.tls=true"
networks:
  bnkrl.io:
    external: true
But for some reason the following is happening:
Running curl when ssh'd into my bankroll container gives the following:
/usr/src/app# curl bankroll.bnkrl.io
curl: (7) Failed to connect to bankroll.bnkrl.io port 80: Connection refused
Despite having the "traefik.http.services.bankroll.loadbalancer.server.port=3000" label set up.
I am also unable to hit traefik from my application container:
curl traefik.bnkrl.io
curl: (6) Could not resolve host: traefik.bnkrl.io
Despite my expectation to be able to do so since they are both on the same network.
Any help with understanding what I might be doing wrong would be greatly appreciated! My application (bankroll) is a very basic hello-world react app, but I don't think any of the details around that are relevant to the issue I'm facing.
EDIT: I am also not seeing any error logs on traefik side of things.
You are using host names that are not declared and therefore are unreachable.
To reach a container from another container, you need to use the service name: for example, connecting to bankroll from the reverse-proxy will hit the other service directly, as shown below.
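For example, from inside the reverse-proxy container, something like this should reach the app (a hypothetical check, assuming curl is available there; the port comes from the loadbalancer.server.port label):
curl http://bankroll:3000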
If instead you want to access them from the host machine, you have to publish the ports (which you did; that is what the ports sections in your docker-compose file do) and connect to localhost or your machine's local IP address rather than traefik.bnkrl.io.
If you want to access them via traefik.bnkrl.io, you have to declare that host name and point it at the machine where the Docker containers are running.
So either create a DNS record in the bnkrl.io domain pointing to your machine, or add a hosts-file entry on your computer pointing to 127.0.0.1.
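For local development, the hosts-file option could look like this (a sketch; the host names come from the router rules above):
127.0.0.1   traefik.bnkrl.io bankroll.bnkrl.io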
Another note: for SSL you are going to need a valid certificate for the host name. In local development you can use the self-signed certificate provided by Traefik, but you may have to install it on the computer connecting to the service, or allow untrusted certificates in your browser or wherever you are making the requests from (some browsers no longer accept self-signed certificates). For SSL on the public Internet you will need to look at something like Let's Encrypt.

Local proxy server using Traefik/Docker

I'm trying to create some kind of reverse proxy server that would serve a port running on my local network (192.168.0.15:5083) through either another port (192.168.0.15:<ANOTHER PORT>) or another path on the same IP address (192.168.0.15/pathname). I want this to be reachable from other computers on the same network.
I'm trying to achieve this using Traefik with Docker through a docker-compose.yml file. Currently I have it set up like this:
lms:
  container_name: lms
  image: epoupon/lms
  user: ${PUID}:${PGID}
  ports:
    - 5083:5082
  volumes:
    - ${USERDIR}/docker/lms:/var/lms
    - /media/music:/music:ro
  environment:
    - TZ=${TZ}
    - PUID=${PUID}
    - PGID=${PGID}
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.lms.rule=Path(`/lms`)"
  restart: unless-stopped
reverse-proxy:
  image: traefik:v2.6
  command: --api.insecure=true --providers.docker
  ports:
    - "80:80"
    - "8080:8080"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
What I'm trying to do here is create a path on the server's IP address (192.168.0.15/lms) that serves port 5083 (192.168.0.15:5083). The purpose of this is to then be able to apply CORS headers to the proxy server.
When visiting 192.168.0.15/lms from another machine on the same network as the server, I get this error message:
Fatal error: failed loading /js/jquery-1.10.2.min.js
I interpret this as meaning it gets a connection to port 5083, but the assets/resources used by the front end on that port are not loading correctly.
Am I doing this right or should I do it in a different way to succeed?

Multiple isolated Elasticsearch clusters with a single docker-compose file

I want to create 2 Elasticsearch clusters in a single docker-compose file, so that I can test a few changes only on the new ES cluster.
My docker-compose file looks like this:
version: "2.2"
services:
elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- "9200:9200"
mem_limit: '2048M'
new-elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata2:/usr/share/elasticsearch/data
ports:
- "9400:9200"
mem_limit: '2048M'
search:
image: search:latest
entrypoint: java -Delasticsearch.host=elasticsearch-master -DnewElasticsearch.host=new-elasticsearch-master -DnewElasticsearch.port=9400 -jar app.jar
ports:
- "8083:8083"
depends_on:
- elasticsearch-master
- new-elasticsearch-master
mem_limit: '500M'
volumes:
esdata1:
esdata2:
I have one Java service where I add both hosts via different system properties:
-Delasticsearch.host=elasticsearch-master
-DnewElasticsearch.host=new-elasticsearch-master
But when I run the following code from the Java search service,
new RestTemplate().getForEntity("http://elasticsearch-master:9200/_cat/indices?v",String.class)
it gives me a correct response.
But when I try to connect to the other host on 9400,
new RestTemplate().getForEntity("http://new-elasticsearch-master:9400/_cat/indices?v",String.class)
I get a Connection refused error.
When I try the same host with port 9200, that gives me a 200 response:
new RestTemplate().getForEntity("http://new-elasticsearch-master:9200/_cat/indices?v",String.class)
Can someone please tell me how I can make two different connections with different ports, as below?
http://elasticsearch-master:9200
http://new-elasticsearch-master:9400
Thanks
You got the expected behavior. The ports field in docker-compose maps the ports to your localhost, which means that the "old" Elasticsearch will be available via localhost:9200 and the "new" Elasticsearch via localhost:9400.
On the other hand, docker-compose services communicate over an internal network, where the service name is the hostname and the port is the original listening port.
Thus, you were able to access (internally) your old one via http://elasticsearch-master:9200 and the new one via http://new-elasticsearch-master:9200.
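Alternatively, the search service could simply keep using the internal port 9200 for both clusters; a sketch of just the entrypoint line from the compose file above, with only -DnewElasticsearch.port changed:
entrypoint: java -Delasticsearch.host=elasticsearch-master -DnewElasticsearch.host=new-elasticsearch-master -DnewElasticsearch.port=9200 -jar app.jar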
If you wish to use the new Elasticsearch on port 9400 instead, you need to change its http.port setting. You can do that like this:
new-elasticsearch-master:
  image: elasticsearch:6.6.0
  volumes:
    - esdata2:/usr/share/elasticsearch/data
  environment:
    - http.port=9400
  ports:
    - "9400:9400"
  mem_limit: '2048M'
Note that you have to change the port mapping as well (because it now maps the new port, 9400, to port 9400 on localhost).

Cannot connect to Mysql using Docker

I built a website using Strapi and Gatsby. Everything works well when I connect to a remote database, but I'm trying to create a DB inside a container, and so far no luck.
Essentially, what I did is create the following docker-compose file:
version: '3'
services:
  backend:
    container_name: myapp_backend
    build: ./backend/
    ports:
      - '3002:3002'
    volumes:
      - ./backend:/usr/src/myapp/backend
      - /usr/src/myapp/backend/node_modules
    environment:
      - APP_NAME=myapp_backend
      - DATABASE_CLIENT=mysql
      - DATABASE_HOST=db
      - DATABASE_PORT=3307
      - DATABASE_NAME=myapp_db
      - DATABASE_USERNAME=johnny
      - DATABASE_PASSWORD=stecchino
      - DATABASE_SSL=false
      - DATABASE_AUTHENTICATION_DATABASE=myapp_db
      - HOST=localhost
    depends_on:
      - db
    restart: always
  db:
    container_name: myapp_mysql
    image: mysql:5.7
    volumes:
      - ./db.sql:/docker-entrypoint-initdb.d/db.sql
    restart: always
    ports:
      - 3307:3307
    environment:
      MYSQL_ROOT_PASSWORD: 5!JF6!FgAkvt
      MYSQL_DATABASE: myapp_db
      MYSQL_USER: johnny
      MYSQL_PASSWORD: stecchino
    command: mysqld --character-set-server=utf8 --collation-server=utf8_general_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: 'myapp_phpmyadmin'
    links:
      - db
    environment:
      PMA_HOST: db
      PMA_PORT: 3307
    ports:
      - '8081:80'
    volumes:
      - /sessions
    depends_on:
      - db
  frontend:
    container_name: myapp_frontend
    build: ./frontend/
    ports:
      - '3001:3001'
    depends_on:
      - backend
    volumes:
      - ./frontend:/usr/src/myapp/frontend
The backend service contains the Strapi application; the db service contains the MySQL instance, which runs on port 3307 because 3306 is already in use.
I have also installed phpMyAdmin, and last but not least the Gatsby site. When I run docker-compose up --build and try to access phpMyAdmin using:
http://localhost:8081/index.php
with the following credentials:
user: johnny
pwd: stecchino
I get:
MySQL mysqli::real_connect():(HY000/2002): Connection refused
Now, what I did to fix the situation was pass port 3306 instead of 3307 to the backend and phpmyadmin services. And magically, everything works. But why? I had mapped container and host to 3307...
There are 2 things happening here.
MySQL is running on port 3306.
This is because you never told the MySQL container to run on port 3307; its default configuration listens on 3306.
phpMyAdmin can connect to MySQL on port 3306.
Of course it can. When you define multiple services within the same docker-compose file, they start on the same network. This means they can see and connect to each other's internal ports without the need for an external port binding like 3306:3306.
I would suggest keeping port bindings only for services that you want to access from outside the Docker environment (like the UI); for internal components, just expose the port like this:
expose:
  - 3306
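Applied to the db service above, that might look like this (a sketch; the other settings stay as they are, and backend/phpmyadmin then connect with port 3306):
db:
  container_name: myapp_mysql
  image: mysql:5.7
  expose:
    - 3306
  # reachable as db:3306 from other services on the compose network,
  # but not published on the host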
Both answers are useful; I am particularly fond of Manish's answer.
I wanted to add some additional wording:
There are the internal docker networks which nothing from the outside can gain access to. From inside any given service (or container), you can reach every other service (or container) via:
<service-name>:<port>/path/of/resources
<container-name>:<port>/path/of/resources
In order to access resources inside the docker network from outside of docker, whether that is from your host environment, or farther upstream on the internet, the docker daemon needs to bind to host ports, and then forward information received on those ports to a docker service (and ultimately a docker container).
In your docker-compose.yml, when you write 3307:3307 you are telling the Docker daemon to listen on port 3307 and forward to your db service internally on its port 3307.
However, from what we can all see, MySQL is still listening internally (that is, inside the container) on port 3306. Any containers or services on the same Docker networks as your db service (the running MySQL container(s)) would be able to access MySQL via something like:
<driver>:mysql://db:3306/<dbname>
If you wanted all host traffic and docker network traffic to access mysql on port 3307, you would also need to configure mysql to listen on port 3307 instead of 3306. That tidbit of information does not appear to be in your question at the time of writing.
I hope the additional information helps! It's a topic I chat often about when talking docker with folks.
Because 3306 is the port exposed by the official Dockerfile.
What you can do is map the port MySQL is running on to a different port on your host: 3307:3306, for instance (always host:container).
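In the compose file from the question, that would look like this (a sketch, keeping MySQL on its default internal port):
db:
  image: mysql:5.7
  ports:
    - "3307:3306" # host port 3307 forwards to MySQL's default 3306 in the container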

Minio / Keycloak integration: connection refused

I am trying to connect MinIO with KeyCloak and I follow the instructions provided in this documentation:
https://github.com/minio/minio/blob/master/docs/sts/keycloak.md
What I have done so far is deploy a Docker container for the MinIO server, another one for the MinIO Client, and a third one for the KeyCloak server.
As you can see in the following snippet, the configuration of the MinIO Client container is done correctly, since I can list the buckets available on the MinIO server:
mc ls myminio
[2020-05-14 11:54:59 UTC] 0B bucket1/
[2020-05-06 12:23:01 UTC] 0B bucket2/
The issue arises when I try to configure MinIO as depicted in step 3 (Configure MinIO) of the documentation. In more detail, the command that I run is this one:
mc admin config set myminio identity_openid config_url="http://localhost:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"
And the error I get is this one:
mc: <ERROR> Cannot set 'identity_openid config_url=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration client_id=account' to server. Get http://localhost:8080/auth/realms/demo/.well-known/openid-configuration: dial tcp 127.0.0.1:8080: connect: connection refused.
When I curl this address http://localhost:8080/auth/realms/demo/.well-known/openid-configuration from the MinIO Client container though, I retrieve the JSON file.
Turns out, all I had to do was change localhost in the config_url to the IP of the KeyCloak container (172.17.0.3).
This is just a temporary solution that works for now, but I will continue searching for something more concrete than just hardcoding the IP.
When I figure out the solution, this answer will be updated.
Update
I had to create a docker-compose.yml file like the one below in order to overcome the issue without having to manually insert the IP of the KeyCloak container.
version: '2'
services:
  miniod:
    image: minio/minio
    restart: always
    container_name: miniod
    ports:
      - 9000:9000
    volumes:
      - "C:/data:/data"
    environment:
      - "MINIO_ACCESS_KEY=access_key"
      - "MINIO_SECRET_KEY=secret_key"
    command: ["server", "/data"]
    networks:
      - minionw
  mcd:
    image: minio/mc
    container_name: mcd
    networks:
      - minionw
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    container_name: kcd
    restart: always
    ports:
      - 8080:8080
    environment:
      - "KEYCLOAK_USER=admin"
      - "KEYCLOAK_PASSWORD=pass"
    networks:
      - minionw
networks:
  minionw:
    driver: "bridge"
Connection refused occurs when nothing is accepting connections on the hostname and port we specified.
Make sure the port is actually published (the -p/--publish flag when using the docker CLI, or a ports: entry in compose; note that --expose alone only makes the port available to other containers and does not publish it). Once published, you can access it on localhost from the host machine.
