I am fairly new to using traefik, so I might be totally missing something simple, but I have the following docker-compose.yaml:
version: '3.8'
services:
  reverse-proxy:
    container_name: reverse_proxy
    restart: unless-stopped
    image: traefik:v2.0
    command:
      - --entrypoints.web.address=:80
      - --entrypoints.web-secure.address=:443
      - --api.insecure=true
      - --providers.file.directory=/conf/
      - --providers.file.watch=true
      - --providers.docker=true
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./scripts/certificates/conf/:/conf/
      - ./scripts/certificates/ssl/:/certs/
    networks:
      - bnkrl.io
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`traefik.bnkrl.io`)"
      - "traefik.docker.network=bnkrl.io"
  bankroll:
    container_name: bankroll
    build:
      context: .
    ports:
      - "3000"
    volumes:
      - .:/usr/src/app
    command: yarn start
    networks:
      - bnkrl.io
    labels:
      - "traefik.http.routers.bankroll.rule=Host(`bankroll.bnkrl.io`)"
      - "traefik.docker.network=bnkrl.io"
      - "traefik.http.services.bankroll.loadbalancer.server.port=3000"
      - "traefik.http.routers.bankroll-https.rule=Host(`bankroll.bnkrl.io`)"
      - "traefik.http.routers.bankroll-https.tls=true"
networks:
  bnkrl.io:
    external: true
But for some reason the following is happening:
Running curl when ssh'd into my bankroll container gives the following:
/usr/src/app# curl bankroll.bnkrl.io
curl: (7) Failed to connect to bankroll.bnkrl.io port 80: Connection refused
This happens despite having the "traefik.http.services.bankroll.loadbalancer.server.port=3000" label set.
I am also unable to hit traefik from my application container:
curl traefik.bnkrl.io
curl: (6) Could not resolve host: traefik.bnkrl.io
I expected this to work, since they are both on the same network.
Any help with understanding what I might be doing wrong would be greatly appreciated! My application (bankroll) is a very basic hello-world react app, but I don't think any of the details around that are relevant to the issue I'm facing.
EDIT: I am also not seeing any error logs on the Traefik side of things.
You are using host names that are not declared and therefore are unreachable.
To reach a container from another container, you need to use the service name: for example, connecting to bankroll from reverse-proxy will hit the other service.
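For example, from inside the bankroll container the proxy should be reachable by its service name, and passing a Host header lets Traefik pick the matching router. A quick sanity check, assuming the compose file above:
curl -H "Host: bankroll.bnkrl.io" http://reverse-proxy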
If you want to access them from the host machine, you have to publish the ports (which you did; that is what the entries under ports in your docker-compose file do) and connect to localhost or your machine's local IP address instead of traefik.bnkrl.io.
If you want to access them via traefik.bnkrl.io, you have to declare that hostname and point it at the machine where the Docker containers are running.
That means either a DNS record in the bnkrl.io domain pointing to your machine, or a hosts-file entry on your computer pointing to 127.0.0.1.
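For local testing, the hosts-file route would look something like this (file location depends on the OS):
# /etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows
127.0.0.1  traefik.bnkrl.io bankroll.bnkrl.io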
Another note: for SSL you are going to need a valid certificate for the hostname. For local development you can use the self-signed certificate Traefik provides, but you may have to install it on the machine connecting to the service, or allow untrusted certificates in your browser or wherever you are making the requests from (some browsers no longer accept self-signed certificates). For SSL on the public Internet you will need to look at something like Let's Encrypt.
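For reference, a minimal sketch of what that could look like with Traefik v2's built-in ACME support; the resolver name and email address here are made up, and the domain must be publicly reachable for the HTTP challenge to succeed:
# extra flags on the reverse-proxy command (sketch):
- --certificatesresolvers.myresolver.acme.email=you@example.com
- --certificatesresolvers.myresolver.acme.storage=/certs/acme.json
- --certificatesresolvers.myresolver.acme.httpchallenge.entrypoint=web
# and on the bankroll labels, tie the HTTPS router to the resolver:
- "traefik.http.routers.bankroll-https.entrypoints=web-secure"
- "traefik.http.routers.bankroll-https.tls.certresolver=myresolver"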
Related
I have containers running locally for both Keycloak and Mailhog. I wanted to send some test emails from Keycloak, but I always get the error below. I made the configuration as outlined at the bottom (localhost:1025) and tried various other things, all with no success unfortunately. I have also found a handful of questions on this topic here on Stack Overflow; unfortunately, the answers given there either did not solve my problem (changing the hostname) or I could not really follow them (changing php.ini).
keycloak-custom-keycloak-1 | com.sun.mail.util.MailConnectException: Couldn't connect to host, port: localhost, 1025; timeout 10000;
keycloak-custom-keycloak-1 | nested exception is:
keycloak-custom-keycloak-1 | java.net.ConnectException: Connection refused (Connection refused)
From what I can see, I have configured everything properly, checked whether the containers are running with the correct ports, but still I am getting this error. (Details see below).
My docker-compose.yml file looks like this:
version: '3.3'
services:
  keycloak:
    image: jboss/keycloak #:${KEYCLOAK_VERSION}
    ports:
      - "8080:8080"
    environment:
      - KEYCLOAK_USER=${KEYCLOAK_USER}
      - KEYCLOAK_PASSWORD=${KEYCLOAK_PASSWORD}
      - DB_DATABASE=${KEYCLOAK_DATABASE_NAME}
      - DB_USER=${KEYCLOAK_DATABASE_USER}
      - DB_PASSWORD=${KEYCLOAK_DATABASE_PASSWORD}
      - DB_ADDR=${KEYCLOAK_DATABASE_HOST}
      - DB_VENDOR=${KEYCLOAK_DATABASE_VENDOR}
      - KEYCLOAK_IMPORT=/tmp/realm-export.json
    volumes:
      - ./keycloak/realms/realm-export.json:/tmp/realm-export.json
      - ./keycloak/scripts/disable-theme-cache.cli:/opt/jboss/startup-scripts/disable-theme-cache.cli
      - ./keycloak/themes/gesetzeio:/opt/jboss/keycloak/themes/gesetzeio
    networks:
      internal:
    depends_on:
      - keycloakdb
  keycloakdb:
    image: postgres:${POSTGRES_VERSION}
    ports:
      - "5433:5432"
    environment:
      - POSTGRES_USER=${KEYCLOAK_DATABASE_USER}
      - POSTGRES_PASSWORD=${KEYCLOAK_DATABASE_PASSWORD}
      - POSTGRES_DB=${KEYCLOAK_DATABASE_NAME}
    volumes:
      - keycloak-postgres:/var/lib/postgresql/data
    networks:
      internal:
  mailhog:
    image: mailhog/mailhog:latest
    restart: always
    ports:
      - 1025:1025
      - 8025:8025
volumes:
  keycloak-postgres:
networks:
  internal:
When I start the Mailhog container, I get the following message in the logs:
2022/12/09 16:12:28 Using in-memory storage
2022/12/09 16:12:28 [SMTP] Binding to address: 0.0.0.0:1025
2022/12/09 16:12:28 Serving under http://0.0.0.0:8025/
[HTTP] Binding to address: 0.0.0.0:8025
In Keycloak, I have entered the following test configuration:
[Screenshot of the Email settings in my local Keycloak container]
As alternatives to the hostname localhost, I have also tried the following options:
- 0.0.0.0
- 127.0.0.1
- the Mailhog container's name, keycloak-custom-mailhog-1 (as suggested here)
None of them however worked.
What am I doing wrong?
Communication between containers has to use the service names declared in the docker-compose file.
This means that you need to use the hostname mailhog, not 127.0.0.1. (0.0.0.0 should never work: that address is only meaningful when binding, where it means "all available IP addresses of this host", although some tools substitute 127.0.0.1 when configured to connect to 0.0.0.0.)
However, to actually reach mailhog you need to be sure it is up and running, so add mailhog to the depends_on list of keycloak. In addition, the mailhog container has to be on the same virtual network as keycloak, so give mailhog a networks entry with the value internal, as shown below.
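Put together, the mailhog service would look roughly like this; only the networks and depends_on lines are new, everything else is from the question, and the Keycloak email settings would then use host mailhog and port 1025:
mailhog:
  image: mailhog/mailhog:latest
  restart: always
  ports:
    - 1025:1025
    - 8025:8025
  networks:
    internal:
# and under the keycloak service:
depends_on:
  - keycloakdb
  - mailhog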
I'm trying to create some kind of reverse proxy server that would serve a port running on my local network (192.168.0.15:5083) through either another port (192.168.0.15:<ANOTHER PORT>) or through another path on the IP address (192.168.0.15/pathname). I want this to be reachable from other computers on the same network.
I'm trying to achieve this using Traefik with Docker through a docker-compose.yml file. Currently I have it set up like this:
lms:
  container_name: lms
  image: epoupon/lms
  user: ${PUID}:${PGID}
  ports:
    - 5083:5082
  volumes:
    - ${USERDIR}/docker/lms:/var/lms
    - /media/music:/music:ro
  environment:
    - TZ=${TZ}
    - PUID=${PUID}
    - PGID=${PGID}
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.lms.rule=Path(`/lms`)"
  restart: unless-stopped
reverse-proxy:
  image: traefik:v2.6
  command: --api.insecure=true --providers.docker
  ports:
    - "80:80"
    - "8080:8080"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
What I'm trying to do here is create a path on the server's IP address (192.168.0.15/lms) that serves the content from port 5083 (192.168.0.15:5083). The purpose of this is to then be able to apply CORS headers to the proxy server.
When visiting 192.168.0.15/lms from another machine on the same network as the server, I get this error message:
Fatal error: failed loading /js/jquery-1.10.2.min.js
I interpret this as meaning that the connection to port 5083 succeeds, but the assets/resources used by the front end on that port are not loading correctly.
Am I doing this right or should I do it in a different way to succeed?
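For what it's worth, Path(`/lms`) matches only that exact path, so asset requests like /js/jquery-1.10.2.min.js never hit the router. A common Traefik v2 pattern is to match on PathPrefix and strip the prefix before forwarding; a hedged sketch (whether lms serves its assets from relative paths is an assumption):
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.lms.rule=PathPrefix(`/lms`)"
  - "traefik.http.middlewares.lms-strip.stripprefix.prefixes=/lms"
  - "traefik.http.routers.lms.middlewares=lms-strip"
  - "traefik.http.services.lms.loadbalancer.server.port=5082"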
I am trying to connect MinIO with KeyCloak, following the instructions provided in this documentation:
https://github.com/minio/minio/blob/master/docs/sts/keycloak.md
What I have done so far is deploy a Docker container for the MinIO server, another for the MinIO client, and a third for the KeyCloak server.
As you can see in the following snippet, the configuration of the MinIO client container is done correctly, since I can list the buckets available on the MinIO server:
mc ls myminio
[2020-05-14 11:54:59 UTC] 0B bucket1/
[2020-05-06 12:23:01 UTC] 0B bucket2/
The issue arises when I try to configure MinIO as described in step 3 (Configure MinIO) of the documentation. Specifically, the command I run is:
mc admin config set myminio identity_openid config_url="http://localhost:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"
And the error I get is this one:
mc: <ERROR> Cannot set 'identity_openid config_url=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration client_id=account' to server. Get http://localhost:8080/auth/realms/demo/.well-known/openid-configuration: dial tcp 127.0.0.1:8080: connect: connection refused.
When I curl this address http://localhost:8080/auth/realms/demo/.well-known/openid-configuration from the MinIO Client container though, I retrieve the JSON file.
It turns out all I had to do was change localhost in the config_url to the IP of the KeyCloak container (172.17.0.3).
This is just a temporary solution that works for now, but I will continue searching for something more concrete than just hardcoding the IP.
When I figure out the solution, this answer will be updated.
Update
I had to create a docker-compose.yml file like the one below in order to overcome the issue without having to manually place the IP of the KeyCloak container.
version: '2'
services:
  miniod:
    image: minio/minio
    restart: always
    container_name: miniod
    ports:
      - 9000:9000
    volumes:
      - "C:/data:/data"
    environment:
      - "MINIO_ACCESS_KEY=access_key"
      - "MINIO_SECRET_KEY=secret_key"
    command: ["server", "/data"]
    networks:
      - minionw
  mcd:
    image: minio/mc
    container_name: mcd
    networks:
      - minionw
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    container_name: kcd
    restart: always
    ports:
      - 8080:8080
    environment:
      - "KEYCLOAK_USER=admin"
      - "KEYCLOAK_PASSWORD=pass"
    networks:
      - minionw
networks:
  minionw:
    driver: "bridge"
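With all three containers on the minionw network, the KeyCloak container is reachable by its service name, so the step-3 command can point at kcd instead of a hardcoded IP (a sketch based on the compose file above):
mc admin config set myminio identity_openid \
  config_url="http://kcd:8080/auth/realms/demo/.well-known/openid-configuration" \
  client_id="account"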
Connection refused occurs when a port is not accessible on the hostname or IP we specified.
Try publishing the port with the -p/--publish flag when running the container; note that the --expose flag only documents the port, it does not map it to the host. Once published, you can access the port on localhost.
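A quick sketch; the image name and ports here are placeholders:
# -p/--publish maps host:container; --expose alone does not map anything
docker run -d -p 8080:8080 my-image
curl http://localhost:8080/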
I'm running a self-hosted GitLab Docker instance, but I'm facing some problems configuring the registry, as I get the error
Error response from daemon: Get https://example.com:4567/v2/: dial tcp <IP>:4567: connect: connection refused
when running docker login example.com:4567.
So it seems that I have to expose the port 4567 somehow.
A (better) alternative would be to configure a second domain for the registry, like registry.example.com. As you can see below, I'm using Let's Encrypt certificates for my GitLab instance. But how do I get a second certificate for the registry?
This is what my docker-compose file looks like; I'm using jwilder/nginx-proxy for my reverse proxy.
docker-compose.yml
gitlab:
  image: gitlab/gitlab-ce:11.9.0-ce.0
  container_name: gitlab
  networks:
    - reverse-proxy
  restart: unless-stopped
  ports:
    - '50022:22'
  volumes:
    - /opt/gitlab/config:/etc/gitlab
    - /opt/gitlab/logs:/var/log/gitlab
    - /opt/gitlab/data:/var/opt/gitlab
    - /opt/nginx/conf.d:/etc/nginx/conf.d
    - /opt/nginx/certs:/etc/nginx/certs:ro
  environment:
    VIRTUAL_HOST: example.com
    VIRTUAL_PROTO: https
    VIRTUAL_PORT: 443
    LETSENCRYPT_HOST: example.com
    LETSENCRYPT_EMAIL: certs#example.com
gitlab.rb
external_url 'https://example.com'
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = '/etc/nginx/certs/example.com/fullchain.pem'
nginx['ssl_certificate_key'] = '/etc/nginx/certs/example.com/key.pem'
gitlab_rails['backup_keep_time'] = 604800
gitlab_rails['backup_path'] = '/backups'
gitlab_rails['registry_enabled'] = true
registry_external_url 'https://example.com:4567'
registry_nginx['ssl_certificate'] = "/etc/nginx/certs/example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/nginx/certs/example.com/key.pem"
For the second alternative it would look like:
registry_external_url 'https://registry.example.com'
registry_nginx['ssl_certificate'] = "/etc/nginx/certs/registry.example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/nginx/certs/registry.example.com/key.pem"
But how do I set this up in my docker-compose?
Update
I'm configuring nginx just via the jwilder package, without changing anything. So this part of my docker-compose.yml file looks like this:
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /opt/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - /opt/nginx/certs:/etc/nginx/certs:ro
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    networks:
      - reverse-proxy
    depends_on:
      - nginx-proxy
    volumes:
      - /opt/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:rw
    environment:
      NGINX_PROXY_CONTAINER: "nginx-proxy"
TL;DR:
So it seems that I have to expose the port 4567 somehow.
Yes. However, jwilder/nginx-proxy does not support more than one port per virtual host, and port 443 is already taken. There is a pull request for that feature, but it has not been merged yet. You'll need to expose this port another way (see below).
You are using jwilder/nginx-proxy as a reverse proxy to access a GitLab instance in a container, but with your current configuration only port 443 is exposed:
environment:
  VIRTUAL_HOST: example.com
  VIRTUAL_PROTO: https
  VIRTUAL_PORT: 443
All other GitLab services (including the registry on port 4567) are not proxied and therefore not reachable through example.com.
Unfortunately, it is not yet possible to expose multiple ports on a single hostname with jwilder/nginx-proxy. There is a pull request open for that use case, but it has not been merged yet (you are not the only one with this kind of issue).
An (better) alternative would be to configure a second domain for the registry
This won't work if you keep using jwilder/nginx-proxy: even if you change registry_external_url, you are still stuck with the port issue, and you cannot allocate the same port to two different services.
What you can do:
- Vote and comment on the mentioned PR so it gets merged :)
- Try building the Docker image from the mentioned pull request's fork and configure your compose with something like VIRTUAL_HOST=example.com:443,example.com:4567
- Configure a reverse proxy manually for port 4567: you could spin up a plain nginx container alongside your current configuration which would do just this, or re-configure your entire proxying scheme without using jwilder images (see the sketch after this list)
- Update your configuration to expose example.com:4567 instead of example.com:443, but then you'll lose HTTPS access (probably not what you are looking for)
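For the manual reverse-proxy option, a rough sketch of a dedicated container for port 4567; the service name, paths and registry.conf file are assumptions, and per the gitlab.rb above the registry side of the gitlab container already terminates TLS on 4567:
registry-proxy:
  image: nginx:stable
  networks:
    - reverse-proxy
  ports:
    - "4567:4567"
  volumes:
    - /opt/nginx-registry/registry.conf:/etc/nginx/conf.d/registry.conf:ro
    - /opt/nginx/certs:/etc/nginx/certs:ro
where registry.conf contains something like:
# registry.conf (sketch)
server {
  listen 4567 ssl;
  server_name example.com;
  ssl_certificate     /etc/nginx/certs/example.com/fullchain.pem;
  ssl_certificate_key /etc/nginx/certs/example.com/key.pem;
  location / {
    proxy_pass https://gitlab:4567;
    proxy_set_header Host $host;
  }
}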
I am aware this does not provide a definitive solution, but I hope it helps.
Context
I was planning on simplifying some development setup of multiple docker-compose.yml by introducing virtual hosts locally. I looked around and decided to use nginx-proxy for the reverse-proxy (ability to set VIRTUAL_HOST for each service).
Setup
To expose these on the host machine, I went the route of dnsmasq, adding an /etc/resolver/test file with nameserver 127.0.0.1.
I went and put the above into action using a dev/docker-compose.yml file:
version: '3.5'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: 'always'
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
  dnsmasq:
    image: andyshinn/dnsmasq
    restart: 'always'
    ports:
      - "53:53/tcp"
      - "53:53/udp"
    cap_add:
      - NET_ADMIN
    command: --log-facility=-
    volumes:
      - ./data/dnsmasq.conf:/etc/dnsmasq.conf
      - ./data/dnsmasq.d:/etc/dnsmasq.d
networks:
  default:
    external:
      name: proxynet
The data/dnsmasq.conf file only contains address=/test/127.0.0.1.
I've also created an external network proxynet and use that as the default network for the docker-compose file(s) (docker network create proxynet). This then allows other docker-compose files and services to be linked to the proxy.
I have the following proj1/docker-compose.yml:
version: "3.5"
services:
  proj1-web:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=proj1-web.test
networks:
  default:
    external:
      name: proxynet
With both of these docker-compose files running (i.e., docker-compose up), I am able to access proj1-web.test from my local machine. Everything works as expected.
Now I want to be able to reference proj1-web.test in another container and have it resolve to the running container.
I'll create proj2/docker-compose.yml (similar to the previous one, just with a different name):
version: "3.5"
services:
  proj2-web:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=proj2-web.test
networks:
  default:
    external:
      name: proxynet
With everything running, I can access both proj1-web.test and proj2-web.test from my local machine. I can also successfully curl between the proj1 and proj2 services: docker-compose run proj1-web sh -c "apk update -qq; apk add curl -qq; curl -v proj2-web:8000".
Problem
The problem is that I cannot curl the virtual host's name proj2-web.test from proj1: docker-compose run proj1-web sh -c "apk update -qq; apk add curl -qq; curl -v proj2-web.test":
* Rebuilt URL to: proj2-web.test/
* Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to proj2-web.test port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to proj2-web.test port 80: Connection refused
Is there something I'm missing here? It appears the individual containers don't have access to the DNS provided by dnsmasq to my local machine, and I cannot figure out how to grant them that access. Maybe I'm going about this the wrong way; I am open to suggestions.
I ended up creating a solution that addresses my question. You can find the repository for the tool here:
https://github.com/scoremedia/dcdc
I also created a blog post detailing a bit of this: https://kevinjalbert.com/docker-compose-dns-consistency-dcdc/
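The underlying idea, roughly: give the dnsmasq container a fixed IP on the shared network and point each service's resolver at it with the dns option. A sketch of the approach, not the tool itself; the subnet and address are made up, and proxynet would need to be created with a matching subnet (e.g. docker network create --subnet 172.20.0.0/16 proxynet):
version: "3.5"
services:
  dnsmasq:
    image: andyshinn/dnsmasq
    networks:
      default:
        ipv4_address: 172.20.0.53
  proj1-web:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=proj1-web.test
    dns:
      - 172.20.0.53   # containers now resolve *.test via dnsmasq
networks:
  default:
    external:
      name: proxynet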
Hopefully this helps others.